Planet Debian


Andrew Pollock: [life] Explaining "special needs"

Sun, 13/04/2014 - 00:24

I got one of those rare opportunities to calibrate Zoe's outlook on people on Friday. I feel pretty happy with the job I did.

Once we arrived at the New Farm Park ferry terminal, the girls wanted to have some morning tea, so we camped out in the terminal to have something to eat. Kim had packed two poppers (aka "juice boxes") for Sarah, so they both got to have one. Nice one, Kim!

Not long after we started morning tea, an older woman with some sort of (presumably intellectual) disability and her carer arrived to wait for a ferry. I have no idea what the disability was, but it presented as her being unable to speak. She'd repeatedly make a single grunting noise, held her hands a bit oddly, and would stand up, walk in a circle, and try to rummage through the rubbish bin next to her. I exchanged a smile with her carer. The girls were a little bit wary of her because she was acting strangely. Sarah whispered a question to me about what was up with her. Zoe asked me to accompany her to the rubbish bin to dispose of her juice box.

I didn't feel like talking about the woman within her earshot, so I waited until they'd boarded their ferry, and we'd left the terminal before talking about the encounter. It also gave me a little bit of time to construct my explanation in my head.

I specifically wanted to avoid phrases like "something wrong" or "not right". For all I knew she could have had cerebral palsy, and had a perfectly good brain trapped inside a malfunctioning body.

So I explained that the woman had "special needs" and that people with special needs have bodies or brains that don't work the same way as us, and so just like little kids, they need an adult carer to take care of them so they don't hurt themselves or get lost. In the case of the woman we'd just seen, she needed a carer to make sure she didn't get lost or rummage through the rubbish bin.

That explanation seemed to go down pretty well, and that was the end of that. Maybe next time such circumstances permit, I'll try striking up a conversation with the carer.

Categories: Elsewhere

Mario Lang: Emacs Chess

Sat, 12/04/2014 - 19:00

Between 2001 and 2004, John Wiegley wrote emacs-chess, a rather complete Chess library for Emacs. I found it around 2004, and was immediately hooked. Why? Because Emacs is configurable, and I was hoping that I could customize the chessboard display much more than with any other console-based chess program I had ever seen. And I was right. One of the four chessboard display types is exactly what I was looking for, chess-plain.el:

┌────────┐
8│tSlDjLsT│
7│XxXxXxXx│
6│ ⡀ ⡀ ⡀ ⡀│
5│⡀ ⡀ ⡀ ⡀ │
4│ ⡀ ⡀ ⡀ ⡀│
3│⡀ ⡀ ⡀ ⡀ │
2│pPpPpPpP│
1│RnBqKbNr│
└────────┘
 abcdefgh

This might look confusing at first, but I have to admit that I grew rather fond of this way of displaying chess positions as ASCII diagrams. In this configuration, initial letters for (mostly) German chess piece names are used for the black pieces, and English chess piece names are used for the white pieces. Uppercase is used to indicate if a piece is on a black square and braille dot 7 is used to indicate an empty black square.

chess-plain is completely configurable though, so you can have more classic diagrams like this as well:

┌────────┐
8│rnbqkbnr│
7│pppppppp│
6│ + + + +│
5│+ + + + │
4│ + + + +│
3│+ + + + │
2│PPPPPPPP│
1│RNBQKBNR│
└────────┘
 abcdefgh

Here, upper case letters indicate white pieces, and lower case letters black pieces. Black squares are indicated with a plus sign.

However, as with many Free Software projects, Emacs Chess was rather dormant for the last 10 years. For some reason that I cannot even remember right now, my interest in Emacs Chess was reignited roughly 5 weeks ago.

Universal Chess Interface

It all began when I did a casual apt-cache search for chess engines, only to discover that a number of free chess engines had been developed and packaged for Debian in the last 10 years. In 2004 there was basically only GNUChess, Phalanx and Crafty. These days, a number of UCI-based chess engines have been added, like Stockfish, Glaurung, Fruit or Toga2. So I started by learning how the new chess engine communication protocol, UCI, actually works. After a bit of playing around, I had a basic engine module for Emacs Chess that could play against Stockfish. After I had developed a thin layer for all the things that UCI engines have in common (chess-uci.el), it was actually very easy to implement support for Stockfish, Glaurung and Fruit in Emacs Chess. Good, three new free engines supported.
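The real implementation lives in chess-uci.el (Emacs Lisp), but the protocol itself is just newline-separated text, so a rough sketch in JavaScript shows its shape; the function names here are mine, purely illustrative:

```javascript
// A rough, illustrative sketch of the UCI message flow -- not
// emacs-chess's actual code.  A GUI feeds the engine a position and a
// search command as plain text, then parses the text reply.

// Build the commands a GUI sends to set up a position and start a
// search.  Moves are in coordinate notation, e.g. "e2e4".
function uciSearchCommands(moves, moveTimeMs) {
  return [
    'position startpos moves ' + moves.join(' '),
    'go movetime ' + moveTimeMs,
  ];
}

// The engine answers a "go" command with a line like
// "bestmove e2e4 ponder e7e5"; the ponder part is optional.
function parseBestMove(line) {
  const m = /^bestmove\s+(\S+)(?:\s+ponder\s+(\S+))?/.exec(line);
  if (!m) return null;
  return { bestMove: m[1], ponder: m[2] || null };
}
```

Because every engine speaks this same text protocol, a single thin layer like chess-uci.el can drive Stockfish, Glaurung and Fruit alike.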

Opening books

When I learnt about the UCI protocol, I discovered that most UCI engines these days do not do their own book handling. In fact, it is sort of expected from the GUI to do opening book moves. And here one thing led to another. There is quite good documentation about the Polyglot chess opening book binary format on the net. And since I absolutely love to write binary data decoders in Emacs Lisp (don't ask, I don't know why) I immediately started to write Polyglot book handling code in Emacs Lisp, see chess-polyglot.el.

It turns out that it is relatively simple and actually performs very well. Even a lookup in an opening book bigger than 100 megabytes happens more or less instantaneously, so you do not notice the time required to find moves in an opening book. Binary search is just great. And binary searching binary data in Emacs Lisp is really fun :-).
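The search chess-polyglot.el performs can be sketched in JavaScript over a Buffer instead of an Emacs buffer (this is my transliteration, not the actual Emacs Lisp): a Polyglot book is a sorted array of 16-byte records, each holding an 8-byte big-endian Zobrist key followed by a 2-byte move, a 2-byte weight and a 4-byte learn value.

```javascript
// Sketch of a Polyglot book lookup: binary search for the first record
// whose 64-bit key matches the position, then collect the run of
// records sharing that key.

const ENTRY_SIZE = 16;

// Return all {move, weight} entries for a position key, or [] if the
// key is not in the book.  O(log n) probes: a 100 MB book holds about
// 6.5 million entries, so roughly 23 probes suffice.
function lookup(book, key) {
  const count = book.length / ENTRY_SIZE;
  let lo = 0, hi = count;            // find the first entry >= key
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (book.readBigUInt64BE(mid * ENTRY_SIZE) < key) lo = mid + 1;
    else hi = mid;
  }
  const moves = [];
  for (let i = lo; i < count && book.readBigUInt64BE(i * ENTRY_SIZE) === key; i++) {
    moves.push({
      move: book.readUInt16BE(i * ENTRY_SIZE + 8),
      weight: book.readUInt16BE(i * ENTRY_SIZE + 10),
    });
  }
  return moves;
}

// Build a tiny synthetic "book" from [key, move, weight] triples
// (keys must already be sorted, as in a real Polyglot file).
function makeBook(entries) {
  const buf = Buffer.alloc(entries.length * ENTRY_SIZE);
  entries.forEach(([key, move, weight], i) => {
    buf.writeBigUInt64BE(key, i * ENTRY_SIZE);
    buf.writeUInt16BE(move, i * ENTRY_SIZE + 8);
    buf.writeUInt16BE(weight, i * ENTRY_SIZE + 10);
  });
  return buf;
}
```

The logarithmic probe count is why even very large books feel instantaneous.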

So Emacs Chess can now load and use polyglot opening book files. I integrated this functionality into the common UCI engine module, so Emacs Chess, when fed with a polyglot opening book, can now choose moves from that book instead of consulting the engine to calculate a move. Very neat! Note that you can create your own opening books from PGN collections, or just download a polyglot book made by someone else.

Internet Chess Servers

Later I reworked the internet chess server backend of Emacs Chess a bit (sought games are now displayed with tabulated-list-mode), and found and fixed some (rather unexpected) bugs in the way legal moves are calculated (if we take the opponent's rook, their ability to castle needs to be cleared).

Emacs Chess supports two of the most well-known internet chess servers: the Free Internet Chess Server (FICS) and the Internet Chess Club (ICC).

A Chess engine written in Emacs Lisp

And then I rediscovered my own little chess engine implemented in Emacs Lisp. I wrote it back in 2004, but never really finished it. After I finally found a small (but important) bug in the static position evaluation function, I was motivated enough to fix my native Emacs Lisp chess engine. I implemented quiescence search so that capture combinations are actually evaluated and not just pruned at a hard limit. This made the engine quite a bit slower, but it actually results in relatively good play. Since the thinking time went up, I implemented a small progress bar so one can actually watch what the engine is doing right now. chess-ai.el is a very small Lisp implementation of a chess engine. Static evaluation, alpha-beta and quiescence search included. It covers the basics, so to speak. So if you don't have any of the above-mentioned external engines installed, you can even play a game of Chess against Emacs directly.
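The search structure described above (alpha-beta with a quiescence extension) can be sketched on a toy game tree; this is a JavaScript illustration of the general technique, not chess-ai.el itself, and the node shape is invented for the example:

```javascript
// Negamax alpha-beta that, at the depth limit, keeps extending "noisy"
// (capture-like) moves instead of trusting the static evaluation --
// the quiescence search mentioned above.  A node is
// { score, moves: [{ noisy, child }] }, score from the side to move.

function quiesce(node, alpha, beta) {
  let best = node.score;             // "stand pat" on the static eval
  if (best >= beta) return best;
  if (best > alpha) alpha = best;
  for (const { noisy, child } of node.moves || []) {
    if (!noisy) continue;            // only captures are searched on
    const score = -quiesce(child, -beta, -alpha);
    if (score > best) best = score;
    if (score > alpha) alpha = score;
    if (alpha >= beta) break;        // beta cutoff
  }
  return best;
}

function search(node, depth, alpha, beta) {
  if (depth === 0 || !(node.moves || []).length) {
    return quiesce(node, alpha, beta);   // resolve captures first
  }
  let best = -Infinity;
  for (const { child } of node.moves) {
    const score = -search(child, depth - 1, -beta, -alpha);
    if (score > best) best = score;
    if (score > alpha) alpha = score;
    if (alpha >= beta) break;
  }
  return best;
}
```

Without the quiescence step, a position that merely looks bad at the horizon (because a recapture is one ply away) would be scored at face value; with it, the capture sequence is played out before the evaluation is trusted.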

Other features

The feature list of Emacs Chess is rather impressive. You can not just play a game of Chess against an engine, you can also play against another human (either via ICS or directly from Emacs to Emacs), view and edit PGN files, solve chess puzzles, and much much more. Emacs Chess is really a universal chess interface for Emacs.

Emacs-chess 2.0

In 2004, John and I were already planning to get emacs-chess 2.0 out the door. Well, 10 years have passed, and both of us had forgotten about this wonderful codebase. I am trying to change this. I am in development/maintenance mode for emacs-chess again. John has also promised to find a bit of time to work on a final 2.0 release.

If you are an Emacs user who knows and likes to play Chess, please give emacs-chess a whirl. If you find any problems, please file an issue on GitHub or, better yet, send us a pull request.

There is an emacs-chess Debian package, which has not been updated in a while. If you want to test the new code, be sure to grab it from GitHub directly. Once we reach a state that at least feels stable, I will of course update the Debian package.

Categories: Elsewhere

Lars Wirzenius: Programming performance art

Sat, 12/04/2014 - 16:26

Thirty years ago I started to learn programming. To celebrate this, I'm doing a bit of programming as a sort of performance art. I will write a new program, from scratch, until it is ready for me to start using it for real. The program won't be finished, but it will be ready for my own production use. It'll be something I have wanted to have for a while, but I'm not saying beforehand what it will be. For me, the end result is interesting; for you, the interesting part is watching me be stupid and make funny mistakes.

The performance starts Friday, 18 April 2014, at 09:00 UTC. I apologise if this is an awkward time for you. No time is good for everyone, so I picked a time that is good for me.

Run the following command to see what the local time will be for you.

date --date '2014-04-18 09:00:00 UTC'

While I write this program, I will broadcast my terminal to the Internet for anyone to see. For instructions, see the page.

There will be an IRC channel as well: #distix on the OFTC network. Feel free to join there if you want to provide real-time feedback (the laugh track).

Categories: Elsewhere

Russ Allbery: Accumulated haul

Sat, 12/04/2014 - 06:24

Wow, it's been a while since I've done this. In part because I've not had much time for reading books (which doesn't prevent me from buying them).

Jared Bernstein & Dean Baker — Getting Back to Full Employment (non-fiction)
James Coughtrey — Six Seconds of Moonlight (sff)
Philip J. Davis & Reuben Hersh — The Mathematical Experience (non-fiction)
Debra Dunbar — A Demon Bound (sff)
Andy Duncan & Ellen Klages — Wakulla Springs (sff)
Dave Eggers & Jordan Bass — The Best of McSweeney's (mainstream)
Siri Hustvedt — The Blazing World (mainstream)
Jacqueline Koyanagi — Ascension (sff)
Ann Leckie — Ancillary Justice (sff)
Adam Lee — Dark Heart (sff)
Seanan McGuire — One Salt Sea (sff)
Seanan McGuire — Ashes of Honor (sff)
Seanan McGuire — Chimes at Midnight (sff)
Seanan McGuire — Midnight Blue-Light Special (sff)
Seanan McGuire — Indexing (sff)
Naomi Mitchison — Travel Light (sff)
Helaine Olen — Pound Foolish (non-fiction)
Richard Powers — Orfeo (mainstream)
Veronica Schanoes — Burning Girls (sff)
Karl Schroeder — Lockstep (sff)
Charles Stross — The Bloodline Feud (sff)
Charles Stross — The Traders' War (sff)
Charles Stross — The Revolution Trade (sff)
Matthew Thomas — We Are Not Ourselves (mainstream)
Kevin Underhill — The Emergency Sasquatch Ordinance (non-fiction)
Jo Walton — What Makes This Book So Great? (non-fiction)

So, yeah. A lot of stuff.

I went ahead and bought nearly all of the novels Seanan McGuire had out that I'd not read yet after realizing that I'm going to eventually read all of them and there's no reason not to just own them. I also bought all of the Stross reissues of the Merchant Princes series, even though I had some of the books individually, since I think it will make it more likely I'll read the whole series this way.

I have so much stuff that I want to read, but I've not really been in the mood for fiction. I'm trying to destress enough to get back in the mood, but in the meantime have mostly been reading non-fiction or really light fluff (as you'll see from my upcoming reviews). Of that long list, Ancillary Justice is getting a lot of press and looks interesting, and Lockstep is a new Schroeder novel. 'Nuff said.

Kevin Underhill is the author of Lowering the Bar, which you should read if you haven't since it's hilarious. I'm obviously looking forward to that.

The relatively obscure mainstream novels here are more Powell's Indiespensable books. I will probably cancel that subscription soon, at least for a while, since I'm just building up a backlog, but that's part of my general effort to read more mainstream fiction. (I was a bit disappointed since there were several months with only one book, but the current month finally came with two books again.)

Now I just need to buckle down and read. And play video games. And do other things that are fun rather than spending all my time trying to destress from work and zoning in front of the TV.

Categories: Elsewhere

Russ Allbery: Review: Cryptography Engineering

Sat, 12/04/2014 - 04:51

Review: Cryptography Engineering, by Niels Ferguson, et al.

Publisher: Wiley
Copyright: 2010
ISBN: 0-470-47424-6
Format: Kindle
Pages: 384

Subtitled Design Principles and Practical Applications, Cryptography Engineering is intended as an overview and introduction to cryptography for the non-expert. It doesn't dive deeply into the math, although there is still a fairly thorough mathematical introduction to public-key cryptography. Instead, it focuses on the principles, tools, and algorithms that are the most concretely useful to a practitioner who is trying to design secure systems rather than doing theoretical cryptography.

The "et al." in the author summary hides Bruce Schneier and Tadayoshi Kohno, and this book is officially the second edition of Practical Cryptography by Ferguson and Schneier. Schneier's name will be familiar from, among other things, Applied Cryptography, and I'll have more to say later about which of the two books one should read (and the merits of reading both). But one of the immediately-apparent advantages of Cryptography Engineering is that it's recent. Its 2010 publication date means that it recommends AES as a block cipher, discusses MD5 weaknesses, and can discuss and recommend SHA-2. For the reader whose concern with cryptography is primarily "what should I use now for new work," this has huge benefit.

"What should I use for new work" is the primary focus of this book. There is some survey of the field, but that survey is very limited compared to Applied Cryptography and is tightly focused on the algorithms and approaches that one might reasonably propose today. Cryptography Engineering also attempts to provide general principles and simplifying assumptions to steer readers away from trouble. One example, and the guiding principle for much of the book, is that any new system needs at least a 128-bit security level, meaning that any attack will require 2^128 steps. This requirement may be overkill in some edge cases, as the authors point out, but when one is not a cryptography expert, accepting lower security by arguments that sound plausible but may not be sound is very risky.
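A back-of-the-envelope calculation (mine, not the book's) shows what that 128-bit figure buys; the attacker's capabilities here are deliberately absurd assumptions:

```javascript
// How long would a brute-force attack at the 128-bit security level
// take?  Assume a billion machines, each trying a trillion keys per
// second -- far beyond any real attacker.

const steps = 2n ** 128n;
const stepsPerSecond = 1_000_000_000n * 1_000_000_000_000n; // 10^21
const seconds = steps / stepsPerSecond;
const years = seconds / 31_557_600n;  // seconds in a Julian year

// years works out to roughly 10^10 -- on the order of the age of the
// universe, which is why 128 bits is a comfortable floor.
```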

Cryptography Engineering starts with an overview of cryptography, the basic tools of cryptographic analysis, and the issues around designing secure systems and protocols. I like that the authors not only make it clear that security programming is hard but provide a wealth of practical examples of different attack methods and failure modes, a theme they continue throughout the book. From there, the book moves into a general discussion of major cryptographic areas: encryption, authentication, public-key cryptography, digital signatures, PKI, and issues of performance and complexity.

Part two starts the in-depth discussion with chapters on block ciphers, block cipher modes, hash functions, and MACs, which together form part two (message security). The block cipher mode discussion is particularly good and includes algorithms newer than those in Applied Cryptography. This part closes with a walkthrough of constructing a secure channel, in pseudocode, and a chapter on implementation issues. The implementation chapters throughout the book are necessarily more general, but for me they were one of the most useful parts of the book, since they take a step back from the algorithms and look at the perils and pitfalls of using them to do real work.

The third part of the book is on key negotiation and encompasses random numbers, prime numbers, Diffie-Hellman, RSA, a high-level look at cryptographic protocols, and a detailed look at key negotiation. This will probably be the hardest part of the book for a lot of readers, since the introduction to public-key is very heavy on math. The authors feel that's unavoidable to gain any understanding of the security risks and attack methods against public-key. I'm not quite convinced. But it's useful information, if heavy going that requires some devoted attention.

I want to particularly call out the chapter on random numbers, though. This is an often-overlooked area in cryptography, particularly in introductions for the non-expert, and this is the best discussion of pseudo-random number generators I've ever seen. The authors walk through the design of Fortuna as an illustration of the issues and how they can be avoided. I came away with a far better understanding of practical PRNG design than I've ever had (and more sympathy for the annoying OpenSSL ~/.rnd file).

The last substantial part of the book is on key management, starting with a discussion of time and its importance in cryptographic protocols. From there, there's a discussion of central trusted key servers and then a much more comprehensive discussion of PKI, including the problems with revocation, key lifetime, key formats, and keeping keys secure. The concluding chapter of this part is a very useful discussion of key storage, which is broad enough to encompass passwords, biometrics, and secure tokens. This is followed by a short part discussing standards, patents, and experts.

A comparison between this book and Applied Cryptography reveals less attention to the details of cryptographic algorithms (apart from random number generators, where Cryptography Engineering provides considerably more useful information), wide-ranging surveys of algorithms, and underlying mathematics. Cryptography Engineering also makes several interesting narrowing choices, such as skipping stream ciphers almost entirely. Less surprisingly, this book covers only a tiny handful of cryptographic protocols; there's nothing here about zero-knowledge proofs, blind signatures, bit commitment, or even secret sharing, except a few passing mentions. That's realistic: those protocols are often extremely difficult to understand, and the typical security system doesn't use them.

Replacing those topics is considerably more discussion of implementation techniques and pitfalls, including more assistance from the authors on how to choose good cryptographic building blocks and how to combine them into useful systems. This is a difficult topic, as they frequently acknowledge, and a lot of the advice is necessarily fuzzy, but they at least provide an orientation. To get much out of Applied Cryptography, you needed a basic understanding of what cryptography can do and how you want to use it. Cryptography Engineering tries to fill in that gap to the point where any experienced programmer should be able to see what problems cryptography can solve (and which it can't).

That brings me back to the question of which book you should read, and a clear answer: start here, with Cryptography Engineering. It's more recent, which means that the algorithms it discusses are more directly applicable to day-to-day work. The block cipher mode and random number generator chapters are particularly useful, even if, for the latter, one will probably use a standard library. And it takes more firm stands, rather than just surveying. This comes with the risk of general principles that aren't correct in specific situations, but I think for most readers the additional guidance is vital.

That said, I'm still glad I read Applied Cryptography, and I think I would still recommend reading it after this book. The detailed analysis of DES in Applied Cryptography is worth the book by itself, and more generally the survey of algorithms is useful in showing the range of approaches that can be used. And the survey of cryptographic protocols, if very difficult reading, provides tools for implementing (or at least understanding) some of the fancier and more cutting-edge things that one can do with cryptography.

But this is the place to start, and I wholeheartedly recommend Cryptography Engineering to anyone working in computer security. Whether you're writing code, designing systems, or even evaluating products, this is a very useful book to read. It's a comprehensive introduction if you don't know anything about the field, but deep enough that I still got quite a bit of new information from it despite having written security software for years and having already read Applied Cryptography. Highly recommended. I will probably read it from cover to cover a second time when I have some free moments.

Rating: 9 out of 10

Categories: Elsewhere

Russell Coker: Replacement Credit Cards and Bank Failings

Sat, 12/04/2014 - 02:25

I just read an interesting article by Brian Krebs about the difficulty in replacing credit cards [1].

The main reason that credit cards need to be replaced is that they have a single set of numbers that is used for all transactions. If credit cards were designed properly for modern use (i.e. since 2000 or so), smart-card payment would be the recommended way of paying in store. Currently I have a Mastercard and an Amex card; the Mastercard (issued about a year ago) has no smart-card feature, and as Amex is rejected by most stores I've never had a chance to use the smart-card part of a credit card. If all American credit cards had a smart-card feature that store staff recommended using, then the problems that Brian documents would never have happened: the attacks on Target and other companies would have yielded very few card numbers, and the companies that make cards wouldn't have a backlog of orders.

If a bank was to buy USB smart-card readers for all their customers then they would be very cheap (the hardware is simple and therefore the unit price would be low if purchasing a few million). As banks are greedy they could make customers pay for the readers and even make a profit on them. Then for online banking at home the user could use a code that’s generated for the transaction in question and thus avoid most forms of online banking fraud – the only possible form of fraud would be to make a $10 payment to a legitimate company become a $1000 payment to a fraudster but that’s a lot more work and a lot less money than other forms of credit card fraud.

A significant portion of all credit card transactions performed over the phone are made from the customer’s home. Of the ones that aren’t made from home a significant portion would be done from a hotel, office, or other place where a smart-card reader might be conveniently used to generate a one-time code for the transaction.

The main remaining problem seems to be the use of raised numbers. Many years ago it used to be common for credit card purchases to involve some form of "carbon paper", with the raised numbers making an impression on the credit card transfer form. I don't recall ever using a credit card in that way; I've only had credit cards for about 18 years, and my memories of the raised numbers being used to make an impression on paper only involve watching my parents pay when I was young. It seems likely that someone who likes paying by credit card and does so at small companies might have some recent experience of "carbon paper" payment, but anyone who prefers EFTPOS and cash probably wouldn't.

If the credit card number (used for phone and Internet transactions in situations where a smart-card reader isn't available) wasn't raised, then it could be changed by posting a sticker with a new number that the customer could apply to their card. The customer wouldn't even need to wait for the post before their card could be used again, as the smart-card part would never be invalid. The magnetic stripe on the card could be changed at any bank, and there's no reason why an ATM couldn't identify a card by its smart-card and then write a new magnetic stripe automatically.

These problems aren't difficult to solve. The amounts of effort and money involved in solving them are tiny compared to the cost of cleaning up the mess from a major breach such as the recent Target one. The main thing needed to implement my ideas is widespread support for smart-card readers, and that seems to have happened already. It seems to me that the main problem is the incompetence of financial institutions. I think the fact that there's no serious competitor to Paypal is one of the many obvious proofs of the incompetence of financial companies.

The effective operation of banks is essential to the economy and the savings of individuals are guaranteed by the government (so when a bank fails a lot of tax money will be used). It seems to me that we need to have national banks run by governments with the aim of financial security. Even if banks were good at their business (and they obviously aren’t) I don’t think that they can be trusted with it, an organisation that’s “too big to fail” is too big to lack accountability to the citizens.

Categories: Elsewhere

Chris Lamb: My 2014 race schedule

Sat, 12/04/2014 - 00:58

«Swim 2.4 miles! Bike 112 miles! Run 26.2 miles! Brag for the rest of your life...»

Whilst 2013 was based around a "70.3"-distance race, in my second year in triathlon I will be targeting my training solely at my first Ironman-distance event.

I chose the Ironman event in Klagenfurt, Austria not only because the location lends a certain tone to the occasion but because the course is suited to my relative strengths within the three disciplines.

Compared to 2013 I've made the following conscious changes to my race scheduling and selection:

  • Fewer races in general to allow for more generous spacing between events, resulting in more training, recovery and life.
  • No sprint-distance triathlons as they do not provide enough enjoyment or suitable training for the IM distance given their logistical outlay.
  • Preferring cycling time trials over running ones: general performance in triathlon—paradoxically including your run performance—is primarily based around bike fitness.
  • Preferring smaller events over "mass-participation" ones.

Readers unfamiliar with triathlon training may observe that despite my primary race finishing with a marathon-distance run, I am not racing a standalone marathon in preparation. This is a common practice, justified by the run-specific training leading up to the event as well as the long recovery period afterwards compromising your training overall.

For similar reasons, I have also chosen not to race the "70.3" distance event in 2014. Whether to do so is a more contentious issue than whether to run a marathon, but it resolved itself once I could not find an event that was scheduled suitably and I could convince myself that most of the benefits could be achieved through other means.

April 13th

Cambridge Duathlon (link)

Run: 7.5km, bike: 40km, run: 7.5km

May 11th

St Neots Olympic Tri (link)

Swim: 1,500m, bike: 40km, run: 10km

May 17th

ECCA 50-mile cycling time trial (link)

Course: E2/50C

June 1st

Icknield RC 100-mile cycling time trial (link)

Course: F1/100

June 15th

Cambridge Triathlon (link)

Swim: 1,500m, bike: 40km, run: 10km

June 29th

Ironman Austria (link)

Swim: 3.8km, bike: 180km, run: 42.2km

Categories: Elsewhere


Wouter Verhelst: Review: John Scalzi: Redshirts

Fri, 11/04/2014 - 17:25

I'm not much of a reader anymore these days (I used to be when I was a young teenager), but I still do tend to like reading something every once in a while. When I do, I generally prefer books that can be read cover to cover in one go—because that allows me to immerse myself in the book so much more.

John Scalzi's book is... interesting. It talks about a bunch of junior officers on a starship of the "Dub U" (short for "Universal Union"), which flies off into the galaxy to Do Things. This invariably involves away missions, and on these away missions invariably people die. The title is pretty much a dead giveaway; but in case you didn't guess, it's mainly the junior officers who die.

What I particularly liked about this book is that after the story pretty much wraps up, Scalzi doesn't actually let it end there. First there's a bit of a tie-in that has the book end up talking about itself; after that, there are three epilogues in which the author considers what this story would do to some of its smaller characters.

All in all, a good read, and something I would not hesitate to recommend.

Categories: Elsewhere

Ian Campbell: qcontrol 0.5.3

Fri, 11/04/2014 - 17:04

I've just released qcontrol 0.5.3. Changes since the last release:

  • Reduce spaminess of temperature control (Debian bug #727150).
  • Support for enabling/disabling RTC on ts219 and ts41x. Patch from Michael Stapelberg (Debian bug #732768).
  • Support for Synology Diskstation and Rackstation NASes. Patch from Ben Peddell.
  • Return correct result from direct command invocation (Debian bug #617439).
  • Fix ts41x LCD detection.
  • Improved command line argument parsing.
  • Lots of internal refactoring and cleanups.

Get it from gitorious or

The Debian package will be uploaded shortly.

Categories: Elsewhere

Steve Kemp: Putting the finishing touches to a nodejs library

Fri, 11/04/2014 - 16:14

For the past few years I've been running a simple service to block blog/comment-spam, which is (currently) implemented as a simple JSON API over HTTP, with a minimal core and all the logic in a series of plugins.

One obvious thing I wasn't doing until today was paying attention to the anchor-text used in hyperlinks, for example:

<a href="">buy viagra</a>

Blocking on the anchor-text is less prone to false positives than blocking on keywords in the comment/message bodies.

Unfortunately there seem to exist no simple nodejs modules for extracting all the links, and associated anchors, from an arbitrary string of HTML. So I had to write such a module, but, given how small it is, there seems little point in sharing it. So I guess this is one of the reasons why there are often large gaps in the module ecosystem.
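The module itself is nodejs, and none of its code appears in the post; purely as an illustration of the approach, here is a rough, hypothetical equivalent in Python using only the standard library's html.parser:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect (href, anchor-text) pairs from an HTML fragment."""

    def __init__(self):
        super().__init__()
        self.links = []   # finished (href, text) pairs
        self._stack = []  # currently open <a> tags: [href, [text chunks]]

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            self._stack.append([href, []])

    def handle_data(self, data):
        # Accumulate text only while inside an <a> element.
        if self._stack:
            self._stack[-1][1].append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._stack:
            href, chunks = self._stack.pop()
            self.links.append((href, "".join(chunks).strip()))

p = LinkExtractor()
p.feed('<a href="http://example.com/">buy viagra</a> plain text')
print(p.links)  # → [('http://example.com/', 'buy viagra')]
```

The anchor texts collected this way could then be matched against a blocklist, independently of the message body.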

(Equally some modules are essentially applications; great that the authors shared, but virtually unusable, unless you 100% match their problem domain.)

I've written about this before when I had to construct, and publish, my own cidr-matching module.

Anyway, expect an upload soon; currently I "parse" HTML and BBCode, with markdown possibly to follow, since I have an interest in markdown.

Categories: Elsewhere

Lars Wirzenius: Applying the Doctorow method to coding

Fri, 11/04/2014 - 09:27

When you have a big goal, do at least a little of it every day. Cory Doctorow writes books and stuff, and writes for at least twenty minutes every day. I write computer software, primarily Obnam, my backup program, and recently wrote the first rough draft of a manual for it, by writing at least a little every day. In about two months I got from nothing to something that is already useful to people.

I am now applying this to coding as well. Software development is famously an occupation that happens mostly in one's brain and where being in hack mode is crucial. Getting into hack mode takes time and a suitable, distraction-free environment.

I have found, however, that there are a lot of small, quick tasks that do not require a lot of concentration. Fixing wordings of error messages, making small, mechanical refactorings, confirming bugs by reproducing them and writing test cases for them, etc. I have found that if I've prepared for and planned such tasks properly, in the GTD planning phase, I can do them even on trains and at train stations.

This is important. I commute to work, and if I can spend the time I wait for a train, or on the train, productively, I can make significant, real progress. But to achieve this I really do have to do the preparation beforehand. The 9:46 train to work is much too noisy to do any real thinking in.

Categories: Elsewhere

Hideki Yamane: given enough eyeballs, but...

Fri, 11/04/2014 - 08:35
"given enough eyeballs, all bugs are shallow"

Oh, right?

And we easily make mistakes because we're human, you know.

Categories: Elsewhere

Joey Hess: propellor introspection for DNS

Fri, 11/04/2014 - 07:05

In the just-released Propellor 0.3.0, I've improved Propellor's config file DSL significantly. Now properties can set attributes of a host that can be looked up by its other properties, using a Reader monad.

This saves needing to repeat yourself:

hosts = [ host ""
        & stdSourcesList Unstable
        & Hostname.sane -- uses hostname from above

And it simplifies docker setup, as there is no longer a need to differentiate between properties that configure docker and properties of the container:

-- A generic webserver in a Docker container.
, Docker.container "webserver" "joeyh/debian-unstable"
    & Docker.publish "80:80"
    & Docker.volume "/var/www:/var/www"
    & Apt.serviceInstalledRunning "apache2"

But the really useful thing is, it allows automating DNS zone file creation, using attributes of hosts that are set and used alongside their other properties:

hosts = [ host ""
        & ipv4 ""
        & cname ""
        & Docker.docked hosts "openid-provider"
        & cname ""
        & Docker.docked hosts "ancient-kitenet"
        , host ""
        & Dns.primary "" hosts
        ]

Notice that hosts is passed into Dns.primary, inside the definition of hosts! Tying the knot like this is a fun haskell laziness trick. :)

Now I just need to write a little function to look over the hosts and generate a zone file from their hostname, cname, and address attributes:

extractZoneFile :: Domain -> [Host] -> ZoneFile
extractZoneFile = gen . map hostAttr
  where gen = -- TODO
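The Haskell above is the author's unfinished sketch. Purely to illustrate the idea (this is not propellor's actual API; the dict field names here are hypothetical), the attribute-to-zone-file mapping might look like this in Python:

```python
# Illustration only: propellor is Haskell; these field names are hypothetical.
def extract_zone_file(domain, hosts):
    """Generate zone-file records from per-host attributes (A and CNAME only)."""
    records = []  # the `domain` argument only mirrors the Haskell signature here
    for h in hosts:
        name = h.get("hostname", "")
        if h.get("ipv4"):
            records.append(f"{name}. IN A {h['ipv4']}")
        for alias in h.get("cnames", []):
            records.append(f"{alias}. IN CNAME {name}.")
    return "\n".join(records)

hosts = [{"hostname": "kite.example.org", "ipv4": "192.0.2.1",
          "cnames": ["wiki.example.org"]}]
print(extract_zone_file("example.org", hosts))
```

Each host contributes one A record for its address and one CNAME record per alias; the zone file is just the concatenation of those records.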

The eventual plan is that the cname property won't be defined as a property of the host, but of the container running inside it. Then I'll be able to cut-n-paste move docker containers between hosts, or duplicate the same container onto several hosts to deal with load, and propellor will provision them, and update the zone file appropriately.

Also, Chris Webber had suggested that Propellor be able to separate values from properties, so that, e.g., a web wizard could configure the values easily. I think this gets it most of the way there. All that's left to do is two easy functions:

overrideAttrsFromJSON :: Host -> JSON -> Host
exportJSONAttrs :: Host -> JSON

With these, propellor's configuration could be adjusted at run time using JSON from a file or other source. For example, here's a containerized webserver that publishes a directory from the external host, as configured by JSON that it exports:

demo :: Host
demo = Docker.container "webserver" "joeyh/debian-unstable"
    & Docker.publish "80:80"
    & dir_to_publish "/home/mywebsite" -- dummy default
    & Docker.volume (getAttr dir_to_publish ++":/var/www")
    & Apt.serviceInstalledRunning "apache2"

main = do
    json <- readJSON "my.json"
    let demo' = overrideAttrsFromJSON demo json
    writeJSON "my.json" (exportJSONAttrs demo')
    defaultMain [demo']
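As a hedged illustration of that JSON round-trip (hypothetical names, sketched in Python rather than propellor's Haskell): attributes are exported as JSON, possibly edited externally, then merged back over the defaults.

```python
import json

def override_attrs_from_json(host, attrs):
    """Return a copy of host with attributes overridden by decoded JSON values."""
    merged = dict(host)
    merged.update(attrs)
    return merged

def export_json_attrs(host):
    """Serialise a host's attributes as JSON for external editing."""
    return json.dumps(host, sort_keys=True)

demo = {"name": "webserver", "dir_to_publish": "/home/mywebsite"}  # dummy default
demo2 = override_attrs_from_json(demo, json.loads('{"dir_to_publish": "/srv/www"}'))
print(demo2["dir_to_publish"])  # → /srv/www
print(export_json_attrs(demo2))
```

The original defaults are left untouched, so the same host definition can be re-provisioned with different JSON inputs.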
Categories: Elsewhere

Andrew Pollock: [life] Day 73: A fourth-generation friendship

Fri, 11/04/2014 - 06:24

Oh man, am I exhausted.

I've known my friend Kim for longer than either of us can remember. Until Zoe was born, I thought the connection was purely that our grandmothers knew each other. After Zoe was born, and we gave her my birth mother's name as her middle name, Kim's mother sent me a message indicating that she had known my mother. More on that in a moment.

Kim and I must have interacted when we were small, because it predates my memory of her. My earliest memories are of being a pen pal with her when she lived in Kingaroy. She had a stint in South Carolina, and then in my late high school years, she moved relatively close to me, at Albany Creek, and we got to have a small amount of actual physical contact.

Then I moved to Canberra, and she moved to Melbourne, and it was only due to the wonders of Facebook that we reconnected while I was in the US.

Fast forward many years, and we're finally all back in Brisbane again. Kim is married and has a daughter named Sarah, who is a couple of years older than Zoe and could actually pass off as her older sister. She also has a younger son. Since we've been back in Brisbane, we've had many a play date at each other's homes, and the girls get along famously, to the point where Sarah was talking about her "best friend Zoe" at show and tell at school.

The other thing I've learned since reconnecting with Kim in the past year is that Kim's aunt and my mother were in the same grade at school. Kim actually arranged for me to have a coffee with her aunt when she was visiting from Canberra, and she told me a bunch of stuff about my Mum that I didn't know, so that was really nice.

Kim works from home part time, and I offered to look after Sarah for a day in the school holidays as an alternative to her having to go to PCYC holiday care. Today was that day.

I picked up Zoe from Sarah this morning, as it was roughly in the same direction as Kim's place, and made more sense, and we headed over to Kim's place to pick up Sarah. We arrived only a couple of minutes later than the preferred pick up time, so I was pretty happy with how that worked out.

The plan was to bring Sarah back to our place, and then head over to New Farm Park on the CityCat and have a picnic lunch and a play in the rather fantastic playground in the park over there.

I hadn't made Zoe's lunch prior to leaving the house, so after we got back home again, I let the girls have a play while I made Zoe's lunch. After some play with Marble Run, the girls started doing some craft activity all on their own on the balcony. It was cute watching them try to copy what the other was making. One of them tried gluing two paper cups together by the narrow end. It didn't work terribly well because there wasn't a lot of surface area in contact.

I helped the girls with their craft activity briefly, and then we left on foot to walk to the CityCat terminal. Along the way, I picked up some lunch for myself at the Hawthorne Garage and added it to the small Esky I was carrying with Zoe's lunchbox in it. It was a beautiful day for a picnic. It was warm and clear. I think Sarah found the walk a bit long, but we made it to the ferry terminal relatively incident free. We got lucky, and a ferry was just arriving, and as it happened, they had to change boats, as they do from time to time at Hawthorne, so we would have had plenty of time regardless, as everyone had to get off one boat and onto a new one.

We had a late morning tea at the New Farm Park ferry terminal after we got off, and then headed over to the playground. I claimed a shady spot with our picnic blanket and the girls did their thing.

I alternated between closely shadowing them around the playground and letting them run off on their own. Fortunately they stuck together, so that made keeping track of them slightly easier.

For whatever reason, Zoe was in a bit of a grumpier mood than normal today, and wasn't taking too kindly to the amount of turn taking that was necessary to keep the operation smoothly oiled. Sarah (justifiably) got a bit whiny when she didn't get an equitable amount of time to call the shots on what they did, but aside from that they got along fine.

There was another great climbing tree, which had kids hanging off it all over the place. Both girls wanted to climb it, but needed a little bit of help getting started. Sarah lost her nerve before Zoe did, but even Zoe was surprisingly trepidatious about it, and after shimmying a short distance along a good (but high) branch, wanted to get down.

The other popular activity was a particularly large rope "spider web" climbing frame, which Sarah was very adept at scaling. It was a tad too big for Zoe to manage though, and she couldn't keep up, which frustrated her quite a bit. I was particularly proud of how many times she returned to it to try again, though.

We had our lunch, a little more play time, and the obligatory ice cream. I'd contemplated catching the CityCat further up-river to Sydney Street and then catching the free CityHopper ferry, but the thought of then trying to get two very tired girls to walk from the Hawthorne ferry terminal back home didn't really appeal to me all that much, so I decided to just head back home.

That ended up being a pretty good call, because as it was, trying to get the two of them back home was like herding cats. Sarah was fine, but Zoe was really dragging the chain and getting particularly grumpy. I had to deploy every positive parenting trick that I currently have in my book to keep Zoe moving, but we got there eventually. Fortunately we didn't have any particular deadline.

The girls did some more playing at home while I collapsed on the couch for a bit, and then wanted to do some more craft. We made a couple of crowns and hot-glued lots of bling onto them.

We drove back to Kim's place after that, and the girls played some more there. Sarah nearly nodded off on the way; Zoe was surprisingly chipper. The dynamic changed completely once we were back at Sarah's house. Zoe seemed fine taking Sarah's direction on everything, so I wonder how much of the morning's friction was territorial, with Sarah not used to Zoe calling the shots at Zoe's place.

Kim invited us to stay for dinner. I wasn't really feeling like cooking, and the girls were having a good time, so I decided to stay for dinner, and after they had a bath together we headed home. Zoe stayed awake all the way home, and went to bed without any fuss.

It's pretty hot tonight, and I'm trialling Zoe sleeping without white noise, so we'll see how tonight pans out.

Categories: Elsewhere

Dirk Eddelbuettel: RcppCNPy 0.2.3

Fri, 11/04/2014 - 02:49
R 3.1.0 came out today. Among the (impressive and long, as usual) list of changes is the added ability to specify CXX_STD = CXX11 in order to get C++11 (or the best available subset on older compilers). This brings changes and opportunities too numerous to discuss in this short post. But it also permits us, at long last, to use long long integer types.

For RcppCNPy, this means that we can finally cover NumPy integer data (along with the double precision we had from the start) on all platforms. Python encodes these as an int64, and that type was unavailable (at least on 32-bit OSs) until R made long long available to us. So today I made the change to depend on R 3.1.0 and select C++11, which allowed us to free the code from a number of #ifdef tests. This all worked out swimmingly and the new package has already been rebuilt for Windows.
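Concretely, the opt-in is a single line in a package's src/Makevars (per Writing R Extensions):

```make
# src/Makevars -- request C++11 (requires R >= 3.1.0 for CXX_STD support)
CXX_STD = CXX11
```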

I also updated the vignette, and refreshed its look and feel. Full changes are listed below.

Changes in version 0.2.3 (2014-04-10)
  • src/Makevars now sets CXX_STD = CXX11 which also provides the long long type on all platforms, so integer file support is no longer conditional.

  • Consequently, code conditional on RCPP_HAS_LONG_LONG_TYPES has been simplified and is no longer conditional.

  • The package now depends on R 3.1.0 or later to allow this.

  • The vignette has been updated and refreshed to reflect this.

CRANberries also provides a diffstat report for the latest release. As always, feedback is welcome and the rcpp-devel mailing list off the R-Forge page for Rcpp is the best place to start a discussion.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: Elsewhere

Steve Kemp: A small assortment of content

Thu, 10/04/2014 - 17:34

Today I took down my KVM-host machine, rebooting it and restarting all of my guests. It had been a while since I'd done so and I was a little nervous; as it turned out, this nervousness was prophetic.

I'd forgotten to hardwire the use of proxy_arp so my guests were all broken when the systems came back online.

If you're curious this is what my incoming graph of email SPAM looks like:

I think it is obvious where the downtime occurred, right?

In other news I'm awaiting news from the system administration job I applied for here in Edinburgh, if that doesn't work out I'll need to hunt for another position..

Finally I've started hacking on my console based mail-client some more. It is a modal client which means you're always in one of three states/modes:

  • maildir - Viewing a list of maildir folders.
  • index - Viewing a list of messages.
  • message - Viewing a single message.

As a result of a lot of hacking there is now a fourth mode/state, "text-mode", which allows you to view arbitrary text: for example scrolling up and down a file on disk, reading the manual, or viewing messages in interesting ways.

Support is still basic at the moment, but both of these work:

--
-- Show a single file
--
show_file_contents( "/etc/passwd" )
global_mode( "text" )


function x()
   txt = { "${colour:red}Steve",
           "${colour:blue}Kemp",
           "${bold}Has",
           "${underline}Definitely",
           "Made this work" }
   show_text( txt )
   global_mode( "text" )
end

x()

There will be a new release within the week, I guess; I just need to wire up a few more primitives, write more of a manual, and close some more bugs.

Happy Thursday, or as we say in this house, Hyvää torstai!

Categories: Elsewhere

Joey Hess: Kite: a server's tale

Thu, 10/04/2014 - 17:17

My server, Kite, is finishing its 20th year online.

It started as, a 486 under the desk in my dorm room. Early on, it bounced around the DNS --,, -- before landing on The hardware has changed too, from a succession of desktop machines, it eventually turned into a 2u rack-mount server in the CCCP co-op. And then it went virtual, and international, spending a brief time in Amsterdam, before relocating to England and the kvm-hosting co-op.

Through all this change, and no few reinstalls from scratch, it's had a single distinct personality. This is a multi-user unix system, of the old school, carefully (and not-so-carefully) configured and administered to perform a grab-bag of functions. Whatever the users need.

I read the hacknews newsgroup, and I see, in their descriptions of their server in 1984, the prototype of Kite and all its ilk.

It's consistently had a small group of users, a small subset of my family and friends. Not quite big enough to really turn into a community, and we wall and talk less than we once did.

Exhibit: Kite as it appeared in the 90's

[Intentionally partially broken, being able to read the cgi source code is half the fun.]

Kite was an early server on the WWW, and garnered mention in books and print articles. Not because it did anything important, but because there were few enough interesting web sites that it slightly stood out.

Many times over these 20 years I've wondered what will be the end of Kite's story. It seemed like I would either keep running it indefinitely, or perhaps lose interest. (Or funding -- it's eaten a lot of cash over the years, especially before the current days of $5/month VPS hosting.) But I failed to anticipate what seems to really be happening to it. Just as I didn't fathom, when Kite was perched under my desk, that it would one day be some virtual abstract machine in an unknown computer in another country.

Now it seems that what will happen to Kite is that most of the important parts of it will split off into a constellation of specialized servers. The website, including the user sites, has mostly moved to The DNS server, git server and other crucial stuff is moving to various VPS instances and containers. (The exhibit above is just one more automatically deployed, soulless container..) A large part of Kite has always been about me playing with bleeding-edge stuff and installing random new toys; that has moved to a throwaway personal server at which might be gone tomorrow (or might keep running for free for years).

What it seems will be left is a shell box, with IMAP access to a mail server, and a web server for legacy /~user/ sites, and a few tools that my users need (including that pine program some of them are still stuck on.)

Will it be worth calling that Kite?

[ Kite users: This transition needs to be done by December when the current host is scheduled to be retired. ]

Categories: Elsewhere

Craig Small: WordPress update needed for stable too

Thu, 10/04/2014 - 15:10

Yesterday I mentioned that WordPress had an important security update to 3.8.2. The particular security bugs also impact the stable Debian version of wordpress, so those patches have been backported. I’ve uploaded the changes to the security team, so hopefully there will be a new package soon.

The version you are looking for will be 3.6.1+dfsg-1~deb7u2 and will be on the Debian security mirrors.

Categories: Elsewhere

Michal Čihař: Heartbleed fun

Thu, 10/04/2014 - 10:00

You probably know about the Heartbleed bug in OpenSSL; it is so widespread that it reached the mainstream media as well. As I'm running Debian Wheezy on my servers, they were affected too.

The updated OpenSSL library was installed immediately after it was released, but there was still the possibility that somebody had extracted private data from the server before that (especially as the vulnerability had existed for quite some time). So I've revoked and reissued all SSL certificates. This has the nice benefit that they now use a SHA-256 intermediate CA, compared to the SHA-1 one used on some of them before.
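Not from the original post, but a standard way to check which signature hash a certificate chain uses is the openssl CLI (shown here against a throwaway self-signed certificate; the paths are illustrative):

```shell
# Generate a throwaway self-signed certificate, then inspect its signature algorithm.
openssl req -x509 -newkey rsa:2048 -nodes -sha256 -days 1 \
    -subj "/CN=example.test" -keyout /tmp/test-key.pem -out /tmp/test-cert.pem
openssl x509 -noout -text -in /tmp/test-cert.pem | grep 'Signature Algorithm'
```

Running the same `openssl x509 -noout -text` inspection against a reissued certificate (or its intermediate CA) shows whether it is signed with sha256WithRSAEncryption rather than sha1WithRSAEncryption.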

Though there is no way to figure out whether any information leaked or not, I have decided to reset all OAuth access tokens (e.g. GitHub), so if you have used GitHub login for Weblate, you will have to reauthenticate.

Filed under: English phpMyAdmin Weblate

Categories: Elsewhere