Planet Debian

Planet Debian - http://planet.debian.org/

Elena 'valhalla' Grandi: DUCC-IT

1 hour 27 min ago
There is exactly one month left before DUCC-IT, the Debian Ubuntu Community Conference Italia: a great chance to meet your free-software-developing neighbours.

This year it will be just one day, in Cesena, and it will include events targeted to both the community and a wider public.

The Call for Papers is still open, but only for a few days, so if you want to propose a talk or session, hurry up!

#duccit14 @Debian
Categories: Elsewhere

Andrew Pollock: [life] Day 79: Magic, flu shots, and play dates and dinner

12 hours 38 min ago

Zoe slept until 7:45am this morning, which is absolutely unheard of in our house. She did wake up at about 5:15am yelling out for me because she'd kicked her doona off and lost Cowie, but went back to sleep once I sorted that out.

She was super grumpy when she woke up, which I mostly attributed to being hungry, so I got breakfast into her as quickly as possible and she perked up afterwards.

Today there was a free magic show at the Bulimba Library at 10:30am, so we biked down there. I really need to work on curbing Zoe's procrastination. We started trying to leave the house at 10am, and as it was, we only got there with 2 minutes to spare before the show started.

Magic Glen put on a really good show: part comedian, part sleight-of-hand magician, and very entertaining. There were plenty of gags in it for the adults. Zoe started out sitting in my lap, but part way through just got up and moved closer to the front to sit with the other kids. I think she enjoyed herself. I'd have no hesitation hiring this guy for a future birthday party.

Zoe had left her two stuffed toys from the car at Megan's house on Tuesday after our Port of Brisbane tour, and so after the magic show we biked to her place to retrieve them. It was close to lunch by this stage, so we stayed for lunch, and the girls had a bit of a play in the back yard while Megan's little sister napped.

It was getting close to time to leave for our flu shots, so I decided to just bike directly to the doctor from Megan's place. I realised after we left that we'd still left the stuffed toys behind, but the plan was to drive back after our flu shots and have another swim in their neighbour's pool, so it was all good.

We got to the doctor, and waited for Sarah to arrive. Sarah and I weren't existing patients at Zoe's doctor, but we'd decided to get the flu shot as a family to try and ease the experience for Zoe. We both had to do new patient intake stuff before we had a consult with Zoe's doctor and got prescriptions for the flu shot.

I popped into the adjacent pharmacy to get the prescriptions filled, and then the nurse gave us the shots.

For the last round of vaccinations that Zoe received, she needed three, and she screamed the building down at the first jab. The poor nurse was very shaken, so we've been working to try and get her to feel more at ease about this one.

Zoe went first, and she took a deep breath, and she was winding up to freak out when she had her shot, but then it was all over, and she let the breath go, and looked around with a kind of "is that it?" reaction. She didn't even cry. I was so proud of her.

I got my shot, and then Sarah got hers, and we had to sit in the waiting room for 10 minutes to make sure we didn't turn into pumpkins, and we were on our way.

We biked home, I grabbed our swim gear, and we drove back to Megan's place.

The pool ended up being quite cold. Megan didn't want to get in, and Zoe didn't last long either. Megan's Mum was working back late, so I invited Megan, her Dad and her sister over for dinner, and we headed home so I could prepare it. One of Zoe's stuffed toys had been located.

We had a nice dinner of deviled sausages made in the Thermomix, and for a change I didn't have a ton of leftovers. Jason had found the other stuffed toy in his truck, so we'd finally tracked them both down.

After Megan and family went home, I got Zoe to bed without much fuss, and pretty much on time. I think she should sleep well tonight.

Categories: Elsewhere

Wouter Verhelst: Call for help for DVswitch maintenance

Wed, 16/04/2014 - 18:24

I took over "maintaining" DVswitch from Ben Hutchings a few years ago, after Ben realized he didn't have the time to work on it properly anymore.

After a number of years, I have to admit that I haven't done a very good job. Not because I didn't want to work on it, but mainly because I don't have enough time to keep fixing DVswitch against the numerous moving targets that it uses; the APIs of libav and of liblivemedia change quickly enough that just making sure everything remains compilable and in working order is quite a job.

DVswitch is used by many people; DebConf, FOSDEM, and the CCC are just a few examples, but I know of at least three more.

Most of these (apart from DebConf and FOSDEM) maintain local patches which I've been wanting to merge into the upstream version of dvswitch. However, my time is limited, and over the past few years I've not been able to get dvswitch into a state where I felt confident I could upload it into Debian unstable for a release. One step we took towards that goal was to remove the liblivemedia dependency (which implied removing support for RTSP sources). Unfortunately, the resulting situation still wasn't good enough, since libav had changed its API enough that current versions of DVswitch compiled against current versions of libav will segfault if you try to do anything useful.

I must admit to myself that I don't have the time and/or skill set to maintain DVswitch on an acceptable level all by myself. So, this is a call for help:

If you're using DVswitch for your conference and want to continue doing so, please talk to us. The first things we'll need to do:

  • Massage the code back into working order (when compiled against current libav)
  • Fix my buildbot instance so that my grand plan of having nightly build/test runs against libav master actually works.
  • Merge patches from the SUSE and CCC people that look nice
  • Properly release dvswitch 0.9 (or maybe 1.0?)
  • Party!

See you there?

Categories: Elsewhere

Richard Hartmann: secure password storage

Wed, 16/04/2014 - 08:47

Dear lazyweb,

for obvious reasons I am in the process of cycling out a lot of passwords.

For the last decade or so, I have been using openssl.vim to store less-frequently-used passwords and it's still working fine. Yet, it requires some manual work, not least of which is manually adding random garbage at the start of the plain text (and in other places) every time I save my passwords. In the context of changing a lot of passwords at once, this has started to become tedious. Plus, I am not sure that a tool of the complexity and feature-set of Vim is the best choice for security-critical work on encrypted files.

Long story short, I am looking for alternatives. I did some research but couldn't come up with anything I truly liked; as there's bound to be tools which fit the requirements of like-minded people, I decided to ask around a bit.

My personal short-list of requirements is:

  • Strong crypto
  • CLI-based
  • Must add random padding at the front of the plain text and ideally in other places as well
  • Should ideally pad the stored file to a few kB so size-based attacks are foiled
  • Must not allow itself to be swapped out, etc
  • Must not be hosted, cloud-based, as-a-service, or otherwise compromised-by-default
  • Should offer a way to search in the decrypted plain text; nano- or vi-level comfort is fine
  • Either key-value storage or just a large free-form text area would be fine, with a slight preference for free-form text
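
For illustration only, here is a rough shell sketch of the padding requirements using plain gpg; the markers, sizes, file names and the idea of base64-ing urandom are my own assumptions, and it does nothing about swap or in-memory handling:

# prepend a random amount of garbage, pad the tail out by a few kB,
# then encrypt the lot symmetrically
{
  head -c $((512 + RANDOM % 512)) /dev/urandom | base64
  echo "=== BEGIN PASSWORDS ==="
  cat passwords.txt
  echo "=== END PASSWORDS ==="
  head -c 4096 /dev/urandom | base64
} | gpg --symmetric --cipher-algo AES256 --output passwords.gpg

Searching would then be a matter of decrypting to a pipe (gpg --decrypt passwords.gpg | less) rather than to a file.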

Any and all feedback appreciated. Depending on the level of feedback, I may summarize my own findings and suggestions into a follow-up post.

Categories: Elsewhere

Andrew Pollock: [life] Day 78: Alginate, dragon boats and relatives

Wed, 16/04/2014 - 05:54

I ordered some alginate the other day, and it arrived yesterday, but we were out, so I had to pick it up from the post office this morning.

Anshu and I picked it up before Zoe was dropped off. We had a couple of attempts at making some, but didn't quite get the ratios or the quantity right, and we were too slow, so we'll have to try again. The plan is to try and make a cast of Zoe's hand, since we were messing around with plaster of Paris recently. I've found a good Instructable to try and follow.

Nana and her dragon boating team were competing in the Australian Dragon Boat Championships over Easter, and her first race was today. It also ended up that today was the best day to try and go and watch, so when she called to say her first race would be around noon, I quickly decided we should jump in the car and head up to Kawana Waters.

We abandoned the alginate, and I slapped together a picnic lunch for Zoe and me, and we bid Anshu farewell and drove up.

Zoe's fever seemed to break yesterday afternoon after Sarah picked her up, and she slept well, but despite all that, she napped in the car on the way up, which was highly unusual, but helped pass the time. She woke up when we arrived. I managed to get a car park not too far from the finish line, and we managed to find Nana, whose team was about to enter the marshalling area.

Her boat was closest to the shore we were watching from, and her boat came second in their qualifying round for the 200 metre race, meaning they went straight through to the semi-finals.

The semi-finals were going to be much later, and I wanted to capitalise on the fact that we were going to have to drive right past my Mum and Dad's place on the way home to try and see my sister and her family, since we missed them on Monday.

We headed back after lunch and a little bit of splashing around in the lake, and ended up staying for dinner at Mum and Dad's. Zoe had a great time catching up with her cousin Emma, and fooling around with Grandpa and Uncle Michael.

She got to bed a little bit late by the time we got home, but I'm hopeful she'll sleep well tonight.

Categories: Elsewhere

David Pashley: Bad Password Policies

Wed, 16/04/2014 - 03:03

After the whole Heartbleed fiasco, I’ve decided to continue my march towards improving my online security. I’d already begun the process of using LastPass to store my passwords and generate random passwords for each site, but I hadn’t completed the process, with some sites still using the same passwords, and some having less-than-ideal strength passwords, so I spent some time today improving my password position. Here are some of the bad examples of password policy I’ve discovered today.

First up we have Live.com. A maximum of 16 characters from the Microsoft auth service. Seems to accept any character though.


This excellent example is from creditexpert.co.uk, one of the credit agencies here in the UK. They not only restrict to 20 characters, they restrict you to @, ., _ or |. So much for teaching people how to protect themselves online.

Here’s Tesco.com after attempting to change my password to “QvHn#9#kDD%cdPAQ4&b&ACb4x%48#b”. If you can figure out how this violates their rules, I’d love to know. And before you ask, I tried without numbers and that still failed, so it can’t be the “three and only three” thing. The only other idea might be that they meant “i.e.” rather than “e.g.”, but I didn’t test that.

Here’s a poor choice from ft.com, refusing to accept non-alphanumeric characters. On the plus side they did allow the full 30 characters in the password.


The finest example of a poor security policy is a company that will remain nameless due to their utter lack of security. Not only did they not use HTTPS, they accepted a 30 character password and silently truncated it to 20 characters. I know this because, when I logged out and tried to log in again, I used the “forgot my password” option and they emailed me the password in plain text.

I have also been setting up two-factor authentication where possible. Most sites use the Google Authenticator application on your mobile to give you a six-digit code to type in in addition to your password. I highly recommend you set it up too. There’s a useful list of sites that implement 2FA, with links to their documentation, at http://twofactorauth.org/.
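
If you would rather generate those codes on a computer than on a phone, the oathtool utility (from the oath-toolkit package in Debian) computes the same standard TOTP codes; the secret below is just a placeholder:

# print the current six-digit TOTP code for a base32-encoded secret
oathtool --totp -b JBSWY3DPEHPK3PXP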

I realise that my choice of LastPass requires me to trust them, but I think the advantages outweigh the disadvantages of having many sites using the same passwords and/or low strength passwords. I know various people cleverer than me have looked into their system and failed to find any obvious flaws.

The post Bad Password Policies appeared first on David Pashley.com.

Categories: Elsewhere

Petter Reinholdtsen: FreedomBox milestone - all packages now in Debian Sid

Tue, 15/04/2014 - 22:10

The FreedomBox project is working on providing the software and hardware to make it easy for non-technical people to host their data and communication at home, and to communicate with their friends and family encrypted and away from prying eyes. It is still going strong, and today a major milestone was reached.

Today, the last of the packages currently used by the project to create the system images was accepted into Debian Unstable. It was the freedombox-setup package, which is used to configure the images during build and on the first boot. Now all one needs to get going is the build code from the freedom-maker git repository and packages from Debian. And once the freedombox-setup package enters testing, we can build everything directly from Debian. :)

Some key packages used by Freedombox are freedombox-setup, plinth, pagekite, tor, privoxy, owncloud and dnsmasq. There are plans to integrate more packages into the setup. User documentation is maintained on the Debian wiki. Please check out the manual and help us improve it.

To test for yourself and create boot images with the FreedomBox setup, run this on a Debian machine using a user with sudo rights to become root:

sudo apt-get install git vmdebootstrap mercurial python-docutils \
  mktorrent extlinux virtualbox qemu-user-static binfmt-support \
  u-boot-tools
git clone http://anonscm.debian.org/git/freedombox/freedom-maker.git \
  freedom-maker
make -C freedom-maker dreamplug-image raspberry-image virtualbox-image

Root access is needed to run debootstrap and mount loopback devices. See the README in the freedom-maker git repo for more details on the build. If you do not want all three images, trim the make line. Note that the virtualbox-image target is not really VirtualBox specific. It creates an x86 image usable in KVM, qemu, VMware and any other x86 virtual machine environment. You might need the version of vmdebootstrap in Jessie to get the build working, as it includes fixes for a race condition with kpartx.

If you instead want to install using a Debian CD and the preseed method, boot a Debian Wheezy ISO and use this boot argument to load the preseed values:

url=http://www.reinholdtsen.name/freedombox/preseed-jessie.dat
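
At the installer boot prompt this is typically appended to the default boot entry's kernel command line, along these lines (the exact label depends on the image's boot menu):

install url=http://www.reinholdtsen.name/freedombox/preseed-jessie.dat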

I have not tested it myself the last few weeks, so I do not know if it still works.

If you wonder how to help, one task you could look at is using systemd as the boot system. It will become the default for Linux in Jessie, so we need to make sure it is usable on the Freedombox. I did a simple test a few weeks ago, and noticed dnsmasq failed to start during boot when using systemd. I suspect there are other problems too. :) To detect problems, there is a test suite included, which can be run from the plinth web interface.

Give it a go and let us know how it goes on the mailing list, and help us get the new release published. :) Please join us on IRC (#freedombox on irc.debian.org) and the mailing list if you want to help make this vision come true.

Categories: Elsewhere

Bálint Réczey: Proposing amd64-hardened architecture for Debian

Tue, 15/04/2014 - 12:02

In the face of last week’s Heartbleed bug, the need to improve the security of our systems became more apparent than usual. In Debian there are widely used methods for hardening packages at build time and guidelines for improving the default installations’ security.

Employing such methods usually comes at a cost, for example slower execution of binaries due to additional checks, or additional configuration steps when setting up a system. Balancing usability and security, Debian chose an approach intended to satisfy most users, using C/C++ features which only slightly decrease the execution speed of built binaries, and reasonable defaults in package installations.

All the architectures supported by Debian aim to use the same methods for enhancing security, but it does not have to stay that way. Amd64 is the most widely used architecture of Debian according to popcon, and amd64 hardware comes with powerful CPUs. I think there would be a significant number of people (myself being one of them :-)) who would happily use a version of Debian with more security features enabled by default, sacrificing some CPU power and installing and setting up additional packages.

My proposal for serving those security-focused users is introducing a new architecture targeting amd64 hardware, but with more security-related C/C++ features turned on for every package (currently hardening has to be enabled by the maintainers in some way), through compiler flags as a start.
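
For comparison, today's per-package opt-in works roughly as sketched below; the flag values in the comment are illustrative, from memory, rather than authoritative:

# maintainers opt in via DEB_BUILD_MAINT_OPTIONS in debian/rules;
# the effect can be previewed from a shell:
DEB_BUILD_MAINT_OPTIONS=hardening=+all dpkg-buildflags --get CFLAGS
# => something like: -g -O2 -fstack-protector --param=ssp-buffer-size=4
#    -fPIE -Wformat -Werror=format-security

The idea of amd64-hardened is that such flags (and stronger ones) would simply be the default for every package on the new architecture.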

Introducing the new architecture would also let package maintainers enable additional dependencies and build rules selectively for the new architecture, improving security further. On the users’ side, the advantage of having a separate security-enhanced architecture instead of a Debian derivative is the potential of installing a set of security-enhanced packages using multiarch. You could have a fast amd64 installation as a base and run Apache or any other sensitive server from the amd64-hardened packages!

I have sent the proposal for discussion to debian-dev, too. Please join the discussion there or leave a comment here.

Categories: Elsewhere

Andrew Pollock: [life] Day 77: Port of Brisbane tour

Tue, 15/04/2014 - 07:14

Sarah dropped Zoe around this morning at about 8:30am. She was still a bit feverish, but otherwise in good spirits, so I decided to stick with my plan for today, which was a tour of the Port of Brisbane.

Originally the plan had been to do it with Megan and her Dad, Jason, but Jason had some stuff to work on on his house, so I offered to take Megan with us to allow him more time to work on the house uninterrupted.

I was casting around for something to do to pass the time until Jason dropped Megan off at 10:30am, and I thought we could do some foot painting. We searched high and low for something I could use as a foot washing bucket, other than the mop bucket, which I didn't want to use because of potential chemical residue. I gave up because I couldn't find anything suitable, and we watched a bit of TV instead.

Jason dropped Megan around, and we immediately jumped in the car and headed out to the Port. I missed the on ramp for the M4 from Lytton Road, and so we took the slightly longer Lytton Road route, which was fine, because we had plenty of time to kill.

The plan was to get there for about 11:30am, have lunch in the observation cafe on the top floor of the visitor's centre building, and then get on the tour bus at 12:30pm. We ended up arriving much earlier than 11:30am, so we looked around the foyer of the visitor's centre for a bit.

It was quite a nice building. The foyer area had some displays, but the most interesting thing (for the girls) was an interactive webcam of the shore bird roost across the street. There was a tablet where you could control the camera and zoom in and out on the birds roosting on a man-made island. That passed the time nicely. One of the staff also gave the girls Easter eggs as we arrived.

We went up to the cafe for lunch next. The view was quite good from the 7th floor. On one side you could look out over the bay, notably Saint Helena Island, and on the other side you got quite a good view of the port operations and the container park.

Lunch didn't take all that long, and the girls were getting a bit rowdy, running around the cafe, so we headed back downstairs to kill some more time looking at the shore birds with the webcam, and then we boarded the bus.

It was just the three of us and three other adults, which was good. The girls were pretty fidgety, and I don't think they got that much out of it. The tour didn't really go anywhere that you couldn't go yourself in your own car, but you did get running commentary from the driver, which made all the difference. The girls spent the first 5 minutes trying to figure out where his voice was coming from (he was wired up with a microphone).

The thing I found most interesting about the port operations was the amount of automation. There were three container terminals, and the two operated by DP World and Hutchinson Ports employed fully automated overhead cranes for moving containers around. Completely unmanned, they'd go pick a container from the stack and place it on a waiting truck below.

What I found even more fascinating was the Patrick terminal, which used fully automated straddle carriers, which would, completely autonomously, move about the container park, pick up a container, and then move over to a waiting truck in the loading area and place it on the truck. There were 27 of these things moving around the container park at a fairly decent clip.

Of course the girls didn't really appreciate any of this, and half way through the tour Megan was busting to go to the toilet, despite going before we started the tour. I was worried about her having an accident before we got back, but she didn't, so it was all good.

I'd say in terms of a successful excursion, I'd score it about a 4 out of 10, because the girls didn't really enjoy the bus tour all that much. I was hoping we'd see more ships, but there weren't many (if any) in port today. They did enjoy the overall outing. Megan spontaneously thanked me as we were leaving, which was sweet.

We picked up the blank cake I'd ordered from Woolworths on the way home, and then dropped Megan off. Zoe wanted to play, so we hung around for a little while before returning home.

Zoe watched a bit more TV while we waited for Sarah to pick her up. Her fever picked up a bit more in the afternoon, but she was still very perky.

Categories: Elsewhere

Dirk Eddelbuettel: BH release 1.54.0-2

Tue, 15/04/2014 - 03:47
Yesterday's release of RcppBDT 0.2.3 led to an odd build error. If one used at the same time a 32-bit OS, a compiler as recent as g++ 4.7 and the Boost 1.54.0 headers (directly or via the BH package), then the file lexical_cast.hpp barked and failed to compile for lack of a 128-bit integer (which is not a surprise on a 32-bit OS).

After looking at this for a bit, and looking at some related bug report, I came up with a simple fix (which I mentioned in an update to the RcppBDT 0.2.3 release post). Sleeping over it, and comparing to the Boost 1.55 file, showed that the hunch was right, and I have since made a new release 1.54.0-2 of the BH package which contains the fix.

Changes in version 1.54.0-2 (2014-04-14)
  • Bug fix to lexical_cast.hpp which now uses the test for INT128 which the rest of Boost uses, consistent with Boost 1.55 too.

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

Comments and suggestions are welcome via the mailing list or issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: Elsewhere

Colin Watson: Porting GHC: A Tale of Two Architectures

Tue, 15/04/2014 - 03:45

We had some requests to get GHC (the Glasgow Haskell Compiler) up and running on two new Ubuntu architectures: arm64, added in 13.10, and ppc64el, added in 14.04. This has been something of a saga, and has involved rather more late-night hacking than is probably good for me.

Book the First: Recalled to a life of strange build systems

You might not know it from the sheer bulk of uploads I do sometimes, but I actually don't speak a word of Haskell and it's not very high up my list of things to learn. But I am a pretty experienced build engineer, and I enjoy porting things to new architectures: I'm firmly of the belief that breadth of architecture support is a good way to shake out certain categories of issues in code, that it's worth doing aggressively across an entire distribution, and that, even if you don't think you need something now, new requirements have a habit of coming along when you least expect them and you might as well be prepared in advance. Furthermore, it annoys me when we have excessive noise in our build failure and proposed-migration output and I often put bits and pieces of spare time into gardening miscellaneous problems there, and at one point there was a lot of Haskell stuff on the list and it got a bit annoying to have to keep sending patches rather than just fixing things myself, and ... well, I ended up as probably the only non-Haskell-programmer on the Debian Haskell team and found myself fixing problems there in my free time. Life is a bit weird sometimes.

Bootstrapping packages on a new architecture is a bit of a black art that only a fairly small number of relatively bitter and twisted people know very much about. Doing it in Ubuntu is specifically painful because we've always forbidden direct binary uploads: all binaries have to come from a build daemon. Compilers in particular often tend to be written in the language they compile, and it's not uncommon for them to build-depend on themselves: that is, you need a previous version of the compiler to build the compiler, stretching back to the dawn of time where somebody put things together with a big magnet or something. So how do you get started on a new architecture? Well, what we do in this case is we construct a binary somehow (usually involving cross-compilation) and insert it as a build-dependency for a proper build in Launchpad. The ability to do this is restricted to a small group of Canonical employees, partly because it's very easy to make mistakes and partly because things like the classic "Reflections on Trusting Trust" are in the backs of our minds somewhere. We have an iron rule for our own sanity that the injected build-dependencies must themselves have been built from the unmodified source package in Ubuntu, although there can be source modifications further back in the chain. Fortunately, we don't need to do this very often, but it does mean that as somebody who can do it I feel an obligation to try and unblock other people where I can.

As far as constructing those build-dependencies goes, sometimes we look for binaries built by other distributions (particularly Debian), and that's pretty straightforward. In this case, though, these two architectures are pretty new and the Debian ports are only just getting going, and as far as I can tell none of the other distributions with active arm64 or ppc64el ports (or trivial name variants) has got as far as porting GHC yet. Well, OK. This was somewhere around the Christmas holidays and I had some time. Muggins here cracks his knuckles and decides to have a go at bootstrapping it from scratch. It can't be that hard, right? Not to mention that it was a blocker for over 600 entries on that build failure list I mentioned, which is definitely enough to make me sit up and take notice; we'd even had the odd customer request for it.

Several attempts later and I was starting to doubt my sanity, not least for trying in the first place. We ship GHC 7.6, and upgrading to 7.8 is not a project I'd like to tackle until the much more experienced Haskell folks in Debian have switched to it in unstable. The porting documentation for 7.6 has bitrotted more or less beyond usability, and the corresponding documentation for 7.8 really isn't backportable to 7.6. I tried building 7.8 for ppc64el anyway, picking that on the basis that we had quicker hardware for it and didn't seem likely to be particularly more arduous than arm64 (ho ho), and I even got to the point of having a cross-built stage2 compiler (stage1, in the cross-building case, is a GHC binary that runs on your starting architecture and generates code for your target architecture) that I could copy over to a ppc64el box and try to use as the base for a fully-native build, but it segfaulted incomprehensibly just after spawning any child process. Compilers tend to do rather a lot, especially when they're built to use GCC to generate object code, so this was a pretty serious problem, and it resisted analysis. I poked at it for a while but didn't get anywhere, and I had other things to do so declared it a write-off and gave up.

Book the Second: The golden thread of progress

In March, another mailing list conversation prodded me into finding a blog entry by Karel Gardas on building GHC for arm64. This was enough to be worth another look, and indeed it turned out that (with some help from Karel in private mail) I was able to cross-build a compiler that actually worked and could be used to run a fully-native build that also worked. Of course this was 7.8, since as I mentioned cross-building 7.6 is unrealistically difficult unless you're considerably more of an expert on GHC's labyrinthine build system than I am. OK, no problem, right? Getting a GHC at all is the hard bit, and 7.8 must be at least as capable as 7.6, so it should be able to build 7.6 easily enough ...

Not so much. What I'd missed here was that compiler engineers generally only care very much about building the compiler with older versions of itself, and if the language in question has any kind of deprecation cycle then the compiler itself is likely to be behind on various things compared to more typical code since it has to be buildable with older versions. This means that the removal of some deprecated interfaces from 7.8 posed a problem, as did some changes in certain primops that had gained an associated compatibility layer in 7.8 but nobody had gone back to put the corresponding compatibility layer into 7.6. GHC supports running Haskell code through the C preprocessor, and there's a __GLASGOW_HASKELL__ definition with the compiler's version number, so this was just a slog tracking down changes in git and adding #ifdef-guarded code that coped with the newer compiler (remembering that stage1 will be built with 7.8 and stage2 with stage1, i.e. 7.6, from the same source tree). More inscrutably, GHC has its own packaging system called Cabal which is also used by the compiler build process to determine which subpackages to build and how to link them against each other, and some crucial subpackages weren't being built: it looked like it was stuck on picking versions from "stage0" (i.e. the initial compiler used as an input to the whole process) when it should have been building its own. Eventually I figured out that this was because GHC's use of its packaging system hadn't anticipated this case, and was selecting the higher version of the ghc package itself from stage0 rather than the version it was about to build for itself, and thus never actually tried to build most of the compiler. Editing ghc_stage1_DEPS in ghc/stage1/package-data.mk after its initial generation sorted this out. One late night building round and round in circles for a while until I had something stable, and a Debian source upload to add basic support for the architecture name (and other changes which were a bit over the top in retrospect: I didn't need to touch the embedded copy of libffi, as we build with the system one), and I was able to feed this all into Launchpad and watch the builders munch away very satisfyingly at the Haskell library stack for a while.

This was all interesting, and finally all that work was actually paying off in terms of getting to watch a slew of several hundred build failures vanish from arm64 (the final count was something like 640, I think). The fly in the ointment was that ppc64el was still blocked, as the problem there wasn't building 7.6, it was getting a working 7.8. But now I really did have other much more urgent things to do, so I figured I just wouldn't get to this by release time and stuck it on the figurative shelf.

Book the Third: The track of a bug

Then, last Friday, I cleared out my urgent pile and thought I'd have another quick look. (I get a bit obsessive about things like this that smell of "interesting intellectual puzzle".) slyfox on the #ghc IRC channel gave me some general debugging advice and, particularly usefully, a reduced example program that I could use to debug just the process-spawning problem without having to wade through noise from running the rest of the compiler. I reproduced the same problem there, and then found that the program crashed earlier (in stg_ap_0_fast, part of the run-time system) if I compiled it with +RTS -Da -RTS. I nailed it down to a small enough region of assembly that I could see all of the assembly, the source code, and an intermediate representation or two from the compiler, and then started meditating on what makes ppc64el special.

You see, the vast majority of porting bugs come down to what I might call gross properties of the architecture. You have things like whether it's 32-bit or 64-bit, big-endian or little-endian, whether char is signed or unsigned, that sort of thing. There's a big table on the Debian wiki that handily summarises most of the important ones. Sometimes you have to deal with distribution-specific things like whether GL or GLES is used; often, especially for new variants of existing architectures, you have to cope with foolish configure scripts that think they can guess certain things from the architecture name and get it wrong (assuming that powerpc* means big-endian, for instance). We often have to update config.guess and config.sub, and on ppc64el we have the additional hassle of updating libtool macros too. But I've done a lot of this stuff and I'd accounted for everything I could think of. ppc64el is actually a lot like amd64 in terms of many of these porting-relevant properties, and not even that far off arm64 which I'd just successfully ported GHC to, so I couldn't be dealing with anything particularly obvious. There was some hand-written assembly which certainly could have been problematic, but I'd carefully checked that this wasn't being used by the "unregisterised" (no specialised machine dependencies, so relatively easy to port but not well-optimised) build I was using. A problem around spawning processes suggested a problem with SIGCHLD handling, but I ruled that out by slowing down the first child process that it spawned and using strace to confirm that SIGSEGV was the first signal received. What on earth was the problem?

From some painstaking gdb work, one thing I eventually noticed was that stg_ap_0_fast's local stack appeared to be being corrupted by a function call, specifically a call to the colourfully-named debugBelch. Now, when IBM's toolchain engineers were putting together ppc64el based on ppc64, they took the opportunity to fix a number of problems with their ABI: there's an OpenJDK bug with a handy list of references. One of the things I noticed there was that there were some stack allocation optimisations in the new ABI, which affected functions that don't call any vararg functions and don't call any functions that take enough parameters that some of them have to be passed on the stack rather than in registers. debugBelch takes varargs: hmm. Now, the calling code isn't quite in C as such, but in a related dialect called "Cmm", a variant of C-- (yes, minus), that GHC uses to help bridge the gap between the functional world and its code generation, and which is compiled down to C by GHC. When importing C functions into Cmm, GHC generates prototypes for them, but it doesn't do enough parsing to work out the true prototype; instead, they all just get something like extern StgFunPtr f(void);. In most architectures you can get away with this, because the arguments get passed in the usual calling convention anyway and it all works out, but on ppc64el this means that the caller doesn't generate enough stack space and then the callee tries to save its varargs onto the stack in an area that in fact belongs to the caller, and suddenly everything goes south. Things were starting to make sense.

Now, debugBelch is only used in optional debugging code; but runInteractiveProcess (the function associated with the initial round of failures) takes no fewer than twelve arguments, plenty to force some of them onto the stack. I poked around the GCC patch for this ABI change a bit and determined that it only optimised away the stack allocation if it had a full prototype for all the callees, so I guessed that changing those prototypes to extern StgFunPtr f(); might work: it's still technically wrong, not least because omitting the parameter list is an obsolescent feature in C11, but it's at least just omitting information about the parameter list rather than actively lying about it. I tweaked that and ran the cross-build from scratch again. Lo and behold, suddenly I had a working compiler, and I could go through the same build-7.6-using-7.8 procedure as with arm64, much more quickly this time now that I knew what I was doing. One upstream bug, one Debian upload, and several bootstrapping builds later, and GHC was up and running on another architecture in Launchpad. Success!

Epilogue

There's still more to do. I gather there may be a Google Summer of Code project in Linaro to write proper native code generation for GHC on arm64: this would make things a good deal faster, but also enable GHCi (the interpreter) and Template Haskell, and thus clear quite a few more build failures. Since there's already native code generation for ppc64 in GHC, getting it going for ppc64el would probably only be a couple of days' work at this point. But these are niceties by comparison, and I'm more than happy with what I got working for 14.04.

The upshot of all of this is that I may be the first non-Haskell-programmer to ever port GHC to two entirely new architectures. I'm not sure if I gain much from that personally aside from a lot of lost sleep and being considered extremely strange. It has, however, been by far the most challenging set of packages I've ported, and a fascinating trip through some odd corners of build systems and undefined behaviour that I don't normally need to touch.

Categories: Elsewhere

Richard Hartmann: git-annex corner case: Changing commit messages retroactively and after syncing

Tue, 15/04/2014 - 00:12

This is half a blog post and half a reminder for my future self.

So let's say you used the following commands:

git add foo
git annex add bar
git annex sync
# move to different location with different remotes available
git add quux
git annex add quuux
git annex sync

What I wanted to happen was to simply sync the already committed stuff to the other remotes. What happened instead was git annex sync's automagic commit feature (which you cannot disable, it seems) doing its job: commit what was added earlier and use "git-annex automatic sync" as the commit message.

This is not a problem in and of itself, but as this is my master annex and as I have managed to maintain clean commit messages for the last few years, I felt the need to clean this mess up.

Changing old commit messages is easy:

git rebase --interactive HEAD~3

Pick the r option for "reword" and amend the two commit messages. I did the same on my remote and on all the branches I could find with git branch -a. Problem is, git-annex pulls in changes from refs which are not shown as branches; run git annex sync and back come the old commits, along with a merge commit like an ugly cherry on top. Blegh.

I decided to leave my comfort zone and ended up with the following:

# always back up before poking refs
git clone --mirror repo backup

git reset --hard 1234
git show-ref | grep master
# for every ref returned, do:
git update-ref $ref 1234

Rinse and repeat for every remote, git annex sync, et voilà. And yes, I avoided using an actual loop on purpose; sometimes, doing things slowly and by hand just feels safer.
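
(For completeness, the loop version would look roughly like the following; it is untested, so treat it as a sketch:)

# rewrite every matching ref to point at the fixed commit
git show-ref | grep master | while read sha ref; do
    git update-ref "$ref" 1234
done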

For good measure, I am running

git fsck && git annex fsck

on all my remotes now, but everything looks good up to now.

Categories: Elsewhere

Daniel Kahn Gillmor: OTR key replacement (heartbleed)

Mon, 14/04/2014 - 19:45
I'm replacing my OTR key for XMPP because of heartbleed (see below).

If the plain ASCII text below is mangled beyond verification, you can retrieve a copy of it from my web site that should be able to be verified.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

OTR Key Replacement for XMPP dkg@jabber.org
===========================================
Date: 2014-04-14

My main XMPP account is dkg@jabber.org. I prefer OTR [0] conversations
when using XMPP for private discussions.

I was using irssi to connect to XMPP servers, and irssi relies on
OpenSSL for the TLS connections. I was using it with versions of
OpenSSL that were vulnerable to the "Heartbleed" attack [1]. It's
possible that my OTR long-term secret key was leaked via this attack.

As a result, I'm changing my OTR key for this account.

The new, correct OTR fingerprint for the XMPP account at dkg@jabber.org is:

  F8953C5D 48ABABA2 F48EE99C D6550A78 A91EF63D

Thanks for taking the time to verify your peers' fingerprints. Secure
communication is important not only to protect yourself, but also to
protect your friends, their friends and so on.

Happy Hacking,

  --dkg (Daniel Kahn Gillmor)

Notes:
[0] OTR: https://otr.cypherpunks.ca/
[1] Heartbleed: http://heartbleed.com/

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQJ8BAEBCgBmBQJTTBF+XxSAAAAAAC4AKGlzc3Vlci1mcHJAbm90YXRpb25zLm9w
ZW5wZ3AuZmlmdGhob3JzZW1hbi5uZXRFQjk2OTEyODdBN0FEREUzNzU3RDkxMUVB
NTI0MDFCMTFCRkRGQTVDAAoJEKUkAbEb/fpcYwkQAKLzEnTV1lrK6YrhdvRnuYnh
Bh9Ad2ZY44RQmN+STMEnCJ4OWbn5qx/NrziNVUZN6JddrEvYUOxME6K0mGHdY2KR
yjLYudsBuSMZQ+5crZkE8rjBL8vDj8Dbn3mHyT8bAbB9cmASESeQMu96vni15ePd
2sB7iBofee9YAoiewI+xRvjo2aRX8nbFSykoIusgnYG2qwo2qPaBVOjmoBPB5YRI
PkN0/hAh11Ky0qQ/GUROytp/BMJXZx2rea2xHs0mplZLqJrX400u1Bawllgz3gfV
qQKKNc3st6iHf3F6p6Z0db9NRq+AJ24fTJNcQ+t07vMZHCWM+hTelofvDyBhqG/r
l8e4gdSh/zWTR/7TR3ZYLCiZzU0uYNd0rE3CcxDbnGTUS1ZxooykWBNIPJMl1DUE
zzcrQleLS5tna1b9la3rJWtFIATyO4dvUXXa9wU3c3+Wr60cSXbsK5OCct2KmiWY
fJme0bpM5m1j7B8QwLzKqy/+YgOOJ05QDVbBZwJn1B7rvUYmb968yLQUqO5Q87L4
GvPB1yY+2bLLF2oFMJJzFmhKuAflslRXyKcAhTmtKZY+hUpxoWuVa1qLU3bQCUSE
MlC4Hv6vaq14BEYLeopoSb7THsIcUdRjho+WEKPkryj6aVZM5WnIGIS/4QtYvWpk
3UsXFdVZGfE9rfCOLf0F
=BGa1
-----END PGP SIGNATURE-----
Categories: Elsewhere

Christine Spang: PyCon 2014 retrospective

Mon, 14/04/2014 - 18:15

PyCon 2014 happened. (Sprints are still happening.)

This was my 3rd PyCon, but my first year as a serious contributor to the event, which led to an incredibly different feel. I also came as a person running a company building a complex system in Python, and I loved having the overarching mission of what I'm building driving my approach to what I chose to do. PyCon is one of the few conferences I go to where the feeling of acceptance and at-homeness mitigates the introvert overwhelm at nonstop social interaction. It's truly a special event and community.

Here are some highlights:

  • I gave a tutorial about search, which was recorded in its entirety... if you watch this video, I highly recommend skipping the hands-on parts where I'm just walking around helping people out.
  • I gave a talk! It's called Subprocess to FFI, and you can find the video here. Through three full iterations of dry runs with feedback, I had a ton of fun preparing this talk. I'd like to give more like it in the future as I continue to level up my speaking skills.
  • Allen Downey came to my talk and found me later to say hi. Omg amazing, made my day.
  • Aux Vivres and Dieu du Ciel, amazing eats and drink with great new and old friends. Special shout out to old Debian friends Micah Anderson, Matt Zimmerman, and Antoine Beaupré for a good time at Dieu du Ciel.
  • The Geek Feminism open space was a great place to chill out and always find other women to hang with, much thanks to Liz Henry for organizing it.
  • Talking to the community from the Inbox booth on Startup Row in the Expo hall on Friday. Special thanks to Don Sheu and Yannick Gingras for making this happen, it was awesome!
  • The PyLadies lunch. Wow, was that amazing. Not only did I get to meet Julia Evans (who also liked meeting me!), but there was an amazing lineup of amazing women telling everyone about what they're doing. This and Naomi Ceder's touching talk about openly transitioning while being a member of the Python community really show how the community walks the walk when it comes to diversity and is always improving.
  • Catching up with old friends like Biella Coleman, Selena Deckelmann, Deb Nicholson, Paul Tagliamonte, Jessica McKellar, Adam Fletcher, and even friends from the bay area who I don't see often. It was hard to walk places without getting too distracted running into people I knew, I got really good at waving and continuing on my way.

I didn't get to go to a lot of talks in person this year since my personal schedule was so full, but the PyCon video team is amazing as usual, so I'm looking forward to checking out the archive. It really is a gift to get the videos up while energy from the conference is still so high and people want to check out things they missed and share the talks they loved.

Thanks to everyone, hugs, peace out, et cetera!

Categories: Elsewhere

Craig Small: mutt ate my i key

Mon, 14/04/2014 - 15:11

I did a large upgrade tonight and noticed there was a mutt upgrade, no biggie really… except that I have for years (incorrectly?) used the "i" key when reading a specific email to jump back to the list of emails, or from pager to index in mutt speak.

Instead of my index of mails, I got "No news servers defined!" The fix is rather simple: in muttrc put

bind pager i exit

and you’re back to using the i key the wrong way again like me.


Categories: Elsewhere

Chris Lamb: Race report: Cambridge Duathlon 2014

Mon, 14/04/2014 - 14:59

(This is my first race of the 2014 season.)


I had entered this race in 2013 and found it was effective for focusing winter training. As triathlons do not typically start until May in the UK, scheduling earlier races can be motivating in the colder winter months.

I didn't have any clear goals for the race except to blow out the cobwebs and improve on my 2013 time. I couldn't set reasonable or reliable target times after considerable "long & slow" training over the winter but I did want to test some new equipment and tactics, especially race pacing with a power meter, but also a new wheelset, crankset and helmet.

Preparation was both accidentally and deliberately compromised: I did very little race-specific training as my season is based around an entirely different intensity of race, but compounding this I was confined to bed the weekend before. Additionally, when resuming training midweek I managed to pick up a slight tightness in my hamstring.

Sleep was average in the preceding days and I felt moderately fresh on race morning. Nutrition-wise, I had porridge and bread with jam for breakfast, a PowerGel before the race, 750ml of PowerBar Perform on the bike along with a "Hydro" PowerGel with caffeine at approximately 30km.

Run 1 (7.5km)

A few minutes before the start my race number belt—the only truly untested equipment that day—refused to tighten. However, I decided that once the race began I would either ignore it or even discard it, risking disqualification.

Despite letting everyone go up the road, my first km was still too fast so I dialed down the effort, settling into a "10k" pace and began overtaking other runners. The Fen winds and the drag-strip uphill from 3km provided a bit of a pacing challenge for someone used to shelter and shorter hills, but I kept a metered effort through into transition.

Time: 33:01 (4:24/km, T1: 00:47) — Last year: 37:47 (5:02/km)
Bike (40km)

Although my 2014 bike setup features a power meter, I had not yet had the chance to perform an FTP test outdoors and thus was not able to calculate a definitive target power for the bike leg. However, data from my road bike suggested I set a power ceiling of 250W on the longer hills.

This was extremely effective in avoiding going "into the red" and compromising the second run. This lends yet more weight to the idea that a power meter in multisport events is "almost like cheating".

I was not entirely comfortable with my bike position: not only were my thin sunglasses making me raise my head more than I needed to, I found myself creeping forward onto the nose of my saddle. This is sub-optimal, even if only considering that I am not training in that position.

Overall, the bike was uneventful with the only memorable moment provided by a wasp that got stuck between my head and a helmet vent. Coming into transition I didn't feel like I had really pushed myself that hard—probably a good sign—but the time difference from last year's bike leg (1:16:11) was a little underwhelming.

Time: 1:10:45 (T2: 00:58)
Run 2 (7.5km)

After leaving transition, my legs were extremely uncooperative and I had great difficulty in pacing myself in the first kilometer. Concentrating hard on reducing my cadence as well as using my rehearsed mental cue, I managed to settle down.

The following 4 kilometers were definitely a mental struggle rather than a physical one, modulo having to force a few burps to ease some discomfort, possibly from drinking too much or too fast on the bike.

I had planned to "unload" as soon as I reached 6km but I didn't really have it in me. Whilst I am physiologically faster compared to last year, I suspect the lack of threshold-level running over the winter meant the required mental component of digging deep will require some coaxing to return.

However, it is said that you have successfully paced a duathlon if the second run is faster than the first. On this criterion, this was a success, but it would have been a bonus to have felt completely drained at the end of the day, if only from a neo-Calvinist perspective.

Time: 32:46 (4:22/km) / Last year: 38:10 (5:05/km)
Overall
Total time: 2:18:19

A race that goes almost entirely to plan is a bit of a paradox – there's certainly satisfaction in setting goals and hitting them without issue, but this is a slow-burning fire rather than a fireworks display.

However, it was satisfying to learn that I managed to finish 5th in my age group despite this race attracting an extremely strong field. As an indicator, the age-group athlete ranked immediately above me was seven minutes faster and the overall winner finished in 1:54:53.

The race has identified the following areas that I will work on before my next race:

  • Perform an FTP test outdoors on my time-trial bike to develop an optimum power plan.
  • Do a few more brick runs, at least to re-acclimatise to the feeling.
  • Schedule another bike fit.

Although not strictly race-related, I also need to find techniques for making the transport of a bike on public transport less stressful.

(Full results & full 2014 race schedule)

Categories: Elsewhere

Bits from Debian: DPL election is over, Lucas Nussbaum re-elected

Mon, 14/04/2014 - 08:10

The Debian Project Leader election has concluded and the winner is Lucas Nussbaum. Of a total of 1005 developers, 401 developers voted using the Condorcet method.

More information about the result is available in the Debian Project Leader Elections 2014 page.

The new term for the project leader will start on April 17th and expire on April 17th 2015.

Categories: Elsewhere

Andrew Pollock: [life] Day 76: Dora + Fever

Mon, 14/04/2014 - 07:46

We had a bit of a rough night last night. I noticed Zoe was pretty hot when she had a nap yesterday after not really eating much lunch. She still had a mild fever after her nap, so I gave her some paracetamol (aka acetaminophen, that one weirded me out when I moved to the US) and called for a home doctor to check her ears out.

Her ears were fine, but her throat was a little red. The doctor said it was probably a virus. Her temperature wasn't so high at bed time, so I skipped the paracetamol, and she went to bed fine.

She did wake up at about 1:30am and it took me until 3am to get her back to bed. I think it was a combination of the fever and trying to phase out her white noise, but she just didn't want to sleep in her bed or her room. At 3am I admitted defeat and let her sleep with me.

She had only a slightly elevated temperature this morning, and otherwise seemed in good spirits. We were supposed to go to a family lunch today, because my sister and brother are in town with their respective families, but I figured we'd skip that on the grounds that Zoe might still have had something, and coupled with the poor night's sleep, I wasn't sure how much socialising she was going to be up for.

My ear has still been giving me grief, and I had a home doctor check it yesterday as well, and he said the ear canal was 90% blocked. First thing this morning I called up to make an appointment with my regular doctor to try and get it flushed out. The earliest appointment I could get was 10:15am.

So we trundled around the corner to my doctor after a very slow start to the day. I got my ear cleaned out and felt like a million bucks afterwards. We went to Woolworths to order an undecorated mud slab cake, so I can try doing a trial birthday cake. I've given up on trying to do the sitting minion, and significantly scaled back to just a flat minion slab cake. It should be ready tomorrow.

The family thing was originally supposed to be tomorrow, and was only moved to today yesterday. My original plan had been to take Zoe to a free Dora the Explorer live show that was on in the Queen Street Mall.

I decided to revert back to the original plan, but by this stage, it was too late to catch the 11am show, so the 1pm show was the only other option. We had a "quick" lunch at home, which involved Zoe refusing to eat the sandwich I made for her and me convincing her otherwise.

Then I got a time-sensitive phone call from a friend, and once I'd finished dealing with that, there wasn't enough time to take any form of public transport and get there in time, so I decided to just drive in.

We parked in the Myer Centre car park, and quickly made our way up to the mall, and made it there comfortably with 5 minutes to spare.

The show wasn't anything much to phone home about. It was basically just 20 minutes of someone in a giant Dora suit acting out what was essentially a typical episode of Dora the Explorer, on stage, with a helper. Zoe started out wanting to sit on my lap, but made a few brief forays down to the "mosh pit" down the front with the other kids, dancing around.

After the show finished, we had about 40 minutes to kill before we could get a photo with Dora, so we wandered around the Myer Centre. I let Zoe choose our destinations initially, and we browsed a cheap accessories store that was having a sale, and then we wandered downstairs to one of the underground bus station platforms.

After that, we made our way up to Lincraft, and browsed. We bought a $5 magnifying glass, and I let Zoe do the whole transaction by herself. After that it was time to make our way back down for the photo.

Zoe made it first in line, so we were in and out nice and quick. We got our photos, and they gave her a little activity book as well, which she thought was cool, and then we headed back down the car park.

In my haste to park and get top side, I hadn't really paid attention to where we'd parked, and we came down via different elevators than we went up, so by the time I'd finally located the car, the exit gate was trying to extract an extra $5 parking out of me. Fortunately I was able to use the intercom at the gate and tell my sob story of being a nincompoop, and they let us out without further payment.

We swung by the Valley to clear my PO box, and then headed home. Zoe spontaneously announced she'd had a fun day, so that was lovely.

We only had about an hour and half to kill before Sarah was going to pick up Zoe, so we just mucked around. Zoe looked at stuff around the house with her magnifying glass. She helped me open my mail. We looked at some of the photos on my phone. Dayframe and a Chromecast is a great combination for that. We had a really lovely spell on the couch where we took turns to draw on her Magna Doodle. That was some really sweet time together.

Zoe seemed really eager for her mother to arrive, and kept asking how much longer it was going to be, and going outside our unit's front door to look for her.

Sarah finally arrived, and remarked that Zoe felt hot, and so I checked her temperature, and her fever had returned, so whatever she has she's still fighting off.

I decided to do my Easter egg shopping in preparation for Sunday. A friend suggested this cool idea of leaving rabbit paw tracks all over the house in baby powder, and I found a template online and got that all ready to go.

I had a really great yoga class tonight. Probably one of the best I've had in a while in terms of being able to completely clear my head.

I'm looking forward to an uninterrupted night's sleep tonight.

Categories: Elsewhere

Matthew Garrett: Real-world Secure Boot attacks

Mon, 14/04/2014 - 05:22
MITRE gave a presentation on UEFI Secure Boot at SyScan earlier this month. You should read the presentation and paper, because it's really very good.

It describes a couple of attacks. The first is that some platforms store their Secure Boot policy in a run time UEFI variable. UEFI variables are split into two broad categories - boot time and run time. Boot time variables can only be accessed while in boot services - the moment the bootloader or kernel calls ExitBootServices(), they're inaccessible. Some vendors chose to leave the variable containing firmware settings available during run time, presumably because it makes it easier to implement tools for modifying firmware settings at the OS level. Unfortunately, some vendors left bits of Secure Boot policy in this space. The naive approach would be to simply disable Secure Boot entirely, but that means that the OS would be able to detect that the system wasn't in a secure state[1]. A more subtle approach is to modify the policy, such that the firmware chooses not to verify the signatures on files stored on fixed media. Drop in a new bootloader and victory is ensured.
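
(As an aside, and purely illustrative rather than from the paper: on a Linux system with efivarfs mounted, the SecureBoot variable can be read like any other file. The GUID is the standard EFI global-variable namespace, and the first four bytes of the output are the variable's attributes.)

# dump the SecureBoot variable; the final byte is 1 (enabled) or 0 (disabled)
hexdump -C /sys/firmware/efi/efivars/SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c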

But that's not a beautiful approach. It depends on the firmware vendor having made that mistake. What if you could just rewrite arbitrary variables, even if they're only supposed to be accessible in boot services? Variables are all stored in flash, connected to the chipset's SPI controller. Allowing arbitrary access to that from the OS would make it straightforward to modify the variables, even if they're boot time-only. So, thankfully, the SPI controller has some control mechanisms. The first is that any attempt to enable the write-access bit will cause a System Management Interrupt, at which point the CPU should trap into System Management Mode and (if the write attempt isn't authorised) flip it back. The second is to disable access from the OS entirely - all writes have to take place in System Management Mode.

The MITRE results show that around 0.03% of modern machines enable the second option. That's unfortunate, but the first option should still be sufficient[2]. Except the first option requires the SMI actually firing. And, conveniently, Intel's chipsets have a bit that allows you to disable all SMI sources[3], and then have another bit to disable further writes to the first bit. Except 40% of the machines MITRE tested didn't bother setting that lock bit. So you can just disable SMI generation, remove the write-protect bit on the SPI controller and then write to arbitrary variables, including the SecureBoot enable one.

This is, uh, obviously a problem. The good news is that this has been communicated to firmware and system vendors and it should be fixed in the future. The bad news is that a significant proportion of existing systems can probably have their Secure Boot implementation circumvented. This is pretty unsurprising - I suggested that the first few generations would be broken back in 2012. Security tends to be an iterative process, and changing a branch of the industry that's historically not had to care into one that forms the root of platform trust is a difficult process. As the MITRE paper says, UEFI Secure Boot will be a genuine improvement in security. It's just going to take us a little while to get to the point where the more obvious flaws have been worked out.

[1] Unless the malware was intelligent enough to hook GetVariable, detect a request for SecureBoot and then give a fake answer, but who would do that?
[2] Impressively, basically everyone enables that.
[3] Great for dealing with bugs caused by YOUR ENTIRE COMPUTER BEING INTERRUPTED BY ARBITRARY VENDOR CODE, except unfortunately it also probably disables chunks of thermal management and stops various other things from working as well.

comments
Categories: Elsewhere

Steve Kemp: Is lumail a stepping stone?

Mon, 14/04/2014 - 01:30

I'm pondering a rewrite of my console-based mail-client.

While it is "popular" it is not popular.

I suspect "console-based" is the killer.

I like console, and I ssh to a remote server to use it, but having different front-ends would be neat.

In the world of mailpipe, etc, is there room for a graphical client? Possibly.

The limiting factor would be the lack of POP3/IMAP.

Reworking things such that there is a daemon to which a GUI, or a console client, could connect seems simple. The hard part would obviously be working out the IPC and writing the GUI. Any toolkit selected would rule out 40% of the audience.

In other news I'm stalling on replying to emails. Irony.

Categories: Elsewhere
