Planet Debian

Planet Debian - http://planet.debian.org/

Chris Lamb: start-stop-daemon: --exec vs --startas

Mon, 28/07/2014 - 15:15

start-stop-daemon is the classic tool on Debian and derived distributions to manage system background processes. A typical invocation from an initscript is as follows:

start-stop-daemon \
    --quiet \
    --oknodo \
    --start \
    --pidfile /var/run/daemon.pid \
    --exec /usr/sbin/daemon \
    -- -c /etc/daemon.cfg -p /var/run/daemon.pid

The basic operation is that it will first check whether /usr/sbin/daemon is not running and, if not, execute /usr/sbin/daemon -c /etc/daemon.cfg -p /var/run/daemon.pid. This process then has the responsibility to daemonise itself and write the resulting process ID to /var/run/daemon.pid.

start-stop-daemon then waits until /var/run/daemon.pid has been created as the test of whether the service has actually started, raising an error if that doesn't happen.

(In practice, the locations of all these files are parameterised to prevent DRY violations.)

Idempotency

By idempotency we are mostly concerned with ensuring that repeated calls to /etc/init.d/daemon start do not start multiple instances of our daemon.

This might not seem to be a particularly big issue at first, but the increased adoption of stateless configuration management tools such as Ansible (which should be completely free to call start to ensure a started state) means that one should be particularly careful about this apparent corner case.

In its usual operation, start-stop-daemon ensures only one instance of the daemon is running with the --exec parameter: if the specified pidfile exists and the PID it refers to is an "instance" of that executable, then it is assumed that the daemon is already running and another copy is not started. This is handled in the pid_is_exec method (source) - the /proc/$PID/exe symlink is resolved and checked against the value of --exec.

Interpreted scripts

However, one case where this doesn't work is interpreted scripts. Let's look at what happens if /usr/sbin/daemon is such a script, e.g. a file that starts:

#!/usr/bin/env python
# [..]

The problem this introduces is that /proc/$PID/exe now points to the interpreter instead, often with an essentially non-deterministic version suffix:

$ ls -l /proc/14494/exe
lrwxrwxrwx 1 www-data www-data 0 Jul 25 15:18 /proc/14494/exe -> /usr/bin/python2.7

When such a process is examined using the --exec mechanism outlined above, it will not be recognised as an instance of /usr/sbin/daemon, and therefore another instance of that daemon will be incorrectly started.

--startas

The solution is to use the --startas parameter instead. This omits the /proc/$PID/exe check and merely tests whether a PID with that number is running:

start-stop-daemon \
    --quiet \
    --oknodo \
    --start \
    --pidfile /var/run/daemon.pid \
    --startas /usr/sbin/daemon \
    -- -c /etc/daemon.cfg -p /var/run/daemon.pid

Whilst it is therefore less reliable (in that the PID found in the pidfile could actually be an entirely different process altogether) it's probably an acceptable trade-off against the case of running multiple instances of that daemon.

This danger can be ameliorated by using some of start-stop-daemon's other matching tests, such as --user or even --name.
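For example, a sketch of the invocation above with both extra checks added (the daemon's process name and the user it runs as are illustrative assumptions here, not taken from the original initscript):

start-stop-daemon \
    --quiet \
    --oknodo \
    --start \
    --pidfile /var/run/daemon.pid \
    --startas /usr/sbin/daemon \
    --name daemon \
    --user daemon \
    -- -c /etc/daemon.cfg -p /var/run/daemon.pid

--name compares against the process name visible in /proc/$PID/stat (truncated to 15 characters), so a stale PID from the pidfile must at least belong to a similarly-named process before it is treated as a running instance.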

Categories: Elsewhere

Daniel Pocock: Secure that Dictaphone

Mon, 28/07/2014 - 07:35

2014 has been a big year for dictaphones so far.

First, it was France and the secret recordings made by Patrick Buisson during the reign of President Sarkozy.

Then, a US court ordered the release of the confidential Boston College tapes, part of an oral history project. Originally, each participant had agreed their recording would only be released after their death. Sinn Fein leader Gerry Adams was arrested and questioned over a period of 100 hours and released without charge.

Now Australia is taking its turn. In #dictagate down under, a senior political correspondent from a respected newspaper recorded (most likely with consent) some off-the-record comments of former conservative leader Ted Baillieu. Unfortunately, this journalist misplaced the dictaphone at the state conference of Baillieu's arch-rivals, the ALP. A scandal quickly erupted.

Secure recording technology

There is no question that electronic voice recordings can be helpful for many purposes, including journalism, research and call centres. However, the ease with which they can now be distributed is only dawning on people.

Twenty years ago, you would need to get the assistance of a radio or TV producer to disseminate such recordings so widely. Today there is email and social media. The Baillieu tapes were emailed directly to 400 people in a matter of minutes.

Just as technology brings new problems, it also brings solutions. Encryption is one of them.

Is encryption worthwhile?

Coverage of the Snowden revelations has shown that many popular security technologies are not one hundred percent safe. In each of these dictaphone cases, however, NSA-level expertise was not a factor. Even the most simplistic encryption would have caused endless frustration to the offenders who distributed the Baillieu tape.

How can anybody be sure encryption is reliable?

Part of the problem is education. Everybody using the technology needs to be aware of the basic concepts, for example, public key cryptography.

Another big question mark is back doors. There is ongoing criticism of Apple iPhone/iPod devices and the many ways that their encryption can be easily disabled by Apple engineers and presumably many former staff, security personnel and others. The message is clear: proprietary, closed-source solutions should be avoided. Free and open source technologies are the alternative. If a company does not give you the source code, how can anybody independently audit their code for security? With encryption software, what use is it if nobody has verified it?

What are the options?

However, given that the majority of people don't have a PhD in computer science or mathematics, are there convenient ways to get started with encryption?

Reading is a good start. The Code Book by Simon Singh (author of other popular science books like Fermat's Last Theorem) is very accessible, not classified and assumes no formal training in mathematics. Even for people who do know these topics inside out, it is a good book to share with friends and family.

The Guardian Project (no connection with Guardian Media of Edward Snowden fame) aims to provide a secure and easy to use selection of apps for pocket devices. This project has practical applications in business, journalism and politics alike.

How should a secure dictaphone app work?

Dictaphone users typically need to take their dictaphones in the field, so there is a risk of losing it or having it stolen. A strong security solution in this situation may involve creating an RSA key pair on a home/office computer, keeping the private key on the home computer and putting the public key on the dictaphone device. Configured this way, the dictaphone will not be able to play back any of the recordings itself - the user will always have to copy them to the computer for decryption.
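As a sketch of how this could work with standard tools (the file names here are illustrative, and a real dictaphone app would do all of this behind the scenes), OpenSSL can implement the hybrid scheme: each recording is encrypted with a fresh AES key, and that key is in turn encrypted to the RSA public key held on the device:

# One-off, on the home/office computer -- private.pem never leaves it
$ openssl genrsa -out private.pem 4096
$ openssl rsa -in private.pem -pubout -out public.pem   # copy public.pem to the device

# On the device, for each recording: fresh random AES key, then RSA-wrap it
$ openssl rand -hex -out session.key 32
$ openssl enc -aes-256-cbc -salt -in recording.wav -out recording.enc -pass file:session.key
$ openssl rsautl -encrypt -pubin -inkey public.pem -in session.key -out session.key.rsa
$ shred -u session.key recording.wav

# Back on the computer, to play a recording back
$ openssl rsautl -decrypt -inkey private.pem -in session.key.rsa -out session.key
$ openssl enc -d -aes-256-cbc -in recording.enc -out recording.wav -pass file:session.key

Without private.pem, neither a thief nor a careless finder of the device can play anything back.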

Categories: Elsewhere

Russ Allbery: AFS::PAG 1.02

Mon, 28/07/2014 - 02:22

This is primarily a testing exercise. I moved my software release process and web page generation to a different host, and wanted to run through a release of a package to make sure that I got all the details right.

It's still a bit clunky, and I need to tweak the process, but it's close enough.

That said, there are a few minor changes in this module (which provides the minimum C glue required to do AFS operations from Perl — only the pieces that can't be duplicated by calling command-line programs). I'm improving the standardization of my Perl distributions, so I've moved NEWS to Changes and switched to the Lancaster Consensus environment variables for controlling testing. I also added some more pieces to the package metadata.

You can get the latest version from the AFS::PAG distribution page.

Categories: Elsewhere

Christian Perrier: [life] Running update July 26th 2014

Sun, 27/07/2014 - 13:10
Dog, it has been a long time since I blogged about my running activities. Apparently, I haven't since......

So, well, this will be a long update, as many things happened during the first half of 2014 when it comes to running, for me.

January: I was recovering from a fatigue fracture injury inherited from my last races in 2013. As a consequence, I resumed running only on Jan 7th. Therefore I cancelled my participation in the "Semi Raid 28", a night orienteering raid of about 50-60km in the southern neighbourhood of Paris. Instead, I offered my help to the organizers in collecting orienteering signs after the race (the longest one: 120km). So, I ended up spending over 24 hours running in the woods and hunting down hidden signs with the same information as the runners. My only advantage was that I was able to use my car to go from one point to another. Still, I ended up running over 70km in many small parts, often alone in the dark woods with my headlamp, on very muddy ground... and collecting nearly 80 huge signs.

February: Everything was going well, and I for instance ran a great half-marathon in Bullion (south of Paris) in 1h38'21" (great for quite a hilly race)... until I twisted my left ankle while running back from work. A quite severe twist, though no bone damage, thankfully. I had to stop running, again, for 3 weeks. Biking to/from work was the replacement activity...

March: I resumed running on March 10th, one week before a quite difficult trail race in my neighbourhood (30km "only", but with up to 800 meters of positive climb). That race was a preparation (and a test after the injury) for my 3rd participation in the "Paris Ecotrail", an 80km trail race through the woods of the South-West area of Paris, ending in the Eiffel Tower area. Indeed, both went very well, though I was very careful with my ankle. I finally broke my record at Ecotrail, finishing the race in 9h08 (to be compared to 9h36 last year and 11h15 the year before).

April: The Paris marathon was scheduled one week after Ecotrail. Everybody will tell you that running a marathon one week after an 80km race is kinda crazy..... which is why I did it..:-). That was my 3rd Paris marathon and my 12th marathon overall. However, this year, no record in sight. The challenge was running the marathon.... dressed as SpongeBob (you know me, right?). I actually had great fun doing that and was happy to get zillions of cheers from the crowd all over the race. I finally completed the race in 4h30, which is, after all, not that far from the time of my very first marathon (4h12). The only drawback was that the succession of very long distance runs made my left knee suffer as never before. As a consequence, I (again) had to stop running for nearly one month, before we found that I was quite sensitive to pronation, which the succession of long and slow races had made worse.

May: so finally, after these (very) long weeks, I could gradually resume running, which culminated in mid-May with the 50km race "Trail des Cerfs", in the Rambouillet forest, close to our place. This quite long but not too difficult trail race ("only" 800 meters of positive climb overall) was completed in 5h16, which was completely unexpected given the low training during the previous weeks.

June: no race during that month. The entire month was focused on preparing for the Montagn'hard race of July 5th: several training sessions with a lot of climbing, either running or fast walking (nordic style), as well as downhill run training (always important for mountain trails).

July: the second "big peak" of my 2014 season was scheduled for July 5th: "La Montagn'hard", a mountain trail race close to Les Contamines in the neighbourhood of Chamonix, the French mountaineering Mecca. "Only" 60 kilometers.... but close to 5000 meters of positive climb. Montagn'hard is among the toughest mountain trail races in France and therefore a "must do" for trail runners. The race week-end also includes a 105km ultra-race, which is often said to be as hard as, maybe even harder than, the very famous "Ultra Trail du Mont-Blanc" in Chamonix. Still, for only my second season in mountain trail running, I decided to be "wise" and stick with the "medium" version (after all, my experience with mountain trails so far consisted of only two quite "short" ones). Needless to say, it was indeed a GREAT race. The environment is wonderful (the "Miage" side of the Mont-Blanc range), the race goes through great places (Col de Tricot, notably), and I got a great result, finishing 80th out of 3250+ runners, in 12h18, while my target time was around 13 hours.

This is where I am now. Nearly one month after Montagn'hard, I'm deep in training for my next Big Goal: the "Sur la Trace des Ducs de Savoie" or "TDS", one of the 4 races of the Ultra Trail du Mont-Blanc week at the end of August (during DebConf): 120km, nearly 7500m of positive climb, between Courmayeur and Chamonix, through several passes, up to 2600m altitude. Yet another challenge: my first "over 24h" race, with a full night out in the mountains.

You'll certainly hear again from me about that...:-)

Categories: Elsewhere

Christian Perrier: Developers per country (July 2014)

Sun, 27/07/2014 - 09:37
It is time again for my annual report about the number of developers per country.

This is now the sixth edition of this report. Former editions:

So, here we are with the July 2014 version, sorted by the ratio of *active* developers per million population for each country.

Act: number of active developers
Dev: total number of developers
Act/M: number of active developers per million population
Dev/M: number of developers per million population
2009-2014: rank in each year's edition (2011 and 2012 editions from June, 2013 and 2014 from July); '-' means the country was not ranked that year

Code  Name             Population   Act  Dev  Act/M  Dev/M  2009 2010 2011 2012 2013 2014
fi    Finland             5259250    19   31   3,61   5,89     1    1    1    1    1    1
ie    Ireland             4670976    13   17   2,78   3,64    13    9    6    2    2    2
nz    New Zealand         4331600    11   15   2,54   3,46     4    3    5    7    7    3
mq    Martinique           396404     1    1   2,52   2,52     -    -    3    4    4    4
se    Sweden              9088728    22   37   2,42   4,07     3    6    7    5    5    5
ch    Switzerland         7870134    19   29   2,41   3,68     2    2    2    3    3    6
no    Norway              4973029    11   14   2,21   2,82     5    4    4    6    6    7
at    Austria             8217280    18   29   2,19   3,53     6    8   10   10   10    8
de    Germany            81471834   164  235   2,01   2,88     7    7    9    9    8    9
lu    Luxemburg            503302     1    1   1,99   1,99     8    5    8    8    9   10
fr    France             65350000   101  131   1,55   2,00    12   12   11   11   11   11
au    Australia          22607571    32   60   1,42   2,65     9   10   12   12   12   12
be    Belgium            11071483    14   17   1,26   1,54    10   11   13   13   13   13
uk    United-Kingdom     62698362    77  118   1,23   1,88    14   14   14   14   14   14
nl    Netherlands        16728091    18   40   1,08   2,39    11   13   15   15   15   15
ca    Canada             33476688    34   63   1,02   1,88    15   15   17   16   16   16
dk    Denmark             5529888     5   10   0,90   1,81    17   17   16   17   17   17
es    Spain              46754784    34   56   0,73   1,20    16   16   19   18   18   18
it    Italy              59464644    36   52   0,61   0,87    23   22   22   19   19   19
hu    Hungary            10076062     6   12   0,60   1,19    18   25   26   20   24   20
cz    Czech Rep          10190213     6    6   0,59   0,59    21   20   21   21   20   21
us    USA               313232044   175  382   0,56   1,22    19   21   25   24   22   22
il    Israel              7740900     4    6   0,52   0,78    24   24   24   25   23   23
hr    Croatia             4290612     2    2   0,47   0,47    20   18   18   26   25   24
lv    Latvia              2204708     1    1   0,45   0,45    26   26   27   27   26   25
bg    Bulgaria            7364570     3    3   0,41   0,41    25   23   23   23   27   26
sg    Singapore           5183700     2    2   0,39   0,39     -    -    -   33   33   27
uy    Uruguay             3477778     1    2   0,29   0,58    22   27   28   28   28   28
pl    Poland             38441588    11   15   0,29   0,39    29   29   30   30   30   29
jp    Japan             127078679    36   52   0,28   0,41    30   28   29   29   29   30
lt    Lithuania           3535547     1    1   0,28   0,28    28   19   20   22   21   31
gr    Greece             10787690     3    4   0,28   0,37    33   38   34   35   35   32
cr    Costa Rica          4301712     1    1   0,23   0,23    31   30   31   31   31   33
by    Belarus             9577552     2    2   0,21   0,21    35   36   39   39   32   34
ar    Argentina          40677348     8   10   0,20   0,25    34   33   35   32   37   35
pt    Portugal           10561614     2    4   0,19   0,38    27   32   32   34   34   36
sk    Slovakia            5477038     1    1   0,18   0,18    32   31   33   36   36   37
rs    Serbia              7186862     1    1   0,14   0,14     -    -    -    -   38   38
tw    Taiwan             23040040     3    3   0,13   0,13    37   34   37   37   39   39
br    Brazil            192376496    18   21   0,09   0,11    36   35   38   38   40   40
cu    Cuba               11241161     1    1   0,09   0,09     -   38   41   41   41   41
co    Colombia           45566856     4    5   0,09   0,11    41   44   46   47   46   42
kr    South Korea        48754657     4    6   0,08   0,12    39   39   42   42   42   43
gt    Guatemala          13824463     1    1   0,07   0,07     -    -    -    -   43   44
ec    Ecuador            15007343     1    1   0,07   0,07     -   40   43   43   45   45
cl    Chile              16746491     1    2   0,06   0,12    42   41   44   44   47   46
za    South Africa       50590000     3   10   0,06   0,20    38   48   48   48   48   47
ru    Russia            143030106     8    9   0,06   0,06    43   42   47   45   49   48
mg    Madagascar         21281844     1    1   0,05   0,05    44   37   40   40   50   49
ro    Romania            21904551     1    2   0,05   0,09    45   43   45   46   51   50
ve    Venezuela          28047938     1    1   0,04   0,04    40   45   50   49   44   51
my    Malaysia           28250000     1    1   0,04   0,04     -    -   49   50   52   52
pe    Peru               29907003     1    1   0,03   0,03    46   46   51   51   53   53
tr    Turkey             74724269     2    2   0,03   0,03    47   47   52   52   54   54
ua    Ukraine            45134707     1    1   0,02   0,02    48   53   58   59   55   55
th    Thailand           66720153     1    2   0,01   0,03    50   50   54   54   56   56
eg    Egypt              80081093     1    3   0,01   0,04    51   51   55   55   57   57
mx    Mexico            112336538     1    1   0,01   0,01    49   49   53   53   58   58
cn    China            1344413526    10   14   0,01   0,01    53   53   57   56   59   59
in    India            1210193422     8    9   0,01   0,01    52   52   56   57   60   60
sv    El Salvador         7066403     0    1   0,00   0,14     -    -   36   58   61   61

Total: 969 active developers out of 1561 overall (62,08% active)

A few interesting facts:
  • New Zealand bumps from rank 7 to rank 3, thanks to one new active developer
  • Switzerland loses one developer and goes down to rank 6
  • Norway also slightly goes down by losing one developer
  • With two more developers, Austria climbs up to rank 8 and overtakes Germany...;-)
  • Hungary climbs a little bit by gaining one developer
  • Singapore doubles its number of developers from 1 to 2 and bumps from 33 to 27
  • One rank up too for Poland that gained one developer
  • Down to rank 31 for Lithuania by losing one developer
  • Up to rank 32 for Greece with 4 developers instead of 3
  • Argentina goes up by having two more developers (it lost 2 last year)
  • Up from 46 to 42 for Colombia by winning one more developer
  • One more developer and Russia climbs from 49 to 48
  • One less for Venezuela that has only one developer left...:-(
  • No new country this year. Less movement towards "the universal OS"?
  • We have 12 more active Debian developers and 26 more developers overall. Less progression than last year
  • The ratio of active developers is nearly stable, though slightly decreasing
Categories: Elsewhere

Holger Levsen: 20140726-the-future-is-now

Sat, 26/07/2014 - 14:08
Do you remember the future?

Unless you are over 60, you weren't promised flying cars. You were promised an oppressive cyberpunk dystopia. Here you go.

(Source: found in the soup)

Luckily the future today is still unwritten. Shape it well.

Categories: Elsewhere

Richard Hartmann: Release Critical Bug report for Week 30

Fri, 25/07/2014 - 23:58

I have been asked to publish bug stats from time to time. Not exactly sure about the schedule yet, but I will try and stick to Fridays, as in the past; this is for the obvious reason that it makes historical data easier to compare. "Last Friday of each month" may or may not be too much. Time will tell.

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1511
    • Affecting Jessie: 431. That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 383. Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 44 bugs are tagged 'patch'. Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 20 bugs are marked as done, but still affect unstable. This can happen due to missing builds on some architectures, for example. Help investigate!
        • 319 bugs are neither tagged patch, nor marked done. Help make a first step towards resolution!
      • Affecting Jessie only: 48. Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 0 bugs are in packages that are unblocked by the release team.
        • 48 bugs are in packages that are not unblocked.

Graphical overview of bug stats thanks to azhag:

Categories: Elsewhere

Juliana Louback: Extending an XTuple Business Object

Fri, 25/07/2014 - 17:06

xTuple is, in my opinion, incredibly well designed; the code is clean and the architecture adheres to a standardized structure. All this makes working with xTuple software quite a breeze.

I wanted to integrate JSCommunicator into the web-based xTuple version. JSCommunicator is a SIP communication tool, so my first step was to create an extension for the SIP account data. Luckily for me, the xTuple development team published an awesome tutorial for writing an xTuple extension.

xTuple cleverly uses model based business objects for the various features available. This makes customizing xTuple very straightforward. I used the tutorial mentioned above for writing my extension, but soon noticed my goals were a little different. A SIP account has 3 data fields, these being the SIP URI, the account password and an optional display name. xTuple currently has a business object in the core code for a User Account and it would make a lot more sense to simply add my 3 fields to this existing business object rather than create another business object. The tutorial very clearly shows how to extend a business object with another business object, but not how to extend a business object with only new fields (not a whole new object).

Now maybe I'm just a whole lot slower than most people, but I had a ridiculously hard time figuring this out. Mind you, this is because I'm slow, because the xTuple documentation and code are understandable and as self-explanatory as it gets. I think it just takes a bit to get used to. Either way, I thought this just might be useful to others, so here is how I went about it.

Setup

First you'll have to set up your xTuple development environment and fork the xtuple and xtuple-extensions repositories as shown in this handy tutorial. A footnote I'd like to add: please verify that your version of Vagrant (and anything else you install) is the one listed in the tutorial. I think I spent two entire days or more on a wild goose (bug) chase trying to set up my environment, when the cause of all the errors was that I had somehow installed an older version of Vagrant - 1.5.4 instead of 1.6.3. Please don't make the same mistake I did. Actually, if for some reason you get the following error when you try using node:

<<ERROR 2014-07-10T23:52:46.948Z>> Unrecoverable exception. Cannot call method 'extend' of undefined
TypeError: Cannot call method 'extend' of undefined
    at /home/vagrant/dev/xtuple/lib/backbone-x/source/model.js:37:39
    at Object.<anonymous> (/home/vagrant/dev/xtuple/lib/backbone-x/source/model.js:1364:3)
    ...

chances are, you have the wrong version. That’s what happened to me. The Vagrant Virtual Development Environment automatically installs and configures everything you need, it’s ready to go. So if you find yourself installing and updating and apt-gets and etc, you probably did something wrong.

Coding

So by now we should have the Vagrant Virtual Development Environment set up and the web app up and running and accessible at localhost:8443. So far so good.

Disclaimer: You will note that much of this is similar to xTuple’s tutorial but there are some small but important differences. Other Disclaimer: I’m describing how I did it, which may or may not be ‘up to snuff’. Works for me though.

Schema

First let's make a schema for the table we will create with the new custom fields. Be sure to create the correct directory structure, aka /path/to/xtuple-extensions/source/<YOUR EXTENSION NAME>/database/source, in my case /path/to/xtuple-extensions/source/sip_account/database/source, and create the file create_sa_schema.sql ('sa' is the name of my schema). This file will contain the following lines:

do $$
  /* Only create the schema if it hasn't been created already */
  var res, sql = "select schema_name from information_schema.schemata where schema_name = 'sa'";
  res = plv8.execute(sql);
  if (!res.length) {
    sql = "create schema sa; grant all on schema sa to group xtrole;"
    plv8.execute(sql);
  }
$$ language plv8;

Of course, feel free to replace ‘sa’ with your schema name of choice. All the code described here can be found in my xtuple-extensions fork, on the sip_ext branch.

Table

We'll create a table containing your custom fields and a link to an existing table - the table for the existing business object you want to extend. If you're wondering why, here's a good explanation; the case in question there is adding fields to the Contact business object.

You need to first figure out what table you want to link to. This might not be uber easy. I think the best way to go about it is to look at the ORMs. The xTuple ORMs are a JSON mapping between the SQL tables and the object-oriented world above the database. They're .json files found at /path/to/xtuple/enyo-client/database/orm/models for the core business objects and at /path/to/xtuple/enyo-client/extensions/source/<extension name>/database/orm/models for extension business objects. I'll give two examples. If you look at contact.json (https://github.com/xtuple/xtuple/blob/master/enyo-client/database/orm/models/contact.json#L6) you will see that the Contact business object refers to the table "cntct". Look for the "type": "Contact" on the line above (#L5), so we know it's the "Contact" business object. In my case, I wanted to extend the UserAccount and UserAccountRelation business objects, so check out user_account.json (https://github.com/xtuple/xtuple/blob/master/enyo-client/database/orm/models/user_account.json). The table listed for UserAccount is xt.usrinfo (#L314) and the table listed for UserAccountRelation is xt.usrlite (#L448). A closer look at these files (usrinfo.sql at https://github.com/xtuple/xtuple/blob/master/enyo-client/database/source/xt/views/usrinfo.sql and usrlite.sql at https://github.com/xtuple/xtuple/blob/master/enyo-client/database/source/xt/tables/usrlite.sql) revealed that usrinfo is in fact a view and usrlite is 'A light weight table of user information used to avoid punishingly heavy queries on the public usr view'. I chose to refer to xt.usrlite - that, or I received error messages when trying the other names; will confirm later.

Now I'll make the file /path/to/xtuple-extensions/source/sip_account/database/source/usrlitesip.sql to create a table with my custom fields plus the link to the usrlite table. Don't quote me on this, but I'm under the impression that this is the norm for naming joining tables: the name of the table you are referring to ('usrlite' in this case) plus your extension name. Content of usrlitesip.sql:

select xt.create_table('usrlitesip', 'sa');
select xt.add_column('usrlitesip','usrlitesip_id', 'serial', 'primary key', 'sa');
select xt.add_column('usrlitesip','usrlitesip_usr_username', 'text', 'references xt.usrlite (usr_username)', 'sa');
select xt.add_column('usrlitesip','usrlitesip_uri', 'text', '', 'sa');
select xt.add_column('usrlitesip','usrlitesip_name', 'text', '', 'sa');
select xt.add_column('usrlitesip','usrlitesip_password', 'text', '', 'sa');

comment on table sa.usrlitesip is 'Joins User with SIP account';

Breaking it down, line 1 creates the table named 'usrlitesip' (no duh), line 2 is for the primary key (self-explanatory). You can then add any columns you like, just be sure to add one that references the table you want to link to. I checked usrlite.sql and saw the primary key is usr_username (https://github.com/xtuple/xtuple/blob/master/enyo-client/database/source/xt/tables/usrlite.sql#L3); be sure to use the primary key of the table you are referencing.

You can check what you made by executing the .sql files like so:

$ cd /path/to/xtuple-extensions/source/sip_account/database/source
$ psql -U admin -d dev -f create_sa_schema.sql
$ psql -U admin -d dev -f usrlitesip.sql

After which you will see the empty table if you enter:

$ psql -U admin -d dev -c "select * from sa.usrlitesip;"

Now create the file /path/to/xtuple-extensions/source/sip_account/database/source/manifest.js to put the files together and in the right order. It should contain:

{
  "name": "sip_account",
  "version": "1.4.1",
  "comment": "Sip Account extension",
  "loadOrder": 999,
  "dependencies": ["crm"],
  "databaseScripts": [
    "create_sa_schema.sql",
    "usrlitesip.sql",
    "register.sql"
  ]
}

I think the "name" has to be the same as what you named your extension directory in /path/to/xtuple-extensions/source/<YOUR EXTENSION NAME>. I think the "comment" can be anything; you want your "loadOrder" to be high so it's the last thing installed (as it's an add-on). So far we are doing exactly what's instructed in the xTuple tutorial. It's repetitive, but I think you can never have too many examples to compare to. In "databaseScripts" you will list the two .sql files you just created for the schema and the table, plus another file to be made in the same directory named register.sql.

I'm not sure why you have to make the register.sql, or even if you indeed have to. If you leave the file empty there will be a build error, so either put a ';' in register.sql or remove the "register.sql" line from manifest.js, as I think for now we are good without it.

Now let’s update the database with our new extension:

$ cd /path/to/xtuple
$ ./scripts/build_app.js -d dev -e ../xtuple-extensions/source/sip_account
$ psql -U admin -d dev -c "select * from xt.ext;"

That last command should display a table with a list of extensions: the ones already in xtuple like 'crm' and 'billing' and some others, plus your new extension, in this case 'sip_account'. When you run ./scripts/build_app.js -d dev -e ../xtuple-extensions/source/sip_account you'll probably see a message along the lines of "[your extension] has no client code, not building client code", and that's fine, because yeah, we haven't worked on the client code yet.

ORM

Here's where things start getting different. ORMs link your object to an SQL table. But we DON'T want to make a new business object; we want to extend an existing business object, so the ORM we will make will be a little different from the one in the xTuple tutorial. Steve Hackbarth kindly explained this new business object/existing business object ORM concept here.

First we'll create the directory /path/to/xtuple-extensions/source/sip_account/database/orm/ext, according to xTuple convention. ORMs for new business objects would be put in /path/to/xtuple-extensions/source/sip_account/database/orm/models. Now we'll create the .json file /path/to/xtuple-extensions/source/sip_account/database/orm/ext/user_account.json for our ORM. Once again, don't quote me on this, but I think the name of the file should be the name of the business object you are extending, as is done in the tutorial example extending the Contact object. In our case, UserAccount is defined in user_account.json and that's what I named my extension ORM too. Here's what you should place in it:

[
  {
    "context": "sip_account",
    "nameSpace": "XM",
    "type": "UserAccount",
    "table": "sa.usrlitesip",
    "isExtension": true,
    "isChild": false,
    "comment": "Extended by Sip",
    "relations": [
      {
        "column": "usrlitesip_usr_username",
        "inverse": "username"
      }
    ],
    "properties": [
      {
        "name": "uri",
        "attr": {
          "type": "String",
          "column": "usrlitesip_uri",
          "isNaturalKey": true
        }
      },
      {
        "name": "displayName",
        "attr": {
          "type": "String",
          "column": "usrlitesip_name"
        }
      },
      {
        "name": "sipPassword",
        "attr": {
          "type": "String",
          "column": "usrlitesip_password"
        }
      }
    ],
    "isSystem": true
  },
  {
    "context": "sip_account",
    "nameSpace": "XM",
    "type": "UserAccountRelation",
    "table": "sa.usrlitesip",
    "isExtension": true,
    "isChild": false,
    "comment": "Extended by Sip",
    "relations": [
      {
        "column": "usrlitesip_usr_username",
        "inverse": "username"
      }
    ],
    "properties": [
      {
        "name": "uri",
        "attr": {
          "type": "String",
          "column": "usrlitesip_uri",
          "isNaturalKey": true
        }
      },
      {
        "name": "displayName",
        "attr": {
          "type": "String",
          "column": "usrlitesip_name"
        }
      },
      {
        "name": "sipPassword",
        "attr": {
          "type": "String",
          "column": "usrlitesip_password"
        }
      }
    ],
    "isSystem": true
  }
]

Note the "context" is my extension name, because the context + nameSpace + type combo has to be unique. We already have a UserAccount and UserAccountRelation object in the "XM" namespace in the "xtuple" context from the original user_account.json; now we will also have a UserAccount and UserAccountRelation object in the "XM" namespace in the "sip_account" context. What else is important? Note that "isExtension" is true on lines 7 and 47, and the "relations" item contains the "column" of the foreign key we referenced.

This is something you might want to verify: "column" (lines 12 and 52) is the name of the attribute on your table. When we made a reference to the primary key usr_username from the xt.usrlite table, we named that column usrlitesip_usr_username. But the "inverse" is not the SQL name but rather the attribute name associated with the original SQL table in the original ORM. Did I lose you? I had a lot of trouble with this silly thing. In the original ORM that created the UserAccount business object, the primary key attribute is named "username", as can be seen here. That is what should be used for the "inverse" value: not the SQL column name (usr_username) but the object attribute name (username). I'm emphasizing this because I made that mistake, and if I can spare you the pain I will.

If we rebuild our extension everything should come along nicely, but you won’t see any changes just yet in the web app because we haven’t created the client code.

Client

Create the directory /path/to/xtuple-extensions/source/sip_account/client, which is where we'll keep all the client code. I want the fields I added to show up on the form to create a new User Account, so I need to extend the view for the User Account workspace. I'll start by creating a directory /path/to/xtuple-extensions/source/sip_account/client/views and in it creating a file named 'workspace.js' containing this code:

XT.extensions.sip_account.initWorkspace = function () {
  var extensions = [
    {kind: "onyx.GroupboxHeader", container: "mainGroup", content: "_sipAccount".loc()},
    {kind: "XV.InputWidget", container: "mainGroup", attr: "uri" },
    {kind: "XV.InputWidget", container: "mainGroup", attr: "displayName" },
    {kind: "XV.InputWidget", container: "mainGroup", type: "password", attr: "sipPassword" }
  ];
  XV.appendExtension("XV.UserAccountWorkspace", extensions);
};

So I'm initializing my workspace and creating an array of items to add (append) to the view XV.UserAccountWorkspace. The first 'item' is an onyx.GroupboxHeader, which is a pretty divider for my new items, the kind you find in the web app at Setup > User Accounts, like 'Overview'. I have no idea what other options there are for container other than "mainGroup", so let's stick to that. I'll explain content: "_sipAccount".loc() in a bit. Next I created three input fields of the XV.InputWidget kind. This also confused me a bit, as there are different kinds of input to be used, like dropdowns and checkboxes. The only advice I can give is to snoop around the web app, find an input you like and look up the corresponding workspace.js file to see what was used.

What we just did is (should be) enough for the new fields to show up on the User Account form. But before we see things change, we have to package the client. Create the file /path/to/xtuple-extensions/source/sip_account/client/views/package.js. This file is needed to 'package' groups of files and indicates the order the files should be loaded in (for more on that, see this). For now, all the file will contain is:

enyo.depends(
  "workspace.js"
);

You also need to package the 'views' directory containing the workspace, so create the file /path/to/xtuple-extensions/source/sip_account/client/package.js and in it show that the directory 'views' and its contents must be part of the higher level package:

enyo.depends(
  "views"
);

I like to think of it as a box full of smaller boxes.

This will sound terrible, but apparently you also need to create the file /path/to/xtuple-extensions/source/sip_account/client/core.js containing this line:

XT.extensions.sip_account = {};

I don’t know why. As soon as I find out I’ll be sure to inform you.

As we've added a file to the client directory, be sure to update /path/to/xtuple-extensions/source/sip_account/client/package.js so it includes the new file:

enyo.depends(
  "core.js",
  "views"
);

Translations

Remember "_sipAccount".loc() in our workspace.js file? xTuple has great internationalization support and it's easy to use. Just create the directory and file /path/to/xtuple-extensions/source/sip_account/client/en/strings.js and in it put key-value pairs for labels and their translation, like this:

(function () {
  "use strict";

  var lang = XT.stringsFor("en_US", {
    "_sipAccount": "Sip Account",
    "_uri": "Sip URI",
    "_displayName": "Display Name",
    "_sipPassword": "Password"
  });

  if (typeof exports !== 'undefined') {
    exports.language = lang;
  }
}());

So far I included all the labels I used in my Sip Account form. If you write the wrong label (key) or forget to include a corresponding key-value pair in strings.js, xTuple will simply name your label "_labelName", underscore and all.

Now build your extension and start up the server:

$ cd /path/to/xtuple
$ ./scripts/build_app.js -d dev -e ../xtuple-extensions/source/sip_account
$ node node-datasource/main.js

If the server is already running, just stop it and restart it to reflect your changes.

Now if you go to Setup > User Accounts and click the “+” button, you should see a nice little addition to the form with a ‘Sip Account’ divider and three new fields. Nice, eh?

Categories: Elsewhere

Steve Kemp: The selfish programmer

Fri, 25/07/2014 - 15:16

Once upon a time I wrote a piece of software for scheduling the classes available to a college.

There was a bug in the scheduler: Students who happened to be named 'Steve Kemp' had a significantly higher chance (>=80% IIRC) of being placed in lessons where the class makeup was more than 50% female.

This bug was never fixed. Which was nice, because I spent several hours both implementing and disguising this feature.

I was a bad coder when I was a teenager.

These days I'm still a bad coder, but in different ways.

Categories: Elsewhere

Wouter Verhelst: Multiarchified eID libraries for Debian

Fri, 25/07/2014 - 13:44

A few weeks back, I learned that some government web interfaces require users to download a PDF file, sign it with their eID, and upload the signed PDF document. On Linux, the only way to do this appeared to be to download Adobe Reader for Linux, install the eID middleware, make sure that the former would use the latter, and from there things would just work.

Except for the bit where Adobe Reader didn't exist in a 64-bit version. Since the eid middleware packages were not multiarch ready, that meant you couldn't use Adobe Reader to create signatures with your eID card on a 64-bit Linux distribution. Which is, pretty much, "just about everything out there".

For at least the Debian packages, that has been fixed now (I still need to handle the RPM side of things, but that's for later). When I wanted to test just now if everything would work right, however...

... I noticed that Adobe no longer provides any downloads of the Linux version of Adobe Reader. They're just gone. There is an ftp.adobe.com site containing some old versions, but nothing more recent than a 5.x version.

Well, I suppose that settles that, then.

Regardless, the middleware package has been split up and multiarchified, and is ready for early adopters. If you want to try it out, you should:

  • run dpkg --add-architecture i386 if you haven't yet enabled multiarch
  • Install the eid-archive package, as usual
  • Edit /etc/apt/sources.list.d/eid.list, and enable the continuous repository (that is, remove the # at the beginning of the line)
  • run dpkg-reconfigure eid-archive, so that the key for the continuous repository is enabled
  • run apt-get update
  • run apt-get -t continuous install eid-mw to upgrade your middleware to the version in continuous
  • run apt-get -t continuous install libbeidpkcs11-0:i386 to install the 32-bit middleware version.
  • run your 32-bit application and sign things.

You should, however, note that the continuous repository is named so because it contains the results of our continuous integration system; that is, every time a commit is done to the middleware, packages in this repository are updated automatically. This means the software in the continuous repository might break. Or it might eat your firstborn. Or it might cause nasal daemons. As such, FedICT does not support these versions of the middleware. Don't try the above if you're not prepared to deal with that...

Categories: Elsewhere

Tim Retout: London.pm's July 2014 tech meeting

Fri, 25/07/2014 - 09:36

Last night, I went to the London.pm tech meeting, along with a couple of colleagues from CV-Library. The talks, combined with the unusually hot weather we're having in the UK at the moment and my holiday all last week, made it feel like I was at a software conference. :)

The highlight for me was Thomas Klausner's talk about OX (and AngularJS). We bought him a drink at the pub later to pump him for information about using Bread::Board, with some success. It was worth the long, late commute back to Southampton.

All very enjoyable, and I hope they have more technical meetings soon. I'm planning to attend the London Perl Workshop later in the year.

Categories: Elsewhere

Gunnar Wolf: Nice read: «The Fasinatng … Frustrating … Fascinating History of Autocorrect»

Fri, 25/07/2014 - 05:18

A long time ago, I did some (quite minor!) work on natural language parsing. Most of what I got was the very basic rudiments on what needs to be done to begin with. But I like reading some texts on the subject every now and then.

I am also a member of the ACM — Association for Computing Machinery. Most of you will be familiar with it, it's one of the main scholarly associations for the field of computing. One of the basic perks of being an ACM member is the subscription to a very nice magazine, Communications of the ACM. And, of course, although I enjoy the physical magazine, I like reading some columns and articles as they appear along the month using the RSS feeds. They also often contain pointers to interesting reads on other media — As happened today. I found quite a nice article, I think, worth sharing with whoever thinks I have interesting things to say.

They published a very short blurb titled The Fasinatng … Frustrating … Fascinating History of Autocorrect. I was somewhat skeptical when I saw it merely links to an identically named article published in Wired, but I gave it a shot anyway...

The article follows a style that's often abused and not very amusing, but that I think was quite well done here: the commented interview. Rather than just drily transcribing an interview, the writer tells us a story about that interview. And this is the story of Gideon Lewis-Kraus interviewing Dean Hachamovitch, the creator of the much hated (but very much needed) autocorrect feature that appeared originally in Microsoft Word.

The story of Hachamovitch's work (and its derivations, to the much maligned phone input predictors) over the last twenty-something years is very light to read, very easy to enjoy. I hope you find it as interesting as I did.

Categories: Elsewhere

Craig Small: PHP uniqid() not always a unique ID

Thu, 24/07/2014 - 14:17

For quite some time modern versions of JFFNMS have had a problem. In large installations hosts would randomly appear as down with the reachability interface going red. All other interface types worked, just this one.

Reachability interfaces are odd, because they call fping or fping6 to do the work. The reason is that running a ping program requires root access to a raw socket, which is far too difficult and scary to arrange in PHP, which is what JFFNMS is written in.

To capture the output of fping, the program is executed and the output captured to a temporary file. For my tiny setup this worked fine; for a lot of small setups this was also fine. For larger setups it was not fine at all: randomly failing interfaces and, most bizarrely of all, even a file apparently disappearing. The program checked that the file existed and then ran stat in a loop to see if data was there. The file-exists check worked, but the stat said the file was not found.

At first I thought it was some odd load related problem, perhaps the filesystem not being happy and having a file there but not really there. That was, until someone said “Are these numbers supposed to be the same?”

The numbers he was referring to were the filename IDs of the temporary files. They were most DEFINITELY not supposed to be the same. They were supposed to be unique. Why were they always unique for me and not for large setups?

The problem is with the uniqid() function. It is basically a hex representation of the time. Large setups often have large numbers of child processes for polling devices. As the number of poller children increases, the chance that two child processes start the reachability poll in the same microsecond and get the same uniqid increases. It's why the problem happened, but not all the time.

The stat error was another symptom of this bug, what would happen was:

  • Child 1 starts the poll, temp filename abc123
  • Child 2 starts the poll in the same microsecond, temp filename is also abc123
  • Child 1 and 2 wait poller starts, sees that the temp file exists and goes into a loop of stat and wait until there is a result
  • Child 1 finishes, grabs the details, deletes the temporary file
  • Child 2 loops, tries to run stat but finds no file

Who finishes first is entirely dependent on how quickly the fping returns, and that is dependent on how quickly the remote host responds to pings, so it's kind of random.

A minor patch switched to tempnam() instead of uniqid(), adding the interface ID into the mix for good measure (no two children will poll the same interface; the parent's scheduler makes sure of that). The initial response is that it is looking good.
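As a rough sketch of the difference from the command line (the "jff" prefix is just illustrative):

# uniqid() is essentially the current microsecond time rendered in hex, so
# two poller children that start in the same microsecond collide:
$ php -r 'echo uniqid(), "\n";'

# tempnam() asks the OS to atomically create a file with a guaranteed-unique
# name and returns its path, so no two callers can be handed the same file:
$ php -r 'echo tempnam(sys_get_temp_dir(), "jff"), "\n";'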

 

Categories: Elsewhere

Martin Pitt: vim config for Markdown+LaTeX pandoc editing

Thu, 24/07/2014 - 11:38

I have used LaTeX and latex-beamer for pretty much my entire life of document and presentation production, i. e. since about my 9th school grade. I’ve always found the LaTeX syntax a bit clumsy, but with good enough editor shortcuts to insert e. g. \begin{itemize} \item...\end{itemize} with just two keystrokes, it has been good enough for me.

A few months ago a friend of mine pointed out pandoc to me, which is just simply awesome. It can convert between a million document formats, but most importantly take Markdown and spit out LaTeX, or directly PDF (through an intermediate step of building a LaTeX document and calling pdftex). It also has a template for beamer. Documents now look soo much more readable and are easier to write! And you can always directly write LaTeX commands without any fuss, so that you can use markdown for the structure/headings/enumerations/etc., and LaTeX for formulas, XYTeX and the other goodies. That's how it always should have been! ☺
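For instance, the two conversions bound to the shortcuts in the config below (file names are illustrative):

$ pandoc -t latex report.md -o report.pdf    # Markdown to PDF via an intermediate LaTeX run
$ pandoc -t beamer slides.md -o slides.pdf   # Markdown to a beamer presentation PDF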

So last night I finally sat down and created a vim config for it:

"-- pandoc Markdown+LaTeX -------------------------------------------
function s:MDSettings()
    inoremap <buffer> <Leader>n \note[item]{}<Esc>i
    noremap <buffer> <Leader>b :! pandoc -t beamer % -o %<.pdf<CR><CR>
    noremap <buffer> <Leader>l :! pandoc -t latex % -o %<.pdf<CR>
    noremap <buffer> <Leader>v :! evince %<.pdf 2>&1 >/dev/null &<CR><CR>

    " adjust syntax highlighting for LaTeX parts
    "   inline formulas:
    syntax region Statement oneline matchgroup=Delimiter start="\$" end="\$"
    "   environments:
    syntax region Statement matchgroup=Delimiter start="\\begin{.*}" end="\\end{.*}" contains=Statement
    "   commands:
    syntax region Statement matchgroup=Delimiter start="{" end="}" contains=Statement
endfunction

autocmd BufRead,BufNewFile *.md setfiletype markdown
autocmd FileType markdown :call <SID>MDSettings()

That gives me “good enough” (with some quirks) highlighting without trying to interpret TeX stuff as Markdown, and shortcuts for calling pandoc and evince. Improvements appreciated!

Categories: Elsewhere

Matthew Palmer: First Step with Clojure: Terror

Thu, 24/07/2014 - 02:30
$ sudo apt-get install -y leiningen
[...]
$ lein new scratch
[...]
$ cd scratch
$ lein repl
Downloading: org/clojure/clojure/1.3.0/clojure-1.3.0.pom from repository central at http://repo1.maven.org/maven2
Transferring 5K from central
Downloading: org/sonatype/oss/oss-parent/5/oss-parent-5.pom from repository central at http://repo1.maven.org/maven2
Transferring 4K from central
Downloading: org/clojure/clojure/1.3.0/clojure-1.3.0.jar from repository central at http://repo1.maven.org/maven2
Transferring 3311K from central
[...]

Wait… what? lein downloads some random JARs from a website over HTTP1, with, as far as I can tell, no verification that what I’m asking for is what I’m getting (has nobody ever heard of Man-in-the-Middle attacks in Maven land?). It downloads a .sha1 file to (presumably) do integrity checking, but that’s no safety net – if I can serve you a dodgy .jar, I can serve you an equally-dodgy .sha1 file, too (also, SHA256 is where all the cool kids are at these days). Finally, jarsigner tells me that there’s no signature on the .jar itself, either.

It gets better, though. The repo1.maven.org site is served by the fastly.net2 pseudo-CDN3, which adds another set of points in the chain which can be subverted to hijack and spoof traffic. More routers, more DNS zones, and more servers.

I’ve seen Debian take a kicking more than once because packages aren’t individually signed, or because packages aren’t served over HTTPS. But at least Debian’s packages can be verified by chaining to a signature made by a well-known, widely-distributed key, signed by two Debian Developers with very well-connected keys.

This repository, on the other hand… oy gevalt. There are OpenPGP (GPG) signatures available for each package (tack .asc onto the end of the .jar URL), but no attempt was made to download the signatures for the .jar I downloaded. Even if the signature was downloaded and checked, there’s no way for me (or anyone) to trust the signature – the signature was made by a key that’s signed by one other key, which itself has no signatures. If I were an attacker, it wouldn’t be hard for me to replace that key chain with one of my own devising.
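To make that concrete, here is a sketch of checking one of the artifacts from the transcript above by hand (the URLs follow from the download log; whether the key that made the signature deserves any trust is exactly the problem described above):

$ wget http://repo1.maven.org/maven2/org/clojure/clojure/1.3.0/clojure-1.3.0.jar
$ wget http://repo1.maven.org/maven2/org/clojure/clojure/1.3.0/clojure-1.3.0.jar.asc
$ gpg --verify clojure-1.3.0.jar.asc clojure-1.3.0.jar
# gpg refuses until the signer's public key is imported -- and even then,
# "Good signature" only proves which key signed it, not that the key is
# trustworthy.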

Even ignoring everyone living behind a government- or company-run intercepting proxy, and everyone using public wifi, it’s pretty well common knowledge by now (thanks to Edward Snowden) that playing silly-buggers with Internet traffic isn’t hard to do, and there’s no shortage of evidence that it is, in fact, done on a routine basis by all manner of people. Serving up executable code to a large number of people, in that threat environment, with no way for them to have any reasonable assurance that code is trustworthy, is very disappointing.

Please, for the good of the Internet, improve your act, Maven. Putting HTTPS on your distribution would be a bare minimum. There are attacks on SSL, sure, but they’re a lot harder to pull off than sitting on public wifi hijacking TCP connections. Far better would be to start mandating signatures, requiring signature checks to pass, and having all signatures chain to a well-known, widely-trusted, and properly secured trust root. Signing all keys that are allowed to upload to maven.org with a “maven.org distribution root” key (itself kept in hardware and only used offline), and then verifying that all signatures chain to that key, wouldn’t be insanely difficult, and would greatly improve the security of the software supply chain. Sure, it wouldn’t be perfect, but don’t make the perfect the enemy of the good. Cost-effective improvements are possible here.

Yes, security is hard. But you don’t get to ignore it just because of that, when you’re creating an attractive nuisance for anyone who wants to own up a whole passel of machines by slipping some dodgy code into a widely-used package.

  1. To add insult to injury, it appears to ignore my http_proxy environment variable, and the repo1.maven.org server returns plain-text error responses with Content-Type: text/xml. But at this point, that’s just icing on the shit cake.

  2. At one point in the past, my then-employer (a hosting provider) blocked Fastly’s caching servers from their network because they took down a customer site with a massive number of requests to a single resource, and the incoming request traffic was indistinguishable from a botnet-sourced DDoS attack. The requests were coming from IP space registered to a number of different ISPs, with no distinguishing rDNS (184-106-82-243.static.cloud-ips.com doesn’t help me to distinguish between “I’m a professionally-run distributed proxy” and “I’m a pwned box here to hammer your site into the ground”).

  3. Pretty much all of the new breed of so-called CDNs aren’t actually pro-actively distributing content, they’re just proxies. That isn’t a bad thing, per se, but I rather dislike the far-too-common practice of installing varnish (and perhaps mod_pagespeed, if they’re providing “advanced” capabilities) on a couple of AWS instances, and hanging out your shingle as a CDN. I prefer a bit of truth in my advertising.

Categories: Elsewhere

Russ Allbery: WebAuth 4.6.1

Thu, 24/07/2014 - 00:59

This is a bug-fix release of the WebAuth site-wide web authentication system. As is typical, I accumulated a variety of minor bug fixes and improvements that I wanted to get into a release before starting larger work (in this case, adding JSON support for the user information service protocol).

The most severe bug fix is something that only folks at Stanford would notice: support for AuthType StanfordAuth was broken in the 4.6.0 release. This is for legacy compatibility with WebAuth 2.5. It has been fixed in this release.

In other, more minor bug fixes, build issues when remctl support is disabled have been fixed, expiring password warnings are shown in WebLogin after any POST-based authentication, the confirmation page is forced if authorization identity switching is available, the username field is verified before multifactor authentication to avoid subsequent warnings, newlines and tabs are allowed in the XML sent from the WebKDC for user messages, empty RT and ST parameters are correctly diagnosed, and there are some documentation improvements.

The main new feature in this release is support for using FAST armor during password authentication in mod_webkdc. A new WebKdcFastArmorCache directive can be set to point at a Kerberos ticket cache to use for FAST armor. If set, FAST is required, so the KDC must support it as well. This provides better wire security for the initial password authentication to protect against brute-force dictionary attacks against the password by a passive eavesdropper.
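A minimal sketch of what that might look like in the WebKDC's Apache configuration (the cache path is an illustrative assumption, and something like k5start would need to keep the ticket in it fresh):

# mod_webkdc: use this Kerberos ticket cache for FAST armor
WebKdcFastArmorCache /var/run/webauth/armor.cache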

This release also adds a couple of new factor types, mp (mobile push) and v (voice), that Stanford will use as part of its Duo Security integration.

Note that, for the FAST armor feature, there is also an SONAME bump in the shared library in this release. Normally, I wouldn't bump the SONAME in a minor release, but in this case the feature was fairly minor and most people will not notice the change, so it didn't feel like it warranted a major release. I'm still of two minds about that, but oh well, it's done and built now. (At least I noticed that the SONAME bump was required prior to the release.)

You can get the latest release from the official WebAuth distribution site or from my WebAuth distribution pages.

Categories: Elsewhere

Lior Kaplan: Testing PHPNG on Debian/Ubuntu

Wed, 23/07/2014 - 23:01

We (at Zend) want to help people get more involved in testing PHPNG (PHP next generation), so we’ve started providing binaries for it, although it’s still a branch on top of PHP’s master branch. See more details about PHPNG in Zeev Suraski’s blog post.

The binaries (64bit) are compatible with Debian testing/unstable and Ubuntu Trusty (14.04) and up. The mod_php is built for Apache 2.4 which all three flavors have.

The repository is at http://repos.zend.com/zend-server/early-access/phpng/

Installation instructions:

# wget http://repos.zend.com/zend.key -O- 2> /dev/null | apt-key add -
# echo "deb http://repos.zend.com/zend-server/early-access/phpng/ trusty zend" > /etc/apt/sources.list.d/phpng.list
# apt-get update
# apt-get install php5
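Once installed, it's worth a quick sanity check that you actually got a PHPNG build from the Zend repository rather than the stock Debian php5 (the exact version strings will of course differ):

# php5 -v
# dpkg -s php5 | grep ^Version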

For the task of providing these binaries, I had the pleasure of combining my experience as a member of the Debian PHP team and a Debian Developer with stuff more internal to the PHP development process. Using the already existing Debian packaging enabled me to test more build scenarios easily (and report problems accordingly). Hopefully this can also be translated back into providing more experimental packages for Debian and making sure Debian packages are ready for the PHP release that follows PHP 5.6.


Filed under: Debian GNU/Linux, PHP
Categories: Elsewhere

Petter Reinholdtsen: 98.6 percent done with the Norwegian draft translation of Free Culture

Wed, 23/07/2014 - 22:40

This summer I finally had time to continue working on the Norwegian docbook version of the 2004 book Free Culture by Lawrence Lessig, to get a Norwegian text explaining the problems with today's copyright law. Yesterday, I finally finished translating the book text. There are still some footnotes and endnotes left to translate, the colophon page needs to be rewritten, and a few words and phrases still need to be translated, but the Norwegian text is ready for the first proofreading. :) More spell checking is needed, and several illustrations need to be cleaned up. The work stalled because I had to give priority to other projects over the last year, and the progress graph of the translation shows this very well:

If you want to read the result, check out the github project pages and the PDF, EPUB and HTML versions available in the archive directory.

Please report typos, bugs and improvements to the github project if you find any.

Categories: Elsewhere

Michael Prokop: Book Review: The Docker Book

Wed, 23/07/2014 - 22:16

Docker is an open-source project that automates the deployment of applications inside software containers. I’m responsible for a Docker setup with Jenkins integration and a private docker-registry at a customer site, and I pre-ordered James Turnbull’s “The Docker Book” a few months ago.

Recently James – he’s working for Docker Inc – released the first version of the book and, thanks to being on holidays, I already had a few hours to read it AND blog about it. (Note: I’ve read the Kindle version 1.0.0, and all the issues I found and reported to James have already been fixed in the current version, yay.)

The book is very well written and covers all the basics to get familiar with Docker and in my opinion it does a better job at that than the official user guide because of the way the book is structured. The book is also a more approachable way for learning some best practices and commonly used command lines than going through the official reference (but reading the reference after reading the book is still worth it).

I like James’ approach with “ENV REFRESHED_AT $TIMESTAMP” for better controlling the cache behaviour and will definitely consider using it in my own setups as well. What I wasn’t aware of is that you can directly invoke “docker build $git_repos_url”, and I further noted a few command line switches I should get more comfortable with. I also plan to check out the Automated Builds on Docker Hub.
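For reference, a minimal sketch of that invocation (the repository URL is hypothetical; the repository needs a Dockerfile at its top level):

$ # build an image straight from a git repository, no local checkout needed
$ docker build -t demo-image https://github.com/example/docker-demo.git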

There are some references to further online resources, which is especially relevant for the more advanced use cases, so I’d recommend having network access available while reading the book.

What I’m missing in the book are best practices for running a private docker-registry in a production environment (high availability, scaling options, …). The provided Jenkins use cases are also very basic and nothing I personally would use. I’d also love to see how other folks are using the Docker plugin, the Docker build step plugin or the Docker build publish plugin in production (the plugins aren’t covered in the book at all). But I’m aware that these are fast-moving parts and specialised use cases; upcoming versions of the book are already supposed to cover orchestration with libswarm, developing Docker plugins and more advanced topics, so I’m looking forward to further updates of the book (which you get for free as an existing customer, another plus).

Conclusion: I enjoyed reading the Docker book and can recommend it, especially if you’re new to Docker or want to get further ideas and inspiration about what folks from Docker Inc consider best practices.

Categories: Elsewhere

Tanguy Ortolo: GNU/Linux graphic sessions: suspending your computer

Wed, 23/07/2014 - 14:45

Major desktop environments such as Xfce or KDE have a built-in computer suspend feature, but when you use a lighter alternative, things are a bit more complicated because, basically, only root can suspend the computer. There used to be a standard solution to that, using a D-Bus call to a running daemon, upowerd. With recent updates, that solution first stopped working for obscure reasons, but it could still be configured back into a usable state. With newer updates, it stopped working again, and this time it seems to be gone for good:

$ dbus-send --system --print-reply \
    --dest='org.freedesktop.UPower' \
    /org/freedesktop/UPower org.freedesktop.UPower.Suspend
Error org.freedesktop.DBus.Error.UnknownMethod: Method "Suspend" with signature "" on interface "org.freedesktop.UPower" doesn't exist

The reason seems to be that upowerd is not running, because it no longer provides an init script, only a systemd service. So, if you do not use systemd, you are left with one simple and stable solution: defining a sudo rule to start the suspend or hibernation process as root. In /etc/sudoers.d/power:

%powerdev ALL=NOPASSWD: /usr/sbin/pm-suspend, \
    /usr/sbin/pm-suspend-hybrid, \
    /usr/sbin/pm-hibernate

That allows members of the powerdev group to run sudo pm-suspend, sudo pm-suspend-hybrid and sudo pm-hibernate, which can be bound to keys with a key binding manager such as your window manager's own or xbindkeys (see the sketch below). Simple, efficient, and contrary to all that ever-changing GizmoKit and whatsitd stuff, it has worked and will keep working for years.
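For completeness, a sketch of the remaining one-off setup, assuming the Debian adduser tools and a keyboard with an XF86Sleep key (both assumptions; adapt the names to your system). As root, create the group and add your user to it (a re-login is needed for the new group to take effect):

# addgroup powerdev
# adduser youruser powerdev

Then a minimal ~/.xbindkeysrc binding:

# bind the (assumed) XF86Sleep key to suspend via the sudo rule above
"sudo pm-suspend"
    XF86Sleep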

Categories: Elsewhere

Pages