Elsewhere

Edison Wong: Announcing TWBS jQuery: Simple jQuery Update for Drupal 7

Planet Drupal - Thu, 02/01/2014 - 05:46

During TWBS development, upgrading the Drupal 7 core jQuery libraries to their latest versions for Bootstrap is a must. The existing jQuery Update module is not my cup of tea because it gave me too much trouble in previous site-building experience: it is too complicated, it bundles everything within its own archive (whereas I prefer to manage third-party libraries with drush make and the Libraries API), and it is simply too much for my use case. So why not work out a simplified version?

After some research and development during the Christmas holiday, I would like to introduce my helper module named "TWBS jQuery". The goal of TWBS jQuery is to provide handy support for jQuery upgrades and to act as the helper module for ongoing Drupal-Bootstrap-Remix development.

All replacements are handled automatically. No additional configuration is required.

Key Features

Getting Started

Download and install with drush manually:

drush -y dl --dev twbs_jquery
drush -y make --no-core sites/all/modules/twbs_jquery/twbs_jquery.make
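
Once drush make has fetched the third-party libraries, enabling the module is presumably the usual final drush step (not spelled out in the announcement):

drush -y en twbs_jquery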

Package into your own drush .make file (e.g. drustack_core.make):

api = 2
core = 7.x
projects[twbs_jquery][download][branch] = 7.x-3.x
projects[twbs_jquery][download][type] = git
projects[twbs_jquery][download][url] = http://git.drupal.org/project/twbs_jquery.git
projects[twbs_jquery][subdir] = contrib

Live Demo

TWBS jQuery is now integrated into DruStack distribution, so you can try it in a live sandbox with simplytest.me.

Why Another jQuery Module?

For general and generic jQuery update functionality, you should always first consider the jQuery Update module, which has been around since 2007-04-26.

On the other hand, you should consider using this module for the following reasons:

  • Purely designed to assist TWBS, which means you get the best compatibility when using the two together
  • Fetches libraries directly from their original repositories and handles initialization with the Libraries API; jQuery Update bundles all libraries into its own code repository and initializes them manually
  • Only supports the latest official version of each library, which means no additional configuration is required; jQuery Update supports multiple versions of jQuery
  • A much simpler implementation which handles all upgrades and replacements automatically; jQuery Update provides more customization options
Author

Please feel free to test it out and comment with your ideas. Let's enjoy a simplified jQuery update experience ;-)

Tags Drupal Development jQuery
Categories: Elsewhere

Keith Packard: Present-pixmap-lifetimes-part-deux

Planet Debian - Thu, 02/01/2014 - 05:20
Pixmap ID Lifetimes under Present Redirection (Part Deux)

I recently wrote about pixmap content and ID lifetimes. I think the pixmap content lifetime stuff was useful, but the pixmap ID stuff was not quite right. I’ve just copied the section from the previous post and will update it.

PresentRedirect pixmap ID lifetimes (reprise)

A compositing manager will want to know the lifetime of the pixmaps delivered in PresentRedirectNotify events so that it can clean up whatever local data it has associated with them. For instance, GL compositing managers will construct a texture for each pixmap, which needs to be destroyed when the pixmap disappears.

Present encourages pixmap destruction

The PresentPixmap request says:

PresentRegion holds a reference to ‘pixmap’ until the presentation occurs, so ‘pixmap’ may be immediately freed after the request executes, even if that is before the presentation occurs.

Breaking this when doing Present redirection seems like a bad idea; it’s a very common pattern for existing 2D applications.

New pixmap IDs for everyone

Because pixmap IDs for present contents are owned by the source application (and may be re-used immediately when freed), Present can’t use that ID in the PresentRedirectNotify event. Instead, it must allocate a new ID and send that instead. The server has its own XID space to use, so it can allocate one of those and bind it to the same underlying pixmap; the pixmap will not actually be destroyed until both the application XID and the server XID are freed.

The compositing manager will receive this new XID, and is responsible for freeing it when it doesn’t need it any longer. Of course, if the compositing manager exits, all of the XIDs passed to it will automatically be freed.

I considered allocating client-specific XIDs for this purpose; the X server already has a mechanism for allocating internal IDs for various things that are to be destroyed at client shut-down time. Those XIDs have bit 30 set, and are technically invalid XIDs as X specifies that the top three bits of every XID will be zero. However, the cost of using a server ID instead (which is a valid XID) is small, and it’s always nice to not intentionally break the X protocol (even as we continue to take advantage of “accidental” breakages).

(Reserving the top three bits of XIDs and insisting that they always be zero was a nod to the common practice of weakly typed languages like Lisp. In these languages, 32-bit object references were typed by using a few tag bits (2-3) within the value. Most of these were just pointers to data, but small integers that fit in the remaining bits (29-30) could be constructed without allocating any memory. By making XIDs only 29 bits, X ensured that these languages could represent all XIDs as small integers.)

Pixmap Destroy Notify

The compositing manager needs to know when the application is done with the pixmap so that it can clean up when it is also done: destroying the extra pixmap ID it was given and freeing any other local resources. When the application sends a FreePixmap request, that XID will become invalid, but of course the pixmap itself won’t be freed until the compositing manager sends a FreePixmap request with the extra XID it was given.

Because the pixmap ID known by the compositing manager is going to be different from the original application XID, we need an event delivered to the compositing manager with the new XID, which makes this event rather Present-specific. We don’t need to select for this event; the compositing manager must handle it correctly, so we’ll just send it whenever the compositing manager has received that custom XID.

PresentPixmapDestroyNotify

┌───
    PresentPixmapDestroyNotify
      type: CARD8                  XGE event type (35)
      extension: CARD8             Present extension opcode
      sequence-number: CARD16
      length: CARD32               0
      evtype: CARD16               Present_PixmapDestroyNotify
      eventID: XFIXESEVENTID
      event-window: WINDOW
      pixmap: PIXMAP
└───

This event is delivered to the clients selecting for SubredirectNotify for pixmaps which were delivered in a PresentRedirectNotify event and for which the originating application Pixmap has been destroyed. Note that the pixmap may still be referenced by other objects within the X server: by a pending PresentPixmap request, as a window background, as a GC tile or stipple, or as a Picture drawable (among others). But this event serves to notify the selecting client that the application is no longer able to access the underlying pixmap with its original Pixmap ID.

Categories: Elsewhere

Francois Marier: Running your own XMPP server on Debian or Ubuntu

Planet Debian - Thu, 02/01/2014 - 04:45

In order to get closer to my goal of reducing my dependence on centralized services, I decided to setup my own XMPP / Jabber server on a Linode VPS running Debian wheezy. I chose ejabberd since it was recommended by the RTC Quick Start website and here's how I put everything together.

DNS and SSL

My personal domain is fmarier.org and so I created the following DNS records:

jabber-gw          CNAME fmarier.org
_xmpp-client._tcp  SRV   5 0 5222 fmarier.org
_xmpp-server._tcp  SRV   5 0 5269 fmarier.org
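
Once the records have propagated, a quick sanity check with dig (not part of the original setup, but handy) should show them:

dig +short SRV _xmpp-client._tcp.fmarier.org
dig +short SRV _xmpp-server._tcp.fmarier.org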

Then I went to get a free XMPP SSL certificate for jabber-gw.fmarier.org from StartSSL. This is how I generated the CSR (Certificate Signing Request) on a high-entropy machine:

openssl req -new -newkey rsa:2048 -nodes -out ssl.csr -keyout ssl.key -subj "/C=NZ/CN=jabber-gw.fmarier.org"
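
Before submitting the CSR, it doesn't hurt to double-check what it contains (an optional verification step):

openssl req -in ssl.csr -noout -subject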

I downloaded the signed certificate as well as the StartSSL intermediate certificate and combined them this way:

cat ssl.crt ssl.key sub.class1.server.ca.pem > ejabberd.pem

ejabberd installation

Installing ejabberd on Debian is pretty simple and I mostly followed the steps on the Ubuntu wiki with an additional customization to solve the Pidgin "Not authorized" connection problems.

  1. Install the package, using "admin" as the username for the administrative user:

    apt-get install ejabberd
  2. Set the following in /etc/ejabberd/ejabberd.cfg (don't forget the trailing dots!):

    {acl, admin, {user, "admin", "fmarier.org"}}.
    {hosts, ["fmarier.org"]}.
    {fqdn, "jabber-gw.fmarier.org"}.
  3. Copy the SSL certificate into the /etc/ejabberd/ directory and set the permissions correctly:

    chown root:ejabberd /etc/ejabberd/ejabberd.pem
    chmod 640 /etc/ejabberd/ejabberd.pem
  4. Restart the ejabberd daemon:

    /etc/init.d/ejabberd restart
  5. Create a new user account for yourself:

    ejabberdctl register me fmarier.org P@ssw0rd1!
  6. Open up the following ports on the server's firewall:

    iptables -A INPUT -p tcp --dport 5222 -j ACCEPT
    iptables -A INPUT -p tcp --dport 5269 -j ACCEPT
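
Note that rules added this way do not survive a reboot. One way to persist them (assuming the iptables-persistent package is installed) is to save them out:

    iptables-save > /etc/iptables/rules.v4
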
Client setup

On the client side, if you use Pidgin, create a new account with the following settings in the "Basic" tab:

  • Protocol: XMPP
  • Username: me
  • Domain: fmarier.org
  • Password: P@ssw0rd1!

and the following setting in the "Advanced" tab:

  • Connection security: Require encryption

From this, I was able to connect to the server without clicking through any certificate warnings.

If you want to make sure that XMPP federation works, add your GMail address as a buddy to the account and send yourself a test message.

Categories: Elsewhere

Bryan Braun: There's more than one way to save a node

Planet Drupal - Thu, 02/01/2014 - 04:22

Every day, millions of nodes are saved. It happens every time content is created, migrated, or updated. It's probably the most common content management task in Drupal.

But there are lots of ways you can change the node-save experience for your users, and there are many contributed modules that offer alternative approaches to saving nodes. Here are a few that I like.

Add another

The Add Another module gives users the option to save a node and immediately start creating another one. You can choose to add the option to the admin form itself, or as part of the save confirmation message. It's great for content types like "Image" or "Video", where your users find themselves creating a series of nodes in succession.

Hide Submit

Occasionally you'll see an issue where an end user clicks submit on the node-edit form and, unaware that the request is being processed, clicks submit several more times to see if it's broken. Sometimes this can lead to multiple form submissions, resulting in bad things like duplicate content. The Hide Submit module does one simple thing: prevent forms from being submitted multiple times. It does this by disabling the submit button once it's been clicked, with settings to fade out the button, append text, or hide it altogether. This prevents errors, but it also signals to the user that the submission is in progress, helping to alleviate a bit of the frustration.

Publish Button

"Does the word "Save" mean that I'm saving a draft or does it mean that I'm publishing the content live?" While it may be obvious for those familiar with Drupal, the intent of the button isn't always clear for new users. The Publish Button module aims to make it more explicit by splitting up the "Save" button into two buttons: "Save" and "Publish". If a node is published, the publish button is replaced with an "Unpublish" button.

Node Buttons Edit

What if you have your own idea of what the button text should be? You could use the String Overrides module for a universal approach to text customization, but the Node Buttons Edit module gives you a straightforward admin page for customizing the button text specifically. No need to incur the additional overhead if you don't have to.

More Buttons

The More Buttons module gives you the option of turning on more buttons (shocking, I know) to further customize your content-saving experience. For example, you may want a "Save and Continue" button to save the current input while continuing to make changes. Or maybe you'd like a cancel button, to close the form altogether. If so, this module makes these (and other) options available to you.

So next time you see users tripping over the node-saving workflow, remember that you, as a site builder, have a handful of options at your disposal to make things a bit clearer.

Categories: Elsewhere

netsperience 2.x: Building a Video Playlist for JW Player 6 with Drupal 7 Views

Planet Drupal - Thu, 02/01/2014 - 02:31

I took over a Drupal 7 project building a web application for college students to upload original videos about their school, and for schools to manage, group, and share the videos.

It's a startup privately funded by the principal, and we are literally working on a shoestring. My previous experience with media in Drupal led the principal to contact me via LinkedIn.

When it came time to build a video playlist in Drupal Views for JW Player 6 (version 6.7.4071; the company was formerly known as Longtail Video), I found very little useful documentation. In fact, someone suggested that those who know how are not interested in sharing their knowledge -- but not me.

There are a couple of videos on YouTube by Bryan Ollendyke for Drupal 6. But a lot has changed in Drupal since then.

The Goal:

Back to the playlist: site admins can mark a video as featured by ticking a checkbox on the custom video content type. Now I want to display those featured videos as a playlist.

read more

Categories: Elsewhere

Russ Allbery: 2013 Book Reading in Review

Planet Debian - Thu, 02/01/2014 - 00:24

What a strange year.

2013 was marked by a whole sequence of entirely unexpected events, including multiple major work upheavals. For large chunks of the year, I had very little time or emotional energy for personal reading goals, and particularly for writing reviews. I declared personal amnesty on most of my intentions halfway through the year, and all the totals will reflect that. On the plus side (although not for reading and reviews), it was a great year for video games.

Next year, there will be no specific goals. Between continuing work fallout, a very busy project schedule, my intent to keep playing a lot of video games, and various other personal goals I want to take on, I'm going to take the pressure off of reading. Things will be read and reviews will be written (and I'm going to make more of an effort to write reviews shortly after reading books), but I'm not going to worry about how many.

The statistics below are confined to the books I reviewed in 2013. I read six more books that I've not yet reviewed, due to the chaos at the end of the year. Those will be counted in 2014.

There were no 10 out of 10 books this year, partly due to the much lower reading totals and partly due to my tendency this year to turn to safe comfort reading, which is reliably good but unlikely to be exceptional. There were, however, several near-misses that were worth calling out.

My favorite book of the year was Neal Stephenson's Anathem, which narrowly missed a 10 for me due to some fundamental problems with the plot premise. But this is still an excellent book: the best novel about the practice of science and philosophy that I've ever read. Also deserving mention are K.E. Lane's And Playing the Role of Herself, a lovely and intelligent lesbian romance that's likely to appeal even to people who would not normally try that genre, and Guy Gavriel Kay's River of Stars. The latter isn't quite at the level of Kay's earlier Under Heaven, but it's still an excellent work of alternate historical fiction in a memorable setting.

A special honorable mention goes to Lisa O'Donnell's The Death of Bees. It requires a lot of warnings for very dark subject matter and a rather abrupt ending, but it's been a long time since I've cared that much about the characters of a book.

My favorite non-fiction book of the year was Gary J. Hudson's They Had to Go Out, a meticulously researched account of a tragic Coast Guard mission. The writing is choppy, the editing could have been better, and it's clear that the author is not a professional writer, but it's the sort of detailed non-fiction account that can only be written by someone who's been there and lived through similar experiences. Also worth mentioning is Mark Jason Dominus's Higher Order Perl, which was the best technical book I read all year and which I found quite inspiring for my own programming.

The full analysis includes some additional personal reading statistics, probably only of interest to me.

Categories: Elsewhere

Acquia: 2013 Greatest Hits – Meet Angie Byron: The Return of Webchick

Planet Drupal - Wed, 01/01/2014 - 21:50

Angie and I were at Acquia headquarters in Massachusetts at the same time in the spring of 2013. This gave us the chance to sit down and chat in front of the camera about all things Drupal. Highlights from our conversation became two podcasts with accompanying video. In this part of the conversation, Angie talks about how she got into Drupal and more.

Categories: Elsewhere

Tim Retout: 2014

Planet Debian - Wed, 01/01/2014 - 19:02

So, happy new year. :)

I watched many 30c3 talks via the streams over Christmas - they were awesome. I especially enjoyed finding out (in the Tor talk) that the Internet Watch Foundation need to use Tor when checking out particularly dodgy links online, else people just serve them up pictures of kittens.

Today's fail: deciding to set up OpenVPN, then realising the OpenVZ VPS I was planning to use would not support /dev/net/tun.

I'm back at work tomorrow, preparing for the January surge of people looking for jobs. Tonight, the first Southampton Perl Mongers meeting of the year.

Categories: Elsewhere

Debian Sysadmin Team: Martin Zobel-Helas: Dropping security.geo.debian.org zone

Planet Debian - Wed, 01/01/2014 - 16:48

While setting up GeoDNS for parts of the debian.org zone, we set up a new subzone security.geo.debian.org. This was mainly due to the fact we didn't want to mess up the existing zone while experimenting with GeoDNS.

Now that our GeoDNS setup has been working for more than half a year without any problems, we will drop this zone.

We will do that in two phases.

Phase 1

Beginning on July 1st, we will redirect all requests to security.geo.debian.org to a static webpage indicating that this subzone is deprecated and should not be used any more. If you still have security.geo.debian.org in your apt sources.list, updates will fail.

Phase 2

On August 1st, we will stop serving the subzone security.geo.debian.org from our DNS servers.

Conclusion

In case you use a security.geo.debian.org entry in your /etc/apt/sources.list, now is the best time to change that entry to security.debian.org. Both zones currently serve the same content.
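
For reference, a typical wheezy entry pointing at the canonical zone would look something like this (adjust the suite and components to match your setup):

deb http://security.debian.org/ wheezy/updates main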

Categories: Elsewhere

Debian Sysadmin Team: Peter Palfrader: Securing the Debian zones

Planet Debian - Wed, 01/01/2014 - 16:48

We are in the process of deploying DNSSEC, the DNS Security Extensions, on the Debian zones. This means properly configured resolvers will be able to verify the authenticity of information they receive from the domain name system.

The plan is to introduce DNSSEC in several steps so that we can react to issues that arise without breaking everything at once.

We will start with serving signed debian.net and debian.com zones. Assuming nobody complains loudly enough, the various reverse zones and finally the debian.org zone will follow. Once all our zones are signed, we will publish our trust anchors in ISC's DLV Registry, again in stages.

The various child zones that are handled differently from our normal DNS infrastructure (mirror.debian.net, alioth, bugs, ftp, packages, security, volatile, www) will follow at a later date.

We are using bind 9.6 for NSEC3 support and our fork of RIPE's DNSSEC Key Management Tools for managing our keys because we believe that it integrates nicely with our existing DNS helper scripts, at least until something better becomes available.

We will use NSEC3RSASHA1 with key sizes of 1536 bits for the KSK and 1152 bits for the ZSK. Signature validity period will most likely be four weeks, with a one week signature publication period (cf. RFC4641: DNSSEC Operational Practices).
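
Once a zone is signed, you will be able to inspect the signatures yourself; a quick manual check with a DNSSEC-aware dig (illustrative, using the first zone scheduled for signing) would be:

dig +dnssec SOA debian.net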

Zone key rollovers will happen regularly and will not be announced in any specific way. Key signing key rollovers will probably be announced on the debian-infrastructure-announce list until such time as our zones are reachable from a signed root. KSK rollovers for our own child zones (www.d.o et al.), once signed, will not be announced because we can just put proper DS records in the respective parent zone.

Until we announce the first set of trust anchors on the mailinglist the keysets present in DNS should be considered experimental. They can be changed at any time, without observing standard rollover practices.

Please direct questions or comments to either the debian-admin or, if you want a more public forum, the debian-project list at lists.debian.org.

-- Peter Palfrader

Categories: Elsewhere

Debian Sysadmin Team: Martin Zobel-Helas: RFH: ferm integration into dsa-puppet.git

Planet Debian - Wed, 01/01/2014 - 16:48

The Debian Project currently runs about 100 machines all over the world with different services. Those are mainly managed by the Debian System Administration team. For central configuration management we use Puppet. The Puppet config we use is publicly available here.

Our next goal is to have a more or less central configuration of our iptables rules on all those machines. Some of the machines have home-brewed firewall scripts, some use ferm.
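
For readers unfamiliar with it, ferm compiles a structured configuration into iptables rules; a minimal sketch of its syntax (illustrative only, not taken from any of our machines) looks like this:

table filter {
    chain INPUT {
        policy DROP;
        # allow ssh and web traffic, drop everything else
        proto tcp dport (ssh http https) ACCEPT;
    }
}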

Your mission, if you choose to accept it, is to provide us with a new dsa-puppet git branch with a module "ferm" that we can roll out to all our hosts.

It might want to use information from the other puppet modules like "apache2_security_mirror" or "buildd" to decide which incoming traffic should be allowed.

DSA will of course provide you with all necessary further information.

Categories: Elsewhere

Debian Sysadmin Team: Martin Zobel-Helas: raff.debian.org about to be shut down

Planet Debian - Wed, 01/01/2014 - 16:48

We are re-purposing raff.debian.org as a new security mirror to be shipped to South America. The raff.debian.org most of you have used as buildd.debian.org in the past will cease to exist in the near future.

raff.debian.org will cease acting as a DNS server for the debian.org zone in a few days; therefore we will add two new hosts, senfl.d.o and ravel.d.o.

All other services formerly hosted on raff.d.o should already have been moved by now. We just want to encourage those of you who had a login on raff to back up your home directories if you care about the data stored there, as we will not move it.

The current plan is to shut down raff by the end of the year (so in about 7 days).

Categories: Elsewhere

Debian Sysadmin Team: Martin Zobel-Helas: Howto mess up the Debian Project homepage

Planet Debian - Wed, 01/01/2014 - 16:48

I recently blogged about the GeoDNS setup we plan for security.debian.org. Even though all DSA team members agree that the GeoDNS setup for security.debian.org should go live as soon as possible, we still fear breaking an important service like security.d.o.

Yesterday I decided, without further ado, to float a trial balloon and converted the DNS entries for the Debian Project homepage to our GeoDNS setup. While doing so, we found out that some parts of our automatic deployment scripts still need to be adjusted to serve more than one subdomain of the project.

That setup has been live for about eighteen hours now, and the project homepage resolves its IPs via GeoDNS. For now, we are using senfl.d.o for North America, www.de.debian.org and www.debian.at for Europe, and klecker.d.o for the rest of the world. From what I can see in the GeoDNS logs, it seems to work fine, and the load stays reasonably low, so after a short test period we might add additional services like security.debian.org to GeoDNS.

Categories: Elsewhere

Dominique Dumont: Config::Model::OpenSsh now supports OpenSSH 6.4 configuration

Planet Debian - Wed, 01/01/2014 - 16:30

Hello

This was long overdue.

I’ve released Config::Model::OpenSsh 1.232. This new release
supports a lot of new parameters that were added to OpenSSH 6.0 and later versions, like AllowAgentForwarding, AuthenticationMethods or AuthorizedKeysCommand.

This new version is also available in Debian.

Happy new year !


Tagged: Config::Model, OpenSsh, Perl
Categories: Elsewhere

Andrew Pollock: [life] It's a new year, it's a new me

Planet Debian - Wed, 01/01/2014 - 15:27

Well, it's 2014. The annus horribilis of 2013 is behind me. I'm entering 2014 in a much better place (on multiple levels) than I entered 2013. I'm positively excited to see what 2014 brings me.

I was very fortunate to have someone I looked up to reach out to help me when my marriage fell apart. One of the things he said to me was "A year from now, you won't recognise yourself". He was right. I am such a different person today than I was a year ago. I'm happier, fitter, more self-confident. I'm focusing on different things in my life.

One thing that has been a casualty is my geeky pursuits. There were a few times last year when I questioned whether I was still a geek. I am. I've just deprioritised it. Instead, I've been putting my own self-care and my daughter above all else, meaning that my running has probably been the one thing I'd call a "hobby" at the moment.

I've got some stuff in the pipeline that I'm not quite ready to write about yet, but I'm expecting a rearrangement of my time that will hopefully allow for some more geeky endeavours in 2014. I'm also hoping I can regain some momentum on blogging.

Here's what I'd say my goals are, going into 2014:

  • Do P90X
  • Run in the Gold Coast Half-Marathon
  • Get involved with Debian again

plus some other stuff I'm not ready to talk about yet.

Categories: Elsewhere

Christian Perrier: [life] [running] 2013 summary

Planet Debian - Wed, 01/01/2014 - 15:14
Yet another yearly summary of my running activities. As I already explained, running now takes precedence over my free software activities, as you'll notice below...

So, what happened on that front in 2013 for running bubulle?

I finally managed to run 4603 km during the year, which is 1700 km more than in 2012. Definitely an explosion, as it means I averaged over 12.5 km every day.

I added to this 206 km on my mountain bike, all of them ridden during December while I was (and still am) suffering from an injury (a stress fracture).

These 4603 km were covered in 458 hours, so a bit over 19 full days of running, at very slightly over 10 km/h.

  • 2061 kilometers of training on hard (and mostly flat) ground (road running). Mostly my daily commute runs.
  • 1977 kilometers of training on trails (in the country, woods, forests, or mountains). Mostly my week-end (and holiday, and DebConf...) runs.
  • 85 kilometers of competition on roads (one marathon and two half-marathons)
  • 471 kilometers of competition in trail races
My favourite race type is obvious here: I am, by very far, privileging trail races, either in the country or forests in my neighbourhood... or in the mountains.

As a result, the combined positive climb is fairly high, too. Though it is harder to estimate than distance and time, I end up with about 72,900 meters of positive climb (compared to last year's 33,000 meters...:-)). Clearly, my summer in the mountains made the difference here...

The longest period without running this year was 3 days. That's really insane...:-). Particularly when one notices that, at one point between August and September, I ran *every day* for 40 consecutive days. Who says that running is a full part of my life? To be honest, I actually haven't run since December 11th (I'm writing this on January 1st) because of my tibia injury... but I did bike, so that's still some kind of sport, right?

Indeed, there were 282 days this year on which I ran (or biked) at least once. Last year it was 205, so... the drug addiction is still increasing.

Most active month: November, with 471 km. Least active month: February, with 334 km (which is more than 2012's most active month...:-))

This year was also a year of records:

  • half-marathon personal best of 1h34' in September at Bois d'Arcy (down by 3 minutes)
  • marathon personal best of 3h25'19" in October in Toulouse (down by 8 minutes)
  • once again, great progress in my 3rd consecutive Le Puy-Firminy race, which I ran in 7h15 instead of 8h22 last year. Same progress for the 80km Ecotrail, with 9h36 instead of 11h05 the year before

I ran 14 official races during the year:

  • four "ultra" races: 75km Saintélyon in December, by night, 70km Le Puy-Firminy in November, by night, 50km Cenis Trail in August (moutain race) and 80km Paris Ecotrail in March
  • two "maratrails": Chamonix Marathon du Mont-Blanc, 42km, and Trail des Lavoirs in Chevreuse, 44km
  • one marathon: Toulouse
  • two 35km trail races
  • two half-marathons
  • three trail races around 20km.

How about next^W this year? Well, my season's peaks are currently being secured:

  • Paris 80km Ecotrail again in March. Followed, the next Sunday, by the Paris Marathon, which I'll run dressed as Spongebob...:-)
  • Trail des Cerfs (50km trail race in Rambouillet Forest, close to my place), in May
  • Montagn'hard, a 60 kilometer mountain trail race in the French Alps (with 5000 meters of positive climb), in July
  • Courmayeur-Champex-Chamonix, one of the 4 races of the Ultra Trail du Mont-Blanc week in Chamonix, in late August (yes, during DebConf 14!). 101km and 6100m of positive climb. This is assuming I'm lucky enough to get picked in the lottery (there are too many people who want to run this race). If I'm not... I'll probably pick another 100km mountain race somewhere in the French Alps in late August
  • Sainté Urban Trail in October: a "trail" race over the hills of Saint-Étienne, my birth city, for about 30 kilometers and 1000m of positive climb (and a gazillion steps). Back to my roots!
  • Le Puy-Firminy 70km again in November. 4th time in a row. Will I again run it one hour faster than the year before? Probably hard to imagine.
  • Origole trail race: one of the most difficult trail races in France: 75 kilometers and 2000m of positive climb in Rambouillet Forest, close to Paris, by night. Usually incredibly muddy and cold.
No marathon as of now...:-). Or, indeed, one (Paris in April), but more for fun: I hope to complete it in... less than 5 hours... but still in the Spongebob Squarepants suit. You may want to watch TV on April 6th..:-)

As I wrote last year: all this, of course, assumes that no injury comes up. I first need to recover from the current one, though it doesn't seem as bad as it did a few weeks ago. We'll see on January 1st, 2015...:-)

Categories: Elsewhere

Vincent Bernat: Testing infrastructure with serverspec

Planet Debian - Wed, 01/01/2014 - 13:05

Checking if your servers are configured correctly can be done with IT automation tools like Puppet, Chef, Ansible or Salt. They allow an administrator to specify a target configuration and ensure it is applied. They can also run in a dry-run mode and report servers not matching the expected configuration.

On the other hand, serverspec is a tool that brings the well-known RSpec, a testing tool for the Ruby programming language frequently used for test-driven development, to the infrastructure world. It can be used to remotely test server state through an SSH connection.

Why would one use such an additional tool? Many things are easier to express with a test than with a configuration change; for example, checking that a service is correctly installed by checking that it is listening on some port.

Getting started

Good knowledge of Ruby may help but is not a prerequisite to the use of serverspec. Writing tests feels like writing what we expect in plain English. If you think you need to know more about Ruby, here are two short resources to get started:

serverspec’s homepage contains a short and concise tutorial on how to get started. Please read it. As a first illustration, here is a test checking that a service is listening on port 80:

describe port(80) do
  it { should be_listening }
end

The following test will spot servers still running with Debian Squeeze instead of Debian Wheezy:

describe command("lsb_release -d") do
  it { should return_stdout /wheezy/ }
end

Conditional tests are also possible. For example, we want to check the miimon parameter of bond0, but only when the interface is present:

has_bond0 = file('/sys/class/net/bond0').directory?

# miimon should be set to something other than 0, otherwise, no checks
# are performed.
describe file("/sys/class/net/bond0/bonding/miimon"), :if => has_bond0 do
  it { should be_file }
  its(:content) { should_not eq "0\n" }
end

serverspec comes with a complete documentation of available resource types (like port and command) that can be used after the keyword describe.

When a test is too complex to be expressed with simple expectations, it can be specified with arbitrary commands. In the example below, we check if memcached is configured to use almost all the available system memory:

# We want memcached to use almost all memory. With a 2GB margin.
describe "memcached" do
  it "should use almost all memory" do
    total = command("vmstat -s | head -1").stdout # ➊
    total = /\d+/.match(total)[0].to_i
    total /= 1024
    args = process("memcached").args # ➋
    memcached = /-m (\d+)/.match(args)[1].to_i
    (total - memcached).should be > 0
    (total - memcached).should be < 2000
  end
end

A bit more arcane, but still understandable: we combine arbitrary shell commands (in ➊) with the use of other serverspec resource types (in ➋).

Advanced use

Out of the box, serverspec provides a strong foundation on which to build a compliance tool to be run on all systems. It comes with some useful advanced tips, like sharing tests among similar hosts or executing several tests in parallel.

I have set up a GitHub repository to be used as a template to get the following features:

  • assign roles to servers and tests to roles;
  • parallel execution;
  • report generation & viewer.
Host classification

By default, serverspec-init generates a template where each host has its own directory with its unique set of tests. serverspec only handles test execution on remote hosts: the test execution flow (which tests are executed on which servers) is delegated to some Rakefile [1]. Instead of extracting the list of hosts to test from a directory hierarchy, we can extract it from a file (or from an LDAP server, or from any other source) and attach a set of roles to each of them:

hosts = File.foreach("hosts")
  .map { |line| line.strip }
  .map do |host|
    {
      :name => host.strip,
      :roles => roles(host.strip),
    }
  end
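
For illustration, the hosts file read above simply lists one hostname per line; with the roles() function shown next, it could look like this (hypothetical names):

web-10
web-11
memc-01
lb-01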

The roles() function should return a list of roles for a given hostname. It could be something as simple as this:

def roles(host)
  roles = [ "all" ]
  case host
  when /^web-/
    roles << "web"
  when /^memc-/
    roles << "memcache"
  when /^lb-/
    roles << "lb"
  when /^proxy-/
    roles << "proxy"
  end
  roles
end

In the snippet below, we create a task for each server as well as a server:all task that will execute the tests for all hosts (in ➊). Pay attention, in ➋, to how we attach the roles to each server.

namespace :server do
  desc "Run serverspec to all hosts"
  task :all => hosts.map { |h| h[:name] } # ➊

  hosts.each do |host|
    desc "Run serverspec to host #{host[:name]}"
    ServerspecTask.new(host[:name].to_sym) do |t|
      t.target = host[:name]
      # ➋: Build the list of tests to execute from server roles
      t.pattern = './spec/{' + host[:roles].join(",") + '}/*_spec.rb'
    end
  end
end

You can check the list of tasks created:

$ rake -T
rake check:server:all     # Run serverspec to all hosts
rake check:server:web-10  # Run serverspec to host web-10
rake check:server:web-11  # Run serverspec to host web-11
rake check:server:web-12  # Run serverspec to host web-12

Parallel execution

By default, each task is executed when the previous one has finished. With many hosts, this can take some time. rake provides the -j flag to specify the number of tasks to be executed in parallel and the -m flag to apply parallelism to all tasks:

$ rake -j 10 -m check:server:all Reports

rspec is invoked for each host. Therefore, the output is something like this:

$ rake spec
env TARGET_HOST=web-10 /usr/bin/ruby -S rspec spec/web/apache2_spec.rb spec/all/debian_spec.rb
......

Finished in 0.99715 seconds
6 examples, 0 failures

env TARGET_HOST=web-11 /usr/bin/ruby -S rspec spec/web/apache2_spec.rb spec/all/debian_spec.rb
......

Finished in 1.45411 seconds
6 examples, 0 failures

This does not scale well if you have dozens or hundreds of hosts to test. Moreover, the output is mangled with parallel execution. Fortunately, rspec comes with the ability to save results in JSON format. Those per-host results can then be consolidated into a single JSON file. All this can be done in the Rakefile:

  1. For each task, set rspec_opts to --format json --out ./reports/current/#{target}.json. This is done automatically by the subclass ServerspecTask, which also handles passing the hostname in an environment variable and produces more concise, colored output.

  2. Add a task to collect the generated JSON files into a single report. The test source code is also embedded in the report to make it self-sufficient. Moreover, this task is executed automatically by adding it as a dependency of the last serverspec-related task.
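
Conceptually, that collection task just merges the per-host JSON documents into one file. Outside of rake, a similar merge could be done with jq (an illustrative one-liner, not part of the repository):

jq -s '.' reports/current/*.json > reports/consolidated.json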

Have a look at the complete Rakefile for more details on how this is done.

A very simple web-based viewer can handle those reports [2]. It shows the test results as a matrix, with failed tests in red.

Clicking on any test will display the necessary information to troubleshoot errors, including the test's short description, the complete test code, the expectation message and the backtrace.

I hope this additional layer will help make serverspec another feather in the "IT" cap, sitting between an automation tool and a supervision tool.

  1. A Rakefile is a Makefile where tasks and their dependencies are described in plain Ruby. rake will execute them in the appropriate order. 

  2. The viewer is available in the GitHub repository in the viewer/ directory. 

Categories: Elsewhere
