Feed aggregator

Vardot: Hello Drupal: Free Training Session at PSUT

Planet Drupal - Mon, 20/04/2015 - 08:21
Events

As part of our joint educational initiative with Acquia, we’re back in 2015 with new training sessions at universities to educate students on the benefits and value of Drupal as a leading content management system. Students who are interested in Drupal and open source technologies have the chance to learn more about Drupal from Vardot and Acquia, and to experience first-hand installing and setting up Drupal.

The first event will take place at Princess Sumaya University for Technology (PSUT) on 29 April 2015 at 12:30 PM. You can learn more about the event on Acquia’s Training Event Page.

Tags: Drupal Planet, Drupal Training, Acquia, Drupal
Categories: Elsewhere

Russ Allbery: Review: The Girls from Alcyone

Planet Debian - Mon, 20/04/2015 - 06:28

Review: The Girls from Alcyone, by Cary Caffrey

Series: Girls from Alcyone #1
Publisher: Tealy
Copyright: 2011
ISBN: 1-105-33727-8
Format: Kindle
Pages: 315

Sigrid is a very special genetic match born to not particularly special parents, deeply in debt in the slums of Earth. That's how she finds herself being purchased by a mercenary corporation at the age of nine, destined for a secret training program involving everything from physical conditioning to computer implants, designed to make her a weapon. Sigrid, her friend Suko, and the rest of their class are a special project of the leader of the Kimura corporation, one that's controversial even among the corporate board, and when the other mercenary companies unite against Kimura's plans, they become wanted contraband.

This sounds like it could be a tense SF thriller, but I'll make my confession at the start of the review: I had great difficulty taking this book seriously. Initially, it had me wondering what horrible alterations and mind control Kimura was going to impose on the girls, but it very quickly turned into, well, boarding school drama, with little of the menace I was expecting. Not that bullying, or the adults who ignore it to see how the girls will handle it themselves, are light-hearted material, but it was very predictable. As was the teenage crush that grows into something deeper, the revenge on the nastiest bully that the protagonist manages to not be responsible for, and the conflict between unexpectedly competent girls and an invasion of hostile mercenaries.

I'm not particularly well-read or informed about the genre, so I'm not the best person to make this comparison, but the main thing The Girls from Alcyone reminded me of was anime or manga. The mix of boarding-school interpersonal relationships, crushes and passionate love, and hypercompetent female action heroes who wear high heels and have constant narrative attention on their beauty had that feel to it. Add in the lesbian romance and the mechs (of sorts) that show up near the end of the story, and it's hard to shake the feeling that one is reading SF yuri as imagined by a North American author.

The other reason why I had a hard time taking this seriously is that it's over-the-top action sequences (it's the Empire Strikes Back rescue scene!) mixed with rather superficial characterization, with one amusing twist: female characters almost always end up being on the side of the angels. Lady Kimura, when she appears, turns into exactly the sort of mentor figure that one would expect given the rest of the story (and the immediate deference she got felt like it was lifted from anime). The villains, meanwhile, are hissable and motivated by greed or control. While there's a board showdown, there's no subtle political maneuvering, just a variety of more or less effective temper tantrums.

I found The Girls from Alcyone amusing, and even fun to read in places, but that was mostly from analyzing how closely it matched anime and laughing at how reliably it delivered characteristic tropes. It thoroughly embraces its action-hero story full of beautiful, deadly women, but it felt more like a novelization of a B-grade sci-fi TV show than serious drama. It's just not well-written or deep enough for me to enjoy it as a novel. None of the characters were particularly engaging, partly because they were so predictable. And the deeper we got into the politics behind the plot, the less believable I found any of it.

I picked this up, along with several other SFF lesbian romances, because sometimes it's nice to read a story with SFF trappings, a positive ending, and a lack of traditional gender roles. The Girls from Alcyone does have most of those things (the gender roles are tweaked but still involve a lot of men looking at beautiful women). But unless you really love anime-style high-tech mercenary boarding-school yuri, want to read it in book form, and don't mind a lot of cliches, I can't recommend it.

Followed by The Machines of Bellatrix.

Rating: 3 out of 10

Categories: Elsewhere

Richard Hartmann: Release Critical Bug report for Week 16

Planet Debian - Sun, 19/04/2015 - 23:35

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1031 (Including 146 bugs affecting key packages)
    • Affecting Jessie: 53 (key packages: 42) That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 49 (key packages: 42) Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 12 bugs are tagged 'patch'. (key packages: 9) Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 3 bugs are marked as done, but still affect unstable. (key packages: 2) This can happen due to missing builds on some architectures, for example. Help investigate!
        • 34 bugs are neither tagged patch, nor marked done. (key packages: 31) Help make a first step towards resolution!
      • Affecting Jessie only: 4 (key packages: 0) Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 1 bug is in a package that is unblocked by the release team. (key packages: 0)
        • 3 bugs are in packages that are not unblocked. (key packages: 0)

How do we compare to the Squeeze and Wheezy release cycles?

Week  Squeeze        Wheezy          Jessie
43    284 (213+71)   468 (332+136)   319 (240+79)
44    261 (201+60)   408 (265+143)   274 (224+50)
45    261 (205+56)   425 (291+134)   295 (229+66)
46    271 (200+71)   401 (258+143)   427 (313+114)
47    283 (209+74)   366 (221+145)   342 (260+82)
48    256 (177+79)   378 (230+148)   274 (189+85)
49    256 (180+76)   360 (216+155)   226 (147+79)
50    204 (148+56)   339 (195+144)   ???
51    178 (124+54)   323 (190+133)   189 (134+55)
52    115 (78+37)    289 (190+99)    147 (112+35)
1     93 (60+33)     287 (171+116)   140 (104+36)
2     82 (46+36)     271 (162+109)   157 (124+33)
3     25 (15+10)     249 (165+84)    172 (128+44)
4     14 (8+6)       244 (176+68)    187 (132+55)
5     2 (0+2)        224 (132+92)    175 (124+51)
6     release!       212 (129+83)    161 (109+52)
7     release+1      194 (128+66)    147 (106+41)
8     release+2      206 (144+62)    147 (96+51)
9     release+3      174 (105+69)    152 (101+51)
10    release+4      120 (72+48)     112 (82+30)
11    release+5      115 (74+41)     97 (68+29)
12    release+6      93 (47+46)
13    release+7      50 (24+26)
14    release+8      51 (32+19)
15    release+9      39 (32+7)
16    release+10     20 (12+8)
17    release+11     24 (19+5)
18    release+12     2 (2+0)

Graphical overview of bug stats thanks to azhag:

Categories: Elsewhere

Patrick Schoenfeld: Resources about writing puppet types and providers

Planet Debian - Sun, 19/04/2015 - 13:52

When doing a lot of devops work with Puppet, you might reach a point where the existing types are not enough. That point is usually reached when a task at hand becomes extraordinarily complex to achieve with the Puppet DSL alone. One example of such a case is needing to interact with a system binary a lot. In that case, writing your own Puppet type might be handy.

Now where to start, if you want to write your own type?

Overview: modeling and providing types

The first thing you should know about Puppet types (if you do not already): a Puppet resource type consists of a type and one or more providers.

The type is a model of the resource and describes which properties (e.g. the uid of a user resource) and parameters (like the managehome parameter) a resource has. It's a good idea to start with a rough idea of what properties you'll be managing with your resource and what values they will accept, since the type also does the job of validation.

What actually needs to be done on the target system is the provider's job. There can be different providers for different implementations (e.g. a native Ruby implementation or an implementation using a certain utility), different operating systems and other conditions.

A combination of a type and a matching provider is what forms a (custom) resource type.

Resources

Next, I'll show you some resources about Puppet provider development that I found useful:

Official documentation:

Types and providers are actually quite well documented in the official documentation, although it does not go into too much detail:


Blog posts:
A hands-on tutorial in multiple parts, with good explanations, is the series of blog posts by Gary Larizza:

Books:
Probably the most complete information, including explanations of the Puppet resource model and its resource abstraction layer (RAL), can be found in the book Puppet Types and Providers by Dan Bode and Nan Liu.

The puppet source:
Last but not least, it's always worth a peek at how others did it. The Puppet source contains all providers of the official Puppet release, as well as the base libraries for Puppet types and providers with their API documentation: https://github.com/puppetlabs/puppet/

Categories: Elsewhere

Wouter Verhelst: Youn Sun Nah 5tet: Light For The People

Planet Debian - Sun, 19/04/2015 - 11:25

About a decade ago, I played in the (now defunct) "Jozef Pauly ensemble", a flute choir connected to the musical academy where I was taught to play the flute. At the time, this ensemble had the habit of going on summer trips every year; sometimes these trips were large international concert tours (like our 2001 trip to Australia), but that wasn't always the case; there have also been smaller trips, like the 2002 one to the French Ardennes.

While there, we went on a day trip to the city of Reims. As a city close to the front in the first world war, it has a museum dedicated to that subject that I remembered going to. But the fondest memory of that day was going to a park where a podium was set up, with a few stacks of fold-up chairs standing nearby. I took one and listened to the music.

That was the day when I realized that I kind of like jazz. I had come into contact with Jazz before, but it had always been something to be used as a kind of musical wallpaper; something you put on, but don't consciously listen to. Watching this woman sing, however, was a different kind of experience altogether. I'm still very fond of her rendition of "Besame Mucho".

After having listened to the concert for about two hours, they called it quits, but did tell us that there was a record which you could buy. Of course, after having enjoyed the afternoon so much, I couldn't imagine not buying it, so that happened.

Fast forward several years: in the move from my apartment above my then-office to my current apartment (just around the corner), the record got put into the wrong box, and when I unpacked things again it got lost; permanently, I thought. Since I also hadn't digitized it yet at the time, I hadn't listened to it in quite a while.

But that time came to an end today. The record which I thought I'd lost wasn't lost after all; it was just in a weird place, and while cleaning yesterday I found it sitting among a bunch of old stuff that I was going to throw out. Putting on the record today made me realize again how good it really is, and I thought that I might want to see if she was still active, and if she might perhaps have made another album.

It was great to find out that not only had she made six more albums since the one I bought, she'd also become a lot more known in the Jazz world (which I must admit I don't really follow all that well), and won a number of awards.

At the time, Youn Sun Nah was just a (fairly) recent graduate from a particular Jazz school in Paris. Today, she appears to be so much more...

Categories: Elsewhere

Laura Arjona: Six months selfhosting: my userop experiences

Planet Debian - Sun, 19/04/2015 - 02:06

Note: In this post I mention some problems and ask questions (to myself, like “thinking aloud”). The goal is not to get answers to those questions (I suppose that I will find them sooner or later on the internet, in manuals and so on), but to show the kind of problems and questions that arise in my selfhosting adventures, which I suppose are common to other people trying to administer a home server with some web services.

Am I a userop? Well, I’m somewhere in between a (GNU/Linux) user and a sysadmin: I have studied computer technical engineering, but most of my experience has been in helpdesk, providing support for Windows users. I’ve been running Debian on some LAMP boxes at work (without GUI) since 2008 or so, and on my desktops (with GUI) since 2010. I neither code nor package, but I don’t mind trying to read code and understand it (or not). I know a bit of C, a bit of Python, a bit of PHP, and enough Perl to open a Perl file and close it after two minutes, understanding that it’s great, but too much for me :) I translate software, so I’m not scared to clone a repository, edit files, commit or submit a patch. I’m not scared of compiling a program (except if it’s an Android app: I try to avoid setting up the development environment just to try some translation that I made… but I built my Puma before the binary was available for download or in F-Droid).

In conclusion, I feel more like a “GNU/Linux power user” than a “sysadmin”. Sometimes just a “user” or even a “newbie” (for example, I don’t know the Unix/Linux folder tree very well… where are the wallpapers stored? Does it depend on the desktop that I use?).

Anyway. I won’t stop my free software + free networks digital life because I don’t know many things. I bought a small server for home last September, and I wanted to try to selfhost some services, for me and for my family. I want to be a “home sysadmin” or something like that, so I joined the “userops” mailing list :)

Here you have my experiences on selfhosting/being an userop until now.

Mail

I even didn’t try to setup my mail server, because many people say it’s a pain (although nice articles were published about how to do it, for example this series in ArsTechnica) and I need a static IP which is 14€/month more to my ISP, and Gandi, the place where I rented my domain name, provides mail, and they use Debian and Roundcube, and sponsor Debian too, so I decided to trust on them.

So this is my strategy now, to try to keep mail under my control:

  • Trust my domain provider.
  • Back up my mail and keep local copies, removing sensitive stuff from the server.
  • Use and spread the word about GPG encryption.
  • Try not to send photos or videos by mail, just send the link to my MediaGoblin instance (see below).
MediaGoblin

I’ve set up two MediaGoblin instances (yes, two!). I managed to do it on Debian 7 stable (I think NodeJS’ npm was not needed then), but soon afterwards I upgraded to Jessie, so now it’s even better.

I installed Nginx and PostgreSQL via apt, to use them for both instances (and probably some more software later).

One instance is public; it has its own Debian user and PostgreSQL database, and it’s running at http://media.larjona.net
I have requested an SSL cert from Gandi but I still haven’t deployed it (lazy LArjona!!).

The other instance is private, for family photos. I didn’t know very well how much of my existing setup I could reuse, and how to keep the two instances isolated in case of downtime or attack… I know more or less the concept of “chroot”, but I don’t know how to deploy it on my machine. So I decided to use another Debian user, another PostgreSQL database, deploy MediaGoblin in a different folder, and create another virtual server in my Nginx to serve it. I managed to set up that virtual server to use HTTP authentication, to serve content on a different port, and to use a self-signed SSL certificate (it’s only for family, so it does not matter). I created another (unprivileged) Debian user with a password for the Nginx authentication, and gave my family the URL in the form https://mediaprivate.larjona.net:PortNumber along with the user and password (mediaprivate is a string, and PortNumber is a number). I think they don’t use the instance too much, but at least I upload photos there from time to time and email the link instead of emailing the photos themselves (they don’t use GPG either…).

Upgrades

I upgraded MediaGoblin from 0.7.1 to 0.8.0 successfully and sent a report about how I did it to the mailing list. First I upgraded the public instance; once I had figured out the process, I upgraded the second instance to test my instructions, and then I sent the report with the instructions to the mailing list.

Static site and LimeSurvey: the power of free software (with instructions)

I wanted to act as a mirror of floss2013.libresoft.es and surveys.libresoft.es, since they suffered downtime and I had participated in that project (not in the sysadmin part, but in the research and content creation).

The static site floss2013.libresoft.es offered a zip with the whole website tree (since the website was licensed as AGPL), and I had access to the git repo holding the development copy of the website. So I just cloned the repository, set up another Nginx virtual server on my machine, and tuned my DNS zone on the Gandi website to serve floss2013.larjona.net from home. A 10-minute setup, YAY! #inGitWeTrust #FreeSoftwareFTW :)

For surveys.larjona.net I had to install a LimeSurvey instance. I knew how to do it because we use LimeSurvey at work, but at home I had Nginx instead of Apache, and PostgreSQL instead of MySQL. And no PHP… I searched for how to install PHP with Nginx (I can use apt-get, nice!) and how to install LimeSurvey with Nginx and PostgreSQL (I had documentation about that, so I followed it, and it worked).

To make the data available (one survey and its results, so people can log in as a visitor to query it and get statistics), I downloaded the LimeSurvey export dataset that we were providing on the static website, followed the replication instructions (hey, I wrote them!), and they worked #oleole! (And here, dear researchers, it is demonstrated that free software and free culture really empower your research and help spread your results.)

Etherpad: not so easy, it seems!

I’m trying to install Etherpad-Lite, but I’m suffering a bit. I think I did everything right according to some guides, but I get “Bad Gateway” and this kind of error when trying to browse with Lynx on the host:

[error] 3615#0: *24 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 127.0.0.1, server: pad.larjona.net, request: "GET / HTTP/1.0", upstream: "http://127.0.0.1:9001/", host: "pad.larjona.net"
2015/04/17 20:52:56 [error] 3615#0: *24 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: pad.larjona.net, request: "GET / HTTP/1.0", upstream: "http://[::1]:9001/", host: "pad.larjona.net"

I’m not sure whether I need to open some port in iptables or in my router, or change my Nginx configuration (the guides assume you’re only serving one website on port 80, and I have several of them now…), or something else. I’ve spent three chunks of time (maybe ~2h each?) on this, on different days, and couldn’t figure it out, so I decided to round-robin it in my TODO list.

Userops thoughts

Debian brings peace of mind (for me)

On one hand, maintaining a Debian box is quite easy, and the more software that is packaged, the less time I spend installing or upgrading. I like being on stable; I’m on Jessie now (I migrated when it was frozen), and I’ll stay on stable as much as I can.

I like that I can use the software that I installed via apt-get for several services (Nginx, PostgreSQL…). As for the software that is not packaged (MediaGoblin, LimeSurvey, Etherpad, maybe others later), I wonder how dependencies and updates are handled. And maybe (probably) I have installed some components several times, once for each service (this sounds like a Windows box #grr).

For example MediaGoblin uses PyPump. PyPump 0.5 is packaged in Debian Jessie. MediaGoblin uses PyPump 0.7+. What if PyPump 0.7+ gets, let’s say, into Jessie-backports? Can I benefit from that?

I know that the MediaGoblin upgrade instructions include upgrading the dependencies, but what about a security patch in one of those dependencies? Should I upgrade the pip modules periodically? How do I know whether an upgrade is recommended because it patches a vulnerability, or whether it just adds new features (and maybe breaks my setup)?

These kinds of things are the “peace of mind” that Debian packaging brings me: when a piece of software is packaged, I know I may need to take care of proper setup and configuration, but afterwards it’s fairly easy to maintain (since the Debian maintainers take care of the rest). I don’t mind cloning a repo and compiling; what I mind is everything that comes later, and coexistence with other programs/services. I trust the MediaGoblin community and I’m an active member (I’m not a developer, but I hang out on IRC, follow the mailing list, etc.), but, for example, I don’t know anything about the Etherpad project. And I don’t feel like joining its community (I’m already an active member in Debian, MediaGoblin, F-Droid and Pump.io, a translator of LimeSurvey and many other small apps that I use, and in the future I will use more services, like OwnCloud, XMPP…); joining the community of every piece of software that I use is becoming unsustainable :s

Free software is more than software

I follow the userops mailing list, and it’s becoming very technical. I mostly understand the problems (which are similar to the problems that I face: how to isolate different services, how to configure them easily, how to make them installable by an average user…), but I don’t understand most of the solutions proposed. I think we probably do need technical solutions, but in the meantime some issues can be addressed not with software but with other means: good documentation, community support, translations, beta-testers…

This is my conclusion so far. When a project is well documented, I think I can find my way to selfhost it, no matter whether the software is packaged (or “contained”) or not. MediaGoblin and LimeSurvey are well documented, and their user support channels are very responsive.

I find lots of instructions that assume you will use a whole machine for their service (and nothing else). And lots of documentation for the LAMP stack, but not for Nginx + PostgreSQL with Node instead of PHP… So, for each “particularity” of my setup, I search the internet and try to pick good sources to help me do what I want to do.

I’m kind of privileged

Some elements, not software-related, to take into account as “prerequisites for success” when selfhosting services:

  • I knew what to search.
  • I knew which sites to visit from the results (arch wiki, debian wiki, stack overflow, etc: some of them were not the Top1 in the results).
  • I had time to read several sources and make my mind about what to do and how.
  • I can read, understand, and write in English.
  • I have no fear about my broken English.
  • I have no impostor syndrome.
  • I felt welcome in the FLOSS communities where I hung out.

These aspects are not present in a lot of people. If I look around at the “computer users” that I know (mostly Windows+Android, some GNU/Linux users, some Mac OSX users, some iOS users), I find that they search for things like “X does not work”, or they cannot write a proper search query in English. Or they trust some random person’s recipe on a blog, without first trying to understand what the recipe does. Other people just say “I’m not a professional sysadmin, I’ll just do what «everybody» does (aka use Google services or whatever). What if I try and I don’t succeed?”. Things like that.

We may need some technical solutions (and hackers are thinking about that, and working on that). But I feel that we need, even more, a huge group of beta-testers, dogfooding people, adventurers who try the half-cooked solutions and report successful and unsuccessful experiences, to guide the research and make software technologies advance. I’m not sure if I am a userop, but I feel part of that “vanguard force”; I want to be part of the future of free software and free networks.


Filed under: My experiences and opinion Tagged: Communities, Contributing to libre software, Debian, Developer motivations, English, free networks, Free Software, Freedom, innovation, MediaGoblin, Moving into free software, Project Management, selfhosting, sysadmin
Categories: Elsewhere

Gregor Herrmann: RC bugs 2015/11-16

Planet Debian - Sun, 19/04/2015 - 00:52

only one week left until the jessie release. yay!

in the last weeks I didn't find many RC bugs that I could fix; still, here's the short list; nice feature: I mostly helped others or could build on work done by others.

  • #669735 – dpkg-www: "dpkg-www: transition towards Apache 2.4"
    sponsor QA upload from Jean-Michel Nirgal Vourgère
  • #669777 – yocto-reader: "yocto-reader: transition towards Apache 2.4"
    sponsor NMU from Jean-Michel Nirgal Vourgère, upload to DELAYED/2
  • #669796 – w3c-linkchecker: "w3c-linkchecker: transition towards Apache 2.4"
    cherry-pick periapt's commit from 2012 (pkg-perl)
  • #764284 – testdisk: "[testdisk] after ntfs-3g upgrade, testdisk cannot be installed (Depends: error)"
    tag + sid to get it out of jessie RC bugs
  • #780629 – libibverbs1: "libibverbs1: please add Breaks: libopenmpi1.3"
    upload NMU prepared by Andreas Beckmann, adding a Breaks, upload to DELAYED/2
  • #780729 – pbuilder: "pbuilder must define PATH as in debian-policy (and as used on buildds)"
    downgrade
  • #782160 – src:chrony: "chrony: Multiple issues: CVE-2015-1821 CVE-2015-1822 CVE-2015-1853"
    sponsor maintainer upload by Joachim Wiedorn
Categories: Elsewhere

Neil McGovern: Taking office

Planet Debian - Sat, 18/04/2015 - 15:36

Yesterday, my first term started as the Debian Project Leader. There’s been a large number of emails congratulating me, and thanks to everyone who sent those. I’d also like to thank Mehdi Dogguy and Gergely Nagy for running, and of course Lucas Nussbaum for his service over the past two years.

Lucas also did a great handover, and so (I hope!) I’m aware of most of the issues that are ongoing. As stated previously, I’ll keep my daily log of activities in /srv/leader/news/ on master.debian.org.

Categories: Elsewhere

Russ Allbery: Review: The Long Way to a Small, Angry Planet

Planet Debian - Sat, 18/04/2015 - 08:24

Review: The Long Way to a Small, Angry Planet, by Becky Chambers

Publisher: CreateSpace
Copyright: 2014
ISBN: 1-5004-5330-7
Format: Kindle
Pages: 503

The Wayfarer is a tunneling ship: one of the small, unremarked construction ships that help build the wormhole network used for interstellar transport. It's a working ship with a crew of eight (although most people would count seven and not count the AI). They don't all like each other — particularly not the algaeist, who is remarkably unlikeable — but they're used to each other. It's not a bad life, although a more professional attention to paperwork and procedure might help them land higher-paying jobs.

That's where Rosemary Harper comes in. At the start of the book, she's joining the ship as their clerk: nervous, hopeful, uncertain, and not very experienced. But this is a way to get entirely away from her old life and, unbeknownst to the ship she's joining, her real name, identity, and anyone who would know her.

Given that introduction, I was expecting this book to be primarily about Rosemary. What is she fleeing? Why did she change her identity? How will that past come to haunt her and the crew that she joined? But that's just the first place that Chambers surprised me. This isn't that book at all. It's something much quieter, more human, more expansive, and more joyful.

For one, Chambers doesn't stick with Rosemary as a viewpoint character, either narratively or with the focus of the plot. The book may open with Rosemary and the captain, Ashby, as focal points, but that focus expands to include every member of the crew of the Wayfarer. We see each through others' eyes first, and then usually through their own, either in dialogue or directly. This is a true ensemble cast. Normally, for me, that's a drawback: large viewpoint casts tend to be either jarring or too sprawling, mixing people I want to read about with people I don't particularly care about. But Chambers avoids that almost entirely. I was occasionally a touch disappointed when the narrative focus shifted, but then I found myself engrossed in the backstory, hopes, and dreams of the next crew member, and the complex ways they interweave. Rosemary isn't the center of this story, but only because there's no single center.

It's very hard to capture in a review what makes this book so special. The closest that I can come is that I like these people. They're individual, quirky, human (even the aliens; this is from more the Star Trek tradition of alien worldbuilding), complicated, and interesting, and it's very easy to care about them. Even characters I never expected to like.

The Long Way to a Small, Angry Planet does have a plot, but it's not a fast-moving or completely coherent one. The ship tends to wander, even when the mission that gives rise to the title turns up. And there are a lot of coincidences here, which may bother you if you're reading for plot. At multiple points, the ship ends up in exactly the right place to trigger some revelation about the backstory of one of the crew members, even if the coincidence strains credulity. Similar to the algae-driven fuel system, some things one just has to shrug about and move past.

On other fronts, though, I found The Long Way to be refreshingly willing to take a hard look at SF assumptions. This is not the typical space opera: humans are a relatively minor species in this galaxy, one that made rather a mess of their planet and are now refugees. They are treated with sympathy or pity; they're not somehow more flexible, adaptable, or interesting than the rest of the galaxy. More fascinatingly to me, humans are mostly pacifists, a cultural reaction to the dire path through history that brought them to their current exile. This is set against a backdrop of a vibrant variety of alien species, several of whom are present onboard the Wayfarer. The history and background of the other species are not, sadly, as well fleshed out as the humans, but each has at least a few twists that add interest to the story.

But the true magic of this book, the thing that it has in overwhelming abundance, is heart. Not everyone in this book is a good person, but most of them are trying. I've rarely read a book full of so much empathy and willingness to reach out to others with open hands. And, even better, they're all nice in different ways. They bring their own unique personalities and approaches to their relationships, particularly the complex web of relationships that connects the crew. When bad things happen, and, despite the overall light tone, a few very bad things happen, the crew rallies like friends, or like chosen family. I have to say it again: I like these people. Usually, that's not a good sign for a book, since wholly likeable people don't generate enough drama. But this is one of the better-executed "protagonist versus nature" plots I've read. It successfully casts the difficulties of making a living at a hard and lonely and political job as the "nature" that provides the conflict.

This is a rather unusual book. It's probably best classified as space opera, but it doesn't fit the normal pattern of space opera and it doesn't have enough drama. It's not a book about changing the universe; at the end of the book, the universe is in pretty much the same shape as we found it. It's not even about the character introduced in the first pages, or really that much about her dilemma. And it's certainly not a book about winning a cunning victory against your enemies.

What it is, rather, is a book about friendships, about chosen families and how they form, about being on someone else's side, about banding together while still being yourself. It's about people making a living in a hard universe, together. It's full of heart, and I loved it.

I'm unsurprised that The Long Way to a Small, Angry Planet had to be self-published via a Kickstarter campaign to find its audience. I'm also unsurprised that, once it got out there, it proved very popular and has now been picked up by a regular publisher. It's that sort of book. I believe it's currently out of print, at least in the US, as its new publisher spins up that process, but it should be back in print by late 2015. When that happens, I recommend it to your attention. It was the most emotionally satisfying book I've read so far this year.

Rating: 9 out of 10

Categories: Elsewhere

Jim Birch: Drupal 7: Hide Sticky and Promote

Planet Drupal - Sat, 18/04/2015 - 03:35

Promoted to front page? Don't worry about that, we don't use it.

That was the phrase I heard from a developer on the first site I was tasked to theme. I had asked whether the "Promoted to front page" checkbox on the admin screen of a content type was what put content in the queue on the home page.

It turns out that most every home page our agency ever built in Drupal had more complex requirements than that sole checkbox allowed for. 

The same goes for Sticky at top of lists. No one ever uses those, just ignore them.

What makes sense for a developer to ignore can cause confusion for an administrative user. The admin doesn't know about all of the hard work that went into the Panel that drives the home page, or the View that creates the pane for the home page. They just see a simple checkbox. And when that checkbox doesn't do what it says it does, the site seems "broken".

So I started searching, and found a great post discussing this problem, and a great solution from user StudioZut, who has created a custom module called "Hide Sticky and Promote", available as a Drupal sandbox project and a GitHub repository.
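For a sense of what such a module does, here is a minimal sketch of a Drupal 7 form alter that hides both checkboxes. The module name mymodule is a placeholder; this is an illustration of the general technique, not StudioZut's actual code:

/**
 * Implements hook_form_alter().
 * Sketch: hide the "Promoted to front page" and "Sticky at top of lists"
 * checkboxes on all node add/edit forms.
 */
function mymodule_form_alter(&$form, &$form_state, $form_id) {
  // Node add/edit forms set #node_edit_form; the publishing checkboxes
  // live under $form['options'].
  if (!empty($form['#node_edit_form']) && isset($form['options'])) {
    $form['options']['promote']['#access'] = FALSE;
    $form['options']['sticky']['#access'] = FALSE;
  }
}

Setting #access to FALSE keeps the stored values intact while removing the confusing checkboxes from the admin's view.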

Read more

Categories: Elsewhere

Steve Kemp: skx-www upgraded to jessie

Planet Debian - Sat, 18/04/2015 - 02:00

Today I upgraded my main web-host to the Jessie release of Debian GNU/Linux.

I performed the upgrade by changing wheezy to jessie in the sources.list file, then ran:

apt-get update
apt-get dist-upgrade

For some reason this didn't upgrade my kernel, which remained the 3.2.x version. That failed to boot, due to some udev/systemd issues (lots of "waiting for job: udev /dev/vda", etc, etc). To fix this I logged into my KVM-host, chrooted into the disk image (which I mounted via the use of kpartx), and installed the 3.16.x kernel, before rebooting into that.

All my websites seemed to be OK, but I made some changes regardless. (This was mostly for "neatness", using Debian packages instead of gems, and installing the attic package rather than keeping the source-install I'd made to /opt/attic.)

The only surprise was the significant upgrade of the Net::DNS perl-module. Nothing that a few minutes' work didn't fix.

Now that I've upgraded, the SSL issue I had with redirections is no longer present. So it was a worthwhile thing to do.

Categories: Elsewhere

Code Karate: Drupal Views Module: Creating lists of content on your Drupal site

Planet Drupal - Fri, 17/04/2015 - 21:14
Episode Number: 203

In this episode we cover an overview of the Drupal 7 Views module. The Drupal Views module is probably the most popular Drupal module and is installed in almost every Drupal 7 website I build. It’s so popular in fact that it’s included in Drupal 8 by default.

Tags: Drupal, Views, Drupal 7, Site Building, Drupal Planet
Categories: Elsewhere

Drupal.org Featured Case Studies: National Baseball Hall of Fame and Museum

Planet Drupal - Fri, 17/04/2015 - 20:30
Completed Drupal site or project URL: http://www.baseballhall.org/

The National Baseball Hall of Fame and Museum (BHoF) is an American institution. For 75 years they have housed the archive of America's favorite game, welcoming new inductees each year and connecting generations with their huge love and knowledge of the sport.

BHoF has a large and dedicated audience, but their location in Central New York limits the number of physical visits to the museum. To reach a wider audience, they needed to unlock the full potential of their online presence.

Cogapp helps organizations use digital media, specializing in large-scale, mission-critical projects for prominent institutions.

BHoF appointed Cogapp to perform a discovery phase to research user engagement, the kinds of content that are of interest to users, and key value propositions of the website to its visitors. This work then fed into developing the site, with the central objective being to showcase the vast number of artifacts in the Hall's collection, creating connections that bring these objects to life for site visitors.

Key modules/theme/distribution used: Islandora, Imagecache External, Paragraphs, Entity API, Metatag, Features, Strongarm, Master, Varnish HTTP Accelerator Integration
Organizations involved: Cogapp
Team members: alxbridge, chapabu, tassos
Categories: Elsewhere

Drupal Watchdog: VIDEO: DrupalCon Amsterdam Interview: Angie Byron

Planet Drupal - Fri, 17/04/2015 - 18:59

Angie Byron is Director of Community Development at Acquia. For this interview, during the final day of DrupalCon Amsterdam, we were able to find an empty auditorium. Alas, filming with my brand-new GoPro camera, we got off to a topsy-turvy start...

RONNIE RAY: I’ve had you upside down.

ANGIE BYRON: Oh hahaha!

I go by Angie Byron or webchick, and more people know me as webchick than Angie Byron.

Today, what I love to do at DrupalCons, on the last day of the sprint days, is just walk around all the tables and see what everyone is working on, cause there’s hundreds of people here and they’re all sort of scratching their own itches on everything from Drupal-dot-org to, like, what is the newest coolest content staging thing gonna be?, to how are we going to get Drupal 8 done?

And everybody working together and collaborating with people they don’t get to see all the time, it’s a lot of fun for me.

I feel like we made a lot of really great decisions about the Drupal 8 release management stuff here that we’ll be able to put into practice, and help try and focus efforts on getting the critical issues resolved, trying to clean up the loose ends that we still have, and getting the release out the door faster.

And the other thing I’m going to work on for the next month is something called Drupal Module Upgrader, which is the script that can help contrib modules port their modules to Drupal 8. It automates a lot of that task.

Now that Beta is here it’s a great time for people to update their modules, so I want to work on tools to help facilitate that.

RR: What are you reading, besides books on Drupal?

AB: Not much. Although I love reading kids books, because I have a daughter who’s 16 months now and she loves to be read to. So my latest books I’ve been reading are Where is the Green Sheep? and Go, Dog, Go! and a bunch of Richard Scarry stuff and things like that because she loves to know what everything’s called. She loves books.

There’s a Dr. Seuss book called Oh, The Places You’ll Go! That book is dark, man, that is like a dark book. It’s entertaining. I remember it from when I was a kid but I don’t remember it like that!

RR: Music?

AB: I listen to a lot of old music cause I’m one of those curmudgeonly people who thinks the best music was already made. So, like I’ve been having like a ‘70s rock, ‘80s pop, ‘90s punk rock, like – that’s sort of what’s in my chain all the time. Hair metal, junk like that. How to relive my kid-age stuff.

I think the community has grown to such an enormous size now that I guess one thing I wonder about – not really worry about, but am curious about – is whether we can still maintain that small-knit community feel that we had back when I started, when we were 70 people at a DrupalCon – not the 2,500 people we have now.

It’s cool to kind of walk around DrupalCon, especially on a sprint day, especially because I feel we have retained that – and people are finding people to connect with and cool things to work on and stuff like that.

I think the thing we all need to collectively be intentional about is, you know, it’s not just enough that Drupal is a great software project, it’s also about the people and trying to maintain that welcome feeling – that got us all in the door – for generations to come.

So that’s something I would leave as a parting note.

Tags: DrupalCon, DrupalCon Amsterdam, Video
Categories: Elsewhere

Chapter Three: Presentation: Drupal 8 Module Development

Planet Drupal - Fri, 17/04/2015 - 18:15

This session was presented at Bay Area Drupal Camp, San Diego Drupal Camp, Phoenix Drupal Camp, and Stanford Drupal Camp.



Have you written a few simple modules for Drupal 7, and are a little bit nervous to find out the changes you'll be facing in Drupal 8?

Categories: Elsewhere

Aten Design Group: Speeding up Complex Drupal Data Loads with Custom Caches

Planet Drupal - Fri, 17/04/2015 - 17:27

Recently we had the task of loading data from a content type with 350 fields. Each node is a University’s enrollment data for one year by major, gender, minority, and a number of other categories. CSV exports of this data obviously became problematic. Even before we got to 350 fields, with the overhead of the Views module we would hit PHP timeouts when exporting all the nodes. If you’re not familiar with Drupal's database structure, each field’s data is stored in a table named ‘field_data_FIELDNAME’. Loading an entire node means JOINing the node table by entity_id with each related field table. When a node only has a handful of fields, those JOINs work fine, but at 350 fields the query runs slow.

On this site we’re also plotting some of the data using highcharts.js. We really hit a wall when trying to generate aggregate data to plot alongside a single university's. This meant loading every node of this content type to calculate the averages, which turned our slow query into a very slow query. We even hit a limit on the number of database JOINs that can be done at one time.

In retrospect this is a perfect case for a custom entity, but we already had thousands of nodes in the existing content type. Migrating them and implementing a custom entity was no longer a good use of time. Instead, we added a custom table that keeps all the single value fields in a serialized string.

The table gets defined with a hook_schema in our module's .install file:

function ncwit_charts_schema() {
  $schema['ncwit_charts_inst_data'] = array(
    'description' => 'Table for serialized institution data.',
    'fields' => array(
      'nid' => array(
        'type' => 'int',
        'default' => 0,
        'not null' => TRUE,
        'description' => 'node id for this row',
      ),
      'tid' => array(
        'type' => 'int',
        'default' => 0,
        'not null' => TRUE,
        'description' => 'institution term id that this data belongs to',
      ),
      'year' => array(
        'type' => 'int',
        'default' => 0,
        'not null' => TRUE,
        'description' => 'school year for this node',
      ),
      'data' => array(
        'type' => 'blob',
        'not null' => FALSE,
        'size' => 'big',
        'serialize' => TRUE,
        'description' => 'A serialized array of name value pairs that store the field data for a survey data node.',
      ),
    ),
    'primary key' => array('nid'),
  );

  return $schema;
}

The most important part of the array is 'data' with type 'blob', which can be up to 65kB. Not shown is another array to create a table for our aggregate data.

When a new node is saved, hook_node_insert() is invoked; when an existing node is saved again, hook_node_update() fires. We implement both so the serialized row is written in either case.

/**
 * Implements hook_node_insert().
 * Save serialized field data to the inst_data table for a new node.
 * For a new node, we have to use this hook.
 */
function ncwit_charts_node_insert($node) {
  ncwit_charts_serialize_save($node);
}

/**
 * Implements hook_node_update().
 * Save serialized field data to the inst_data table.
 */
function ncwit_charts_node_update($node) {
  if (isset($node->nid)) {
    // We also call this function from hook_node_insert(),
    // because hook_node_update doesn't have the nid if it is a new node.
    ncwit_charts_serialize_save($node);
  }
  else {
    return;
  }
}

Now we actually process the fields to be serialized and store. This section will vary greatly depending on your fields.

function ncwit_charts_serialize_save($node) {
  // Save each value as a simple key => value item.
  foreach ($node as $key => $value) {
    $data[$key] = $value[LANGUAGE_NONE][0]['value'];
  }

  $fields = array();
  $fields['nid'] = $node->nid;
  $fields['tid'] = $node->field_institution_term[LANGUAGE_NONE][0]['tid'];
  $fields['year'] = $node->field_school_year[LANGUAGE_NONE][0]['value'];
  $fields['data'] = serialize($data);

  db_merge('ncwit_charts_inst_data')
    ->key(array(
      'nid' => $node->nid,
    ))
    ->fields($fields)
    ->execute();
}

When a node is deleted we have some clean-up to do.

/**
 * Implements hook_node_delete().
 * Also remove the node's data from inst_data.
 */
function ncwit_charts_node_delete($node) {
  if ($node->type !== 'data_survey') {
    // Only care about data_survey nodes.
    return;
  }

  $query = db_select('ncwit_charts_inst_data', 'i');
  $query->fields('i')->condition('i.nid', $node->nid);
  $result = $query->execute();
  $data = $result->fetchAssoc();
  if ($data > 0) {
    db_delete('ncwit_charts_inst_data')->condition('nid', $node->nid)->execute();
  }
}

For the initial install, and for when fields get changed, we added a batch process that re-saves the serialized strings. Aggregate data is calculated during cron and saved in another table. Rather than loading every node with JOINs, the data now comes from a simple query of this custom table.
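To illustrate the cron half of that, a minimal aggregation pass might look like the following. The aggregate table name (ncwit_charts_aggregate) and its columns are assumptions for the example, not the project's actual schema:

/**
 * Implements hook_cron().
 * Illustrative sketch: average every numeric value per year across all
 * serialized rows and store the results in an assumed aggregate table.
 */
function ncwit_charts_cron() {
  $totals = array();
  $counts = array();
  $result = db_query('SELECT year, data FROM {ncwit_charts_inst_data}');
  foreach ($result as $row) {
    $data = unserialize($row->data);
    foreach ($data as $field => $value) {
      if (is_numeric($value)) {
        $totals[$row->year][$field] = (isset($totals[$row->year][$field]) ? $totals[$row->year][$field] : 0) + $value;
        $counts[$row->year][$field] = (isset($counts[$row->year][$field]) ? $counts[$row->year][$field] : 0) + 1;
      }
    }
  }
  foreach ($totals as $year => $fields) {
    foreach ($fields as $field => $total) {
      // Table and column names below are hypothetical.
      db_merge('ncwit_charts_aggregate')
        ->key(array('year' => $year, 'field_name' => $field))
        ->fields(array('value' => $total / $counts[$year][$field]))
        ->execute();
    }
  }
}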

Pulling the data out of the database and calling unserialize() gives us a simple associative array of the data. To pass this data to highcharts.js we have a callback defined that returns the arrays encoded as JSON. Obviously this gets more complicated when dealing with multiple languages or multi-value fields. But in our case almost everything is a simple integer.
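As a rough sketch of such a callback (the function name and menu path are hypothetical, not the project's actual code), it could look like this:

/**
 * Page callback sketch: return one institution node's cached data as JSON,
 * e.g. wired to a menu entry like 'ncwit-charts/data/%'.
 */
function ncwit_charts_inst_data_json($nid) {
  $blob = db_query('SELECT data FROM {ncwit_charts_inst_data} WHERE nid = :nid',
    array(':nid' => $nid))->fetchField();
  $data = $blob ? unserialize($blob) : array();
  // drupal_json_output() sets the JSON Content-Type header and encodes the array.
  drupal_json_output($data);
  drupal_exit();
}

A highcharts.js chart can then fetch that path with an ordinary AJAX request and plot the returned values directly.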

This process of caching our nodes as serialized data changed our loading speed from painfully slow to almost instant. If you run into similar challenges, hopefully this approach will help you too.

Categories: Elsewhere

EvolvisForge blog: Tricks for using Googlemail at work

Planet Debian - Fri, 17/04/2015 - 15:54

For those who similarly suffer from having to use Googlemail at work. If anyone else has more of these tricks, please do share.

Deactivate the spamfilter

The site admins can do that. Otherwise, work-relevant eMails, for example from your own OTRS system, will end up in Spam (where you don't see them, as their IMAP sucks) and be deleted without asking 30 days later. (AIUI, that is the only way to get eMails actually deleted from Google…)

Do not use their SMTP service

Use your own outgoing MTA. This brings back the, well, not quite a feature, but something that should have been a given (though Google doesn't do it anyway): when you write to a mailing list, you also get your own messages in your own INBOX.

Calendars…

I have no solutions for this. I stopped using the Googlemail calendars because they didn’t think it a problem that, when I accept an invitation in Kontact (KDEPIM as packaged in Debian sid), the organiser of the calendar item in the sender’s calendar (for which I do not have write permissions) changes to me (so the actual meeting organiser cannot change anything afterwards) and/or calendar items get doubled. I now run a local uw-imapd (forward-ported to sid by means of a binNMU) for sent-mail folders etc. and a local iCalendar directory for calendars.

Categories: Elsewhere
