Feed aggregator

Dirk Eddelbuettel: R and Docker

Planet Debian - Fri, 26/09/2014 - 04:57

Earlier this evening I gave a short talk about R and Docker at the September Meetup of the Docker Chicago group.

Thanks to Karl Grzeszczak for setting up the meeting, and for providing a pretty thorough intro talk regarding CoreOS and Docker.

My slides are now up on my presentations page.

Categories: Elsewhere

Mediacurrent: Exploring the Picture Element Module (Part 1)

Planet Drupal - Fri, 26/09/2014 - 04:01

Responsive Web Design, or RWD, has come a long way since it was first introduced in 2010, and you would think that by now, given the popularity of the subject, all things would have been sorted out and all questions answered. Not quite.  RWD is a moving target that continues to evolve, and while most of the techniques used to accomplish the goal of an adaptable website are well settled, one remains contentious: images.

Categories: Elsewhere

CMS Quick Start: Drupal 7 Login Methods and Module Roundup: Part 1

Planet Drupal - Thu, 25/09/2014 - 23:45
If your site relies on user engagement, chances are you are using Drupal's powerful built-in user modules. Sometimes, however, it can be difficult to understand what tweaks you can make to the default login process to make it a better experience all around. Should you present the login form on every page? Where should you put it? What methods can you use to display the login form in unobtrusive ways? What action does your site take after someone logs in? We're going to present an array of options to hopefully point you in the right direction.

read more

Categories: Elsewhere

Steve Kemp: Today I mostly removed python

Planet Debian - Thu, 25/09/2014 - 21:11

Much has already been written about the recent bash security problem, allocated the CVE identifier CVE-2014-6271, so I'm not even going to touch it.

It did remind me to double-check my systems to make sure that I didn't have any packages installed that I didn't need though, because obviously having fewer packages installed and fewer services running reduces the potential attack surface.

I had noticed in the past that I had python installed and just thought "Oh, yeah, I must have python utilities running". It turns out though that on 16 out of 19 servers I control I had python installed solely for the lsb_release script!

So I hacked up a horrible replacement for `lsb_release` in pure shell, and then became cruel:

~ # dpkg --purge python python-minimal python2.7 python2.7-minimal lsb-release

That horrible replacement is horrible because it defers detection of all the names/numbers to /etc/os-release, which wasn't present in earlier versions of Debian. Happily all my Debian GNU/Linux hosts run Wheezy or later, so it all works out.
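A minimal pure-shell stand-in along those lines might look roughly like this (a sketch, not Steve's actual script; it assumes a Wheezy-or-later system where /etc/os-release exists):

```shell
# Minimal lsb_release stand-in that reads everything from /etc/os-release.
# Sketch only -- option names mimic the real tool, fields may be missing
# on some distributions.
lsb_release_sh() {
    . /etc/os-release
    case "$1" in
        -i) echo "Distributor ID: $NAME" ;;
        -d) echo "Description:    $PRETTY_NAME" ;;
        -r) echo "Release:        ${VERSION_ID:-n/a}" ;;
        -c) echo "Codename:       ${VERSION_CODENAME:-n/a}" ;;
    esac
}

lsb_release_sh -d
```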

So that left three hosts that had a legitimate use for Python:

  • My mail-host runs offlineimap
    • So I purged it.
    • I replaced it with isync.
  • My host-machine runs KVM guests, via qemu-kvm.
    • qemu-kvm depends on Python solely for the script /usr/bin/kvm_stat.
    • I'm not pleased about that but will tolerate it for now.
  • The final host was my ex-mercurial host.
    • Since I've switched to git I just removed that package.

So now 1/19 hosts has Python installed. I'm not averse to the language, but given that I don't personally develop in it very often (read "once or twice in the past year") and, as it happens, had no Python scripts installed, I see no reason to keep it on the off-chance.

My biggest surprise of the day was that, even now that we can use dash as our default shell, we still can't purge bash, since it is marked as Essential. Perhaps in the future.

Categories: Elsewhere

Aigars Mahinovs: Distributing third party applications via Docker?

Planet Debian - Thu, 25/09/2014 - 20:54

Recently the discussions around how to distribute third party applications for "Linux" have become the topic of the hour, and for a good reason - Linux is becoming mainstream outside of the free software world. While having each distribution maintain a perfectly packaged, version-controlled and natively compiled version of each application, installable from a per-distribution repository in a simple and fully secured manner, is a great solution for popular free software applications, this model is slightly less ideal for less popular apps and for non-free software applications. In these scenarios the developers of the software would want to do the packaging into some form, distribute that to end users (either directly or through some other channels, such as app stores) and have just one version that would work on any Linux distribution and keep working for a long while.

For me the topic really hit home at DebConf 14, where Linus voiced his frustrations with app distribution problems, and Valve touched on some of the same points. Looking back we can see passionate discussions and interesting ideas on the subject from systemd developers (another) and Gnome developers (part2 and part3).

After reading/watching all that I came away with the impression that I love many of the ideas expressed, but I am not as thrilled about the proposed solutions. The systemd managed zoo of btrfs volumes is something that I actually had a nightmare about.

There are far simpler solutions with existing code that you can start working on right now. I would prefer basing Linux applications on Docker. Docker is a convenience layer on top of Linux cgroups and namespaces. Docker stores its images in a datastore that can be based on AUFS or btrfs or devicemapper or even plain files. It already has semantics for defining images, creating them, running them, explicitly linking resources and controlling processes.

Let's play out a simple scenario of how third party applications could work on Linux.

A third party application developer writes a new game for Linux. As his target he chooses one of the "application runtime" Docker images on Docker Hub. Let's say he chooses the latest Debian stable release. In that case he writes a simple Dockerfile that installs his build-dependencies and compiles his game in a "debian-app-dev:wheezy" container. The output of that is a new folder containing all the compiled game resources and another Dockerfile - this one describes the runtime dependencies of the game. Now when a docker image is built from this compiled folder, it is based on the "debian-app:wheezy" container that no longer has any development tools and is optimized for speed and size. After this build is complete the developer exports the Docker image into a file. This file can contain either the full system needed to run the new game or (after #8214 is implemented) just the filesystem layers with the actual game files and enough meta-data to reconstruct the full environment from public Docker repos. The developer can then distribute this file to the end user in the way that is comfortable for them.
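The two-step build described above could be sketched like this. The base image names ("debian-app-dev:wheezy" / "debian-app:wheezy") are the hypothetical runtime images from the text, and "mygame" and the paths are made up; the block only writes the Dockerfiles, since actually building them needs docker and the images to exist:

```shell
# Sketch of the build/runtime Dockerfile pair for a hypothetical game.
mkdir -p game-build

# Step 1: the build-time Dockerfile -- compiles the game in the dev image.
cat > game-build/Dockerfile.build <<'EOF'
FROM debian-app-dev:wheezy
RUN apt-get update && apt-get install -y build-essential
COPY src/ /src
RUN make -C /src install DESTDIR=/out
EOF

# Step 2: the runtime Dockerfile -- ships only the compiled game.
cat > game-build/Dockerfile.run <<'EOF'
FROM debian-app:wheezy
COPY out/ /opt/mygame
CMD ["/opt/mygame/bin/mygame"]
EOF

# With docker available, the developer would then roughly run:
#   docker build -f game-build/Dockerfile.build -t mygame-build .
#   docker build -f game-build/Dockerfile.run   -t mygame .
#   docker save mygame > mygame.docker
```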

The end user would download the game file (either through an app store app, an app store website or in any other way) and import it into the local Docker instance. For user convenience we would need to come up with a file extension and create some GUI to launch on double click, similar to GDebi. Here the user would be able to review what permissions the app needs to run (like GL access, PulseAudio, webcam, storage for save files, ...). Enough metainfo and cooperation would have to exist to allow the desktop menu to detect installed "apps" in Docker and show shortcuts to launch them. When the user does so, a new Docker container is launched running the command provided by the developer inside the container. Other metadata would determine other docker run options, such as whether to link over a socket for talking to PulseAudio or whether to mount a folder into the container where the game would be able to save its save files. Or even whether the application would be able to access X (or Wayland) at all.
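The run options such a launcher might assemble from the app's metadata could look roughly like this; all paths and the image name are made-up examples, and the sketch prints the command rather than executing it, since the image does not exist:

```shell
# Assemble hypothetical 'docker run' options from per-app metadata.
APP=mygame
OPTS=""

# PulseAudio: link the user's sound socket into the container.
OPTS="$OPTS --volume /run/user/$(id -u)/pulse:/run/pulse"

# Save files: bind-mount a per-app folder from the user's home.
OPTS="$OPTS --volume ${HOME}/.local/share/$APP:/save"

# X access: only added if the app's metadata requests it.
OPTS="$OPTS --volume /tmp/.X11-unix:/tmp/.X11-unix --env DISPLAY=${DISPLAY:-}"

# Print rather than execute, since this is only an illustration:
echo docker run $OPTS "$APP"
```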

Behind the scenes the application is running from the contained and stable libraries, but talking to a limited and restricted set of system level services. Those would need to be kept backwards compatible once we start this process.

On the sandboxing front, not only is our third party application running in a very limited environment, but we can also enhance our system services to recognize requests from such applications via cgroups. This can, for example, allow a window manager to mark all windows spawned by an application even if they are from a bunch of different processes. The window manager can then also track all processes of a logical application from any of its windows.

For updates the developer can simply create a new image and distribute a file of the same kind as before, or, if the purchase went via some kind of app-store application, the layers that actually changed can be rsynced over individually, creating a much faster update experience. Images with the same base can share data, which would encourage the creation of higher-level base images, such as "debian-app-gamegl:wheezy", that all GL game developers could use, thus getting a smaller installation package.

After a while the question of updating abandonware will come up. Say there is this cool game built on top of "debian-app-gamegl:wheezy", but now there is a security bug or some other issue that requires the base image to be updated, without requiring a recompile or a change to the game itself. If this Docker proposal is realized, then either the end user or a redistributor can easily re-base the old Docker image of the game on a new base. Using this mechanism it would also be possible to handle incompatible changes to system services - ten years down the line AwesomeAudio replaces PulseAudio, so we create a new "debian-app-gamegl:wheezy.14" version that contains a replacement libpulse that actually talks to the AwesomeAudio system service instead.

There is no need to re-invent everything, to push package management into systemd as well, or to push non-distribution application management into distribution tools. Separating things into logical blocks does not hurt their interoperability, but it allows recombining them in different ways for different purposes, or replacing a part to create a system with radically different functionality.

Or am I crazy and we should just go and sacrifice Docker, apt, dpkg, FHS and non-btrfs filesystems on the altar of systemd?

P.S. You might get the impression that I dislike systemd. I love it! As an init system. And I love the ideas and talent of the systemd developers. But I think that systemd should have nothing to do with application distribution or processes started by users. I sometimes get an uncomfortable feeling that systemd is morphing towards replacing the whole of System V, jumping all the way to System D, and rewriting, obsoleting or absorbing everything between the kernel and Gnome. In my opinion it would be far healthier for the community if all of these projects were developed and usable separately from systemd, so that other solutions could compete on a level playing field. Or, maybe, we could just confess that what systemd is doing is creating a new Linux meta-distribution.

Categories: Elsewhere

Jan Wagner: Redis HA with Redis Sentinel and VIP

Planet Debian - Thu, 25/09/2014 - 19:56

For a current project we decided to use Redis for several reasons. As availability is a critical part, we discovered that Redis Sentinel can monitor Redis and handle an automatic master failover to an available slave.

Setting up Redis replication was straightforward, and so was setting up Sentinel. Please keep in mind that if you configure Redis to require an authentication password, you also need to provide it for the replication process (masterauth) and for the Sentinel connection (auth-pass).
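For illustration, the relevant directives could look roughly like this (the password and addresses are made-up placeholders):

```
# redis.conf on every node:
requirepass s3cretpass
masterauth s3cretpass        # slaves use this to authenticate against the master

# sentinel.conf:
sentinel monitor mymaster 192.0.2.10 6379 2
sentinel auth-pass mymaster s3cretpass
```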

The more interesting part is how to migrate the clients over to the new master during a failover. While Redis Sentinel can also act as a configuration provider, we decided not to use this feature, as the application would have to query Sentinel for the current master node very often, which might impact performance.
The first idea was to use some kind of VRRP, as implemented by keepalived or something similar. The problem with such a solution is that you need to notify the VRRP process when a Redis failover is in progress.
Well, Redis Sentinel has a configuration option called 'sentinel client-reconfig-script':

# When the master changed because of a failover a script can be called in
# order to perform application-specific tasks to notify the clients that the
# configuration has changed and the master is at a different address.
#
# The following arguments are passed to the script:
#
# <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
#
# <state> is currently always "failover"
# <role> is either "leader" or "observer"
#
# The arguments from-ip, from-port, to-ip, to-port are used to communicate
# the old address of the master and the new address of the elected slave
# (now a master).
#
# This script should be resistant to multiple invocations.

This looks pretty good, and since a <role> is provided, I thought it would be a good idea to call a script that evaluates this value and, based on it, adds the VIP to the local network interface when we get 'leader' and removes it when we get 'observer'. It turned out that this did not work, as <role> didn't reliably return 'leader' when the local Redis instance became master and 'observer' when it became slave. This was pretty annoying and I was close to giving up.
Fortunately I stumbled upon a (maybe) Chinese post about Redis Sentinel where the same thing was attempted. On second look I noticed that the decision there was based on ${6}, which is <to-ip>: nothing more than the new IP of the Redis master instance. So I rewrote my tiny shell script, and after some other pitfalls this strategy worked out well.
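Such a reconfig script could be sketched like this (Jan's actual script is not published in the post; the VIP, interface and IPs below are made-up examples, and the block echoes the `ip` commands instead of running them, since they need root):

```shell
# Sketch of a 'sentinel client-reconfig-script'. Sentinel invokes it as:
#   script <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
# The only argument worth trusting is ${6}, the new master's IP.
VIP=192.0.2.100/32
IFACE=eth0
MY_IP=192.0.2.11            # this node's address (normally detected)

reconfig() {
    # $6 = <to-ip>, the address of the newly elected master
    if [ "$6" = "$MY_IP" ]; then
        echo "ip addr add $VIP dev $IFACE"    # we became master: take the VIP
    else
        echo "ip addr del $VIP dev $IFACE"    # someone else took over: release it
    fi
}

# Example invocation, as Sentinel would make after a failover:
reconfig mymaster leader failover 192.0.2.10 6379 192.0.2.11 6379
```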

Some notes about convergence. It currently takes roughly 6-7 seconds to migrate the VIP over to the new node after Redis Sentinel notices a broken master. This is not the best performance, but as we expect this to happen rarely, we need to design the application using our Redis setup to handle this (hopefully) rare scenario.

Categories: Elsewhere

Drupal Watchdog: Drupal 7 Form Building

Planet Drupal - Thu, 25/09/2014 - 19:45
Article

Static websites, comprising web pages that do not respond to any user input, may be adequate for listing information, but little else. Dynamic pages are what make the Web more than just interlinked digital billboards. The primary mechanism that enables this is the humble web form – whether a modest single button or a multi-page form with various controls that allow the user to input text, make choices, upload files, etc.

Anyone developing a new Drupal-based website will usually need to create forms for gathering data from users. There are at least two possible approaches to solving this problem: One approach mostly relies on Drupal core, and the other uses a contributed module dedicated to forms.

The Node Knows

If you understand how to create content types and attach fields to them, then that could be a straightforward way to make a form – not for creating nodes to be used later for other purposes, but solely for gathering user input. To add different types of form fields to a content type, you will need to install and enable the corresponding modules, all of which are available from Drupal.org's modules section.

Some of these modules are part of Drupal's core: File (for uploading files), List (for selection lists), Number (for integers, decimals, and floats), Options (for selection controls, checkboxes, and radio buttons), Taxonomy (for tagging content with terms), and Text (for single- and multi-line entry fields).

Other field-related modules have been contributed by the Drupal community, including:

Categories: Elsewhere

Gunnar Wolf: #bananapi → On how compressed files should be used

Planet Debian - Thu, 25/09/2014 - 18:37

I am among the lucky people who got back home from DebConf with a brand new computer: a Banana Pi. Despite the name similarity, it is not affiliated with the very well known Raspberry Pi, although it is a very comparable (though much better) machine: a dual-core ARM A7 system with 1GB RAM, several more on-board connectors, and the same form factor.

I have not yet been able to get it to boot, even from the images distributed on their site (although I cannot complain, as I have not devoted more than an hour or so to the process!), but I do have a gripe about how the images are distributed.

I downloaded some images to play with: Bananian, Raspbian, a Scratch distribution, and Lubuntu. I know I have a long way to learn in order to contribute to Debian's ARM port, but if I can learn by doing... ☻

So, what is my gripe? That the images are downloaded as archive files:

  0 gwolf@mosca『9』~/Download/banana$ ls -hl bananian-latest.zip \
  > Lubuntu_For_BananaPi_v3.1.1.tgz Raspbian_For_BananaPi_v3.1.tgz \
  > Scratch_For_BananaPi_v1.0.tgz
  -rw-r--r-- 1 gwolf gwolf 222M Sep 25 09:52 bananian-latest.zip
  -rw-r--r-- 1 gwolf gwolf 823M Sep 25 10:02 Lubuntu_For_BananaPi_v3.1.1.tgz
  -rw-r--r-- 1 gwolf gwolf 1.3G Sep 25 10:01 Raspbian_For_BananaPi_v3.1.tgz
  -rw-r--r-- 1 gwolf gwolf 1.2G Sep 25 10:05 Scratch_For_BananaPi_v1.0.tgz

Now... that is quite an odd way to distribute image files! Especially when looking at their contents:

  0 gwolf@mosca『14』~/Download/banana$ unzip -l bananian-latest.zip
  Archive: bananian-latest.zip
      Length       Date   Time  Name
  ----------  ---------- -----  ----
  2032664576  2014-09-17 15:29  bananian-1409.img
  ----------                    -------
  2032664576                    1 file
  0 gwolf@mosca『15』~/Download/banana$ for i in Lubuntu_For_BananaPi_v3.1.1.tgz \
  > Raspbian_For_BananaPi_v3.1.tgz Scratch_For_BananaPi_v1.0.tgz
  > do tar tzvf $i; done
  -rw-rw-r-- bananapi/bananapi 3670016000 2014-08-06 03:45 Lubuntu_1404_For_BananaPi_v3_1_1.img
  -rwxrwxr-x bananapi/bananapi 3670016000 2014-08-08 04:30 Raspbian_For_BananaPi_v3_1.img
  -rw------- bananapi/bananapi 3980394496 2014-05-27 01:54 Scratch_For_BananaPi_v1_0.img

And what is bad about them? That they force me to either have heaps of disk space available (2GB or 4GB for each image) or to spend valuable time extracting before recording the image each time.

Why not just compress the image file without archiving it? That is,

  0 gwolf@mosca『7』~/Download/banana$ tar xzf Lubuntu_For_BananaPi_v3.1.1.tgz
  0 gwolf@mosca『8』~/Download/banana$ xz Lubuntu_1404_For_BananaPi_v3_1_1.img
  0 gwolf@mosca『9』~/Download/banana$ ls -hl Lubun*
  -rw-r--r-- 1 gwolf gwolf 606M Aug 6 03:45 Lubuntu_1404_For_BananaPi_v3_1_1.img.xz
  -rw-r--r-- 1 gwolf gwolf 823M Sep 25 10:02 Lubuntu_For_BananaPi_v3.1.1.tgz

Now, wouldn't we need to decompress said files as well? Yes, but thanks to the magic of shell redirections, we can just do it on the fly. That is, instead of having 3×4GB+1×2GB files sitting on my hard drive, I just need to have several files ranging between 145M and I guess ~1GB. Then, it's as easy as doing:

  0 gwolf@mosca『8』~/Download/banana$ dd if=<(xzcat bananian-1409.img.xz) of=/dev/sdd

And the result should be the same: a fresh new card with Bananian ready to fly. Right, right, people using these files need to have xz installed on their systems, but... as it stands now, I can assume current prospective users of a Banana Pi won't fret about facing a standard Unix tool!

(Yes, I'll forward this rant to the Banana people, it's not just bashing on my blog :-P )

Categories: Elsewhere

Marco d'Itri: CVE-2014-6271 fix for Debian sarge and etch

Planet Debian - Thu, 25/09/2014 - 17:01

Very old Debian releases like sarge (3.1) and etch (4.0) are not supported anymore by the Debian Security Team and do not get security updates. Since some of our customers still have servers running these versions, I have built bash packages with the fix for CVE-2014-6271 (the "shellshock" bug):

http://ftp.linux.it/pub/People/md/bash/

This work has been sponsored by my employer Seeweb, a hosting, cloud infrastructure and colocation provider.

Categories: Elsewhere

Julian Andres Klode: hardlink 0.3.0 released; xattr support

Planet Debian - Thu, 25/09/2014 - 14:42

Today I not only submitted my bachelor thesis to the printing company, I also released a new version of hardlink, my file deduplication tool.

hardlink 0.3 now features xattr support, contributed by Tom Keel at Intel. If this does not work correctly, please blame him.

I also added support for a --minimum-size option.
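The core idea of a deduplicator like hardlink can be sketched in a few lines of shell. This is an illustration, not hardlink's actual code or options; the `-size` test merely stands in for what a minimum-size cutoff would gate:

```shell
# Toy deduplicator: group files by content hash, then replace
# duplicates with hard links to the first copy seen.
dedup() {
    dir=$1
    seen=$(mktemp -d)                      # scratch space: one file per hash
    find "$dir" -type f -size +0c | while read -r f; do
        h=$(sha256sum "$f" | awk '{print $1}')
        if [ -e "$seen/$h" ]; then
            ln -f "$(cat "$seen/$h")" "$f"   # same content: hard-link it
        else
            printf '%s' "$f" > "$seen/$h"    # first file with this hash
        fi
    done
    rm -rf "$seen"
}
```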

Most of the other code has been tested since the upload of RC1 to experimental in September 2012.

The next major version will split up the code into multiple files and clean it up a bit. It’s getting a bit long now in a single file.


Filed under: Uncategorized
Categories: Elsewhere

Petter Reinholdtsen: Suddenly I am the new upstream of the lsdvd command line tool

Planet Debian - Thu, 25/09/2014 - 11:20

I use the lsdvd tool to handle my fairly large DVD collection. It is a nice command line tool to get details about a DVD, like title, tracks, track length, etc, in XML, Perl or human readable format. But lsdvd has not seen any new development since 2006 and had a few irritating bugs affecting its use with some DVDs. Upstream seemed to be dead, and in January I sent a small probe asking for a version control repository for the project, without any reply. But I use it regularly and would like to get an updated version into Debian. So two weeks ago I tried harder to get in touch with the project admin, and after getting a reply from him explaining that he was no longer interested in the project, I asked if I could take over. And yesterday, I became project admin.

I've been in touch with a Gentoo developer and the Debian maintainer interested in joining forces to maintain the upstream project, and I hope we can get a new release out fairly quickly, collecting the patches spread around the internet into one place. I've added the relevant Debian patches to the freshly created git repository, and expect the Gentoo patches to make it too. If you have a DVD collection and care about command line tools, check out the git source and join the project mailing list. :)

Categories: Elsewhere

MariqueCalcus: Prius is in Alpha 15

Planet Drupal - Thu, 25/09/2014 - 10:51

Today, we are very excited to announce the latest release of our Drupal 8 theme Prius. Alongside full support of Drupal 8 Alpha 15, we have included a number of new features. This release is particularly exciting as we are one step closer to an official launch of Drupal 8. Indeed, Drupal Alpha 15 is the first candidate for a Beta release. This means that if no new beta-blocker bugs are found within the coming days, we could see the first Beta version of our favourite "CMS" very soon.

Read More...
Categories: Elsewhere

Mike Hommey: So, hum, bash…

Planet Debian - Thu, 25/09/2014 - 09:43

So, I guess you heard about the latest bash hole.

What baffles me is that the following still is allowed:

env echo='() { xterm;}' bash -c "echo this is a test"

Interesting replacements for “echo“, “xterm” and “echo this is a test” are left as an exercise to the reader.
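For reference, the widely circulated one-liner for testing the original hole itself looks like this:

```shell
# CVE-2014-6271 test: a vulnerable bash executes the command smuggled in
# after the exported function definition and prints "vulnerable"; a
# patched bash prints only "this is a test".
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
```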

Categories: Elsewhere

Commerce Guys: Commerce Guys is pleased to sponsor Symfony live

Planet Drupal - Thu, 25/09/2014 - 08:00

The Symfony Live events of this Fall (London, Berlin, NYC, Madrid) are around the corner, and for the first year, Commerce Guys is going to attend these events as a sponsor. Some people are wondering why, and I’d like to explain why Commerce Guys is very excited to engage with the Symfony community and its open source software vendor, SensioLabs.

In fact, there are 3 main reasons for Commerce Guys’ interest in Symfony and working tightly with SensioLabs:

Drupal 8 and Drupal Commerce 2.0

It’s no secret that Drupal 8 will rely on Symfony components. This architecture decision is a good one, and it paved the way for similar thinking on Drupal Commerce 2.0. It also ties together the destinies of both open source communities, we think for the better. The work on Drupal Commerce for Drupal 8, known as Drupal Commerce 2.x, started in June 2014. During a community sprint that included members of SensioLabs and other partners like Smile, Publicis Modem, Osinet, i-KOS, Adyax, and Ekino, we validated the idea that some of the core eCommerce components of Drupal Commerce 2.x should rely on Symfony and other general PHP libraries directly. The goal is to offer an even more generic and flexible solution that spreads the impact of our code beyond the walls of the Drupal community.

This effort is well in progress already. Bojan Zivanovic, Drupal Commerce 2.x co-maintainer, provides a great example of this in a recent blog post about our new Internationalization library. He explains how much improvement this component will bring to the software for managing and formatting currencies internationally via a generic PHP library called commerceguys/intl. Expanding the reach of our work to the broader PHP community will help us get more feedback, more users, and more open source contributors, ultimately leading to better software. Ryan Szrama, Commerce Guys co-founder and Drupal Commerce CTO, will be presenting this approach at Symfony Live in New York City in October. We strongly believe this vision will bring us closer to our goal of building the most popular open source eCommerce software.

Platform.sh now refined for Symfony projects

In a context where Symfony will be central to mastering Drupal 8 projects, we’ve pursued the goal to enable our development & production Platform as a Service (PaaS) for Symfony projects in general. We’re convinced that this will provide Platform.sh an edge, and wanted to be a driving force in providing tools that will fit both open source communities.

Since Spring 2014, Commerce Guys engineers have been collaborating with SensioLabs engineers to understand Symfony better. Few companies in the world have the expertise in enterprise PHP that SensioLabs has, and the Platform.sh Symfony experience is the outcome of lots of intense discussions with the SensioLabs’ team.

Our objective was to enable teams to develop and deploy Symfony projects faster and more productively on Platform.sh. That work is now done and we’re very happy to announce today that, with just a few clicks, Symfony developers can create a full Symfony development environment (starting from an existing Symfony distribution), in order to build and deploy highly scalable websites and custom applications. This will lead to a much improved development process, lots of time saved for developers and a reduced time to market from development to production. Sponsoring Symfony live is a way for Commerce Guys to share the hard work we’ve done to build a unique, cloud-based development experience for Symfony developers. We’re excited to share our work and get feedback from the Symfony community about this product.

A shared focus on the developers

The time we’ve spent with SensioLabs’ management team highlighted our common passion and interest: help developers be more efficient and successful and, as much as it depends on us, to enjoy their jobs even more. SensioLabs and Commerce Guys were both founded to design and develop open source frameworks, gather large and global developer communities, and enable developers to create great web experiences. Both companies aim at making developers happier and more successful by providing them the right tools. It’s on these values and fundamental principles that this partnership was built. It’s all very solid and here to stay!

Categories: Elsewhere

Russ Allbery: Review: Turn the Ship Around!

Planet Debian - Thu, 25/09/2014 - 05:16

Review: Turn the Ship Around!, by L. David Marquet

Publisher: Portfolio Copyright: 2012 ISBN: 1-101-62369-1 Format: Kindle Pages: 272

Turn the Ship Around! (yes, complete with the irritating exclamation point in the title) is marketed to the business and management non-fiction market, which is clogged with books claiming to provide simple techniques to be a great manager or fix an organization. If you're like me, this is a huge turn-off. The presentation of the books is usually just shy of the click-bait pablum of self-help books. Many of the books are written by famous managers best known for doing horrible things to their staff (*cough* Jack Welch). It's hard to get away from the feeling that this entire class of books is an ocean of bromides covering a small core of outright evil.

This book is not like that, and Marquet is not one of those managers. It can seem that way at times: it is presented in a format that caters to short attention spans, with summaries of primary points at the end of every short chapter and occasionally annoying questions sprinkled throughout. I'm capable of generalizing information to my own life without being prompted by study questions, thanks. But that's just form. The core of this book is a surprisingly compelling story of Marquet's attempt to introduce a novel management approach into one of the most conservative and top-down of organizations: a US Navy nuclear submarine.

I read this book as an individual employee, and someone who has no desire to ever be a manager. But I recently changed jobs and significantly disrupted my life because of a sequence of really horrible management decisions, so I have strong opinions about, at least, the type of management that's bad for employees. A colleague at my former employer recommended this book to me while talking about the management errors that were going on around us. It did such a good job of reinforcing my personal biases that I feel like I should mention that as a disclaimer. When one agrees with a book this thoroughly, one may not have sufficient distance from it to see the places where its arguments are flawed.

At the start of the book, Marquet is assigned to take over as captain of a nuclear submarine that's struggling. It had a below-par performance rating, poor morale, and the worst re-enlistment rate in the fleet, and was not advancing officers and crew to higher ranks at anywhere near the level of other submarines. Marquet brought to this assignment some long-standing discomfort with the normal top-down decision-making processes in the Navy, and decided to try something entirely different: a program of radical empowerment, bottom-up decision-making, and pushing responsibility as far down the chain of command as possible. The result (as you might expect given that you can read a book about it) was one of the best-performing submarines in the fleet, with retention and promotion rates well above average.

There's a lot in here about delegated decision-making and individual empowerment, but Turn the Ship Around! isn't only about that. Those are old (if often ignored) rules of thumb about how to manage properly. I think the most valuable part of this book is where Marquet talks frankly about his own thought patterns, his own mistakes, and the places where he had to change his behavior and attitude in order to make his strategy successful. It's one thing to say that individuals should be empowered; it's quite another to stop empowering them (which is still a top-down strategy) and start allowing them to be responsible. To extend trust and relinquish control, even though you're the one who will ultimately be held responsible for the performance of the people reporting to you. One of the problems with books like this is that they focus on how easy the techniques presented in the book are. Marquet does a more honest job in showing how difficult they are. His approach was not complex, but it was emotionally quite difficult, even though he was already biased in favor of it.

The control, hierarchy, and authority parts of the book are the most memorable, but Marquet also talks about, and shows through specific examples from his command, some accompanying principles that are equally important. If everyone in an organization can make decisions, everyone has to understand the basis for making those decisions and understand the shared goals, which requires considerable communication and open discussion (particularly compared to a Navy ideal of an expert and taciturn captain). It requires giving people the space to be wrong, and requires empowering people to correct each other without blame. (There's a nice bit in here about the power of deliberate action, and while Marquet's presentation is more directly applicable to the sorts of physical actions taken in a submarine, I was immediately reminded of code review.) Marquet also has some interesting things to say about the power of, for lack of a better term, esprit de corps, how to create it, and the surprising power of acting like you have it until you actually develop it.

As mentioned, this book is very closely in alignment with my own biases, so I'm not exactly an impartial reviewer. But I found it fascinating the degree to which the management situation I left was the exact opposite of the techniques presented in this book in nearly every respect. I found it quite inspiring during my transition period, and there are bits of it that I want to read again to keep some of the techniques and approaches fresh in my mind.

There is a fair bit of self-help-style packaging and layout here, some of which I found irritating. If, like me, you don't like that style of book, you'll have to wade through a bit of it. I would have much preferred a more traditional narrative story from which I could draw my own conclusions. But it's more of a narrative than most books of this sort, and Marquet is humble enough to show his own thought processes, tensions, and mistakes, which adds a great deal to the presentation. I'm not sure how directly helpful this would be for a manager, since I've never been in that role, but it gave me a lot to think about when analyzing successful and unsuccessful work environments.

Rating: 8 out of 10

Categories: Elsewhere

Laura Arjona: 10 short steps to contribute translations to free software for Android

Planet Debian - Thu, 25/09/2014 - 01:14

This small guide assumes that you know how to create a public repository with git (or other version control system). Maybe some projects use other VCS, Subversion or whatever; the process would be similar although the commands will be different of course.

If you don’t want to use any VCS, you can just download the corresponding file, translate it, and send it by email or to the project’s BTS, but the commands required are very easy and you’ll soon see that using git (or any VCS) is quite comfortable and less scary than it seems.

So, you were going to recommend a nice app that you use or found in F-Droid to your friend, but she does not understand English. Why not translate the app for her? And for everybody? It’s a job that can be done in 15 minutes or so (Android apps have very short strings and few menus). Let’s go!

1.- Search the app in the F-Droid website

You can do it going to the URL:

https://f-droid.org/repository/browse/?fdfilter=wordofappname

Example: https://f-droid.org/repository/browse/?fdfilter=pomodoro

Then, open the details of the app, and find out where the source code is hosted.

2.- Clone the source code

If you have an account on that forge, fork/clone the project into your account, and then clone your fork locally.

If you don’t have an account on that forge, clone the project locally.

git clone URLofTheProjectOrYourClone

3.- In your local copy, create a new branch, and check it out.


git checkout -b Spanish

4.- Then, copy the “res/values” folder into a “res/values-XX” folder (where XX is your language code)

cd nameofrepo
cp -R ./res/values ./res/values-es

5.- Translate

Edit the “strings.xml” file in the “res/values-XX” folder, and change the English strings to your language (preserving the XML format).
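As a minimal sketch of what the result of this step can look like (the string names below are invented for illustration, not taken from any particular app), a translated “res/values-es/strings.xml” keeps the same resource names as the English file and only changes the text between the tags:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- res/values-es/strings.xml: the name attributes must match the
     English original exactly; only the text content is translated.
     These example strings are hypothetical. -->
<resources>
    <string name="app_name">Pomodoro</string>
    <string name="action_start">Iniciar</string>
    <string name="action_stop">Detener</string>
</resources>
```

Android picks this file automatically when the device locale is Spanish, falling back to “res/values” for anything left untranslated.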

6.- Translate other files, or delete them

If there are more files in that folder (e.g. “arrays.xml”), review them to see whether they contain translatable strings. If they do, translate them; if not, delete those files.

7.- Commit

When you are finished, commit your changes:

git add res/values-es/*
git commit -a

(Message can be “Spanish translation” or so)

8.- Push your changes to your public repo

If you didn’t create a public clone of the repo in your forge, create a public repo and push your local work there.

git push --all

9.- Request a merge to the original repo

Use the web interface of the forge if it hosts both the original repo and your clone; otherwise, send an email or create an issue providing the URL of your repo.

For example, open a new issue in the project’s BTS

Title: Spanish translation available for merging
Body: Hi everybody. Thanks for your work in "nameofapp".
I have completed a Spanish translation, it's available for review/merge
in the Spanish branch of my repo:

https://urlofyourclone

Best regards

10.- Congratulations!

Translations are new features, and having a new feature in your app for free is a great thing, so the app developer(s) will probably merge your translation soon.

Share your joy with your friends, so they begin to use the app you translated, and maybe become translators too!


Filed under: Uncategorized Tagged: Android, Contributing to libre software, English, Free Software, libre software, translations
Categories: Elsewhere

DrupalCon Amsterdam: Win €100 to the Drupal Store

Planet Drupal - Wed, 24/09/2014 - 23:32

We're excited about the great swag we've got at the Drupal store-- so excited that we're going to award a €100 gift card to a lucky winner at DrupalCon Amsterdam!

Here's how it works.

On Tuesday and Wednesday, we are going to hide puzzle pieces around the RAI Convention center. (The puzzle, for reference, is above!) If you find one of the puzzle pieces, bring it by the Drupal Association Booth in the exhibit hall.

We'll write your name and contact information on the back, and once the puzzle is complete-- or, at lunch on Thursday, whichever happens first-- we will select a lucky winner and award him or her a €100 gift card!

Pro tip: during the hours the exhibit floor is open, we'll use the @DrupalAssoc Twitter handle to send out pictures of where the puzzle pieces are hidden. Keep your eye on that handle so you can have a shot at finding one of the pieces and winning the prize!

Note: there are only 15 puzzle pieces, so the odds of winning are great. Limit one puzzle piece per person.

Questions?

Come by the Drupal Association booth next to the bookstore or email Leigh Carver with any questions you may have.

Good luck!

Categories: Elsewhere

Julian Andres Klode: APT 1.1~exp3 released to experimental: First step to sandboxed fetcher methods

Planet Debian - Wed, 24/09/2014 - 23:06

Today, we worked, with the help of ioerror on IRC, on reducing the attack surface in our fetcher methods.

There are three things that we looked at:

  1. Reducing privileges by setting a new user and group
  2. chroot()
  3. seccomp-bpf sandbox

Today, we implemented the first of them. Starting with 1.1~exp3, the APT directories /var/cache/apt/archives and /var/lib/apt/lists are owned by the “_apt” user (username suggested by pabs). The methods switch to that user shortly after the start. The only methods doing this right now are: copy, ftp, gpgv, gzip, http, https.

If privileges cannot be dropped, the methods will fail to start. No fetching will be possible at all.
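As an illustration of the general technique (this is a generic sketch, not APT's actual implementation, which lives in its C++ method code), dropping privileges to a dedicated user such as “_apt” looks roughly like this:

```c
#include <grp.h>
#include <pwd.h>
#include <unistd.h>

/* Illustrative sketch of dropping privileges to an unprivileged
 * user; not APT's actual code. Returns 0 on success, -1 on error. */
int drop_privileges(const char *username)
{
    struct passwd *pw = getpwnam(username);
    if (pw == NULL)
        return -1;

    /* Order matters: supplementary groups first, then the primary
     * gid, then the uid -- once setuid() has run, we no longer have
     * permission to change groups. Keeping only the primary gid
     * matches the known issue listed below. */
    if (setgroups(1, &pw->pw_gid) != 0)
        return -1;
    if (setgid(pw->pw_gid) != 0)
        return -1;
    if (setuid(pw->pw_uid) != 0)
        return -1;

    /* Paranoia: the drop must not be revertible. */
    if (pw->pw_uid != 0 && setuid(0) == 0)
        return -1;

    return 0;
}
```

A method would call this right after start-up and refuse to run if it fails, matching the behaviour described above.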

Known issues:

  • We drop all groups except the primary gid of the user
  • copy breaks if that group has no read access to the files

We plan to also add chroot() and seccomp sandboxing later on, to further reduce the attack surface for untrusted files and protocol parsing.


Filed under: Uncategorized
Categories: Elsewhere

Acquia: Drupal community engagement for businesses – Ruth Fuller

Planet Drupal - Wed, 24/09/2014 - 21:23

Meet Ruth Fuller, she's here to help businesses get more out of Drupal by helping them engage more effectively with the Drupal community. She'd like to help you with effective Drupal and open source sponsorship, how to engage with the community, planning, coordination, presentation preparation, and public speaking coaching.

Categories: Elsewhere

Drupal Commerce: Commerce 2.x Stories - Addressing

Planet Drupal - Wed, 24/09/2014 - 18:56

Welcome to the second article in the “Commerce 2.x Stories” series. This time we’re going to talk about addressing, and our efforts to improve the already good Commerce 1.x addressing implementation (addressfield).

By addressing we mean storing, manipulating and formatting postal addresses, meant to identify a precise recipient location for shipping or billing purposes.

Read on to see what we're doing to improve it...

Categories: Elsewhere
