Elsewhere

Chapter Three: Content Strategy for Drupal 8

Planet Drupal - Thu, 19/02/2015 - 02:40

We've been publishing a lot of technical blog posts about Drupal 8 to educate and inspire the community. But what about the non-technical folks? How will Drupal 8 shift the way designers, content strategists and project managers plan websites? While many of the changes will not affect our day-to-day work, there are a few new terms and ways of thinking that can streamline the strategy process, save developers time and save clients money.



Entities: The Word of the Day

Entities are our new friend. They are easygoing and flexible. The sooner we get comfortable with the word Entity and begin using it with our teams, the sooner we can all reap the rewards of the budding relationship.

Categories: Elsewhere

Victor Kane: Transcript of my presentation on DurableDrupal Lean UX+Dev+DevOps at DrupalCon Latin America

Planet Drupal - Thu, 19/02/2015 - 00:46

Transcript of the Presentation at DrupalCon Latin America 2015

Setting up a DurableDrupal factory on the basis of a reusable Lean process

[Download the 18-page PDF at the bottom of the page]

Since the audio recording of my presentation came out very low in volume, I want to offer this transcript in the hope that my message reaches as many people as possible.

The video of the presentation itself can be found here: https://www.youtube.com/watch?v=bNbkBvtQ8Z0

The slides: http://awebfactory.com/drupalcon2015lean/#/

For each slide, I include below the link to the slide and the corresponding text from the video.

In some cases there are unavoidable corrections, or I have expanded the text for better comprehension. I have also translated the most important slides and added some comments to cover highly important points, such as Kanban and its characteristics, that were not mentioned in the presentation for lack of time.

As a result the article runs rather long, but I hope it proves useful to those interested in applying a Lean process to Drupal development.

My plan is to publish, shortly, a book that puts these concepts into practice with a concrete example of developing a web application in Drupal 7 and 8.

read more

Categories: Elsewhere

Richard Hartmann: Listing screen sessions on login

Planet Debian - Wed, 18/02/2015 - 21:32

Given Peter Eisentraut's blog post on the same topic, I thought I would also share this Zsh function (from 2011):

startup () {
    # info on any running screens
    if [[ -x $(which screen) ]]
    then
        ZSHRC_SCREENLIST=(${${(M)${(f)"$(screen -ls)"}:#(#s)[[:space:]]##([0-9]##).*}/(#b)[[:space:]]#([0-9]##).*/$match[1]})
        if [[ $#ZSHRC_SCREENLIST -ge 1 ]]
        then
            echo "There are $#ZSHRC_SCREENLIST screens running. $ZSHRC_SCREENLIST"
        fi
    fi
}
Categories: Elsewhere

Philipp Kern: Caveats of the HP MicroServer Gen8

Planet Debian - Wed, 18/02/2015 - 19:15
If you intend to buy the HP MicroServer Gen8 as a home server there are a few caveats that I didn't find on the interwebs before I bought the device:
  • Even though the main chassis fan is now fixed in AHCI mode with recent BIOS versions, there is still an annoying PSU fan that's tiny and high frequency. You cannot control it and the PSU seems to be custom-built.
  • The BIOS does not support ACPI S3 (suspend-to-RAM) at all. Apparently, it being a server BIOS, they chose not to include the code paths needed to properly turn off devices and turn them back on. This means that it's not possible to simply suspend it and have it woken up when your media center boots.
  • In contrast to the older AMD-based MicroServers the Gen8 comes with iLO, which will consume quite a few watts just for being present even if you don't use it. I read figures of about ten watts. It also cannot be turned off, as it does system management like fan control.
  • The HDD cages are not vibration proof or decoupled.
If you try to boot FreeBSD with its zfsloader you will likely need to apply a workaround patch, because the BIOS seems to do something odd. Linux works as expected.
Categories: Elsewhere

DrupalCon News: One week left to submit your DrupalCon sessions

Planet Drupal - Wed, 18/02/2015 - 19:06

We are in the fourth week of session submissions for DrupalCon Los Angeles, and only one week remains before the deadline. Now is your chance to shine! Send us your talk idea and you could find yourself presenting at the Drupal community's largest annual event this spring.

Categories: Elsewhere

InternetDevels: Drupal vulnerability or developers' carelessness?

Planet Drupal - Wed, 18/02/2015 - 16:16

In October 2014, the company Sektion Eins discovered a vulnerability affecting all Drupal 7 releases. It allowed executing arbitrary SQL queries against the database, even without any permissions in the system. The security risk was rated highly critical. The corresponding core update, released on October 15, upgraded the core to version 7.32 and eliminated this vulnerability. And now we'll talk about some other kinds of vulnerabilities.

Read more
Categories: Elsewhere

Dcycle: A quick intro to Docker for a Drupal project

Planet Drupal - Wed, 18/02/2015 - 16:05

I recently added Docker support to Realistic Dummy Content, a project I maintain on Drupal.org. It is now possible to run ./scripts/dev.sh directly from the project directory (use the latest dev version if you try this), and have a development environment, sans MAMP.

I don't consider myself an expert in Docker, virtualization, DevOps and config management, but here, nonetheless, is my experience. If I'm wrong about something, please leave a comment!

Intro: Docker and DevOps

The DevOps movement, popularized in recent years, promises to keep environment information alongside application information in the same Git repo, for smoother development, testing, and production environments. For example, if your Drupal module requires PHP version 5.4 along with a given library, then that information should be somewhere in your Git repo. Building an environment for testing, development or production should then use that information and not depend on anything which is unversioned. Docker is a tool which is anchored in the DevOps movement.

DevOps: the Config management approach

The family of tools which has been around for awhile now includes Puppet, Chef, and Ansible. These tools are configuration management tools: they define environment information (PHP version should be 5.3, Apache mod_rewrite should be on, etc.) and make sure a given environment conforms to that information.

I have used Puppet, along with Vagrant, to deliver applications, including my Jenkins server hosted on GitHub.

Virtualization and containers

Using Puppet and Vagrant, you need to use virtualization: you create a virtual machine on your host machine. Docker uses containers, so resources are shared. The article Getting Started with Docker (Servers for Hackers, 2014/03/20) contains some graphics which demonstrate how much more efficient containers are compared to virtualization.

Puppet and Vagrant are slow; Docker is fast

Puppet and Vagrant together work for packaging software and environment configuration, but the combination is excruciatingly slow: it can take several minutes to launch an environment. My reaction to this has been to cringe every time I have to do it.

Docker, on the other hand, uses caching aggressively: if a server was already in a given state, Docker uses a cached version of it to move faster. So, when building a container, Docker goes through a series of steps and caches each step, making rebuilds lightning fast.

One example: launching a dev environment for my Jenkins projects on Mac OS takes over five minutes, but launching a dev environment for my Drupal project Realistic Dummy Content (which uses Docker) takes less than 15 seconds the first time it is run (once the server code has been downloaded) and, because of caching, less than one (1) second on subsequent runs if no changes have been made.

Configuration management is idempotent, Docker is not

Before we move on, note that Docker is not incompatible with config management tools, but Docker does not require them. Here is why I think, in many cases, config management tools are not necessary.

Config management tools such as Puppet are idempotent: you define how an environment should be, and the tool runs whatever steps are necessary to make it that way. This sounds like a good idea in theory, but in practice the resulting manifests can be hard to follow. I have come to the conclusion that this is not the way I think, and it forces me to relearn how to think of my environments. I suspect that many developers have a hard time wrapping their heads around idempotence.

Docker is not idempotent; it defines a series of steps to get to a given state. If you like idempotence, one of the steps can be to run a Puppet manifest; but if, like me, you think idempotence is overrated, then you don't need to use it. Here is what a Dockerfile looks like: I understood it at first glance; it doesn't require me to learn a new way of thinking.
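To make that concrete, here is a minimal sketch of such a Dockerfile (a hypothetical example, not the actual file from Realistic Dummy Content; the base image and package names are assumptions):

# Each instruction is one step; Docker caches every step, so steps
# whose inputs have not changed are skipped on subsequent builds.
FROM ubuntu:14.04
# Install a basic Apache/PHP stack for Drupal development.
RUN apt-get update && apt-get install -y apache2 php5 php5-mysql
# Copy the module code into the container.
COPY . /var/www/html/sites/all/modules/realistic_dummy_content
# Serve on port 80, with Apache in the foreground.
EXPOSE 80
CMD ["apache2ctl", "-D", "FOREGROUND"]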

The CoreOS project

The CoreOS project has seen the promise of Docker and containers. It is an OS which ships with Docker, Git, and a few other tools, but is designed so that everything you do happens within containers (using the included Docker, and eventually Rocket, a tool they are building). The result is that CoreOS is tiny: it takes 10 seconds to build a CoreOS instance on DigitalOcean, for example, but almost a minute to set up a CentOS instance.

Because Docker does not work on Mac OS without jumping through hoops, I decided to use Vagrant to set up a CoreOS VM on my Mac, which is speedy and works great.
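For reference, a minimal version of that workflow looks like this (using the upstream coreos-vagrant repository; an assumption on my part, not necessarily the exact setup from the post):

git clone https://github.com/coreos/coreos-vagrant.git
cd coreos-vagrant
vagrant up                # boots a CoreOS VM with Docker already inside
vagrant ssh               # log in to the VM
docker run hello-world    # containers run inside the VM, not on the Mac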

Docker for deploying to production

We have seen that Docker can work for quickly setting up dev and testing environments. Can it be used to deploy to production? I don't see why not, especially if used with CoreOS. For an example see the blog post Building an Internal Cloud with Docker and CoreOS (Shopify, Oct. 15, 2014).

In conclusion, I am just beginning to play with Docker, and it just feels right to me. I remember working with Joomla in 2006, when I discovered Drupal: it just felt right, and I have made a career of it since then. I am having the same feeling now as I discover Docker and CoreOS.

I am looking forward to your comments explaining why I am wrong about not liking idempotence, how to make config management and virtualization faster, and how and why to integrate config management tools with Docker!

Tags: blogplanet
Categories: Elsewhere

Acquia: Helping Remote Teams Work - The Manager

Planet Drupal - Wed, 18/02/2015 - 15:51

Part 2 of 2 – I ran into Elia Albarran, Four Kitchens' Operations Manager at BADCamp 2014. She mentioned she'd read my blog post 10 Tips for Success as a Remote Employee; we started exchanging tips and ideas until I basically yelled, "Stop! I need to get this on camera for the podcast!" She graciously agreed and brought along two Four Kitchens developers for the session, too: Taylor Smith and Matt Grill, whom I spoke with in part 1.

Categories: Elsewhere

Drupalize.Me: Release Day: PhpStorm for Modern PHP Development

Planet Drupal - Wed, 18/02/2015 - 15:15

Ready to take your PHP development to the next level? This week, we have another batch of video tutorials from the awesome folks at JetBrains on their IDE, PhpStorm. In these tutorials, you'll learn how to generate code using templates, set up your modern PHP app with namespaces, PSR-0 or PSR-4, integrate Composer, and debug like a pro.

Categories: Elsewhere

Propeople Blog: Varnish Tips and Tricks

Planet Drupal - Wed, 18/02/2015 - 06:02

In this article we would like to share some use cases and recipes for configuring Varnish.

Use custom data for building the Varnish cache hash

By default, Varnish uses the URL as the key for caching. In the VCL file, it is also possible to add custom data (for example, location or a custom header value) to the hash by using the sub vcl_hash{} configuration section. But there is yet another solution that Varnish supports out of the box: we can set a Vary header, which Varnish respects. So, if we want our cache to vary on the header X-MyCustomHeader, then in Drupal we can set the response header to

Vary: Cookie,Accept-Encoding,X-MyCustomHeader

This way, different values of our custom header will have different cache records in Varnish.
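A quick way to check this from the command line (the URL and header values are placeholders):

# Two requests that differ only in the custom header should hit two
# distinct cache objects; compare the X-Varnish and Age response headers.
curl -sI -H "X-MyCustomHeader: foo" http://example.com/ | grep -iE 'x-varnish|age'
curl -sI -H "X-MyCustomHeader: bar" http://example.com/ | grep -iE 'x-varnish|age'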

Limit access to the site by IP address

When we build an intranet website, we can limit access to it at the Varnish level. This can be done in the following way.

First, we define a list of allowed IP addresses:

acl offices {
   "localhost";
   "127.0.0.1";
   "1.2.3.4";
   "5.6.7.8";
}

Then we deny access to non-matching addresses:

sub vcl_recv {
   if ( req.http.host ~ "(intranet\.example\.com)$" && !(client.ip ~ offices) ) {
      error 403 "Access denied";
   }
}

SSL termination

As Varnish does not handle HTTPS traffic, we need to terminate SSL before it hits Varnish. For that we can use nginx. Here is a list of links to articles that dive deeper into this topic:

https://www.digitalocean.com/community/tutorials/how-to-configure-varnish-cache-4-0-with-ssl-termination-on-ubuntu-14-04

http://edoceo.com/howto/nginx-varnish-ssl

http://mikkel.hoegh.org/2012/07/24/varnish-as-reverse-proxy-with-nginx-as-web-server-and-ssl-terminator/

https://wiki.deimos.fr/Nginx_%2B_Varnish_:_Cache_even_in_HTTPS_by_offloading_SSL

ESI

On a recent Propeople project, we had a requirement to include a block with data from an external website, without any caching. The tricky part was that the external site provided the data as XML. The solution we implemented was an ESI block pointing to a custom PHP file that pulled that XML and parsed it on the fly.
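A rough sketch of that pattern, in the same Varnish 3 style as the other examples here (the /esi-block URL and the matching page path are placeholders): the page's HTML embeds <esi:include src="/esi-block"/>, and the VCL turns on ESI processing while keeping the fragment itself uncached:

sub vcl_fetch {
   # Parse ESI tags on the pages that embed the external block.
   if (req.url ~ "^/some-page") {
      set beresp.do_esi = true;
   }
   # Never cache the ESI fragment itself.
   if (req.url ~ "^/esi-block") {
      set beresp.ttl = 0s;
   }
}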

Hiding js requests to external domains

If we need to make some CORS (http://en.wikipedia.org/wiki/Cross-origin_resource_sharing) requests, then instead of our JavaScript making requests directly to the external domain, we can make requests to our own site under a specific URL. Then, at the Varnish level, we can redirect those requests to the external domain. In this case, Varnish acts like a proxy. This can be achieved with backend options.

backend google {
   .host = "209.85.147.106";
   .port = "80";
}

sub vcl_fetch {
   if (req.url ~ "^/masq") {
      set req.backend = google;
      set req.http.host = "www.google.com";
      set req.url = regsub(req.url, "^/masq", "");
      remove req.http.Cookie;
      return(deliver);
   }
}

This is an example from a brilliant book: https://www.varnish-software.com/static/book/

Multiple backends, load balancing

It is possible to define multiple backends for Varnish and switch between them. The most basic implementations are round-robin or random. Here is an example:

backend web01 {
   .host = "example1";
   .port = "80";
   .connect_timeout = 120s;
   .first_byte_timeout = 300s;
   .between_bytes_timeout = 60s;
   .max_connections = 50;
   .probe = {
      .url = "/";
      .timeout = 10s;
      .interval = 20s;
      .window = 5;
      .threshold = 3;
   }
}

backend web02 {
   .host = "example2";
   .port = "80";
   .max_connections = 100;
   .connect_timeout = 120s;
   .first_byte_timeout = 300s;
   .between_bytes_timeout = 60s;
   .probe = {
      .url = "/";
      .timeout = 10s;
      .interval = 20s;
      .window = 5;
      .threshold = 3;
   }
}

director apache round-robin {
   { .backend = web01; }
   { .backend = web02; }
}

sub vcl_recv {
   set req.backend = apache;
}

It is also possible to set a specific backend for visitors coming from specific IP addresses. This can have a number of helpful uses, such as making sure that the editorial team has a dedicated backend server.

if (client.ip ~ offices) {
   set req.backend = web03;
}


I hope you have enjoyed our tips regarding Varnish configuration. Please feel free to share your own thoughts and tips on Varnish in the comments below!

Tags: Varnish | Service category: Technology | Topics: Tech & Development
Categories: Elsewhere

Four Kitchens: Announcing SANDcamp Training for Advanced Responsive Web Design

Planet Drupal - Wed, 18/02/2015 - 00:44

Patrick Coffey and I have been busy building a new version of the popular Advanced Responsive Web Design all-day training program, and we are excited to host it at San Diego's SANDcamp next week. Registration is open, several spaces remain, and we would love for you to join us!

Responsive Web Design is on everyone's mind at the moment, and for good reason. The old techniques we have used to create pixel-perfect sites for desktop audiences have already become a thing of the past as mobile usage accelerates.

Tags: Training, Drupal Camp, Drupal
Categories: Elsewhere

Isovera Ideas & Insights: When Do You Make the Move to Drupal 8?

Planet Drupal - Tue, 17/02/2015 - 21:54
Lately, whenever we start a project at Isovera, we are typically asked, "Would you build this with Drupal 8?" It's a good question. There are many good reasons to get a leg up with the (currently) beta release, but there are also good reasons to keep your head down and stick with Drupal 7. The official release of Drupal 8 is rapidly approaching. What might this mean for you?
Categories: Elsewhere

Phase2: The Pros And Cons of Headless Drupal

Planet Drupal - Tue, 17/02/2015 - 19:38

Drupal is an excellent content management system. Nodes and fields give site administrators the ability to create complex datasets and models without having to write a single MySQL query. Unfortunately, Drupal's theming system isn't as flexible: complete control over the DOM is nearly impossible without a lot of work. Headless Drupal bridges the gap and gives us the best of both worlds.

What is headless Drupal?

Headless Drupal is an approach that decouples Drupal's backend from the frontend theme. Drupal is used as a content store and admin interface. Using the Services module in Drupal 7, or core in Drupal 8, a REST web service can be created. Visitors to the site don't view a traditional Drupal theme; instead they are presented with pages created with Ember.js, Angular.js, or even a custom framework. Through the web service, the chosen framework transfers data from Drupal to the front end and vice versa.
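As a rough illustration, the decoupled frontend simply consumes JSON over HTTP; the endpoint path below is a placeholder, since actual paths depend on how Services or core REST is configured:

# Fetch a node as JSON; an Ember.js or Angular.js frontend renders it client-side.
curl -s -H "Accept: application/json" http://example.com/api/node/1
# The same web service also accepts POST/PUT requests, so data flows both ways.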

Pros

So what makes Headless Drupal so great? For one thing, it allows frontend developers full control over the page markup. Page speed also increases, since display logic runs on the client side instead of the server. Sites can also become much more interactive, with page transitions and animations. But most importantly, the Drupal admin can be used to power not only web apps but also mobile applications, including Android and iOS.

Cons

Unfortunately, Headless Drupal is not without its drawbacks. For one thing, layout control for editors becomes much more difficult: something that could be done easily via Context or Panels now requires a lot of custom frontend logic. Also, if proper caching isn't utilized and requests aren't batched properly, lots of round trips can occur, causing things to slow down drastically.

Want to learn more about Headless Drupal? Check out the Headless Drupal Initiative on Drupal.org. And on a related topic, check out "Drupal 8 And The Changing CMS Landscape."

Categories: Elsewhere

Lucas Nussbaum: Some email organization tips and tricks

Planet Debian - Tue, 17/02/2015 - 18:34

I’d like to share a few tips that were useful to strengthen my personal email organization. Most of what follows is probably not very new nor special, but hey, let’s document it anyway.

Many people have an inbox folder that just grows over time. It's actually similar to a Twitter or RSS feed (except they would probably agree that they are supposed to read more of their email "feed"). When I send an email to them, it sometimes happens that they don't notice it, if the email arrives at a bad time. Of course, as long as they don't receive too many emails, and there aren't too many people relying on them, it might just work. But from time to time it's super-painful for those interacting with them, when they miss an email and need to be pinged again. So let's try not to be like them. :-)

Tip #1: do Inbox Zero (or your own variant of it)

Inbox Zero is an email management methodology inspired from David Allen’s Getting Things Done book. It’s best described in this video. The idea is to turn one’s Inbox into an area that is only temporary storage, where every email will get processed at some point. Processing can mean deleting an email, archiving it, doing the action described in the email (and replying to it), etc. Put differently, it basically means implementing the Getting Things Done workflow on one’s email.

Tip #1.1: archive everything

One of the time-consuming decisions in the original GTD workflow is whether something should be eliminated (deleted) or stored for reference. Given that disk space is quite cheap, it's much easier never to decide that at all, and just archive everything (by duplicating the email to an archive folder when it is received). To retrieve archived emails when needed, I use notmuch within mutt to easily search through recent (< 2 years) archives. I use archivemail to archive older email into compressed mboxes from time to time, and grepmail to search through those mboxes when needed.
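As an illustration, the day-to-day commands look roughly like this (folder and file names are placeholders):

archivemail --days 730 ~/Maildir/.archive/          # compress mail older than ~2 years
notmuch search from:someone@example.org subject:release   # search recent archives
grepmail "release team" ~/archives/archive.mbox.gz  # dig through old compressed mboxes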

I don’t archive most Debian mailing lists though, as they are easy to fetch from master.d.o with the following script:

#!/bin/sh
rsync -vP master.debian.org:~debian/*/*$1/*$1.${2:-$(date +%Y%m)}* .

Then I can fetch a specific list archive with getlist devel 201502, or a set of archives with e.g. getlist devel 2014, or the current month with e.g. getlist devel. Note that to use grepmail on XZ-compressed archives, you need libmail-mbox-messageparser-perl version 1.5002-3 (only in unstable — I was using a locally-patched version for ages, but finally made a patch last week, which Gregor kindly uploaded).

Tip #1.2: split your inbox

(Yes, this one looks obvious but I’m always surprised at how many people don’t do that.)

Like me, you probably receive various kinds of emails:

  • emails about your day job
  • emails about Debian
  • personal emails
  • mailing lists about your day job
  • mailing lists about Debian
  • etc.

Splitting those into separate folders has several advantages:

  • I can adjust my ‘default action’ based on the folder I am in (e.g. delete after reading for most mailing lists, as it’s archived already)
  • I can adjust my level of focus depending on the folder (I might not want to pay a lot of attention to each and every email from a mailing list I am only remotely interested in; while I should definitely pay attention to each email in my ‘DPL’ folder)
  • When busy, I can skip the less important folders for a few days, and still be responsive to emails sent in my more important folders

I’ve seen some people splitting their inbox into too many folders. There’s not much point in having a per-sender folder organization (unless there’s really a recurring flow of emails from a specific person), as it increases the risk of missing an email.

I use procmail to organize my email into folders. I know that there are several more modern alternatives, but I haven’t looked at them since procmail does the job for me.

Resulting organization

I use one folder for my day-job email, one for my DPL email, and one for all other email directed or Cc'ed to me. Then I have a few folders for automated notifications of stuff. My Debian mailing list folders are auto-managed by procmail's $MATCH:

:0:
* ^X-Mailing-List: <.*@lists\.debian\.org>
* ^X-Mailing-List: <debian-\/[^@]*
.ml.debian.$MATCH/

Some other mailing lists are in their own separate folders, and there's a catch-all folder for the remaining ones. Ah, and since I use feed2imap, I have folders for the RSS/Atom feeds I follow.

I have two different commands to start mutt. One shows only a restricted number of (important) folders. The other shows the full list of (non-empty) folders. This is a good trick to avoid spending time reading email when I am supposed to be doing something more important. :)

As for many people, my own organization is loosely based on GTD and Inbox Zero. It sometimes happens that some emails stay in my inbox for several days or weeks, but I very rarely have more than 20 or 30 emails in one of my main inbox folders. I also review the whole content of my main inbox folders once or twice a week, to ensure that I did not miss an email that could be acted on quickly.

A last trick: I have a special folder replies, where procmail copies emails that are replies to a mail I sent but which do not Cc me. That's useful to work around Debian's "no Cc on reply to mailing list posts" policy.
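A hedged sketch of such a rule (matching on In-Reply-To against one's own Message-ID host is just one way to do it; the hostname is a placeholder):

:0 c:
* ^In-Reply-To: <[^>]*@myhost\.example\.org>
.replies/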

I receive email using offlineimap (over SSH to my mail server), and send it using nullmailer (through an SSH tunnel). The main advantage of offlineimap over using IMAP directly in mutt is that direct IMAP to a remote server feels much more sluggish. Another advantage is that I only need SSH access to get my full email setup working.

Tip #2: tracking sent emails

Two recurring needs I had were:

  • Get an overview of emails I sent to help me write the day-to-day DPL log
  • Easily see which emails got an answer, or did not (which might mean that they need a ping)

I developed a short script to address that. It scans the content of my ‘Sent’ maildir and my archive maildirs, and, for each email address I use, displays (see example output in README) the list of emails sent with this address (one email per line). It also detects if an email was replied to (“R” column after the date), and abbreviates common parts in email addresses (debian-project@lists.debian.org becomes d-project@l.d.o). It also generates a temporary maildir with symlinks to all emails, so that I can just open the maildir if needed.

Categories: Elsewhere

Clint Adams: Copyleft licenses are oppressing someone

Planet Debian - Tue, 17/02/2015 - 18:18

I go to a party, carrying two expensive bottles of liquor that I have acquired from faraway lands.

The hosts of the party provide a variety of liquors, snacks, and mixers.

Some neuro guy shows up, looks around, feels guilty, says that he should have brought something.

His friend shows up, bearing hot food. The neuro guy decides to contribute $7 to the purchase of food since he didn't bring anything. The friend then proceeds to charge us each $7.

No one else demands money for any of the other things being shared and consumed by everyone. The hosts do not retroactively charge a cover fee for entrance to the house. No one else offers to pay anyone for anything.

The neuro guy attempts to wash some dishes before leaving, but is stopped by the hosts, because he is a guest.

Categories: Elsewhere

Cameron Eagans: Use the Force!

Planet Drupal - Tue, 17/02/2015 - 18:00

Python developers have Jedi. Go developers have gocode. Hack developers have the built-in autocomplete functionality in HHVM. PHP developers have… nothing.

Categories: Elsewhere

Cheeky Monkey Media: How to add typekit fonts to your drupal website

Planet Drupal - Tue, 17/02/2015 - 18:00

So you just got the latest design from your graphics department. Now it's up to you, the Drupal developer, to take that design and turn it into reality. The problem is that they used some fancy-pants new font, and you need to make sure it works on every browser and mobile device.

There are a few solid options to choose from, including Google Fonts and the popular @font-your-face Drupal module. However, one of the services that I have been using lately is Adobe Typekit. They offer thousands of fonts and make it easy to scale. Typekit offers a basic free account as well as paid...Read More

Categories: Elsewhere

Appnovation Technologies: Export Data From Views to CSV File

Planet Drupal - Tue, 17/02/2015 - 17:55

It is sometimes useful to be able to save our view results into a document to allow non-technical people to manipulate the data.

Categories: Elsewhere

John Goerzen: “Has Linux lost its way?” comments prompt a Debian developer to revisit FreeBSD after 20 years

Planet Debian - Tue, 17/02/2015 - 17:11

I’ll admit it. I have a soft spot for FreeBSD. FreeBSD was the first Unix I ran, and it was somewhere around 20 years ago that I did so, before I switched to Debian. Even then, I still used some of the FreeBSD Handbook to learn Linux, because Debian didn’t have the great Reference that it does now.

Anyhow, some comments on my recent posts ("Has modern Linux lost its way?" and "Reactions to that, and the value of simplicity"), plus a latent desire to see how ZFS fares in FreeBSD, caused me to try it out. I installed it both in VirtualBox under Debian and on an old 64-bit Thinkpad sitting in my basement that previously ran Debian.

The results? A mixture of amazing and disappointing. I will say that I am quite glad that both exist; there is plenty of innovation happening everywhere, and neat features exist everywhere, too. But I can also come right out and say that the claim that FreeBSD doesn't have issues like Linux does is false and misleading. In many cases, it's running the exact same stack. In others, it's better, but there are also others where it's worse. Perhaps this article might dispel a bit of the FUD surrounding jessie, while also showing off some of the nice things FreeBSD does. My conclusion: both jessie and FreeBSD 10.1 are awesome Free operating systems, but both have their warts. This article is more about FreeBSD than Debian, but it will discuss a few of Debian's warts as well.

The experience

My initial reaction to FreeBSD was: wow, this feels so familiar. It reminds me of a commercial Unix, or maybe of Linux from a few years ago. A minimal, well-documented base system, everything pretty much in logical places in the filesystem, and solid memory management. I felt right at home. It was almost reassuring, even.

Putting together a FreeBSD box is a lot of package installing and config file editing. The FreeBSD Handbook, describing how to install X, talks about editing this or that file for this or that feature. I like being able to learn directly how things fit together by doing this.

But then you start remembering the reasons you didn't like Linux a few years ago, or the commercial Unixes: maybe it's that programs like Apache are still not as well supported, or that the default vi has a tendency to corrupt the terminal periodically, or perhaps that root's default shell is csh. Or perhaps it's that I have to do a lot of package installing and config file editing. It is not quite the learning experience it once was, either; now there are things like "paste this XML file into some obscure polkit location to make your mouse work".

Overall, there are some areas where FreeBSD kills it in a way no other OS does. It is unquestionably awesome in several areas. But there are a whole bunch of areas where it’s about 80% as good as Linux, a number of areas (even polkit, dbus, and hal) where it’s using the exact same stack Linux is (so all these comments about FreeBSD being so differently put together strike me as hollow), and frankly some areas that need a lot of work and make it hard to manage systems in a secure and stable way.

The amazing

Let’s get this out there: I’ve used ZFS too much to use any OS that doesn’t support it or something like it. Right now, I’m not aware of anything like ZFS that is generally stable and doesn’t cost a fortune, so pretty much: if your Unix doesn’t do ZFS, I’m not interested. (btrfs isn’t there yet, but will be awesome when it is.) That’s why I picked FreeBSD for this, rather than NetBSD or OpenBSD.

ZFS on FreeBSD is simply awesome. They have integrated it extremely well. The installer supports root on ZFS, even encrypted root on ZFS (though neither is a default). top on a FreeBSD system shows a line of ZFS ARC (cache) stats right alongside everything else. The ZFS defaults for maximum cache size, readahead, etc. auto-tune themselves at boot (unless overridden) based on the amount of RAM in the system and the system type. Seriously, these folks have thought of everything, and it just reeks of solid. I haven't seen ZFS this well integrated outside the Solaris-type OSs.

I have been using ZFSOnLinux for some time now, but it is just not as mature as ZFS on FreeBSD. ZoL, for instance, still has some memory tuning issues, and is not really suggested for 32-bit machines. FreeBSD just nails it. ZFS on FreeBSD even supports TRIM, which is not available in ZoL and I think fairly unique even among OpenZFS platforms. It also supports delegated administration of the filesystem, both to users and to jails on the system, seemingly very similar to Solaris zones.

FreeBSD also supports beadm, which is like a similar tool on Solaris. This lets you basically use ZFS snapshots to make lightweight “boot environments”, so you can select which to boot into. This is useful, say, before doing upgrades.
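The workflow is pleasantly simple; a sketch, with an arbitrary environment name:

beadm create pre-upgrade      # snapshot the current boot environment
beadm list                    # show all boot environments
beadm activate pre-upgrade    # boot into it next time, e.g. if an upgrade goes bad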

Then there are jails. Linux has tried so hard to get this right, and fallen on its face so many times, a person just wants to take pity sometimes. We’ve had linux-vserver, openvz, lxc, and still none of them match what FreeBSD jails have done for a long time. Linux’s current jail-du-jour is LXC, though it is extremely difficult to configure in a secure way. Even its author comments that “you won’t hear any of the LXC maintainers tell you that LXC is secure” and that it pretty much requires AppArmor profiles to achieve reasonable security. These are still rather in flux, as I found out last time I tried LXC a few months ago. My confidence in LXC being as secure as, say, KVM or FreeBSD is simply very low.

FreeBSD’s jails are simple and well-documented where LXC is complex and hard to figure out. Its security is fairly transparent and easy to control and they just work well. I do think LXC is moving in the right direction and might even get there in a couple years, but I am quite skeptical that even Docker is getting the security completely right.

The simply different

People have been throwing around the word “distribution” with respect to FreeBSD, PC-BSD, etc. in recent years. There is an analogy there, but it’s not perfect. In the Linux ecosystem, there is a kernel project, a libc project, a coreutils project, a udev project, a systemd/sysvinit/whatever project, etc. You get the idea. In FreeBSD, there is a “base system” project. This one project covers the kernel and the base userland. Some of what they use in the base system is code pulled in from elsewhere but maintained in their tree (ssh), some is completely homegrown (kernel), etc. But in the end, they have a nicely-integrated base system that always gets upgraded in sync.

In the Linux world, the distribution makers are responsible for integrating the bits from everywhere into a coherent whole.

FreeBSD is something of a toolkit to build up your system. Gentoo might be an analogy on the Linux side. On the other end of the spectrum, Ubuntu is a "just install it and it works, tweak later" sort of setup. Debian straddles the middle ground, offering both approaches in many cases.

There are pros and cons to each approach. Generally, I don’t think either one is better. They are just different.

The not-quite-there

I said that there are a lot of things in FreeBSD that are about 80% of where Linux is. Let me touch on them here.

Its laptop support leaves something to be desired. I installed it on a few-years-old Thinkpad — basically the best possible platform for working suspend in a Free OS. It has worked perfectly out of the box in Debian for years. In FreeBSD, suspend only works in text mode; if X is running, the video gets corrupted and the system hangs. I have not tried to debug it further, but would also note that suspend on lid close is not automatic in FreeBSD; the somewhat obscure instructions tell you which policykit .pkla file to edit to make suspend work in XFCE. (Incidentally, they also say which policykit file to edit to make the shutdown/restart options work.)

Its storage subsystem also has some surprising misses. Its rough version of LVM, LUKS, and md-raid is called GEOM. GEOM, however, supports only RAID0, RAID1, and RAID3. It does not support RAID5 or RAID6 in software RAID configurations! Linux’s md-raid, by comparison, supports RAID0, RAID1, RAID4, RAID5, RAID6, etc. There seems to be a highly experimental RAID5 patchset floating around for many years, but it is certainly not integrated into the latest release kernel. The current documentation makes no mention of RAID5, although it seems that a dated logical volume manager supported it. In any case, RAID5 does not seem to be well-supported in software like it is in Linux.

ZFS does have its raidz1 level, which is roughly the same as RAID5. However, that requires full use of ZFS. ZFS also does not support some common operations, like adding a single disk to an existing RAID5 group (which is possible with md-raid and many other implementations.) This is a ZFS limitation on all platforms.

FreeBSD’s filesystem support is rather a miss. They once had support for Linux ext* filesystems using the actual Linux code, but ripped it out because it was in GPL and rewrote it so it had a BSD license. The resulting driver really only works with ext2 filesystems, as it doesn’t work with ext3/ext4 in many situations. Frankly I don’t see why they bothered; they now have something that is BSD-licensed but only works with a filesystem so old nobody uses it anymore. There are only two FreeBSD filesystems that are really useable: UFS2 and ZFS.

Virtualization under FreeBSD is also rather limited. Although it does support the VirtualBox Open Source Edition, this is not really a full-featured or fast enough virtualization environment for a server. Its other option is bhyve, which looks to be something of a Xen clone. bhyve, however, does not support Windows guests, and requires some hoops even to boot Linux guest installers. It will be several years at least before it reaches feature parity with where KVM is today, I suspect.

One can run FreeBSD as a guest under a number of different virtualization systems, but their instructions for making the mouse work best under VirtualBox did not work for me. There may have been some X.Org reshuffle in FreeBSD that wasn't taken into account.

The installer can be nice and fast in some situations, but one wonders a little bit about QA. I had it lock up on me twice. It turns out this is a known bug, reported two months ago with no activity, in which the installer attempts to use a package manager that it hasn't set up yet to install optional docs. I guess the devs aren't installing the docs in testing.

There is nothing like Dropbox for FreeBSD. Apparently this is because FreeBSD has nothing like Linux’s inotify. The Linux Dropbox does not work in FreeBSD’s Linux mode. There are sketchy reports of people getting an OwnCloud client to work, but in something more akin to rsync rather than instant-sync mode, if they get it working at all. Some run Dropbox under wine, apparently.

The desktop environments tend to need a lot more configuration work to get them going than on Linux. There’s a lot of editing of polkit, hal, dbus, etc. config files mentioned in various places. So, not only does FreeBSD use a lot of the same components that cause confusion in Linux, it doesn’t really configure them for you as much out of the box.

FreeBSD doesn’t support as many platforms as Linux. FreeBSD has only two platforms that are fully supported: i386 and amd64. But you’ll see people refer to a list of other platforms that are “supported”, but they don’t have security support, official releases, or even built packages. They includ arm, ia64, powerpc, and sparc64.

The bad: package management

Roughly 20 years ago, this was one of the things that pulled me to Debian. Perhaps I am spoiled from running the distribution that has been the gold standard for package management for so long, but I find FreeBSD's package management — even "pkg-ng" in 10.1-RELEASE — to be lacking in a number of important ways.

To start with, FreeBSD actually has two different package management systems: one for the base system, and one for what they call the ports/packages collection ("ports" being the way to install from source, and "packages" the way to install from binaries, both relating to the same tree). For the base system, there is freebsd-update, which can install patches and major upgrades. It also has a "cron" option to automate this. Sadly, it has no way of automatically indicating to a calling script whether a reboot is necessary.
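The base-system side looks like this (standard freebsd-update subcommands):

freebsd-update fetch      # download patches for the base system
freebsd-update install    # apply them
freebsd-update cron       # crontab variant: sleeps a random interval, then fetches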

freebsd-update really manages less than a dozen packages though. The rest are managed by pkg. And pkg, it turns out, has a number of issues.

The biggest: it can take a week to get security updates. The FreeBSD Handbook explains pkg audit -F, which will look at your installed packages (but NOT the ones in the base system) and alert you to packages that need to be updated, similar to a stripped-down version of Debian's debsecan. I discovered this myself when pkg audit -F showed a vulnerability in xorg, but pkg upgrade showed my system was up to date. This is not documented in the Handbook, but people on the mailing list explained it to me. There are workarounds, but they can be laborious.

If that’s not bad enough, FreeBSD has no way to automatically install security patches for things in the packages collection. Debian has several (unattended-upgrades, cron-apt, etc.) There is “pkg upgrade”, but it upgrades everything on the system, which may be quite a bit more than you want to be upgraded. So: if you want to run Apache with PHP, and want it to just always apply security patches, FreeBSD packages are not up to the job like Debian’s are.

The pkg tool doesn’t have very good error-handling. In fact, its error handling seems to be nonexistent at times. I noticed that some packages had failures during install time, but pkg ignored them and marked the package as correctly installed. I only noticed there was a problem because I happened to glance at the screen at the right moment during messages about hundreds of packages. In Debian, by contrast, if there are any failures, at the end of the run, you get a nice report of which packages failed, and an exit status to use in scripts.

It also has another issue that Debian resolved about a decade ago: package scripts displaying messages that are important for the administrator, but showing so many of them that they scroll off the screen and are never seen. I submitted a bug report for this one also.

Some of these things just make me question the design of pkg. If I can’t trust it to accurately report if the installation succeeded, or show me the important info I need to see, then to what extent can I trust it?

Then there is the question of testing of the ports/packages. It seems that, automated tests aside, basically everyone is running off the “master” branch of the ports/packages. That’s like running Debian unstable on your servers. I am distinctly uncomfortable with this notion, though it seems FreeBSD people report it mostly works well.

There are some other issues, too: FreeBSD ports make no distinction between development and runtime files like Debian's packages do. So, just by virtue of wanting to run a graphical desktop, you get all of the static libraries, include files, build scripts, etc. for X.Org installed.

For a project as concerned about licensing as FreeBSD, the packages collection does not have separate sections like Debian's main, contrib, and non-free. It's all in one big pot: BSD-licensed, GPL-licensed, and proprietary-without-source packages alike. There is /usr/local/share/licenses where you can look up the license for each package, but there is no way in FreeBSD, as there is in Debian, to say "never even show me packages that aren't DFSG-free." This is useful, for instance, when running in a company, to make sure you never install packages that are for personal use only or something.

The bad: ABI stability

I’m used to being able to run binaries I compiled years ago on a modern system. This is generally possible in Linux, assuming you have the correct shared libraries available. In FreeBSD, this is explicitly NOT possible. After every major version upgrade, you must reinstall or recompile every binary on your system.

This is not necessarily a showstopper for me, but it is a hassle for a lot of people.

Update 2015-02-17: Some people in the comments are pointing out compat packages in the ports that may help with this situation. My comment was based on advice in the FreeBSD Handbook stating “After a major version upgrade, all installed packages and ports need to be upgraded”. I have not directly tried this, so if the Handbook is overstating the need, then this point may be in error.

Conclusions

As I said above, I found little validation to the comments that the Debian ecosystem is noticeably worse than the FreeBSD one. Debian has its warts too — particularly with keeping software up-to-date. You can see that the two projects are designed around a different passion: FreeBSD’s around the base system, and Debian’s around an integrated whole system. It would be wrong to say that either of those is always better. FreeBSD’s approach clearly produces some leading features, especially jails and ZFS integration. Yet Debian’s approach also produces some leading features in the way of package management and security maintainability beyond the small base.

My criticism of excessive complexity in the polkit/cgmanager/dbus area still stands. But to those people commenting that FreeBSD hasn’t “lost its way” like Linux has, I would point out that FreeBSD mostly uses these same components also, and FreeBSD has excessive complexity in its ports/package system and system management tools. I think it’s a draw. You pick the best for your use case. If you’re looking for a platform to run a single custom app then perhaps all of the Debian package management benefits don’t apply to you (you may not even need FreeBSD’s packages, or just a few). The FreeBSD ZFS support or jails may well appeal. If you’re looking to run a desktop environment, or a server with some application that needs a ton of PHP, Python, Perl, or C libraries, then Debian’s package management and security handling may well be attractive.

I am disappointed that Debian GNU/kFreeBSD will not be a release architecture in jessie. That project had the promise to provide a best of both worlds for those that want jails or tight ZFS integration.

Categories: Elsewhere

Tag1 Consulting: How to Maintain Contrib Modules for Drupal and Backdrop at the Same Time - Part 2

Planet Drupal - Tue, 17/02/2015 - 17:00

This is the second in a series of blog posts about the relationship between Drupal and Backdrop CMS, a recently-released fork of Drupal. The goal of the series is to explain how a module (or theme) developer can take a Drupal project they currently maintain and support it for Backdrop as well, while keeping duplicate work to a minimum.

read more

Categories: Elsewhere
