Feed aggregator

OSTraining: How to Use the Breeze Theme in Drupal 8

Planet Drupal - Mon, 10/10/2016 - 22:33

Breeze is a design that we make available as a Joomla template and a WordPress theme. Now, finally, it's available as a Drupal 8 theme!

We use Breeze as an example in many of our video classes and books.

Using the same design makes it easy for OSTraining members to see the differences and similarities between the various platforms.

Breeze is fully responsive and uses the Bootstrap framework.

Categories: Elsewhere

Drupalize.Me: Catching the Spirit of Open Hardware

Planet Drupal - Mon, 10/10/2016 - 21:52

Drupalize.Me trainer Amber Matz attended this year's Open Hardware Summit in Portland and reports back on what she took away from the event.

Categories: Elsewhere

The Sego Blog: Testing Software - A Quick Overview

Planet Drupal - Mon, 10/10/2016 - 21:28

Software is an ever-changing interweaving of collections of ideas expressed in code to solve various problems. In today's day and age, the problems that software solves are expanding at an ever-increasing rate.

Categories: Elsewhere

Daniel Pocock: DVD-based Clean Room for PGP and PKI

Planet Debian - Mon, 10/10/2016 - 21:25

There is increasing interest in computer security these days and more and more people are using some form of PKI, whether it is signing Git tags, signing packages for a GNU/Linux distribution or just signing your emails.

There are also more home networks and small offices who require their own in-house Certificate Authority (CA) to issue TLS certificates for VPN users (e.g. StrongSWAN) or IP telephony.

Back in April, I started discussing the PGP Clean Room idea (debian-devel discussion and gnupg-users discussion), created a wiki page and started development of a script to build the clean room ISO using live-build on Debian.

Keeping the master keys completely offline and putting subkeys onto smart cards and other devices dramatically lowers the risk of mistakes and security breaches. Using a read-only DVD to operate the clean-room makes it convenient and harder to tamper with.

Trying it out in VirtualBox

It is fairly easy to clone the Git repository, run the script to create the ISO and boot it in VirtualBox to see what is inside:
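Roughly, that workflow looks like the sketch below; the repository URL and the build script name are placeholders rather than the project's actual ones, so substitute the details from the wiki page:

# placeholder URL - use the repository linked from the PGP Clean Room wiki page
git clone https://example.org/pgp-clean-room.git
cd pgp-clean-room
sudo ./build.sh    # hypothetical name for the live-build wrapper script; see the repository's README
# attach the resulting ISO to a VirtualBox VM's optical drive and boot it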

At the moment, it contains a number of packages likely to be useful in a PKI clean room, including GnuPG, smartcard drivers, the lightweight pki utility from StrongSWAN and OpenSSL.

I've been trying it out with an SPR-532, one of the GnuPG-supported smartcard readers with a pin-pad and the OpenPGP card.

Ready to use today

More confident users will be able to build the ISO and use it immediately by operating all the utilities from the command line. For example, you should be able to fully configure PGP smart cards by following this blog from Simon Josefsson.
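As a small taste of operating those utilities from the command line, checking that a reader such as the SPR-532 and an OpenPGP card are detected uses standard GnuPG commands (nothing specific to this ISO):

gpg2 --card-status   # shows the card's serial number, key slots and cardholder data
gpg2 --card-edit     # interactive shell for admin tasks such as changing PINs or generating keys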

The ISO includes some useful scripts, for example, create-raid will quickly partition and RAID a set of SD cards to store your master key-pair offline.

Getting involved

To make PGP accessible to a wider user-base and more convenient for those who don't use GnuPG frequently enough to remember all the command line options, it would be interesting to create a GUI, possibly using python-newt to create a similar look-and-feel to popular text-based installer and system administration tools.

If you are keen on this project and would like to discuss it further, please come and join the new pki-clean-room mailing list and feel free to ask questions or share your thoughts about it.

One way to proceed may be to recruit an Outreachy or GSoC intern to develop the UI. Before they can get started, it would be necessary to more thoroughly document workflow requirements.

Categories: Elsewhere

Palantir: Palantir.net's Guide to Digital Governance: Ownership

Planet Drupal - Mon, 10/10/2016 - 19:37
By Scott DiPerna, Oct 10, 2016

This is the third installment of Palantir.net’s Guide to Digital Governance, a comprehensive guide intended to help get you started when developing a governance plan for your institution’s digital communications.

In this post we will cover...
  • Why ownership is the cornerstone of good governance
  • What ownership entails
  • How to manage instances of shared or collaborative ownership 


Now that we have defined all of the digital properties and platforms that we will consider for our Governance Plan, we next need to establish who “owns,” or who will ultimately be responsible for the care, maintenance, and accuracy, of these properties.

Ownership is the cornerstone of good governance. In fact, some may think of ownership as being synonymous with governance. From my experience, I believe that good governance of any digital communications platform involves more than simply defining who is responsible for each piece.

In most organizations, many people are using, sharing, and collaborating on the same systems together. The processes and interactions between those users need to be defined as well; however, we have to identify the people before the process. Defining ownership first is the foundation on which we can begin to define the more complex relationships that exist in a shared system.


I should make one other important distinction between maintenance of the system and the maintenance of the presentation of content, as it relates to ownership.

Since this Governance Plan is considering the guidelines for digital communications, it is explicitly NOT considering the roles, policies, and procedures for the maintenance of the infrastructure that supports the properties and platforms we are considering for the plan.

In other words, when we define who has ownership of the public website or the intranet, we are considering only the content and its presentation – not the underlying software and hardware that makes the website or intranet functional.

Perhaps this is obvious, but it is an important distinction to make for those who are less familiar with modern web technology, who may not fully understand where the functions of an IT department end and an Online Marketing or Communications department begin.

With those caveats out of the way, we can now begin to define who is responsible for each of the properties and platforms we listed earlier.

Obviously, I can’t tell you who is or who should be responsible for each piece within your organization – that must be defined by how your work responsibilities are distributed across the institution – but I can describe some general principles for defining ownership that should help.

  • Ownership of your organization’s web presences ultimately should reside at the very top, with levels of responsibility being delegated down the hierarchy of the institution.
  • The top leadership of an organization should be responsible ultimately for the accuracy and maintenance of the content contained within the parts of the properties they own.
  • Every website, subsite, microsite, department site; every section and sub-section; every page, aggregated listing, and piece of content all the way down to each video, photo, paragraph, headline, and caption should fall within the ownership of someone at the top.
  • Responsibility for daily oversight and hands-on maintenance of those properties then may be delegated to staff within the owner’s groups, offices, or areas of responsibility.
  • Owners should have sufficiently trained staff who have the authority and capacity to make changes, corrections, and updates to the content as needed in a timely manner, such that inaccurate and/or outdated content does not remain on the property for an unreasonable period of time.

In short, ownership has two essential aspects:

  1. top-level responsibility for the accuracy and efficacy of the content, and
  2. hands-on responsibility for the creation and maintenance of the content.

Both are essential and required for good governance, and very likely may be responsibilities held by one person, split between two, or shared among a group.

Shared Ownership / Responsibility

There may be instances in which shared ownership is necessary. I generally recommend against it, as it puts a clear chain of accountability at risk. If two people are responsible, it's easy for both to think the other person is handling it.

If some form of shared ownership is required, consider having one person be the primary owner, who is supported by a secondary owner when needed; or that a primary owner is a decision-maker, but secondary owner(s) are consulted or informed of issues and pending decisions.

If “equally” shared ownership or responsibility is required, try defining the exact responsibilities that are to be owned and dividing them logically between the two. Perhaps there is a logical separation of pages or sections. Or maybe one person is responsible for copy, while another is responsible for images.

Shared ownership is less than ideal, but there can be reasonable ways to make it work, provided you do not unwittingly create any structural gaps in authority.


There are many instances in digital communications where groups of people collaborate to produce content. This is most common with organizational news and events, publications, blogs, social media, etc.

For example, if there is a single person who can be ultimately responsible for all blog content created by various content creators, great! If blog content is created by subject-matter experts from different fields or different parts of the organization, perhaps it is possible to invest ownership in one person for all of the blog posts within each specific subject area.

If you are in a situation similar to what I described above, where you have multiple, subject-specific owners, it will probably make sense for all of the owners to meet regularly to agree on standards and best-practices for all contributors to follow.

In the end, the fundamental concept here is to place responsibility for all content and every part of a digital property with the people who are in the best position to manage it and ensure its quality, accuracy, pertinence, and value.


This post is part of a larger series of posts, which make up a Guide to Digital Governance Planning. The sections follow a specific order intended to help you start at a high-level of thinking and then focus on greater and greater levels of detail. The sections of the guide are as follows:

  1. Starting at the 10,000ft View – Define the digital ecosystem your governance planning will encompass.
  2. Properties and Platforms – Define all the sites, applications and tools that live in your digital ecosystem.
  3. Ownership – Consider who ultimately owns and is responsible for each site, application and tool.
  4. Intended Use – Establish the fundamental purpose for the use of each site, application and tool.
  5. Roles and Permissions – Define who should be able to do what in each system.
  6. Content – Understand how ownership and permissions should apply to content.
  7. Organization – Establish how the content in your digital properties should be organized and structured.
  8. URLs – Define how URL patterns should be structured in your websites.
  9. Design – Determine who owns and is responsible for the many roles design plays in digital communications and properties.
  10. Personal Websites – Consider the relationship your organization should have with personal websites of members of your organization.
  11. Private Websites, Intranets and Portals – Determine the policies that should govern sites which are not available to the public.
  12. Web-Based Applications – Consider use and ownership of web-based tools and applications.
  13. E-Commerce – Determine the role of e-commerce in your website.
  14. Broadcast Email – Establish guidelines for the use of broadcast email to constituents and customers.
  15. Social Media – Set standards for the establishment and use of social media tools within the organization.
  16. Digital Communications Governance – Keep the guidelines you create updated and relevant.

Categories: Elsewhere

Matt Glaman: Managing Your Drupal Project with Composer

Planet Drupal - Mon, 10/10/2016 - 15:15

Drupal Commerce was started without writing any Drupal code. Our libraries took Drupal Commerce off the island before Drupal was able to support using third-party libraries not provided by core.

Drupal now ships without third party libraries committed, fully using Composer for managing outside dependencies. However, that does not mean the community and core developers have everything figured out, quite yet.

YNACP: Yet Another Composer Post. Yes. Because as a co-maintainer of Drupal Commerce we're experiencing quite a lot of issue queue frustration. I also want to make the case of "let's make life easier" for working with Drupal. As you read, compare the manual sans-Composer process for local development and remote deployment versus the Composer flows.

Before we begin

We're going to be discussing Composer. There are some specific terms I'll cover first.

  • composer.json: defines metadata about the project and dependencies for the project.
  • composer.lock: metadata file containing computed information about dependencies and expected install state.
  • composer install: downloads and installs dependencies, and also builds the class autoloader. If a .lock file is available it will install based off of that metadata. Otherwise it will calculate and resolve the download information for dependencies.
  • composer update: updates defined dependencies and rebuilds the lock file.
  • composer require: adds a new dependency, updates the JSON and .lock file.
  • composer remove: removes a dependency, updates the JSON and .lock file.

All Composer commands need to run in the same directory as your composer.json file.
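As a quick illustration of how these commands fit together in practice (drupal/token here is just an example module name):

cd /path/to/project              # the directory containing composer.json
composer install                 # install exactly what composer.lock records
composer require drupal/token    # add a dependency; updates composer.json and composer.lock
composer update drupal/token     # update it and rewrite composer.lock
composer remove drupal/token     # remove it again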

Installing Drupal

There are multiple ways to install Drupal. This article focuses on working with Composer, for general installation help review the official documentation at https://www.drupal.org/docs/8/install

Install from packaged archive

Drupal.org has a packaging system which provides zip and tar archives. These archives come with all third party dependencies downloaded.

You download the archive, extract the contents and have an installable Drupal instance. The extracted contents will contain the vendor directory and a composer.lock file.
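For example, grabbing and unpacking a release from the command line looks roughly like this (the version number is only an example; use the current release):

wget https://ftp.drupal.org/files/projects/drupal-8.2.0.tar.gz
tar -xzf drupal-8.2.0.tar.gz
cd drupal-8.2.0
ls vendor composer.lock    # the third-party code and lock file ship inside the archive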

Install via Composer template

A community initiative was started to provide a Composer-optimized project installation for Drupal. The drupal-composer project provided a version of Drupal core which could be installed via Composer and a mirror of Drupal.org projects via a Composer endpoint (this has since been deprecated in favor of the Drupal.org endpoint).

To get started you run the create-project command. 

composer create-project drupal-composer/drupal-project:8.x-dev some-dir --stability dev --no-interaction

This will create a some-dir folder which holds the vendor directory and a web root directory containing Drupal. This allows you to install Drupal within a subdirectory of the project, which is a common application structure.

This also keeps your third party libraries out of access from your web server.
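The resulting layout looks roughly like this (a sketch of the drupal-composer/drupal-project structure; exact contents vary between versions):

some-dir/
├── composer.json
├── composer.lock
├── vendor/    # third-party libraries, kept outside the web root
└── web/       # the web root: Drupal core, modules, themes, sites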

Review the repository for documentation on how to use the project, including adding and updating core/projects: https://github.com/drupal-composer/drupal-project.

Adding dependencies to Drupal Without Composer

Modules, themes, and profiles are added to Drupal by placing them in a specific directory. This can be done by visiting Drupal.org, downloading the packaged archive and extracting it to the proper location.

There's a problem with this process: it's manual and does not ensure any of the project's dependencies were downloaded. Luckily Composer is a package and dependency manager!

With Composer

To add a dependency we use the composer require command. This will record the dependency and download it along with any dependencies of its own.

Note if you did not use the project template: currently there is no out-of-the-box way to add Drupal.org projects to a standard Drupal installation. You will need to run a command to register the Drupal.org Composer endpoint.

composer config repositories.drupal composer https://packages.drupal.org/8

Let's use the Panels module as an example. Running the following command would add it to your Drupal project.

composer require drupal/panels

This will install the latest stable version of the Panels module. If you inspect your composer.json file you should see something like the following:

"require": { "drupal/panels": "3.0-beta4", }

One of the key components is the version specification. This tells Composer what version it can install, and how it can update.

  • 3.0 will be considered a specific version and never update.
  • ~3.0 will consider any patch version as a possible installation option, such as new betas, RCs.
  • ~3 will allow any minor releases to be considered for install or update.
  • ^3.0 will match anything under the major release — allowing any minor or patch release.

You can specify version constraints when adding a dependency as well. This way you can define whether you will allow minor or patch updates when updating.

composer require drupal/panels:~3.0

This will allow versions 3.0-beta5, 3.0-rc1, and 3.0 to be valid update versions.
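The constraint you pass is what gets recorded, so after that command the entry in composer.json would look something like this:

"require": {
    "drupal/panels": "~3.0"
}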

You know what? The same versioning patterns exist in NPM and other package managers.

Updating dependencies Without Composer

As stated with installing dependencies, it could be done manually. But this requires knowing if any additional dependencies need to be updated. In fact, this is becoming a common issue in the Drupal.org issue queues.

With Composer

Again, this is where Composer is utilized and simplifies package management.

Going from our previous example, let's say that Panels has a new patch release. We want to update it. We would run

composer update drupal/panels --with-dependencies

This will update the Panels module and any of its dependencies. Why is this important? What if Panels required the newest version of one of its dependencies for a critical fix? Without a package manager, we might not have known about it, or might not have updated it.

Why we need --with-dependencies

When Composer updates a dependency, it does not automatically update that dependency's own dependencies. Why? No idea; apparently the maintainers do not believe it should.
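In practice that means you almost always want the flag when updating a single package (Panels is just the running example):

composer update drupal/panels                       # updates only drupal/panels itself
composer update drupal/panels --with-dependencies   # also updates the packages it depends on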

Updating Drupal core Without the Composer template

If you installed Drupal through the normal process, via an extracted archive, you have to update manually in the same fashion. You will need to remove all files provided by Drupal core, including your possibly modified composer.json file.

Granted, you can move your modified .htaccess, composer.json, or robots.txt out of the way and move them back afterwards. However, you'll need to make sure your composer.json matches the current Drupal core's requirements and then run composer update.

That’s difficult.

The official documentation: https://www.drupal.org/docs/7/updating-your-drupal-site/update-procedure...

Updating Drupal core via the Composer template

If you have setup Drupal with the Composer template or any Composer based workflow, all you need to do is run the following command (assuming you’ve tagged the drupal/core dependency as ^8.x.x or ~8, ~8.1, ~8.2)

composer update drupal/core --with-dependencies

This will update Drupal core and its files alongside the drupal-composer/drupal-scaffold project.

Using patches with Composer

I have been a fan of using build tools with Drupal. However, when I first used Composer I was concerned about how to use patches or pull requests not yet merged into a project, without maintaining some kind of fork.

cweagans created the composer-patches project, which will apply patches to your dependencies. The project's README fully documents its use, so I'll only cover it quickly here.

Patches are listed in the patches section of the extra key of your composer.json file.

"extra": { "patches": { "drupal/commerce”: { "#2805625: Add a new service to manage the product variation rendering": "https://www.drupal.org/files/issues/add_a_new_service_to-2805625-4.patch" } } }

This patches Drupal Commerce with a specific patch. 
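For the patches to be applied at all, the patching plugin has to be part of the project; assuming you are using cweagans/composer-patches, it is added like any other dependency:

composer require cweagans/composer-patches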

Using GitHub PRs as a patch

Patches are great, as they let you use uncommitted functionality immediately. A problem can arise when you need code from a GitHub pull request (or so it seems.) For instance, Drupal Commerce is developed on GitHub since DrupalCI doesn’t support Composer and contributed projects yet.

Luckily we can take the PR for the issue used in the example https://github.com/drupalcommerce/commerce/pull/511 and add .patch to it to retrieve a patch file: https://github.com/drupalcommerce/commerce/pull/511.patch

We could then update our composer.json to use the pull request's patch URL and always have an up-to-date version of the patch.

"extra": { "patches": { "drupal/commerce”: { "#2805625: Add a new service to manage the product variation rendering": "https://www.drupal.org/files/issues/add_a_new_service_to-2805625-4.patch" } } }
Categories: Elsewhere

Pantheon Blog: Turn on Twig Debug Mode in Drupal 8 on Pantheon

Planet Drupal - Mon, 10/10/2016 - 15:00
When working on Drupal 8 theming, it is very helpful to have Twig debug mode on. Debug mode will cause twig to emit a lot of interesting information about which template generated each part of the page. The instructions for enabling debug mode can be found within the comments of the default.services.yml file, among other sources. In short, all you need is the following in your services.yml file:  
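In a Drupal 8 site those settings typically look like this in sites/default/services.yml (a standard example; the commented-out defaults in default.services.yml remain the canonical reference):

parameters:
  twig.config:
    debug: true
    auto_reload: true
    cache: false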
Categories: Elsewhere

Reproducible builds folks: Reproducible Builds: week 76 in Stretch cycle

Planet Debian - Mon, 10/10/2016 - 13:16

What happened in the Reproducible Builds effort between Sunday October 2 and Saturday October 8 2016:

Media coverage

Events
  • Vagrant Cascadian gave an impromptu talk about reproducible builds at CAT Barcamp on 8th October.

  • Holger discussed Reproducible coreboot at coreboot.berlin. Unlike other projects, coreboot doesn't do binary releases because there have been many instances of people taking some incorrect coreboot binary, flashing it and bricking their machines… The idea is that in the end coreboot will simply release .buildinfo files (and still no binaries) instead.

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

31 package reviews have been added, 27 have been updated and over 20 have been removed in this week, adding to our knowledge about identified issues.

3 issue types have been added:

1 issue type has been updated:

Weekly QA work

As part of the reproducibility testing, some FTBFS bugs have been detected and reported by:

  • Chris Lamb (12)


  • The data in reproducible-tracker.json (which is fed to tracker.d.o and DDPO) has been changed to contain data from testing, as the build path variations we introduced for unstable are not yet ready for wider consumption. For testing/stretch we recommend creating reproducible packages by rebuilding in the same path. (h01ger)
  • Various reproducibility statistics for testing/stretch have been added to the dashboard view. (h01ger)
  • The repository comparison page has been improved to only show obsolete packages if they exist (which they currently don't as we have rebuilt everything from the plain Debian repos, except for our modified dpkg due to #138409 and #787980). (h01ger)
  • All armhf boards are now using Linux kernels provided by Debian. (vagrant)

This week's edition was written by Chris Lamb, Holger Levsen & Vagrant Cascadian and reviewed by a bunch of Reproducible Builds folks on IRC.

Categories: Elsewhere

Petter Reinholdtsen: Experience and updated recipe for using the Signal app without a mobile phone

Planet Debian - Mon, 10/10/2016 - 11:30

In July I wrote how to get the Signal Chrome/Chromium app working without the ability to receive SMS messages (aka without a cell phone). It is time to share some experiences and provide an updated setup.

The Signal app has worked fine for several months now, and I use it regularly to chat with my loved ones. I had a major snag at the end of my summer vacation, when the app completely forgot my setup, identity and keys. The reason behind this major mess was running out of disk space. To avoid that ever happening again I have started storing everything in userdata/ in git, to be able to roll back to an earlier version if the files are wiped by mistake. I have had to use it once after introducing the git backup. When rolling back to an earlier version, one needs to use the 'reset session' option in Signal to get going, and notify the people you talk with about the problem. I assume there is some sequence number tracking in the protocol to detect rollback attacks. The git repository is rather big (674 MiB so far), but I have not tried to figure out if some of the content can be added to a .gitignore file, due to lack of spare time.

I've also hit the 90-day timeout blocking, and noticed that this makes it impossible to send messages using Signal. I could still receive them, but had to patch the code with a new timestamp to be able to send. I believe the timeout is added by the developers to force people to upgrade to the latest version of the app, even when there are no protocol changes, to reduce the version skew among the user base and thus try to keep the number of support requests down.

Since my original recipe, the Signal source code has changed slightly, making the old patch fail to apply cleanly. Below is an updated patch, including the shell wrapper I use to start Signal. The original version required a new user to locate the JavaScript console and call a function from there. I got help from a friend with more JavaScript knowledge than me to modify the code to provide a GUI button instead. This means that to get started you just need to run the wrapper and click 'Register without mobile phone'. I've also modified the timeout code to always set it to 90 days in the future, to avoid having to patch the code regularly.

So, the updated recipe for Debian Jessie:

  1. First, install the required packages to get the source code and the browser you need. Signal only works with Chrome/Chromium, as far as I know, so you need to install it.
     apt install git tor chromium
     git clone https://github.com/WhisperSystems/Signal-Desktop.git
  2. Modify the source code using the commands listed in the patch block below.
  3. Start Signal using the run-signal-app wrapper (for example using `pwd`/run-signal-app).
  4. Click on 'Register without mobile phone', fill in a phone number where you can receive calls within the next minute, receive the verification code and enter it into the form field, and press 'Register'. Note, the phone number you use will be your Signal username, i.e. the way others can find you on Signal.
  5. You can now use Signal to contact others. Note, new contacts do not show up in the contact list until you restart Signal, and there is no way to assign names to contacts. There is also no way to create or update chat groups. I suspect this is because the web app does not have an associated contact database.

I am still a bit uneasy about using Signal, because of the way its main author moxie0 rejects federation and accepts dependencies on major corporations like Google (part of the code is fetched from Google) and Amazon (the central coordination point is owned by Amazon). See for example the LibreSignal issue tracker for a thread documenting the author's view on these issues. But the network effect is strong in this case, and several of the people I want to communicate with already use Signal. Perhaps we can all move to Ring once it works on my laptop? It already works on Windows and Android, and is included in Debian and Ubuntu, but is not working on Debian Stable.

Anyway, this is the patch I apply to the Signal code to get it working. It switches to the production servers, disables the timeout, makes registration easier and adds the shell wrapper:

cd Signal-Desktop; cat <<EOF | patch -p1
diff --git a/js/background.js b/js/background.js
index 24b4c1d..579345f 100644
--- a/js/background.js
+++ b/js/background.js
@@ -33,9 +33,9 @@
         });
     });
 
-    var SERVER_URL = 'https://textsecure-service-staging.whispersystems.org';
+    var SERVER_URL = 'https://textsecure-service-ca.whispersystems.org';
     var SERVER_PORTS = [80, 4433, 8443];
-    var ATTACHMENT_SERVER_URL = 'https://whispersystems-textsecure-attachments-staging.s3.amazonaws.com';
+    var ATTACHMENT_SERVER_URL = 'https://whispersystems-textsecure-attachments.s3.amazonaws.com';
     var messageReceiver;
     window.getSocketStatus = function() {
         if (messageReceiver) {
diff --git a/js/expire.js b/js/expire.js
index 639aeae..beb91c3 100644
--- a/js/expire.js
+++ b/js/expire.js
@@ -1,6 +1,6 @@
 ;(function() {
     'use strict';
-    var BUILD_EXPIRATION = 0;
+    var BUILD_EXPIRATION = Date.now() + (90 * 24 * 60 * 60 * 1000);
 
     window.extension = window.extension || {};
diff --git a/js/views/install_view.js b/js/views/install_view.js
index 7816f4f..1d6233b 100644
--- a/js/views/install_view.js
+++ b/js/views/install_view.js
@@ -38,7 +38,8 @@
         return {
             'click .step1': this.selectStep.bind(this, 1),
             'click .step2': this.selectStep.bind(this, 2),
-            'click .step3': this.selectStep.bind(this, 3)
+            'click .step3': this.selectStep.bind(this, 3),
+            'click .callreg': function() { extension.install('standalone') },
         };
     },
     clearQR: function() {
diff --git a/options.html b/options.html
index dc0f28e..8d709f6 100644
--- a/options.html
+++ b/options.html
@@ -14,7 +14,10 @@
     <div class='nav'>
       <h1>{{ installWelcome }}</h1>
       <p>{{ installTagline }}</p>
-      <div> <a class='button step2'>{{ installGetStartedButton }}</a> </div>
+      <div> <a class='button step2'>{{ installGetStartedButton }}</a>
+        <br> <a class="button callreg">Register without mobile phone</a>
+
+      </div>
       <span class='dot step1 selected'></span>
       <span class='dot step2'></span>
       <span class='dot step3'></span>
--- /dev/null	2016-10-07 09:55:13.730181472 +0200
+++ b/run-signal-app	2016-10-10 08:54:09.434172391 +0200
@@ -0,0 +1,12 @@
+#!/bin/sh
+set -e
+cd $(dirname $0)
+mkdir -p userdata
+userdata="`pwd`/userdata"
+if [ -d "$userdata" ] && [ ! -d "$userdata/.git" ] ; then
+    (cd $userdata && git init)
+fi
+(cd $userdata && git add . && git commit -m "Current status." || true)
+exec chromium \
+  --proxy-server="socks://localhost:9050" \
+  --user-data-dir=$userdata --load-and-launch-app=`pwd`
EOF
chmod a+rx run-signal-app

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Categories: Elsewhere

Paul Johnson: Help Dries crowdsource Drupal 8 success stories

Planet Drupal - Mon, 10/10/2016 - 11:14

In little over a month Drupal 8 will be one year old. To mark this momentous occasion Dries Buytaert, Drupal’s founder, will champion noteworthy web sites and applications powered by Drupal 8.

Have you launched a Drupal 8 web site or application this year? Dries would like to hear from you. We’ve prepared a short web form so you can tell him your Drupal 8 success story.

Please spread the word

To reach the widest potential audience and capture the very best examples, I encourage you to share this blog post with colleagues, peers, and clients. Please email it, share it on social media, and speak to your clients.

Beyond the Drupal shops developing applications, our objective is to attract submissions from end users of Drupal 8. If you represent an organisation, enterprise, SME, startup, manufacturer, government department (and more) using Drupal 8, we want to hear from you.

So tell the world, complete the short form and help us celebrate Drupal 8.

Deadline for submissions is November 11th.

Categories: Elsewhere

Arturo Borrero González: The day I became Debian Developer

Planet Debian - Mon, 10/10/2016 - 07:00

The moment has come. You may contact me now at arturo@debian.org :-)

After almost 6 months of a tough NM process, the waiting is over. I have achieved the goal I set myself back in 2011: become a Debian Developer.

This is a professional and personal victory.

I would like to mention many people who have been important for this to happen. But they all know, no need to create a list here. Thanks!

This weekend I was doing some hiking in the mountains and had no internet connection at all. When I arrived back home, I discovered an email from the Debian System Administrators on behalf of The Debian New Maintainer Team, in which they let me know that my official DD account had been created.

During the last 6 months I have been trying to imagine the moment in which the process would finally be completed (yes, I have been a bit impatient). In the end, the magical moment in the mountains was followed by the joy of the DD account. Curious how things happen sometimes.

Here is a pic of this mountain day, with my adventure friends. I am the first from the left.

Categories: Elsewhere

Enzolutions: A week in Paris

Planet Drupal - Mon, 10/10/2016 - 02:00

Last week I had the opportunity to visit Paris, France for a week as part of my tour Around the Drupal world in 140+ days.

Sadly for me, I caught the #drupalflu at DrupalCon Dublin, so I wasn't in the best shape to enjoy the city.

I want to say thank you to Sebastien Lissarrague and his family for allowing me to stay with them in their home.

During my visit, I had the opportunity to participate in the local Drupal Meetup of Paris community.

Also, I visited some companies like Koriolis, Sensio Labs and Platform.sh.

At Koriolis I had the opportunity to show the owner and developers the Drupal Console project, to accelerate Drupal 8 adoption in their projects.

During my visit to Sensio Labs I had a session with part of the Symfony Core development team, to show them how we have been using the Symfony Console in the Drupal Console project.

During my visit to Platform.sh I learned a little about how they enable their Drupal 8 users to use the Drupal Console project. Next week I will write an article about that.

About the city: anything I could say about Paris would be nothing compared to how beautiful it is; I can only recommend three mandatory places for your visit.

Eiffel Tower

Louvre Museum


  • Airplane Distance (Kilometers): Dublin, Ireland → Paris, France → San Jose, Costa Rica 9.957 | Previously 96,604 | Total 106.561
  • Walking Distance (steps): Dublin 116.133 | Previously 1.780.955 | Total 1.897.088
  • Train Distance (Kilometers): Today 0 | Previously 528 | Total 528
  • Bus/Car Distance (Kilometers): Today 0 | Previously 2.944 | Total 2.944
Categories: Elsewhere

Hideki Yamane: Simplest debian/watch file for GNOME packages

Planet Debian - Mon, 10/10/2016 - 01:03
Simplest (two lines) debian/watch file for GNOME-related packages; you can just copy & paste it.

version=4

Categories: Elsewhere

Bits from Debian: Debian is participating in the next round of Outreachy!

Planet Debian - Sun, 09/10/2016 - 19:50

Following the success of the last round of Outreachy, we are glad to announce that Debian will take part in the program for the next round, with internships lasting from the 6th of December 2016 to the 6th of March 2017.

From the official website: Outreachy helps people from groups underrepresented in free and open source software get involved. We provide a supportive community for beginning to contribute any time throughout the year and offer focused internship opportunities twice a year with a number of free software organizations.

Currently, internships are open internationally to women (cis and trans), trans men, and genderqueer people. Additionally, they are open to residents and nationals of the United States of any gender who are Black/African American, Hispanic/Latin@, American Indian, Alaska Native, Native Hawaiian, or Pacific Islander.

If you want to apply to an internship in Debian, you should take a look at the wiki page, and contact the mentors for the projects listed, or seek more information on the (public) debian-outreach mailing-list. You can also contact the Outreach Team directly. If you have a project idea and are willing to mentor an intern, you can submit a project idea on the Outreachy wiki page.

Here are a few words on what the interns from the last round achieved within Outreachy:

  • Tatiana Malygina worked on Continuous Integration for Bioinformatics applications; She has pushed more than a hundred commits to the Debian Med SVN repository over the last months, and has been sponsored for more than 20 package uploads.

  • Valerie Young worked on Reproducible Builds infrastructure, driving a complete overhaul of the database and software behind the tests.reproducible-builds.org website. Her blog contains regular updates throughout the program.

  • ceridwen worked on creating reprotest, an all-in-one tool allowing anyone to check whether a build is reproducible or not, replacing the string of ad-hoc scripts the reproducible builds team used so far. She posted regular updates on the Reproducible Builds team blog.

  • While Scarlett Clark did not complete the internship (as she found a full-time job by the mid-term evaluation!), she spent the four weeks she participated in the program providing patches for reproducible builds in Debian and KDE upstream.

Debian would not be able to participate in Outreachy without the help of the Software Freedom Conservancy, who provides administrative support for Outreachy, as well as the continued support of Debian's donors, who provide funding for the internships. If you want to donate, please get in touch with one of our trusted organizations.

Debian is looking forward to welcoming new interns for the next few months, come join us!

Categories: Elsewhere

Guido Günther: Debian Fun in September 2016

Planet Debian - Sun, 09/10/2016 - 16:59
Debian LTS

September marked the seventeenth month I contributed to Debian LTS under the Freexian umbrella. I spent 6 hours (out of 7) working on

  • updating Icedove to 45.3 resulting in DLA-640-1
  • finishing my work on bringing rails into shape security wise resulting in DLA-641-1 for ruby-activesupport-3.2 and DLA-642-1 for ruby-activerecord-3.2.
  • enhancing the autopkgtests for qemu a bit
Other Debian stuff
  • Uploaded libvirt 2.3.0~rc1 to experimental
  • Uploaded whatmaps to 0.0.12 in unstable.
  • Uploaded git-buildpackage 0.8.4 to unstable.
Other Free Software activities
  • Ansible: got the foreman callback plugin needed for foreman_ansible merged upstream.
  • Made several improvements to foreman_ansible_inventory (an Ansible dynamic inventory querying Foreman): fixing an endless loop when Foreman would miscalculate the number of hosts to process, flake8 cleanliness, and some work on Python 3 support
  • ansible-module-foreman:
    • unbreak defining subnets by setting the default boot mode.
    • add support for configuring realms
  • Foreman: add some robustness to the nice rebuild host feature when DNS entries are already there
  • Released whatmaps 0.0.12.
    • Errors related to a single package don't abort the whole program but rather skip over it now.
    • Systemd user sessions are filtered out
    • The codebase is now checked with flake8.
Categories: Elsewhere

Steve Purkiss: Leapfrog the Drupal Learning Curve & Architect the Perfect Solution in 3 Simple Steps

Planet Drupal - Sun, 09/10/2016 - 15:06
By Steve Purkiss, Sun, 10/09/2016 - 14:06

"Drupal has a steep learning curve" is something I hear time and again, however I feel this is a misguided perception and something we need to work towards changing - especially now focus is on the adoption journey. Learning how to 'Drupal' is actually incredibly easy - the trick is to understand exactly what Drupal is and how to mould it to your needs - this is what I'm going to show you how to do in three simple steps.

Step 1: Discover what Drupal doesn't know

This is by far the most important step of the process, which is why I go into much further detail here than in the other two steps - skim if you so wish, but I assure you the story is there for a reason!

We've been here before

As of writing, Drupal has been around for 15 years and has solved many problems associated with building a wide range of web sites and applications, embedding this knowledge in either the core Drupal distribution or one of the 35,000+ modules available on the drupal.org site. Drupal's decision to only provide backwards-compatibility for content and not functionality means this functionality has had the ability to improve over time and make the most of innovation in technology, for example the recent big jump from mostly procedural programming to object-oriented.

A note about the jump from procedural to object orientation

This latest jump was a big one - Drupal was developed before object orientation was available in PHP (the language Drupal is written in), and so developed its own system of 'hooks'. You use hooks to interact with Drupal core to override functionality in order to make Drupal do what you want it to do for you. You can think of hooks like the ones on a coat stand - the trouble here was as different modules and themes overrode hooks, like an overloaded coat stand with many different coats on each hook, it became increasingly harder to work out what hook was changing what and when in the process it was changing it.

There are still hooks in Drupal 8, but these may disappear in future versions of Drupal as the migration to object-orientation continues. An added benefit is more backwards compatibility than before for future versions, so the change between versions 8 and 9 shouldn't be as pronounced as the change from 7 to 8 as we don't have to perform again such a big move as changing the fundamental way the entire code works. I believe there's plans to support backwards compatibility over two major versions from now on, so 9 will be backwards compatible with 8, 10 with 9, but not 10 with 8 - YMMV, etc.!

Knowledge carried throughout generations Courtesy @sgrame: https://twitter.com/sgrame/status/774232084231680000

The key point to understand here is what Drupal brings along with it as it progresses from version to version. Whilst the underlying code may change in order to improve and make the most of the latest innovation in programming languages, the knowledge, experience, and best practices gained and shared from its deployment to millions of sites is maintained in the API and module layer. It is unlikely what you are trying to build is unknown to Drupal in some way or another, it has dealt with everything from simple brochureware sites which look the same to everyone to sites such as weather.com where everyone who visits sees a personalised version of the site. As I often like to quip, I've never been asked for Rocket Science and even if I was, NASA uses Drupal ;)

This development process is fundamentally different to how other systems on the market work, with many other popular ones focusing on ease of use at the expense of progressive innovation, and it is why you see Drupal take a larger share of the market on sites with complex requirements. The adoption of semantic versioning means there are now minor releases which include bug fixes along with both new and experimental functionality, and a new version of Drupal is released every six months. We are already up to version 8.2, and with the current focus on 'outside-in' it is becoming easier for people used to systems other than Drupal - or none at all - to use Drupal. However, it is still not easy to visualise your end goal and know how to get there, or to know whether there is a module or modules already out there which could help you achieve your desired outcome without having to code anew.

To help overcome this out-of-the-box experience there are many ongoing initiatives to provide default content, make module discovery easier, build focused distributions, and so on, but they will all take time. There is, however, a way to approach development which means you don't end up going down the wrong path or developing functionality which already exists: discover exactly what it is you want to build that Drupal doesn't already know about, and focus only on the functionality specific to your situation and no other.

What makes you different?

I recently provided the architecture for a high-profile specialist travel site - a six-figure project which, as with many projects I'm involved in, is unfortunately covered by a non-disclosure agreement. That doesn't mean I can't talk about the approach I took, though, and this is a particularly good example.

As they were merging a number of existing systems I could've just looked at the existing data, however there is nothing to say those systems were designed well and we don't want to fall into the trap which I see many times where people re-create bad systems. Drupal is a very flexible system, many others require you to fit your data into how they work. So by asking the client to explain how their organisation worked and what was different about themselves as opposed to other similar organisations I discovered there were six distinct areas:

  1. Activity - their offerings were split into distinct activity types
  2. Resorts - they operate their own resorts
  3. Accommodation - each resort contains one or more different types of accommodation
  4. Region - the organisation had their own definition of a region, some spanning more than one country
  5. Departure Gateways - they fly out from a limited number of airports
  6. Arrival Gateways - resorts are serviced by one or more local airports

Everything else on the system was something Drupal would have dealt with before in one way or another - number of rooms, features of accommodation and resorts, and so on. These could easily be achieved using fields, taxonomy terms, and everything else Drupal provides out-of-the-box.

Design with the future in mind

I also took the time to observe the operations of the organisation as I walked around their office. I noticed the majority of people were answering calls, so I asked what exactly they had to deal with on the phone - people wanting more information on particular deals, issues with accommodation crop up from time to time - all the usual a travel company would have to deal with but more so here as they owned and operated the resorts. The point here is there's a whole wealth of user requirements contained here which although weren't in the scope of this current phase of development, by having them in mind when designing a system it should make it easier to extend to accommodate their needs as and when budgets and time allow.

If you only design a system for buying via the web you may find when a member of staff is trying to help a customer on the phone the process is unnecessarily complicated, or extending the system to cope with this new use case is particularly hard if you haven't taken this scenario into consideration to start with. Not to say it can't be done, and is easier to adapt now Drupal 8 is more object-oriented, but it's always good to have the future in mind - some of this you will be able to see, some you'll need to extract from key stakeholders, you'll be surprised sometimes with what you find out which you'll then be glad you asked. Here I knew the latest version of Commerce for Drupal 8 has the ability to set up different buying processes so it would be able to cope easily with phone orders if it were ever a requirement.

Design for different rates of change

It is feasible I could've used Drupal's built-in content types to build the system, but this would've limited the system to this particular use-case, making it harder to cope with different buying processes like the one mentioned above. It also did not sound right - an "airport" isn't a content type, it's an entity. It has content - facilities etc. but the thing itself is an entity. So I created six custom entities, and it sounded much better especially when you went to create a view - "list accommodation in resort". By simply teaching Drupal what was different about this particular organisation, we extended Drupal's "knowledge" and leveraged everything else it had to offer to deal with the functionality it does know about, like date ranges, durations, prices, and so on.
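As a rough sketch of how such an entity comes to life, Drupal Console can scaffold a custom content entity for you; the module and entity names below are hypothetical examples, and the available options vary between Drupal Console versions:

drupal generate:entity:content \
  --module="travel_core" \
  --entity-class="Resort" \
  --entity-name="resort" \
  --label="Resort"

The generated entity class then becomes the home of the core business model, while fields, view modes, and views handle the faster-moving presentation concerns.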

Whilst the front-end of a website may go through many enhancements and refreshes, the core business model of an organisation - especially one such as this which is well-established and operated for many years, does not change as much, if at all. In this example they mentioned they may add new activities, and they offered packages which covered more than one activity but their current system couldn't cope with this, which is why activity was treated as a separate entity.

By encoding the core business model of an organisation as high up the chain as you can with Drupal, you end up with a far more flexible system to cope with the faster-moving changes such as views to list out particular promotions, plus ensure longevity by enabling future development of those core parts of the system. I also wanted to make it a little more difficult for them to change any of this as this is critical to the operation of the organisation, so if changes were needed they would have to go through a harder process than changing a view, but there should be a good reason for any changes needed to the core business model so happy with the custom entity approach taken.

Seeing the wood for the trees

It's not only when architecting systems you need to take this approach to Drupal - another small example is when I helped someone out a couple of weeks back who was having problems getting a product listing displaying exactly how he wanted it to using Drupal 7. He had tried a number of different types of views (Drupal's user interface for manipulating database queries) but none of them would do what he wanted, which was to provide a faceted search facility, listing the results grouped by category. You'll see this functionality on most e-commerce sites these days, for example click on Televisions and it'll provide you a list grouped by manufacturer, or perhaps size - the point is it's not Rocket Science, it's been done before, it shouldn't be hard to do, so something else was causing the issue here. Sometimes it's hard to see the wood for the trees, so you need to take a step back and take a logical think about the situation.

We delved into the problem and through a series of questions worked out the thing he wanted to do which was different was he wanted a number of fields to be displayed at the group level - the name of the group, an image, and a description. None of the various combinations of views he had tried provided the ability to display more than one field, and rewriting the field output in the view did not apply to group by fields. Although there are a number of ways to achieve this from different parts of Drupal, I implemented the simplest way I knew which was to output the taxonomy term ID as the field to group by, and overwrite the template in order to load the details of the taxonomy term so we could easily grab the fields we needed.

I can almost hear others screaming at me to use display modes or some other functionality available as I'm sure there's other ways this can be achieved which are 'better', however as I spend most of my time dealing with back-end issues and not front-end and as we only had limited time and budget to solve the issue, this worked as a solution for the situation at hand so we went with it.

The take-away here is to go with what solves the majority of the problem, the thing you see or can imagine seeing other people using, and focus on what is specific to your needs. Faceted searching, listing products, grouping products by category - all standard functionality and should be simple to achieve in Drupal. Outputting multiple fields for a grouping category title? Not so much.

Step 2: Modularise Your Requirements

Drupal is a modular system, so you need to modularise your requirements by breaking them down as much as you can. Yes, what you're wanting to do has more than likely been done before, but maybe not in your exact combination - if it has then cool, you don't have to do anything as there's already a module/distribution/theme/etc. out there for you! Many times there isn't though, and every organisation has their differences, so you need to break your requirements down in order to deal with them successfully.

In our example above where we have a faceted search listing out products grouped by category, by splitting it up into "faceted search", "list products", and "group a view by category" we are going to get much better results when searching for answers than if we search for "faceted search grouped by taxonomy", which is more specific to our use-case than the majority of uses. You're more likely to end up with someone else's specific situation who also has had issues solving it and may forever skip past the actual solutions you are looking for. Be as generic as you can with generic requirements, then be as specific as you can with the ones you identified as particular to your situation, in this example we could've searched for "override view field output" and it would've brought us results for how to override using views templates, which is how we solved the problem there.

Once you align your vocabulary more closely with Drupal's generic, modular functionality, you'll enjoy much more success with your searches - it takes a little logical thought and remembering it's not Rocket Science! Far too many times I've seen sites where little or no research has been done as to what's already out there and people have essentially forked Drupal, creating their own monster significantly increasing the amount of work it takes to maintain and extend the site when it's not necessary.

Every line of code you produce is technical debt - even if you decide not to use the module you find which does what you need or part of what you need, you can study the tried-and-tested code, copy it into your module and use as a base for your work. A good example is detailed in my previous blog post about creating a Drupal Console command where I found code which did some of what I wanted so I based my work on it because I knew what had already been written worked and there was no point in me writing it again.

Step 3: Only Develop Specifics, Share Where Possible & Grow Drupal!

If you find you have to develop specific functionality for your site, have a think about if it would be of use to anyone else, or whether you're going to be the only person in the world doing this specific thing. As mentioned above, every line of code you write is something you or your client is going to need to support your/themselves. If you publish a module to the drupal.org module repository you not only have the possibility of others sharing the maintenance of the code but they may also provide enhancements, and stable module releases are covered by the security advisory policy which doesn't mean they secure your module, but if an exploit is found and reported the 40+ strong Drupal Security Team are there to help. Even if you just create a sandbox project you may discover others find the code useful and provide feedback.

If you're working for a client and they are worried about sharing code, or you're the end client and worry about losing competitive advantage, remember software is easy to copy and it's the rest of what you do which sets you apart from your competition. In our travel example above, it's the resorts they own which provide the value to the customer, not the software code which enables people to book a stay in them.

Currently there is a lack of sharing code on the implementation side - there's a lot of factors for this including competition between suppliers, infrastructure ease of use or lack thereof, and a general lack of co-operation in some industries. The result is many people end up writing similar code when they could be starting at a higher level, collaborating with industry peers, sharing development and maintenance costs, and going towards pushing the Drupal project forward. The more we can do out-of-the-box, the better it gets for all concerned as projects cost less, launch quicker, and we can focus on code which isn't out there already which is specific to the organisation itself, so spending the development budget on genuinely useful code instead of code which could be freely available to us in the first instance. Remembering how much we started with for free may be of help creating impetus to share any code we develop.

Although my site here doesn't do much functionally, I haven't had to write a single line of code to be able to use the web to communicate my message to you, something I believe is amazeballs! Drupal can and does provide code for generic websites; it's up to industries to collaborate and build their own modules and distributions, and/or for some enterprising people to build them on their behalf, as we already see in areas such as e-learning and government.

I'm honestly shocked when I hear that projects haven't contributed any code back, especially larger projects lasting longer than a year. I worry about how much technical debt they've incurred, and it's a shame they haven't helped Drupal grow - it's only by people contributing code that the Drupal product has reached this amazing level of innovation. I understand there are reasons, but I never really see it as "contribution"; it's more akin to riding a bicycle. I can stare at the bike as long as I like, but until I push down on the pedals it's not going to take me anywhere. I don't call that "contribution", it's just how the bike works!

I hope this post has been of help. Do feel free to comment below, or get in touch if I can help with anything specific.

Happy Drupaling!

Main Drupal 8 Learning Curve image courtesy @sgrame. Other images attributed inline, the rest are public domain, found on pixabay.

Categories: Elsewhere

Ben Armstrong: Annual Hike with Ryan: Salt Marsh Trail, 2016

Planet Debian - Sun, 09/10/2016 - 14:20

Once again, Ryan Neily and I met last month for our annual hike. This year, to give our aging knees a break, we visited the Salt Marsh Trail for the first time. For an added level of challenge and to access the trail by public transit, we started with the Shearwater Flyer Trail and finished with the Heritage Trail. It was a perfect day both for hiking and photography: cool with cloud cover and a refreshing coastal breeze. The entire hike was over 25 km and took the better part of the day to complete. Good times, great conversations, and I look forward to visiting these beautiful trails again!

[Photo slideshow: Salt Marsh trail hike, 2016 - starting on the Shearwater Flyer trail, over the converted rail bridge, along the salt marshes and causeway (eel grass, swift channel currents, gulls, a distant heron), a turnaround at the head of the Atlantic View trail where ducks were resting, and back across the marshes to a breather on the Heritage Trail.]

Here’s the Strava record of our hike:

Categories: Elsewhere

Norbert Preining: Reload: Android 7.0 Nougat – Root – Pokemon Go

Planet Debian - Sun, 09/10/2016 - 13:01

Ok, it turned out that a combination of updates has broken my previous guide on playing Pokemon GO on a rooted Android device. What happened is that the October security update for Android Nougat changed the SafetyNet check used to detect rooted devices, and at the same time the Magisk rooting system catapulted itself (hopefully temporarily) into complete irrelevance by removing the working version and providing an “improved” version that neither has SuperSU installed nor the ability to hide root – well done, congratulations.

But there is a way around, and I am now back at the latest security patch level, rooted, and playing Pokemon GO (not very often, no time, though!).

My previous guide used Magisk version 6 to root and hide root. But the recent security update of Android Nougat (October 2016) has rendered Magisk-v6 non-working. I first thought that Magisk-v7 could solve the problem, but I was badly disappointed: after reflashing my device to a pristine state and installing Magisk-v7, I was left with no SuperSU (which means X-plore, Titanium Backup etc. no longer work) and no ability to hide root for Pokemon GO or banking apps. Great update.

Thus, I have decided to remove Magisk completely and make a clean start with SuperSU and suhide (plus a GUI for suhide). It turned out to be more convenient and more standard than Magisk, which may rest in peace (until they get their act together).

In the following I assume you have a clean-slate Android Nougat device; if not, please see one of the previous guides for hints on how to flash back without losing your user data.


One needs the following few items:


Unzip the CF-Auto-Root-angler-angler-nexus6p.zip and either use the included programs (root-linux.sh, root-mac.sh, root-windows.bat) to root your device, or simply connect your device to your computer, and run (assuming you have adb and fastboot installed):

adb reboot bootloader
sleep 10
fastboot boot image/CF-Auto-Root-angler-angler-nexus6p.img

After that your device will reboot a few times, and you will finally land in your normal Android screen and a new program SuperSU will be available. At this stage you will not be able to play Pokemon GO anymore.

Updating SuperSU

The version of SuperSU packaged with the CF-AutoRoot is unfortunately too old, so one needs to update using the zip file SR1-SuperSU-v2.78-SR1-20160915123031.zip (or later). There are two options: either you use the TWRP recovery system, or you install FlashFire (main page, app store page), which allows you to flash zip/ota files directly from your Android screen. This time I used the FlashFire method for the first time, and it worked without any problem.

Just press the “+” button in FlashFire, then the “Flash zip/ota” button, select the SR1-SuperSU-v2.78-SR1-20160915123031.zip, confirm twice, and then wait a bit; a few black screens later (don’t do anything!) you will be back in your Nougat environment. Opening the SuperSU app should show you on the Settings tab that the version has been updated to 2.78-SR1.

Installing suhide

As with the SuperSU update, install the suhide zip file - same procedure, nothing special.

After this you will be able to add an application (like Pokemon GO) from the command line (shell), but this is not very convenient. It is better to install the suhide GUI from the app store, start it, scroll to Pokemon GO, add a tick, and you are settled.
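For reference, the command-line route looks roughly like the sketch below. Treat the exact helper path and its argument as an assumption based on my reading of the suhide release notes rather than something verified on this exact build; the GUI does the same thing more comfortably.

# Hedged sketch: hiding root from Pokemon GO via the shell instead of the GUI.
# Assumption: suhide installs its add/rm helpers under /su/suhide/ and accepts
# a package/process name or UID -- check the suhide README on your device.
adb shell
su
/su/suhide/add com.nianticlabs.pokemongo    # Pokemon GO's package name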

After that you are free to play Pokemon GO again - at least until the next security update brings new problems. In the long run this is a losing game anyway. Enjoy it while you can.

Categories: Elsewhere

Craig Sanders: Converting to a ZFS rootfs

Planet Debian - Sun, 09/10/2016 - 07:57

My main desktop/server machine (running Debian sid) at home has been running XFS on mdadm raid-1 on a pair of SSDs for the last few years. A few days ago, one of the SSDs died.

I’ve been planning to switch to ZFS as the root filesystem for a while now, so instead of just replacing the failed drive, I took the opportunity to convert it.

NOTE: at this point in time, ZFS On Linux does NOT support TRIM for either datasets or zvols on SSD. There’s a patch almost ready (TRIM/Discard support from Nexenta #3656), so I’m betting on that getting merged before it becomes an issue for me.

Here’s the procedure I came up with:

1. Buy new disks, shutdown machine, install new disks, reboot.

The details of this stage are unimportant, and the only thing to note is that I’m switching from mdadm RAID-1 with two SSDs to ZFS with two mirrored pairs (RAID-10) on four SSDs (Crucial MX300 275G – at around $100 AUD each, they’re hard to resist). Buying four 275G SSDs is slightly more expensive than buying two of the 525G models, but will perform a lot better.

When installed in the machine, they ended up as /dev/sdp, /dev/sdq, /dev/sdr, and /dev/sds. I’ll be using the symlinks in /dev/disk/by-id/ for the zpool, but for partition and setup, it’s easiest to use the /dev/sd? device nodes.

2. Partition the disks identically with gpt partition tables, using gdisk and sgdisk.

I need:

  • A small partition (type EF02, 1MB) for grub to install itself in. Needed on gpt.
  • A small partition (type EF00, 1MB) for EFI System. I’m not currently booting with UEFI but I want the option to move to it later.
  • A small partition (type 8300, 2GB) for /boot.

    I want /boot on a separate partition to make it easier to recover from problems that might occur with future upgrades. 2GB might seem excessive, but as this is my tftp & dhcp server I can’t rely on network boot for rescues, so I want to be able to put rescue ISO images in there and boot them with grub and memdisk.

    This will be mdadm RAID-1, with 4 copies.

  • A larger partition (type 8200, 4GB) for swap. With 4 identically partitioned SSDs, I’ll end up with 16GB swap (using zswap for block-device backed compressed RAM swap)

  • A large partition (type bf07, 210GB) for my rootfs

  • A small partition (type bf08, 2GB) to provide ZIL for my HDD zpools

  • A larger partition (type bf09, 32GB) to provide L2ARC for my HDD zpools

ZFS On Linux uses partition type bf08 (“Solaris Reserved 1”) natively, but doesn’t seem to care what the partition types are for ZIL and L2ARC. I arbitrarily used bf08 (“Solaris Reserved 2”) and bf09 (“Solaris Reserved 3”) for easy identification. I’ll set these up later, once I’ve got the system booted – I don’t want to risk breaking my existing zpools by taking away their ZIL and L2ARC (and forgetting to zpool remove them, which I might possibly have done once) if I have to repartition.

I used gdisk to interactively set up the partitions:

# gdisk -l /dev/sdp
GPT fdisk (gdisk) version 1.0.1

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdp: 537234768 sectors, 256.2 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 4234FE49-FCF0-48AE-828B-3C52448E8CBD
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 537234734
Partitions will be aligned on 8-sector boundaries
Total free space is 6 sectors (3.0 KiB)

Number  Start (sector)    End (sector)  Size        Code  Name
   1              40            2047    1004.0 KiB  EF02  BIOS boot partition
   2            2048         2099199    1024.0 MiB  EF00  EFI System
   3         2099200         6293503    2.0 GiB     8300  Linux filesystem
   4         6293504        14682111    4.0 GiB     8200  Linux swap
   5        14682112       455084031    210.0 GiB   BF07  Solaris Reserved 1
   6       455084032       459278335    2.0 GiB     BF08  Solaris Reserved 2
   7       459278336       537234734    37.2 GiB    BF09  Solaris Reserved 3
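If you'd rather script the partitioning than drive gdisk interactively, something like the following sgdisk sketch should produce the same layout. The sector numbers are taken from the table above; the partition names are my own labels, not part of the original procedure, so adjust to taste.

#! /bin/bash
# Hedged sketch: non-interactive equivalent of the interactive gdisk session above.
d=/dev/sdp
sgdisk --zap-all "$d"
sgdisk -n 1:40:2047             -t 1:EF02 -c 1:"BIOS boot partition" "$d"
sgdisk -n 2:2048:2099199        -t 2:EF00 -c 2:"EFI System"          "$d"
sgdisk -n 3:2099200:6293503     -t 3:8300 -c 3:"/boot"               "$d"
sgdisk -n 4:6293504:14682111    -t 4:8200 -c 4:"swap"                "$d"
sgdisk -n 5:14682112:455084031  -t 5:BF07 -c 5:"ZFS root"            "$d"
sgdisk -n 6:455084032:459278335 -t 6:BF08 -c 6:"ZIL"                 "$d"
sgdisk -n 7:459278336:537234734 -t 7:BF09 -c 7:"L2ARC"               "$d"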

I then cloned the partition table to the other three SSDs with this little script:


#! /bin/bash

src='sdp'
targets=( 'sdq' 'sdr' 'sds' )

for tgt in "${targets[@]}"; do
  sgdisk --replicate="/dev/$tgt" "/dev/$src"
  sgdisk --randomize-guids "/dev/$tgt"
done

3. Create the mdadm for /boot, the zpool, and the root filesystem.

Most rootfs on ZFS guides that I’ve seen say to call the pool rpool, then create a dataset called "$(hostname)-1" and then create a ROOT dataset under that. So on my machine, that would be rpool/ganesh-1/ROOT. Some reverse the order of the hostname and the rootfs dataset, giving rpool/ROOT/ganesh-1.

There might be uses for this naming scheme in other environments, but not in mine. And, to me, it looks ugly. So I’ll just use $(hostname)/root for the rootfs, i.e. ganesh/root.

I wrote a script to automate it, figuring I’d probably have to do it several times in order to optimise performance. Also, I wanted to document the procedure for future reference, and have scripts that would be trivial to modify for other machines.


#! /bin/bash

exec &> ./create.log

hn="$(hostname -s)"
base='ata-Crucial_CT275MX300SSD1_'

md='/dev/md0'
md_part=3
md_parts=( $(/bin/ls -1 /dev/disk/by-id/${base}*-part${md_part}) )

zfs_part=5
# 4 disks, so use the top half and bottom half for the two mirrors.
zmirror1=( $(/bin/ls -1 /dev/disk/by-id/${base}*-part${zfs_part} | head -n 2) )
zmirror2=( $(/bin/ls -1 /dev/disk/by-id/${base}*-part${zfs_part} | tail -n 2) )

# create /boot raid array
mdadm "$md" --create \
    --bitmap=internal \
    --raid-devices=4 \
    --level 1 \
    --metadata=0.90 \
    "${md_parts[@]}"
mkfs.ext4 "$md"

# create zpool
zpool create -o ashift=12 "$hn" \
    mirror "${zmirror1[@]}" \
    mirror "${zmirror2[@]}"

# create zfs rootfs
zfs set compression=on "$hn"
zfs set atime=off "$hn"
zfs create "$hn/root"
# bootfs is a pool property, so the pool name goes at the end
zpool set bootfs="$hn/root" "$hn"

# create a mountpoint for /boot inside the new root, then mount the raid there
mkdir -p "/$hn/root/boot"
mount "$md" "/$hn/root/boot"

If you want or need other ZFS datasets (e.g. for /home, /var etc) then create them here in this script. Or you can do that later after you’ve got the system up and running on ZFS.
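For example, a minimal sketch of carving out a separate /home - the dataset name and the decision to split /home out at all are my choice for illustration, not part of the original procedure. Create it under the temporary mountpoint now, and repoint it at /home in step 7 along with the root dataset:

# Hedged sketch: optional extra dataset for /home.
hn="$(hostname -s)"
zfs create "$hn/home"                 # mounts at /$hn/home for now
# rsync /home into it in step 4, then in step 7:
#   zfs set mountpoint=/home "$hn/home"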

If you run mysql or postgresql, read the various tuning guides for how to get best performance for databases on ZFS (they both need their own datasets with particular recordsize and other settings). If you download Linux ISOs or anything with bit-torrent, avoid COW fragmentation by setting up a dataset to download into with recordsize=16K and configure your BT client to move the downloads to another directory on completion.

I did this after I got my system booted on ZFS. For my db, I stopped the postgres service, renamed /var/lib/postgresql to /var/lib/p, created the new datasets with:

zfs create -o recordsize=8K -o logbias=throughput -o mountpoint=/var/lib/postgresql \
    -o primarycache=metadata ganesh/postgres

zfs create -o recordsize=128k -o logbias=latency -o mountpoint=/var/lib/postgresql/9.6/main/pg_xlog \
    -o primarycache=metadata ganesh/pg-xlog

followed by rsync and then started postgres again.
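For the bit-torrent case mentioned above, a similarly small sketch - the dataset name and mountpoint are made up for illustration:

# Hedged sketch: a small-recordsize dataset for in-progress torrent downloads,
# so COW fragmentation stays confined to it.
zfs create -o recordsize=16K -o mountpoint=/data/torrents/incoming ganesh/torrents
# then point the BT client's "incomplete downloads" directory at it and have it
# move finished files elsewhere on completion.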

4. rsync my current system to it.

Logout all user sessions, shut down all services that write to the disk (postfix, postgresql, mysql, apache, asterisk, docker, etc). If you haven’t booted into recovery/rescue/single-user mode, then you should be as close to it as possible – everything non-essential should be stopped. I chose not to boot to single-user in case I needed access to the web to look things up while I did all this (this machine is my internet gateway).


hn="$(hostname -s)"

time rsync -avxHAXS -h -h --progress --stats --delete / /boot/ "/$hn/root/"

After the rsync, my 130GB of data from XFS was compressed to 91GB on ZFS with transparent lz4 compression.

Run the rsync again if (as I did) you realise you forgot to shut down postfix (causing newly arrived mail to be missing from the new setup) or something.

You can do a (very quick & dirty) performance test now, by running zpool scrub "$hn". Then run watch zpool status "$hn". As there should be no errors to correct, you should get scrub speeds approximating the combined sequential read speed of all vdevs in the pool. In my case, I got around 500-600M/s – I was kind of expecting closer to 800M/s, but that’s good enough. The Crucial MX300s aren’t the fastest drives available (but they’re great for the price), and ZFS is optimised for reliability more than speed. The scrub took about 3 minutes to scan all 91GB. My HDD zpools get around 150 to 250M/s, depending on whether they have mirror or RAID-Z vdevs and on what kind of drives they have.

For real benchmarking, use bonnie++ or fio.
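As a rough illustration, an fio sequential-read run against files on the new pool might look like the sketch below. The directory and sizes are arbitrary choices of mine, and buffered I/O is used because ZFS On Linux has no O_DIRECT support at this point.

# Hedged sketch: sequential read benchmark with fio (buffered I/O).
fio --name=seqread --directory=/ganesh/root/tmp \
    --rw=read --bs=1M --size=4G --numjobs=4 --group_reporting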

5. Prepare the new rootfs for chroot, chroot into it, edit /etc/fstab and /etc/default/grub.

This script bind mounts /proc, /sys, /dev, and /dev/pts before chrooting:


#! /bin/sh

hn="$(hostname -s)"

for i in proc sys dev dev/pts ; do
  mount -o bind "/$i" "/${hn}/root/$i"
done

chroot "/${hn}/root"

Change /etc/fstab (on the new zfs root) to have the zfs root and the ext4-on-raid-1 /boot:

ganesh/root  /      zfs   defaults                                         0  0
/dev/md0     /boot  ext4  defaults,relatime,nodiratime,errors=remount-ro  0  2

I haven’t bothered with setting up the swap at this point. That’s trivial and I can do it after I’ve got the system rebooted with its new ZFS rootfs (which reminds me, I still haven’t done that :).
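For completeness, a sketch of what that swap setup could look like once the system is up. The device names follow the partitioning above; the zswap parameters are my assumption of sensible values, not something from the original procedure.

# Hedged sketch: plain swap on each SSD's partition 4, plus zswap in front of it.
for d in sdp sdq sdr sds; do
    mkswap -L "swap-$d" "/dev/${d}4"
    swapon "/dev/${d}4"
done
# make it permanent with one fstab line per partition, e.g.:
#   LABEL=swap-sdp  none  swap  sw  0  0
# and enable zswap by adding to GRUB_CMDLINE_LINUX in /etc/default/grub:
#   zswap.enabled=1 zswap.compressor=lz4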

Add boot=zfs to the GRUB_CMDLINE_LINUX variable in /etc/default/grub. On my system, that’s:

GRUB_CMDLINE_LINUX="iommu=noagp usbhid.quirks=0x1B1C:0x1B20:0x408 boot=zfs"

NOTE: If you end up needing to run the rsync again as in step 4 above, copy /etc/fstab and /etc/default/grub to the old root filesystem first. I suggest /etc/fstab.zfs and /etc/default/grub.zfs.
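Concretely, from outside the chroot that might be something like the following sketch (adjust the hostname path to suit):

# Hedged sketch: stash the ZFS versions of these files on the old root so a
# repeat rsync doesn't clobber the edited copies on the new root for good.
cp /ganesh/root/etc/fstab         /etc/fstab.zfs
cp /ganesh/root/etc/default/grub  /etc/default/grub.zfs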

6. Install grub

Here’s where things get a little complicated. Running grub-install on /dev/sd[pqrs] is fine; we created the type EF02 partition for it to install itself into.

But running update-grub to generate the new /boot/grub/grub.cfg will fail with an error like this:

/usr/sbin/grub-probe: error: failed to get canonical path of `/dev/ata-Crucial_CT275MX300SSD1_163313AADD8A-part5'.

IMO, that’s a bug in grub-probe – it should look in /dev/disk/by-id/ if it can’t find what it’s looking for in /dev/

I fixed that problem with this script:


#! /bin/sh

cd /dev
ln -s /dev/disk/by-id/ata-Crucial* .

After that, update-grub works fine.

NOTE: you will have to add udev rules to create these symlinks, or run this script on every boot, otherwise you’ll get that error every time you run update-grub in future.
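A udev rule along these lines is one way to do that permanently. Treat it as an untested sketch - the match on ID_SERIAL and the rule filename are my assumptions - and verify with udevadm test before relying on it:

# /etc/udev/rules.d/90-grub-probe-workaround.rules  (hedged, untested sketch)
# Recreate the by-id style names directly under /dev so grub-probe finds them.
KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="Crucial_CT275MX300SSD1_*", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}"
KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}=="partition", ENV{ID_SERIAL}=="Crucial_CT275MX300SSD1_*", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"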

7. Prepare to reboot

Unmount proc, sys, dev/pts, dev, the new raid /boot, and the new zfs filesystems. Set the mount point for the new rootfs to /


#! /bin/sh

hn="$(hostname -s)"
md="/dev/md0"

for i in dev/pts dev sys proc ; do
  umount "/${hn}/root/$i"
done

umount "$md"

zfs umount "${hn}/root"
zfs umount "${hn}"

zfs set mountpoint=/ "${hn}/root"
zfs set canmount=off "${hn}"

8. Reboot

Remember to configure the BIOS to boot from your new disks.

The system should boot up with the new rootfs, no rescue disk required as in some other guides – the rsync and chroot stuff has already been done.

9. Other notes
  • If you’re adding partition(s) to a zpool for ZIL, remember that ashift is per vdev, not per zpool. So remember to specify ashift=12 when adding them. e.g.

    zpool add -o ashift=12 export log \
        mirror ata-Crucial_CT275MX300SSD1_163313AAEE5F-part6 \
               ata-Crucial_CT275MX300SSD1_163313AB002C-part6

    Check that all vdevs in all pools have the correct ashift value with:

    zdb | grep -E 'ashift|vdev|type' | grep -v disk
10. Useful references

Reading these made it much easier to come up with my own method. Highly recommended.


Categories: Elsewhere

