Feed aggregator

Liip: State of Drupal Commerce for Drupal 8

Planet Drupal - Thu, 21/04/2016 - 13:12

The two biggest players in the Drupal 7 webshop field are Drupal Commerce (also known as DC1) and Ubercart. DC1 actually started as an Ubercart rewrite to make use of the Drupal 7 APIs. After the split, Ubercart was ported to Drupal 7 too, but it was still built on Drupal 6 technologies.

Although still very much in development, it seems something similar will be true for Drupal 8 as well. The developers of DC2 (the Drupal 8 version of Drupal Commerce), led by Bojan Živanović, rewrote the whole system from scratch to make use of the huge changes in Drupal 8. They are active members of the Drupal developer community, so they not only know but also shape the actual best practices. While working on DC2 they have fixed many dozens of Drupal 8 core issues and many more in other contributed modules (such as Entity, Inline Entity Form, and Profile).

A great realisation when rewriting Commerce was that several components of a webshop could be reused by other systems (not even necessarily webshop or Drupal systems). Some typical examples are address formats, currencies, and taxes. These components are usually a huge pain to maintain because of the small differences from country to country. So the developers created standalone PHP libraries, usually based on authoritative third-party datasets such as CLDR for currencies or Google’s dataset for address formats. Some of them are already used by other webshop solutions like Foxycart, and developers even outside the Drupal community are giving feedback, which makes maintaining them easier.

In the DC2 development process, UI and UX have received strong emphasis from the beginning. Based on research into existing webshop solutions, the shop administration and checkout process have been redesigned by UX specialists. For example, the product creation process is quite confusing in DC1, and there is not even a recommended way to do it. In DC2 this now happens in one single form, which makes it super easy.

A new concept in DC2 is Stores. Stores represent billing locations, and products can belong to one or more stores. One use case is the need for different billing for customers from different countries. Another is a shop where sellers can open an account and sell their own products; in this case each seller has their own store.

There are many other new features and improvements like a new and flexible tax system (you can say things like: “from Jan 1st 2014 the VAT changes from 21% to 19%”), a redesigned checkout flow, different workflows for different order types etc.
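The date-scoped tax rule quoted above can be illustrated with a small sketch. This is only an illustration of the concept, not Drupal Commerce's actual API; the rates and dates come from the example in the text, and the start date of the older rate is a made-up placeholder:

```python
from datetime import date

# Each VAT rate carries the date it comes into effect; the example rule
# from the text: "from Jan 1st 2014 the VAT changes from 21% to 19%".
vat_rates = [
    (date(2000, 1, 1), 0.21),  # hypothetical start date for the old rate
    (date(2014, 1, 1), 0.19),
]

def vat_for(day):
    """Return the most recent rate that is in effect on the given day."""
    applicable = [rate for start, rate in sorted(vat_rates) if start <= day]
    return applicable[-1]

print(vat_for(date(2013, 12, 31)), vat_for(date(2014, 6, 1)))  # → 0.21 0.19
```

The point is that rates are never overwritten: a new rate is appended with its effective date, and the resolver picks whichever rate applies on the order date.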

DC2 is still in the alpha phase and is not yet recommended for production use. Beta releases will have upgrade paths between them, and so can be considered for starting real sites. Beta 1 is expected in May.

Drupal Commerce is the most popular e-commerce solution for Drupal 7. Given the high-quality code and responsiveness to developer, shop maintainer, and customer needs, I do not expect this to change with Drupal 8 either.

Drupal Commerce 2 blog
Modules Unraveled podcast on Commerce

Categories: Elsewhere

Valuebound: How to create Custom Rest Resources for POST methods in Drupal 8

Planet Drupal - Thu, 21/04/2016 - 11:07

One of the biggest changes in Drupal 8 is the integration of REST services into core. With Views, it becomes very easy to create RESTful services.

But there are certain situations when you need to create your own custom REST resources. Documentation is available for creating a GET request using Views and using a custom module. However, there isn’t much documentation available for creating REST resources for POST methods.

In this article I will share how to create a REST resource for POST methods. This REST API will create an article on a Drupal 8 site from an external…

Categories: Elsewhere

Alessio Treglia: Corporate Culture in the Transformative Enterprise

Planet Debian - Thu, 21/04/2016 - 10:40


The “accelerated” world of Western or “Westernized” countries seems to be fed by an insidious food, one that generates a kind of psychological dependence: anxiety. The economy of global markets cannot help it; it has a structural need for anxiety to feed its iron logic of survival. The anxiety generated in the masses of consumers and in market competitors is crucial for companies fighting each other, and now they can only live if people are projected toward objective targets that continuously move forward, without ever being allowed to reach a stable destination.

The consumer is thus constantly maintained in a state of perpetual breathlessness, always looking for the fresh air of liberation that could eventually reduce his tension. It is a state of anxiety caused by false needs generated by advertising campaigns whose primary purpose is to create a need, to interpret to their advantage a still confused psychological demand leading to the destination decided by the market…

<Read More…[by Fabio Marzocca]>

Categories: Elsewhere

Sven Decabooter: Using Drupal 8 contact forms via REST

Planet Drupal - Thu, 21/04/2016 - 10:34

While working on Narwal CMS, our hosted decoupled / headless CMS service built on Drupal 8, a feature request that has popped up multiple times is the ability to have (contact) forms in the frontend application or website be processed by the Drupal backend through REST.

Drupal 8 provides you with REST services and fieldable contact forms out of the box, but doesn't provide all the pieces to get this specific use case working. In the tutorial below I will show you how to implement this on your Drupal 8 website.

  1. Prerequisites
    • Install your Drupal 8 website and make sure to enable the "Contact" and "Restful web services" modules, as we'll obviously need those.
    • Install and enable the REST UI module to easily manage which REST resources are activated on your site.
    • Install and enable the "Contact message REST" module.
    • You should also activate an authentication module such as "HTTP Basic Authentication" (in Drupal 8 core) or IP Consumer Auth to be able to restrict form submission access to your frontend application(s) only, and not leave your REST endpoints open to everyone.
  2. Set up REST resource
    • Go to /admin/config/services/rest and enable the "Contact message" resource, with path /contact_message/{entity}:
    • Configure the supported formats and authentication providers to your preferences. E.g.:
  3. Configure permissions
    • Go to /admin/people/permissions and grant selected roles access to the permission "Access POST on Contact message resource". This should be set up correctly so your frontend application can authenticate with the correct role and perform the POST method. Only grant anonymous users permissions if you have some method to restrict access to your frontend application, such as IP whitelisting or API keys.
  4. Create contact form
    • Go to /admin/structure/contact and create a contact form, or use the default "Website feedback" form. Take note of the machine name of the contact form, as we'll need it when performing the REST call.
  5. Submit contact form via REST
    • In your frontend application, make a POST call to the /contact_message REST endpoint. Make sure you have set up your headers correctly for authentication and the X-CSRF-Token.
    • The body of your method call should provide values for the contact form entity fields. When using the default contact form fields, your request body would look like this (in JSON format):

      {
        "contact_form": [{"target_id": "YOUR_FORM_MACHINE_NAME"}],
        "name": [{"value": "SENDER_NAME_VALUE"}],
        "mail": [{"value": "SENDER_MAIL_VALUE"}],
        "subject": [{"value": "FORM_SUBJECT"}],
        "message": [{"value": "FORM_MESSAGE"}]
      }
    • For example:
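Step 5 can be sketched from the client side. The following is a minimal Python sketch using only the standard library; the site URL, form machine name, credentials, and field values are all hypothetical placeholders, and the actual POST is wrapped in a function rather than executed:

```python
import json
import urllib.request

def contact_message_payload(form_id, name, mail, subject, message):
    """Build the JSON body expected by the contact_message REST resource."""
    return {
        "contact_form": [{"target_id": form_id}],
        "name": [{"value": name}],
        "mail": [{"value": mail}],
        "subject": [{"value": subject}],
        "message": [{"value": message}],
    }

def submit_contact_message(base_url, payload, csrf_token, auth_header):
    """POST the payload to /contact_message?_format=json (not run here)."""
    request = urllib.request.Request(
        base_url + "/contact_message?_format=json",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "X-CSRF-Token": csrf_token,
            "Authorization": auth_header,  # e.g. HTTP Basic credentials
        },
        method="POST",
    )
    return urllib.request.urlopen(request)

# Hypothetical example values, matching the default contact form fields.
payload = contact_message_payload(
    "feedback", "Jane Doe", "jane@example.com",
    "Hello", "Submitted through REST")
```

A frontend written in any language would make the equivalent call; the essential parts are the `_format=json` query parameter, the `X-CSRF-Token` header, and the entity-style field structure of the body.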

You could also install the Contact Storage module if you wish to store the submitted form entries, rather than just sending them via mail. In my experience, the module currently still has some bugs that prevent it from being completely usable, but those might get ironed out soon.

Background about "Contact message REST" module

While trying to get the functionality above working with Drupal core, two issues came up that prevented it from working:

  • When trying to create a Contact message through the default /entity/contact_message/{contact_message} resource, errors are thrown, because a contact Message entity doesn't get an ID assigned to it (because it uses the ContentEntityNullStorage storage handler). Since this REST resource tries to load the created entity and return it, this fails.
  • The actual sending of the contact form mails happens in the \Drupal\contact\MessageForm submit handler, which doesn't get called when a Message entity gets created through a REST resource.

Hence I created the Contact message REST module to solve these issues. Hopefully they will be resolved in later versions of Drupal 8.x, so the module won't be needed anymore.


Sven Decabooter Thu, 04/21/2016 - 10:34
Categories: Elsewhere

Mario Lang: Scraping the web with Python and XQuery

Planet Debian - Thu, 21/04/2016 - 10:30

During a JAWS for Windows training, I was introduced to the Research It feature of that screen reader. Research It is a quick way to utilize web scraping to make working with complex web pages easier. It is about extracting specific information from a website that does not offer an API. For instance, look up a word in an online dictionary, or quickly check the status of a delivery. Strictly speaking, this feature does not belong in a screen reader, but it is a very helpful tool to have at your fingertips.

Research It uses XQuery (actually, XQilla) to do all the heavy lifting. This also means that Research It rulesets are theoretically usable on other platforms as well. I was immediately hooked, because I have always had a love for XPath. Looking at XQuery code is totally self-explanatory to me; I just like the syntax and semantics.

So I immediately checked out XQilla on Debian, and found #821329 and #821330, which were promptly fixed by Tommi Vainikainen, thanks to him for the really quick response!

Unfortunately, making xqilla:parse-html available and upgrading to the latest upstream version is not enough to use XQilla on Linux with the typical webpages out there. Xerces-C++, which is what XQilla uses to fetch web resources, does not support HTTPS URLs at the moment. I filed #821380 to ask for HTTPS support in Xerces-C to be enabled by default.

And even with HTTPS support enabled in Xerces-C, the xqilla:parse-html function (which is based on HTML Tidy) fails for a lot of real-world webpages I tried. Manually upgrading the six year old version of HTML Tidy in Debian to the latest from GitHub (tidy-html5, #810951) did not help a lot either.

Python to the rescue

XQuery is still a very nice language for extracting information from markup documents. XQilla just has a bit of a hard time dealing with the typical HTML documents out there. After all, it was designed to deal with well-formed XML documents.

So I decided to build myself a little wrapper around XQilla which fetches the web resources with the Python Requests package, and cleans the HTML document with BeautifulSoup (which uses lxml to do HTML parsing). The output of BeautifulSoup can apparently be passed to XQilla as the context document. This is a fairly crazy hack, but it works quite reliably so far.

Here is what one of my web scraping rules looks like:

from click import argument, group

@group()
def xq():
    """Web scraping for command-line users."""
    pass

@xq.group()
def github():
    """Quick access to"""
    pass

@github.command('code_search')
@argument('language')
@argument('query')
def github_code_search(language, query):
    """Search for source code."""
    scrape(get='',
           params={'l': language, 'q': query, 'type': 'code'})

The function scrape automatically determines the XQuery filename from the calling function's name. Here is what github_code_search.xq looks like:

declare function local:source-lines($table as node()*) as xs:string* {
  for $tr in $table/tr
  return normalize-space(data($tr))
};

let $results := html//div[@id="code_search_results"]/div[@class="code-list"]
for $div in $results/div
let $repo := data($div/p/a[1])
let $file := data($div/p/a[2])
let $link := resolve-uri(data($div/p/a[2]/@href))
return (concat($repo, ": ", $file),
        $link,
        local:source-lines($div//table),
        "---------------------------------------------------------------")

That is all I need to implement a custom web scraping rule: a few lines of Python to specify how and where to fetch the website from, and an XQuery file that specifies how to mangle the document content.

And thanks to the Python click package, the various entry points of my web scraping script can easily be called from the command-line.

Here is a sample invocation:

fx:~/xq% ./
Usage: [OPTIONS] COMMAND [ARGS]...

  Quick access to

Options:
  --help  Show this message and exit.

Commands:
  code_search  Search for source code.

fx:~/xq% ./ code_search Pascal '"debian/rules"'
prof7bit/LazPackager: frmlazpackageroptionsdeb.pas
230 procedure TFDebianOptions.BtnPreviewRulesClick(Sender: TObject);
231 begin
232 ShowPreview('debian/rules', EdRules.Text);
233 end;
234
235 procedure TFDebianOptions.BtnPreviewChangelogClick(Sender: TObject);
---------------------------------------------------------------
prof7bit/LazPackager: lazpackagerdebian.pas
205 + 'mv ../rules debian/' + LF
206 + 'chmod +x debian/rules' + LF
207 + 'mv ../changelog debian/' + LF
208 + 'mv ../copyright debian/' + LF
---------------------------------------------------------------

For the impatient, here is the implementation of scrape:

from bs4 import BeautifulSoup
from bs4.element import Doctype, ResultSet
from inspect import currentframe
from itertools import chain
from os import path
from os.path import abspath, dirname
from subprocess import PIPE, run
from tempfile import NamedTemporaryFile

import requests


def scrape(get=None, post=None, find_all=None,
           xquery_name=None, xquery_vars={}, **kwargs):
    """Execute a XQuery file.

    When either get or post is specified, fetch the resource and run it
    through BeautifulSoup, passing it as context to the XQuery.
    If find_all is given, wrap the result of executing find_all on the
    BeautifulSoup in an artificial HTML body.
    If xquery_name is not specified, the caller's function name is used.
    xquery_name combined with extension ".xq" is searched in the directory
    where this Python script resides and executed with XQilla.
    kwargs are passed to get or post calls.  Typical extra keywords would be:
    params -- To pass extra parameters to the URL.
    data -- For HTTP POST.
    """
    response = None
    url = None
    context = None
    if get is not None:
        response = requests.get(get, **kwargs)
    elif post is not None:
        response = requests.post(post, **kwargs)
    if response is not None:
        response.raise_for_status()
        context = BeautifulSoup(response.text, 'lxml')
        dtd = next(context.descendants)
        if type(dtd) is Doctype:
            dtd.extract()
        if find_all is not None:
            context = context.find_all(find_all)
        url = response.url
    if xquery_name is None:
        xquery_name = currentframe().f_back.f_code.co_name
    cmd = ['xqilla']
    if context is not None:
        if type(context) is BeautifulSoup:
            soup = context
            context = NamedTemporaryFile(mode='w')
            print(soup, file=context)
        elif isinstance(context, (list, ResultSet)):
            tags = context
            context = NamedTemporaryFile(mode='w')
            print('<html><body>', file=context)
            for item in tags:
                print(item, file=context)
            print('</body></html>', file=context)
        context.flush()
        cmd.extend(['-i', context.name])
    cmd.extend(chain.from_iterable(['-v', k, v]
                                   for k, v in xquery_vars.items()))
    if url is not None:
        cmd.extend(['-b', url])
    cmd.append(abspath(path.join(dirname(__file__), xquery_name + ".xq")))
    output = run(cmd, stdout=PIPE).stdout.decode('utf-8')
    if context is not None:
        context.close()
    print(output, end='')

The full source for xq can be found on GitHub. The project is just two days old, so I have only implemented three scraping rules as of now. However, adding new rules has been made deliberately easy, so that I can write up a few lines of code whenever I find something on the web which I'd like to scrape on the command-line. If you find this "framework" useful, make sure to share your insights with me. And if you implement your own scraping rules for a public service, consider sharing those as well.

If you have any comments or questions, send me mail. Oh, and by the way, I am now also on Twitter as @blindbird23.

Categories: Elsewhere

Vardot: Project Manager’s Guide to Breaking Down a Drupal Site for Incremental Delivery

Planet Drupal - Thu, 21/04/2016 - 09:39
How to, Resources Read time: 8 minutes

TL;DR. Jump to the free template: Standard Drupal Work Breakdown Structure Template.

Building a new site on a content management system has always been a tricky project for a project manager, compared to building a site on a framework or from scratch. That is because you are dealing with building blocks that the CMS provides as standard. A project manager needs solid knowledge of the CMS's building blocks to be able to manage a successful project.

Put this in the context of today’s Scrum management approach (an agile way to manage a project, usually software development) and you’ll end up with a puzzled project manager asking questions such as:

  1. What can my team deliver in the first sprint?

  2. How can I break down the project’s deliveries into sprints?

  3. What expectations of deliverables should I set with my project’s stakeholders (product owner, business owner, client)?

  4. When do I deliver the homepage for my stakeholder to look at?

  5. Are we supposed to deliver on a page-by-page basis?


Drupal disrupts the “page” methodology that we are used to thinking of. One naturally tends to think of a website as a folder, with sub-folders, and pages (.html) inside those folders. That’s the 90s. We’re in 2016. Drupal is a database-driven CMS that takes a content-first (or content-out) approach of building rich web experiences, instead of a page-first approach. See “Drupal is content first, and that's good” and “A Richer Canvas”.


Because of how Drupal works, we at Vardot have come up with a framework for planning the phases of building a Drupal site, leading to incremental development that can be broken down to fit into Scrum sprints. This applies to almost all Drupal projects.

We call this standard approach: The Standard Drupal Work Breakdown Structure


Why Do I Need a Breakdown of Work for Planning My Drupal Site?

Because this is what project managers do. In the many Drupal projects I have been part of, I have seen that project managers (and/or coordinators) must understand how Drupal works, how the development process goes, and how to get 80% of the site done in 20% of the time.

A work breakdown structure will help you (as a project manager) understand how a Drupal site is built. It will also ease the process of getting high-quality incremental deliveries that fit in your sprints. In this post, I will walk you through the high-level breakdown for any Drupal site.


Most importantly, the goals and outcomes of a breakdown are for you to understand and communicate to your project’s stakeholders your timeline of deliveries, and to be able to fit these deliveries into sprints.

To summarize, these goals are:

  1. Breakdown of deliverables. Define needed outcomes of initial sprints

  2. Provide a holistic view and analysis of the site’s functionality and its building blocks

  3. Remember, we are building a CMS, not a website. Therefore you need to architect your “CMS solution”, and not your “website solution”


Let’s Start With The How

Now we put these goals into practice by implementing the Standard Drupal Work Breakdown Structure, which fits almost all of the Drupal projects you will work on.

The work breaks down into three phases:

  1. Initialization Work Breakdown Structure: This phase is the cornerstone for starting right; it is essentially a standard routine that you should follow in every project.

  2. Project’s Epics Work Breakdown Structure: Careful analysis of the site’s components, and how they will be developed in the CMS, is carried out here.

  3. Finalization Work Breakdown Structure: This is the ending phase, where you make sure your site is ready for launch. Final preparations, tuning, and tweaks are carried out in this phase to prepare for your big day.

Note that you will be able to deliver something for your stakeholders to look at, in the “Initialization” phase.

This breakdown must happen after high-fidelity wireframes are done, or once you have the full visual mockups of the Drupal site’s key pages.

It’s important to note that the visual mockups should use and adhere to Drupal’s design language and patterns. But what is Drupal’s design language and patterns? That’s for another article to discuss.

Now that we have designs handed to us, with clear communication of how the new website will look, we are ready to break down our Drupal site for a successful delivery.


The Work Breakdown

Disclaimer: the terminology I’m using below to name some components that make up your site is not “official Drupal language”. No worries whether you stick with the same terminology or use your own names; what really matters is just the breakdown structure.

So I’m categorizing what makes up a (Drupal) site into six components:

  1. Wrapping components: Header and Footer.
    These are the components that provide your site with a wrapper for all your next components. Start with these as soon as you install Drupal; it will help you get through the easy stuff that makes up your site.
  2. Global components: Page title, Breadcrumbs, Tabs (a.k.a. menu local tasks), System messages, etc.
    These are the components that make up the uniformity of a CMS. These are your next target.
  3. Site-unified components: Ad blocks, Newsletter subscribe block, Social media feeds or “follow us” blocks, Static “about us” block, etc.
    These are the components that most likely appear in the same style across multiple pages of the site.
  4. Full nodes and entities: Your “full content” node/user/entity pages.
    Getting back to the “content-out” approach, always start with completing the full nodes or entities.
  5. Views, view modes, and other content: Views of recent content, Featured content, Node pages, Feeds integration, CRM integration, Single Sign-On integration, etc.
    This is the major work; these are the components that define your site.
  6. The annoying 20% of the site: This is where the already-built 80% of your site gets the final hidden work: iterative tweaking and enhancements, whether requested by your QA team, the client, or the product owner.

In light of this breakdown of CMS’s categories, here’s an animated illustration of how a site can be made possible when following the flow of development based on the components above:

In this order, you can now think of a Drupal site being developed according to the following steps:

Initialization Work Breakdown Structure

  1. Delivering “1. Wrapping components”

    1. Install Drupal (or the distribution you want to use), set up the development environment, etc.

    2. Populate the things that make up the “Wrapping components”: menus, logo, search, etc.

    3. Create your theme, and theme the “Wrapping components”

  2. Delivering “2. Global components”

    1. Just populate them and theme them.

  3. Delivering “3. Site-unified components”

    1. Create and populate the things that make up your “site-unified components”

    2. Theme them

Project’s Epics Work Breakdown Structure

  1. Outline your content types, starting from the “Full node” view mode. Identify the other view modes for your content types. Start turning these into “Tasks”.

  2. Do the same for the other Drupal entities: entities, files, comments, etc.

  3. Deliver “4. Full nodes and entities”

  4. Deliver “5. Views, view modes, and other content”

  5. Deliver “6. The annoying 20% of the site”

Finalization Work Breakdown Structure

  1. Final overall testing

  2. SEO, Printability, Performance, Security, and Accessibility tuning and configuration

  3. Your pre-launch checklists

  4. Go live!


FREEBIE: The Standard Drupal Work Breakdown Structure Template

Our Standard Drupal Work Breakdown Structure Template provides an outline of these phases and detailed tasks to be done that we use for every Drupal project. This template is made to be easily imported to JIRA. It contains:

  • a master sheet that aggregates the standard epics, tasks and stories to be easily imported to JIRA.

  • a sheet for defining the project’s own epics and stories

  • the standard Initialization and Finalization work breakdown structure that must not be missed for any project

All of this helps reduce discrepancies between projects, ensures important tasks are not missed, and allows our team to deliver projects quickly and incrementally (delivering within the first week of development).

Using The Template

The template is a Google Spreadsheet that you can easily clone and customize. To do so:

  1. Open the sheet and copy it to make it yours.

  2. Feel free to edit the sheet to make it your own; it includes instructions on how to use it.

  3. Follow the instructions on what to edit. We recommend that the “Initialization WBS” and the “Finalization WBS” stay intact (you can edit them once to your standard flow, then replicate for all projects).

  4. For each project, you will want to copy your template to customize the “Project’s Epics WBS” as per the project. The template has some samples for you to consider.

  5. Once done, export the “Master WBS” sheet to CSV so you can import it into your JIRA project.

  6. Map fields to your JIRA. See sample [image to illustrate mapping]

  7. That’s it!
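Steps 5 and 6 above (exporting the master sheet and mapping fields for JIRA's importer) can be sketched in a few lines. The column names and WBS rows here are hypothetical placeholders, since every JIRA project maps fields differently:

```python
import csv
import io

# Hypothetical WBS rows: (epic, summary, issue_type)
wbs = [
    ("Initialization", "Install Drupal and set up environments", "Task"),
    ("Initialization", "Theme the wrapping components", "Task"),
    ("Finalization", "Run pre-launch checklist", "Task"),
]

def wbs_to_jira_csv(rows):
    """Serialize WBS rows into a CSV whose columns JIRA's importer can map."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["Summary", "Issue Type", "Epic Name"])  # hypothetical mapping
    for epic, summary, issue_type in rows:
        writer.writerow([summary, issue_type, epic])
    return buf.getvalue()

print(wbs_to_jira_csv(wbs))
```

During the JIRA import you would then map each CSV column to the corresponding JIRA field, as the sample mapping image illustrates.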



Two things have helped us standardize our work process when developing a Drupal site, and ensure consistency and quality:

  1. Starting a project with a components-first approach, not a page-first approach.

  2. Documenting our recurring tasks and processes in a Template that uses this approach. This template makes applying this process easier for you.

Next time you start a Drupal project, consider this approach and let us know how this would help you in the comments section below.

Note: This does not depend on a specific Drupal version; the methodology works with Drupal 6, 7, or 8, because it depends on Drupal’s conceptual building approach.

Tags: Drupal Planet, Drupal 8, Project Management, Drupal Templates
Categories: Elsewhere

Drupal Console: Drupal Console alpha1 is now available

Planet Drupal - Thu, 21/04/2016 - 09:24
We are so excited to announce the first alpha release of Drupal Console. After almost three years of working on this project, 84 releases, almost 86,000 downloads, and the awesome help of 169 contributors, we have released version 1.0.0-alpha1.

What is so great about this version

This release provides support for the latest Drupal 8.1.x version, released on April 20th. For more information and details about that Drupal release, see Drupal 8.1.0 is now available. This release includes minor fixes and improvements and only one new feature: support for placeholders in chain files. I will elaborate on this in another blog post, but if you are interested, please see issue 2055.

What is not so great about it

Drupal 8.0.x is no longer supported. We are still trying to confirm whether the Embedded Composer project can help us with this issue. If that is not doable, we can open a discussion to find a better way to approach it.
Categories: Elsewhere

Pune Drupal Meetup - March 2016

Planet Drupal - Thu, 21/04/2016 - 08:44

The monthly meetup for March was moved from the last Friday of the month, which was Good Friday, to the 1st of April, and we hoped really hard that people wouldn't think it was an April Fools' prank. This PDG meetup was hosted by Rotary International, thanks to the diligence of Dipak Yadav, who works there. It is always fun when the meetup is hosted in different locations, because we get to explore different parts of Pune and see new faces.

With 25 members in attendance, the meetup was kicked off by Dipak, who gave us an informative talk about Rotary International and the work they do.


The speaker for the evening was Sushil Hanwate of Axelerant, who spoke on “Services and dependency injection in Drupal 8.”


After the session ended with a short Q&A, we broke off into smaller groups for BoF sessions. Saket headed the BoF on service workers, and the second group discussed Drupal 8 module development.

Once we were done with the technical talks, we were served some of the best kachoris we have ever tasted :). While we happily munched on the snacks, we decided on the preliminary team members for the upcoming Pune Drupal Camp.

Though the meetups are being held regularly, we still need to figure out a way of involving newer members in the community, and one way to do that is to get more people volunteering to host the meetups. Kudos to Rotary for hosting us. If you are a Pune-based company or group that would like to host the next meetup, please get in touch via the comments.

Our next PDG meetup is scheduled for the 29th of April. Along with a session on “Experience with Drupal” by Rahul Savaria and Prashant Kumar from QED42, we will also be planning and discussing the upcoming Pune Drupal Camp.
Don't forget to RSVP. See you soon!

aurelia.bhoy Thu, 04/21/2016 - 12:14
Categories: Elsewhere

KnackForge: Programmatically create and trigger feeds importer

Planet Drupal - Thu, 21/04/2016 - 07:21

We met a challenging requirement where we needed to create a feeds importer on node creation for a particular content type: for ‘n’ nodes there must be ‘n’ feeds importers.

For that, I created a feeds importer which serves as the template. Whenever a node of the specific content type is created, the template is cloned and a new importer is created.

The following code needs to reside in hook_node_insert() and will be used to clone the feeds importer:

Thu, 04/21/2016 - 10:51
Categories: Elsewhere

KnackForge: Apache Solr View Count module

Planet Drupal - Thu, 21/04/2016 - 06:46

While working with Apache Solr in Drupal, I had to sort the results based on relevancy, date, and popularity. The Apache Solr Sort module allowed me to sort based on relevancy and date, but sorting based on popularity wasn't available. What I mean by sorting based on popularity is sorting based on the view count of each result.

suresh Thu, 04/21/2016 - 10:16
Categories: Elsewhere

FFW Agency: Drupal Console: An Overview of the New Drupal CLI

Planet Drupal - Thu, 21/04/2016 - 02:01
jmolivas Thu, 04/21/2016 - 00:01

Drupal Console is the new CLI (Command Line Interface) for Drupal. This tool can help you to generate boilerplate code, as well as interact with, and debug Drupal 8. From the ground up, it is built to utilize the same modern PHP practices that have been adopted in Drupal 8.

Drupal Console takes advantage of the Symfony Console and other well-known third-party components like Twig, Guzzle, and Dependency Injection, among others. By embracing those standard components, we are fully participating in the PHP community, building bridges, encouraging the PHP community to join the Drupal project, and reducing the isolation of Drupal.

Why is Drupal Console important?

Drupal is infamous for having a steep learning curve, complete with its own language of “Drupalisms”. While Drupal 8 simplifies and standardizes the development process, it is more technically advanced and complex than its predecessor. 

Managing the increasing complexity of Drupal 8 could be a daunting task for anyone. Drupal Console has been designed to help you manage that complexity, facilitating Drupal 8 adoption while making development and interaction more efficient and enjoyable. Drupal Console was created with one goal in mind: to allow individuals and teams to develop smarter and faster on Drupal 8. 

Drupal Console features

In this blog post, I will mention some of the most common features and commands of Drupal Console, to serve as a good introduction.

Install Drupal Console

Copy configuration files

The init command copies application configuration files to the user's home directory. Modifying these configuration files is how the behavior of the application can be customized.
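Assuming the drupal launcher is installed and on your PATH, running the command is a one-liner:

```shell
# Copies Drupal Console configuration files into the user's home directory
drupal init
```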

Validate system requirements

The check command will verify the system requirements and throw error messages if any required extension is missing.
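The requirements check is likewise a single command:

```shell
# Verifies PHP version and required extensions, reporting any that are missing
drupal check
```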

Install Drupal 8

The easiest way to try Drupal 8 on your local machine is by executing the chain command and passing the option --file=~/.console/chain/quick-start.yml

The chain command helps you to automate command execution, allowing you to define an external YAML file containing the definition name, options, and arguments of several commands and execute that list based on the sequence defined in the file.

In this example, the chain command will download and install Drupal using SQLite, and finally start PHP's built-in server. Now you only need to open your browser and point it to
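Putting it together, the quick-start chain is invoked like this (the local URL to open is whatever the built-in server reports on start-up):

```shell
# Downloads Drupal, installs it with SQLite, then starts PHP's built-in server
drupal chain --file=~/.console/chain/quick-start.yml
```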

Generate a module

The generate:module command helps you to:

  • Generate a new module, including a new directory named hello_world in the modules/custom directory.
  • Create a file in the modules/custom/hello_world directory.
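An example non-interactive invocation (the option names follow Drupal Console's generate:module command; the name, path, description and package values here are illustrative):

```shell
drupal generate:module \
  --module="Hello World" \
  --machine-name="hello_world" \
  --module-path="/modules/custom" \
  --description="Hello World demo module" \
  --core="8.x" \
  --package="Custom"
```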
Generate a service

The generate:service command helps you to: 

  • Generate a new service class and register it in the file.
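A sketch of the invocation (the option names are a best guess at the generate:service command's interface; the service name and class are illustrative):

```shell
drupal generate:service \
  --module="hello_world" \
  --name="hello_world.default" \
  --class="DefaultService"
```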


Generate a Controller

The generate:controller command helps you to:

  • Generate a new HelloController class with a hello method in the src/Controller directory.
  • Generate a route with the path /hello/{name} in the hello_world.routing.yml file.
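The command can also simply be run interactively (the module name is illustrative; the prompts ask for the class name, method and route path):

```shell
# Prompts for the HelloController class, the hello method and the
# /hello/{name} route path
drupal generate:controller --module="hello_world"
```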


Generate a Configuration Form

The generate:form:config command helps you to:

  • Generate a SettingsForm.php class in the src/Form directory.
  • Generate a route with the path /admin/config/hello_world/settings in hello_world.routing.yml.
  • Register the hello_world.settings_form route in the file, using system.admin_config_system as its parent.

This command allows you to add a form structure including form fields based on the Form API. It also generates buildForm and submitForm methods with the required code to store and retrieve form values from the configuration system.

NOTE: The parent route system.admin_config_system for the menu_link can be selected from the command interaction.
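Run interactively, the command prompts for everything mentioned above (the module name is illustrative):

```shell
# Prompts for the form class, its form fields, and the parent menu link
# (system.admin_config_system)
drupal generate:form:config --module="hello_world"
```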

Debug Services

The container:debug command displays the currently registered services for an application. Drupal contains several services registered out of the box, plus the services added by custom and contributed modules. For that reason I will use peco, a simple interactive filtering tool, to make this debugging easier.
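With peco installed, the service list can be filtered interactively:

```shell
drupal container:debug | peco
```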


Debug Routes

The router:debug command displays the currently registered routes for an application, similar to debugging services. In this example, I will again use peco to make this debugging task easier.
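The same filtering trick works for routes:

```shell
drupal router:debug | peco
```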


Create Data

The create:nodes command creates dummy nodes for your application.


Drupal Console provides a YAML file to execute using the chain command. This file contains instructions to execute the create:users, create:vocabularies, create:terms and create:nodes commands in a single run.
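A sketch of both approaches (the YAML path is illustrative; the text above only says that Drupal Console ships such a file):

```shell
# Create dummy nodes directly (the command prompts for content type and count)
drupal create:nodes

# Or create users, vocabularies, terms and nodes in one go via chain
drupal chain --file=/path/to/create-data.yml
```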

Change the application language

The settings:set command helps to change the application configuration. In this example, using the arguments language es, we can set Spanish as the application language. After switching the default language, the interface is translated.
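The invocation matching the arguments described above:

```shell
# Set Spanish as the language used by Drupal Console itself
drupal settings:set language es
```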

All of the available commands

The list command can be used to show all of the available commands. A screenshot is not included because the more than 130 commands would make the image too large for this blog post.
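The command takes no required arguments:

```shell
drupal list
```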

For the full list of commands, you can also visit the documentation page at 

What makes Drupal Console unique
  • Has been built to utilize the same modern PHP practices adopted by Drupal 8.
  • Generates the code and files required by Drupal 8 modules and components.
  • Facilitates Drupal 8 adoption while making development and interaction more efficient and enjoyable.
  • Allows individuals and teams to develop smarter and faster on Drupal 8.
  • Helps developers understand Drupal 8 with the "--learning" flag.
  • Fully multilingual and translatable, just like Drupal 8 itself.
Links and resources
Categories: Elsewhere

FFW Agency: Great Examples Of Distributed Content Management In The Pharmaceutical Industry

Planet Drupal - Thu, 21/04/2016 - 02:00
hank.vanzile - Thu, 04/21/2016 - 00:00

This is the third post in my series on Distributed Content Management.  In my first post I defined the term and used a few examples while doing so.  My second post, Great Examples of Distributed Content Management In Higher Education, expanded on the first example of a large university.  In today’s post we’ll explore the second example - a global pharmaceutical company - and once again discuss some great use cases for Distributed Content Management.


Setting The Scene

Pharmaceutical companies, more than companies in many other industries, must carefully consider all elements of their content lifecycle. Providing correct, approved content to both healthcare professionals and consumers is of utmost importance and, as such, web content in the pharmaceutical industry must undergo stringent regulatory review and control.  This requires consistent management across all digital properties and, for larger companies, that can be hundreds, or potentially even thousands, of websites and channels globally.


Use Case 1: Efficient Regulatory Review With Content Publishing Workflows

At first, the idea of Distributed Content Management may seem somewhat counterintuitive to how pharmaceutical companies work.  (In previous posts we’ve used it to explore empowering content creators and overcoming bottlenecks to content publishing - challenging concepts to tout for such a regulated industry.)  However, I’ve also opined that content approval and publishing workflows must be tailored to the specific use case.  

Consider a web publishing workflow that allows medical-legal reviews to take place within a Content Management System.  In some web systems this requires a multi-tiered platform wherein a “staging” version of the website - an exact copy of the real (“production”) website on which content changes have been staged - is made available for regulatory approval before the content is made available to the public.  While this is certainly more efficient than sharing offline documents, a deeper consideration of the technologies used can increase the efficiency and further control its risks.  

Some Content Management Systems, such as Drupal, allow content approval to take place on the production website, controlling the visibility and publishing of content through user authentication and roles instead of requiring separate "staging" websites.  By mapping the appropriate roles to regulatory affairs, pharmaceutical companies using this approach can avoid costly and time-consuming deployments of new content to the production site and free up the resources required to manage multiple copies of each website.


Use Case 2: Controlled, Single-Source Content Deployment

For some pharmaceutical content, decentralized content publishing may not be an appropriately-sized solution.  Some content is not only highly-regulated but also highly reused wherever products are marketed and is therefore best suited to be updated, approved, and disseminated from a central source.  Important Safety Information and Indications, for example, are types of content that a pharmaceutical company may choose to publish only through a centralized content repository.  

By establishing policies that all content editing must occur in the content repository, with individual websites disallowed from making changes locally, companies may avoid the need to have regulatory approval workflows on each of those sites and ensure that important information is updated in a timely and error-free way across numerous sites.  Content syndication is a fascinating opportunity for organizations considering Distributed Content Management and I’ll explore some of the available technologies, such as Acquia Content Hub, in later posts.


Use Case 3: Multichannel Brand Content

Single-source content syndication also provides an opportunity for pharmaceutical companies looking to promote their consumer products across multiple channels.  Let’s use e-commerce as an example.  Many companies choose to employ standalone, all-in-one e-commerce systems such as BigCommerce, Demandware and Magento rather than integrate e-commerce stores into each of their individual brand websites.  This makes a tremendous amount of sense: these systems can provide a number of compelling features such as gift cards, coupons, centralized inventory management, and opportunities for cross-selling other products among the company’s brands.  However, because these stores are independent of the main brand website, they too need to display content such as product descriptions, use and dosing information, ingredients, etc.  

By programmatically providing that content from a content repository to the e-commerce system, pharmaceutical companies can eliminate the risk of entering information directly into the store and potentially make use of the streamlined regulatory control processes they’ve already set up for the brand sites.


Use Case 4: Content Delivery To Validated Audiences

In addition to marketing content, pharmaceutical companies maintain large amounts of HCP content - information intended for healthcare professionals.  What content is available to these professionals, how they'll access it, and how to validate the identity of a user seeking that information is another key consideration for a pharma company's Distributed Content Management strategy.  A common approach is to segregate HCP content into regional "portals" - websites that require medical professionals to create accounts and login to see the information for their country or part of the world.  To overcome the challenge of validating these accounts, companies often integrate with an Identity Provider (IdP) such as DocCheck or Cegedim that specializes in maintaining national registries of healthcare professionals.

However, having a number of disparate system integrations dependent on which country a website is intended to serve introduces both the overhead of managing multiple bundles of code - sometimes written in entirely different programming languages - and the opportunity for error in integrating the wrong code for the intended region.  Because of this, some global pharmaceutical companies may choose to build a more centralized approach to validation and registration, using an integration platform such as Mulesoft Anypoint Platform to amalgamate the different Identity Provider code bundles and provide simultaneous access to them all through a dedicated Identity Management system such as Janrain.


What’s Next?

We will continue exploring use cases for distributed content management for the next few posts before moving on to discussing some prerequisites for companies looking to implement Distributed Content Management.  Thoughts or questions?  Reach out in the comments below or tweet them to me at @HankVanZile.



Categories: Elsewhere

Aten Design Group: Adding CSS Classes to Blocks in Drupal 8

Planet Drupal - Thu, 21/04/2016 - 00:25

This is an update to a previous post I wrote on adding classes to blocks in Drupal 7.

As I've stated before, I'm a big fan of Modular CSS, which requires the ability to easily manage classes on your markup. This was often a struggle in previous versions of Drupal. However, Drupal 8 makes this significantly easier to manage thanks to a number of improvements to the front-end developer experience (DX). In this post we'll look at two of these DX improvements, the Twig template language and hook_theme_suggestions_HOOK_alter, and how they make adding classes to blocks much easier to manage.

Twig allows us to easily open up a template file and add our classes where we need them. There are two main approaches to adding classes to a template file. The first is simple: open the file, add the class directly to the tag, save the file and move on with your life.

block.html.twig

<div class="block block--fancy">
  {{ title_prefix }}
  {% if label %}
    <h2 class="block__title block__title--fancy">{{ label }}</h2>
  {% endif %}
  {{ title_suffix }}
  {% block content %}
    {{ content }}
  {% endblock %}
</div>

This works in a lot of cases, but may not be flexible enough. The second approach utilizes the new attributes object – the successor to Drupal 7's attributes array. The attributes object encapsulates all the attributes for a given tag. It also includes a number of methods which enable you to add, remove and alter those attributes before printing. For now we'll just focus on the attributes.addClass() method. You can learn more about available methods in the official Drupal 8 documentation.

block.html.twig

{% set classes = [
  'block',
  'block--fancy'
] %}

{% set title_classes = [
  'block__title',
  'block__title--fancy'
] %}

<div{{ attributes.addClass(classes) }}>
  {{ title_prefix }}
  {% if label %}
    <h2{{ title_attributes.addClass(title_classes) }}>{{ label }}</h2>
  {% endif %}
  {{ title_suffix }}
  {% block content %}
    {{ content }}
  {% endblock %}
</div>

Alternatively, we can add our class directly to the class attribute alongside the existing classes from attributes.class, then print the remaining attributes. To prevent the class attribute from printing twice, we exclude it using the without Twig filter. Either way works.

block.html.twig

<div class="block--fancy {{ attributes.class }}" {{ attributes|without('class') }}>
  {{ title_prefix }}
  {% if label %}
    <h2 class="block--fancy {{ title_attributes.class }}" {{ title_attributes|without('class') }}>{{ label }}</h2>
  {% endif %}
  {{ title_suffix }}
  {% block content %}
    {{ content }}
  {% endblock %}
</div>

In any case, all our blocks on the site now look fancy as hell (assuming we've styled .block--fancy as such).

Template Suggestions

The above examples work. But in reality, if all our blocks look fancy, no blocks will look fancy. We need to apply this class only to those special blocks that truly deserve to be fancy. This introduces my second favorite DX improvement in Drupal 8 – hook_theme_suggestions_HOOK_alter.

If you wanted to make a custom template available to a certain block in Drupal 7, you had to do so in a preprocess function. In Drupal 8, altering theme hook suggestions (the list of possible templates) is delegated to its very own hook. The concept is pretty straightforward. Before Drupal renders an element, it looks at an array of possible template file names (a.k.a. suggestions) one by one. For each template file, it looks in the file system to see if that file exists in our theme, its base theme or core themes. Once it finds a match, it stops looking and renders the element using the matching template.

We'll use this hook to add our new template file to the list of suggestions. In the case of blocks, the function we'll define is hook_theme_suggestions_block_alter. It takes two arguments. The first is the array of suggestions, passed by reference (by prefixing the parameter with a &) so we can alter it directly. The second is the array of variables from our element, which we can use to determine which templates we want to include.

Let's assume we renamed one of our templates above to block--fancy.html.twig and saved it to our theme. We then add the following function to my_theme.theme, where "my_theme" is the name of our theme.

my_theme.theme

<?php

/**
 * Implements hook_theme_suggestions_HOOK_alter() for block templates.
 */
function my_theme_theme_suggestions_block_alter(array &$suggestions, array $variables) {
  $block_id = $variables['elements']['#id'];

  /* Uncomment the line below to see variables you can use to target a block. */
  // print $block_id . '<br/>';

  /* Add template suggestions based on the block id. */
  switch ($block_id) {
    /* Account Menu block. */
    case 'account_menu':
      $suggestions[] = 'block__fancy';
      break;
  }
}

Now the account menu block on our site will use block--fancy.html.twig, as we can see from the output of Twig debug.

This is just one example of the improvements in D8 theming. I'm really excited for the clarity that the new Twig templates bring to Drupal 8 and the simplicity of managing template suggestions through hook_theme_suggestions_HOOK_alter.

Categories: Elsewhere

Jonathan Dowland: mount-on-demand backups

Planet Debian - Wed, 20/04/2016 - 22:49

Last week, someone posted a request for help on the popular Server Fault Q&A site: they had apparently accidentally deleted their entire web hosting business, and all their backups. The post (now itself deleted) was a reasonably obvious fake, but mainstream media reported on it anyway, and then life imitated art and 123-reg went and did actually delete all their hosted VMs, and their backups.

I was chatting to some friends from $job-2 and we had a brief smug moment that we had never done anything this bad, before moving on to incredulity that we never had, in the 5 years or so we were running the University web servers. Some time later I realised that my personal backups were at risk from something like this, because I have a permanently mounted /backup partition on my home NAS. I decided to fix it.

I already use Systemd to manage mounting the /backup partition (via a backup.mount file) and its dependencies. I'll skip the finer details of that for now.

I planned to define some new Systemd units for each backup job which was previously scheduled via Cron in order that I could mark them as depending on the /backup mount. I needed to adjust that mount definition by adding StopWhenUnneeded=true. This ensures that /backup will be unmounted when it is not in use by another job, and not at risk of a stray rm -rf.

The backup jobs are all simple shell scripts that convert quite easily into services. An example:


[Unit]
Requires=backup.mount
After=backup.mount

[Service]
User=backupuser
Group=backupuser
ExecStart=/home/backupuser/bin/phobos-backup-home

To schedule this, I also need to create a timer:


[Timer]
OnCalendar=*-*-* 04:01:00

[Install]

To enable the timer, you have to both enable and start it:

systemctl enable backup-home.timer
systemctl start backup-home.timer
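Once a timer is started, systemctl can confirm it is scheduled and show the next elapse time:

```shell
systemctl list-timers backup-home.timer
```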

I created service and timer units for each of my cron jobs.

The other big difference to driving these from Cron is that by default I won't get any emails if the jobs generate output - in particular, if they fail. I definitely do want mail if things fail. The Arch Wiki has an interesting proposed solution to this which I took a look at. It's a bit clunky, and my initial experiments with a derivation from this (using mail(1) not sendmail(1)) have not yet generated any mail.

Pros and Cons

The Systemd timespec is more intuitive than Cron's. It's a shame you need a minimum of three more lines of boilerplate for the simplest of timers. I think should probably be an implicit default for all .timer type units. Here I think clarity suffers in the name of consistency.

With timers, start doesn't kick off the job: it really means "enable" in the context of timers, which is clumsy considering the existing enable verb, which seems almost superfluous but is necessary for consistency, since Systemd units need to be enabled before they can be started. As Simon points out in the comments, this is not true. Rather, "enable" is needed for the timer to be active upon subsequent boots, but won't enable it in the current boot. "Start" will enable it for the current boot, but not for subsequent ones.

Since I need a .service and a .timer file for each active line in my crontab, that's a lot of small files (twice as many as the number of jobs being defined), and they're all stored in a system-wide folder because of the dependency on the necessarily system-level units defining the mount.

It's easy to forget the After= line for the backup services. On the one hand, it's a shame that After= doesn't imply Requires=, so you need both, and that there is no convenience option that does both. On the other hand, there are already too many Systemd options and adding more conjoined ones would just make it even more complicated.

It's a shame I couldn't use user-level units to achieve this, but they could not depend on the system-level ones, nor activate /backup. This is a sensible default, since you don't want any user to be able to start any service on-demand, but some way of enabling it for these situations would be good. I ruled out systemd.automount because a stray rm -rf would trigger the mount which defeats the whole exercise. Apparently this might be something you solve with Polkit, as the Arch Wiki explains, which looks like it has XML disease.

I need to get mail-on-error working reliably.

Categories: Elsewhere

Ben Hutchings: Experiments with signed kernels and modules in Debian

Planet Debian - Wed, 20/04/2016 - 20:53

I've lately been working on support for Secure Boot in Debian, mostly in the packages maintained by the kernel team.

My instructions for setting up UEFI Secure Boot are based on OVMF running on KVM/QEMU. All 'Designed for Windows' PCs should allow reconfiguration of SB, but it may not be easy to do so. They also assume that the firmware includes an EFI shell.

Updated: Robert Edmonds pointed out that the 'Designed for Windows' requirements changed with Windows 10:

@benhutchingsuk "Hardware can be Designed for Windows 10 and can offer no way to opt out of the Secure Boot"

— Robert Edmonds (@rsedmonds) April 20, 2016

The ability to reconfigure SB is indeed now optional for devices which are designed to always boot with a specific Secure Boot configuration. I also noticed that the requirements say that OEMs should not sign an EFI shell binary. Therefore I've revised the instructions to use efibootmgr instead.


UEFI Secure Boot, when configured and enabled (which it is on most new PCs) requires that whatever it loads is signed with a trusted key. The one common trusted key for PCs is held by Microsoft, and while they will sign other people's code for a nominal fee, they require that it also validates the code it loads, i.e. the kernel or next stage boot loader. The kernel in turn is responsible for validating any code that could compromise its integrity (kernel modules, kexec images).

Currently there are no such signed boot loaders in Debian, though the shim and grub-signed packages included in many other distributions should be usable. However it's possible to load an appropriately configured Linux kernel directly from the UEFI firmware (typically through the shell) which is what I'm doing at the moment.

Packaging signed kernels

Signing keys obviously need to be protected against disclosure; the private keys can't be included in a source package. We also won't install them on buildds separately, and generating signatures at build time would of course be unreproducible. So I've created a new source package, linux-signed, which contains detached signatures prepared offline.

Currently the binary packages built from linux-signed also contain only detached signatures, which are applied as necessary at installation time. The signed kernel image (only on x86 for now) is named /boot/vmlinuz-kversion.efi.signed. However, since packages must not modify files owned by another package and I didn't want to dpkg-divert thousands of modules, the module signatures remain detached. Detached module signatures are a new invention of mine, and require changes in kmod and various other packages to support them. (An alternative might be to put signed modules under a different directory and drop a configuration file in /lib/depmod.d to make them higher priority. But then we end up with two copies of every module installed, which can be a substantial waste of space.)


The packages you need to repeat the experiment:

  • linux-image-4.5.0-1-flavour version 4.5.1-1 from unstable (only 686, 686-pae or amd64 flavours have signed kernels; most flavours have signed modules)
  • linux-image-4.5.0-1-flavour-signed version 1~exp3 from experimental
  • initramfs-tools version 0.125 from unstable
  • kmod and libkmod2 unofficial version 22-1.2 from

For Secure Boot, you'll then need to copy the signed kernel and the initrd onto the EFI system partition, normally mounted at /boot/efi.

SB requires a Platform Key (PK) which will already be installed on a real PC. You can replace it but you don't need to. If you're using OVMF, there are no persistent keys so you do need to generate your own:

openssl req -new -x509 -newkey rsa:2048 -keyout pk.key -out pk.crt \ -outform der -nodes

You'll also need to install the certificate for my kernel image signing key, which is under debian/certs in the linux-signed package. OVMF requires this in DER format:

openssl x509 -in linux-signed-1~exp3/debian/certs/ \ -out linux.crt -outform der

You'll need to copy the certificate(s) to a FAT-formatted partition such as the EFI system partition, so that the firmware can read it.

Use efibootmgr to add a boot entry for the kernel, for example:

efibootmgr -c -d /dev/sda -L linux-signed -l '\vmlinuz.efi' -u 'initrd=initrd.img root=/dev/sda2 ro quiet'

You should use the same kernel parameters as usual, except that you also need to specify the initrd filename using the initrd= parameter. The EFI stub code at the beginning of the kernel will load the initrd using EFI boot services.

Enabling Secure Boot
  1. Reboot the system and enter UEFI setup
  2. Find the menu entry for Secure Boot customisation (in OVMF, it's under 'Device Manager' for some reason)
  3. In OVMF, enrol the PK from pk.crt
  4. Add linux.crt to the DB (whitelist database)
  5. Ensure that Secure Boot is enabled and in 'User Mode'
Booting the kernel in Secure Boot

If all went well, Linux will boot as normal. You can confirm that Secure Boot was enabled by reading /sys/kernel/security/securelevel, which will contain 1 if it was.

Module signature validation

Module signatures are now always checked and unsigned modules will be given the 'E' taint flag. If Secure Boot is used or you add the kernel parameter module.sig_enforce=1, unsigned modules will be rejected. You can also turn on signature enforcement and turn off various other methods of modifying kernel code (such as kexec) by writing 1 to /sys/kernel/security/securelevel.

Categories: Elsewhere

Reproducible builds folks: Reproducible builds: week 51 in Stretch cycle

Planet Debian - Wed, 20/04/2016 - 20:47

What happened in the reproducible builds effort between April 10th and April 16th 2016:

Toolchain fixes
  • Roland Rosenfeld uploaded transfig/1:3.2.5.e-6 which honors SOURCE_DATE_EPOCH. Original patch by Alexis Bienvenüe.
  • Bill Allombert uploaded gap/4r8p3-2 which makes honor SOURCE_DATE_EPOCH. Original patch by Jerome Benoit, duplicate patch by Dhole.
  • Emmanuel Bourg uploaded ant/1.9.7-1 which makes the Javadoc task use UTF-8 as the default encoding if none was specified and SOURCE_DATE_EPOCH is set.

Antoine Beaupré suggested that gitpkg stops recording timestamps when creating upstream archives. Antoine Beaupré also pointed out that git-buildpackage diverges from the default gzip settings which is a problem for reproducibly recreating released tarballs which were made using the defaults.

Alexis Bienvenüe submitted a patch extending sphinx SOURCE_DATE_EPOCH support to copyright year.

Packages fixed

The following packages have become reproducible due to changes in their build dependencies: atinject-jsr330, avis, brailleutils, charactermanaj, classycle, commons-io, commons-javaflow, commons-jci, gap-radiroot, jebl2, jetty, libcommons-el-java, libcommons-jxpath-java, libjackson-json-java, libjogl2-java, libmicroba-java, libproxool-java, libregexp-java, mobile-atlas-creator, octave-econometrics, octave-linear-algebra, octave-odepkg, octave-optiminterp, rapidsvn, remotetea, ruby-rinku, tachyon, xhtmlrenderer.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues, but not all of them:

Patches submitted which have not made their way to the archive yet:

  • #820603 on viking by Alexis Bienvenüe: fix icon headers inclusion order.
  • #820661 on nullmailer by Alexis Bienvenüe: fix the order in which files are included in the static archive.
  • #820668 on sawfish by Alexis Bienvenüe: fix file ordering in theme archives, strip hostname and username from the config.h file, and honour SOURCE_DATE_EPOCH when creating the config.h file.
  • #820740 on bless by Alexis Bienvenüe: always use /bin/sh as shell.
  • #820742 on gmic by Alexis Bienvenüe: strip the build date from help messages.
  • #820809 on wsdl4j by Alexis Bienvenüe: use a plain text representation of the copyright character.
  • #820815 on freefem++ by Alexis Bienvenüe: fix the order in which files are included in the .edp files, and honour SOURCE_DATE_EPOCH when using the build date.
  • #820869 on pyexiv2 by Alexis Bienvenüe: honour the SOURCE_DATE_EPOCH environment variable through the ustrftime function, to get a reproducible copyright year.
  • #820932 on fim by Alexis Bienvenüe: fix the order in which files are joined in header files, strip the build date from the fim binary, make the embedded vim2html script honour the SOURCE_DATE_EPOCH variable when building the documentation, and force the language to English when using bison to make a grammar that is going to be parsed using English keywords.
  • #820990 on grib-api by Santiago Vila: always call dh-buildinfo.
diffoscope development

Zbigniew Jędrzejewski-Szmek noted in #820631 that diffoscope doesn't work properly when a file contains several cpio archives.

Package reviews

21 reviews have been added, 14 updated and 22 removed this week.

New issue found: timestamps_in_htm_by_gap.

Chris Lamb reported 10 new FTBFS issues.


The video and the slides from the talk "Reproducible builds ecosystem" at LibrePlanet 2016 have been published now.

This week's edition was written by Lunar and Holger Levsen. h01ger automated the maintenance and publishing of this weekly newsletter via git.

Categories: Elsewhere

Mediacurrent: New eBook: Intranets the Drupal Way

Planet Drupal - Wed, 20/04/2016 - 20:14

The Intranet has entered a new era where 78% of companies are running on open source software. Now, options for corporate Intranets are no longer confined to proprietary platforms.

Categories: Elsewhere

Drupal 6 security update for Views!

Planet Drupal - Wed, 20/04/2016 - 19:40

As you may know, Drupal 6 has reached End-of-Life (EOL) which means the Drupal Security Team is no longer doing Security Advisories or working on security patches for Drupal 6 core or contrib modules - but the Drupal 6 LTS vendors are and we're one of them!

Today, there is a Moderately Critical security release for Views to fix an Access Bypass vulnerability.

The Views module provides a flexible method for Drupal site designers to control how lists and tables of content, users, taxonomy terms and other data are presented.

The module doesn't sufficiently check handler access when returning the list of handlers from view_plugin_display::get_handlers(). The most critical code (access plugins and field output) is unaffected - only area handlers, the get_field_labels() method, token replacement, and some relationship handling are susceptible.

Download the patch for Views 6.x-2.x or Views 6.x-3.x!

If you have a Drupal 6 site using the Views module (probably most sites), we recommend you update immediately! We have already deployed the patch for all of our Drupal 6 Long-Term Support clients. :-)

If you'd like all your Drupal 6 modules to receive security updates and have the fixes deployed the same day they're released, please check out our D6LTS plans.

Note: if you use the myDropWizard module (totally free!), you'll be alerted to these and any future security updates, and will be able to use drush to install them (even though they won't necessarily have a release on

Categories: Elsewhere

OSTraining: Drupal 8.1 and What It Means for Drupal's Future

Planet Drupal - Wed, 20/04/2016 - 17:32

Today, Drupal 8.1 was officially released.

All the way back in 2014, we talked about the changes coming to Drupal and how the release cycle would allow for changes to be progressively added to Drupal.

At that time, it was estimated that a new version with new features could be released every 6 months. Keeping to that schedule for Drupal 8 has been problematic due to the size and scope of what they wanted to achieve, but they made it! 

Categories: Elsewhere

Wim Leers: Drupal 8.1: BigPipe as an experimental module

Planet Drupal - Wed, 20/04/2016 - 13:09

Today, Drupal 8.1 has been released and it includes BigPipe as an experimental module.

Six months ago, on the day of the release of Drupal 8, the BigPipe contrib module was released.

So BigPipe was first prototyped in contrib, then moved into core as an experimental module.

Experimental module?

Quoting d.o/core/experimental:

Experimental modules allow core contributors to iterate quickly on functionality that may be supported in an upcoming minor release and receive feedback, without needing to conform to the rigorous requirements for production versions of Drupal core.

Experimental modules allow site builders and contributed project authors to test out functionality that might eventually be included as a stable part of Drupal core.

With your help (in other words: by testing), we can help BigPipe “graduate” as a stable module in Drupal 8.2. This is the sort of module that needs wider testing because it changes how pages are delivered, so before it can be considered stable, it must be tested in as many circumstances as possible, including the most exotic ones.

(If your site offers personalization to end users, you are encouraged to enable BigPipe and report issues. There is zero risk of data loss. And when the environment — i.e. web server or (reverse) proxy — doesn’t support streaming, then BigPipe-delivered responses behave as if BigPipe was not installed. Nothing breaks, you just go back to the same perceived performance as before.)
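As a rough illustration of this delivery model (a hypothetical Python sketch, not Drupal's actual implementation), BigPipe-style streaming can be thought of as a generator that yields the cacheable page skeleton immediately and then yields each personalized fragment as it becomes ready, while the no-streaming fallback assembles the full page before sending - which is exactly "the same perceived performance as before":

```python
# Hypothetical sketch of BigPipe-style delivery; all names are
# illustrative and do not correspond to Drupal's real API.

def render_fragment(name):
    # Stand-in for slow, personalized rendering (e.g. a shopping cart).
    return f"<div id='{name}'>{name} content</div>"

def bigpipe_stream(skeleton, placeholders):
    """Yield the skeleton first, then one replacement per placeholder."""
    yield skeleton  # the fast, cacheable part goes out immediately
    for name in placeholders:
        # Each fragment streams as soon as it has been rendered.
        yield f"<script>replace('{name}', {render_fragment(name)!r})</script>"

def fallback_response(skeleton, placeholders):
    """No-streaming environments: behave as if BigPipe were not installed."""
    page = skeleton
    for name in placeholders:
        page = page.replace(f"<span data-placeholder='{name}'></span>",
                            render_fragment(name))
    return page

skeleton = "<html><span data-placeholder='cart'></span></html>"
chunks = list(bigpipe_stream(skeleton, ["cart"]))
```

In the streaming case the browser can paint the skeleton while fragments are still rendering; in the fallback case the final HTML is identical, it just arrives all at once.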

About 500 sites are currently using the contrib module. With the release of Drupal 8.1, hopefully thousands of sites will test it. [1] [2]

Please report any issues you encounter! Hopefully there won’t be many. I’d be very grateful to hear about success stories too — feel free to share those as issues too!


Of course, documentation is ready too:

What about the contrib module?

The BigPipe contrib module is still available for Drupal 8.0, and will remain available.

  • 1.0-beta1 was released on the same day as Drupal 8.0.0
  • 1.0-beta2 was released on the same day as Drupal 8.0.1, and made it feature-complete
  • 1.0-beta3 contained only improved documentation
  • 1.0-rc1 brought comprehensive test coverage, which was the last thing necessary for BigPipe to become a core-worthy module — the same day as the work continued on the core issue:
  • 1.0 was tagged today, on the same day as Drupal 8.1.0

Going forward, I’ll make sure to tag releases of the BigPipe contrib module matching Drupal 8.1 patch releases, if they contain BigPipe fixes/improvements. So, when Drupal 8.1.3 is released, BigPipe 1.3 for Drupal 8.0 will also be released. That makes it easy to keep things in sync.
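The tagging scheme described above is a straightforward mapping from core patch releases to contrib tags; a tiny hypothetical helper makes it explicit (the function name is illustrative):

```python
# Hypothetical helper expressing the release-sync scheme:
# Drupal core 8.1.z  ->  BigPipe contrib 1.z (for Drupal 8.0 sites).

def contrib_tag_for_core(core_version):
    """Map a Drupal 8.1.x core version to the matching contrib tag."""
    major, minor, patch = core_version.split(".")
    if (major, minor) != ("8", "1"):
        raise ValueError("scheme is only defined for Drupal 8.1.x releases")
    return f"1.{patch}"
```

So `contrib_tag_for_core("8.1.3")` yields `"1.3"`, matching the example in the post.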


When you upgrade from Drupal 8.0 to Drupal 8.1, and you were using the BigPipe module on your 8.0 site, then follow the instructions in the 8.1.0 release notes:

If you previously installed the BigPipe contributed module, you must uninstall and remove it before upgrading from Drupal 8.0.x to 8.1.x.

  1. Note there is also the BigPipe demo module (d.o/project/big_pipe_demo), which makes it easy to simulate the impact of BigPipe on your particular site. 

  2. There’s also a live demo: 

  • Acquia
  • Drupal
  • WPO
  • performance
Categories: Elsewhere

