Feed aggregator

Dirk Eddelbuettel: RcppEigen

Planet Debian - Fri, 25/09/2015 - 05:06

A bugfix release of RcppEigen is now on CRAN and in Debian. The NEWS file entry follows.

Changes in RcppEigen version (2015-09-23)
  • Corrected use of kitten() thanks to Grant Brown (#21)

  • Applied upstream change to protect against undefined behaviour with null pointers

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: Elsewhere

Steve McIntyre: Linaro VLANd v0.4

Planet Debian - Fri, 25/09/2015 - 02:44

VLANd is a python program intended to make it easy to manage port-based VLAN setups across multiple switches in a network. It is designed to be vendor-agnostic, with a clean pluggable driver API to allow for a wide range of different switches to be controlled together.

There's more information in the README file. I've just released v0.4, with a lot of changes included since the last release:

  • Large numbers of bugfixes and code cleanups
  • Code changes for integration with LAVA:
    • Added db.find_lowest_unused_vlan_tag()
    • create_vlan() with a tag of -1 will find and allocate the first unused tag automatically
  • Add port numbers as well as names to the ports database, to give human-recognisable references. See README.port-numbering for more details.
  • Add tracking of trunks, the inter-switch connections, needed for visualisation diagrams.
  • Add a simple http-based visualisation feature:
    • Generate network diagrams on-demand based on the information in the VLANd database, colour-coded to show port configuration
    • Generate a simple website to reference those diagrams.
  • Allow more ports to be seen on Catalyst switches
  • Add a systemd service file for vland

VLANd is Free Software, released under the GPL version 2 (or any later version). For now, grab it from git; tarballs will be coming shortly.

Categories: Elsewhere

Lars Wirzenius: FUUG grant for Obnam development

Planet Debian - Thu, 24/09/2015 - 21:24

I'm very pleased to say that the FUUG foundation in Finland has awarded me a grant to buy some hardware to help development of Obnam, my backup program. The announcement has more details in Finnish.

Categories: Elsewhere

Annertech: DrupalCon 2016 is Coming to Dublin!

Planet Drupal - Thu, 24/09/2015 - 19:00

Céad míle fáilte go Baile Átha Cliath (a hundred thousand welcomes to Dublin). DrupalCon is coming to Dublin. Yes, you read that right, DrupalCon 2016 will be in Dublin, and we at Annertech can't wait to see you there.

The Drupal Ireland community has been doing great work over the past few years - Drupal.ie was launched, Drupal Camp Dublin became Drupal Open Days Ireland, hundreds of Drupalists came to Drupal Dev Days in Dublin, DrupalCon Trivia Nights were organised and hosted in many cities, and now - at last - DrupalCon will be held in Dublin.

Categories: Elsewhere

Jonathan McDowell: New GPG key

Planet Debian - Thu, 24/09/2015 - 16:45

Just before I went to DebConf15 I got around to setting up my gnuk with the latest build (1.1.7), which supports 4K RSA keys. As a result I decided to generate a new certification only primary key, using a live CD on a non-networked host and ensuring the raw key was only ever used in this configuration. The intention is that in general I will use the key via the gnuk, ensuring no danger of leaking the key material.

I took part in various key signings at DebConf and the subsequent UK Debian BBQ, and finally today got round to dealing with the key slips I had accumulated. I’m sure I’ve missed some people off my signing list, but at least now the key should be embedded into the strong set of keys. Feel free to poke me next time you see me if you didn’t get mail from me with fresh signatures and you think you should have.

Key details are:

pub   4096R/0x21E278A66C28DBC0 2015-08-04 [expires: 2018-08-03]
      Key fingerprint = 3E0C FCDB 05A7 F665 AA18 CEFA 21E2 78A6 6C28 DBC0
uid                  [ full ] Jonathan McDowell <noodles@earth.li>

I have no reason to assume my old key (0x94FA372B2DA8B985) has been compromised, and for now I will continue to use that key. Also, for the new key I have not generated any subkeys as yet, which caff handles ok but emits a warning about unencrypted mail. Thanks to those of you who sent me signatures despite this.

[Update: I was asked about my setup for the key generation, in particular how I ensured enough entropy, given that it was a fresh boot and without networking there were limited entropy sources available to the machine. I made the decision that the machine’s TPM and the use of tpm-rng and rng-tools was sufficient (i.e. I didn’t worry overly about the TPM being compromised for the purposes of feeding additional information into the random pool). Alternative options would have been flashing the gnuk with the NeuG firmware or using my Entropy Key.]

Categories: Elsewhere

Dries Buytaert: The future of decoupled Drupal

Planet Drupal - Thu, 24/09/2015 - 16:43

Of all the trends in the web community, few are spreading more rapidly than decoupled (or "headless") content management systems (CMS). The evolution from websites to more interactive web applications and the need for multi-channel publishing suggest that moving to a decoupled architecture that leverages client-side frameworks is a logical next step for Drupal. For the purposes of this blog post, the term decoupled refers to a separation between the back end and one or more front ends instead of the traditional notion of service-oriented architecture.

Traditional ("monolithic") versus fully decoupled ("headless") architectural paradigm.

Decoupling your content management system enables front-end developers to fully control an application or site's rendered markup and user experience. Furthermore, the use of client-side frameworks helps developers give websites application-like behavior with smoother interactivity (there is never a need to hit refresh, because new data appears automatically), optimistic feedback (a response appears before the server has processed a user's query), and non-blocking user interfaces (the user can continue to interact with the application while portions are still loading).

Another advantage of a decoupled architecture is decoupled teams. Since front-end and back-end developers no longer need to understand the full breadth and depth of a monolithic architecture, they can work independently. With the proper preparation, decoupled teams can improve the velocity of a project.

Still, it's important to temper the hype around decoupled content management with balanced analysis. Before decoupling, you need to ask yourself if you're ready to do without functionality usually provided for free by the CMS, such as layout and display management, content previews, user interface (UI) localization, form display, accessibility, authentication, crucial security features such as XSS (cross-site scripting) and CSRF (cross-site request forgery) protection, and last but not least, performance. Many of these have to be rewritten from scratch, or can't be implemented at all, on the client-side. For many projects, building a decoupled application or site on top of a CMS will result in a crippling loss of critical functionality or skyrocketing costs to rebuild missing features.

To be clear, I'm not arguing against decoupled architectures per se. Rather, I believe that decoupling needs to be motivated through ample research. Since it is still quite early in the evolution of this architectural paradigm, we need to be conscious about how and in what scenarios we decouple. In this blog post, we examine different decoupled architectures, discuss why a fully decoupled architecture is not ideal for all use cases, and present "progressive decoupling", a better approach for which Drupal 8 is well-equipped. Today, Drupal 8 is ready to deliver pages quickly and offers developers the flexibility to enhance user experience through decoupled interactive components.

Fully decoupled is not usually the best solution

In a fully decoupled architecture, the theme layer is often ignored altogether, and many content management capabilities are lost, though many clients ingesting data are possible.

Traditionally, CMSes have focused on making websites rather than web applications, but the line between them continues to blur. For example, let's imagine we are building a site delivering content-rich curated music reviews alongside an interaction-rich ticketing interface for live shows. In the past, this ticketing interface would have been a multi-step flow, but here we aim to keep visitors on the same page as they browse and purchase ticket options.

To write and organize reviews, beyond basic content management, we need site building tools to assemble content and lay it out on a page. On the other hand, for a pleasant ticket purchase experience, we need seamless interactivity. From the end user's perspective, this means a continuous, uninterrupted experience using the application and best of all, no need to refresh the page.

Using Drupal to build a content-rich site delivering music reviews is very easy thanks to its content types and extensive editorial features. We can list, categorize, and write reviews by employing the user interfaces provided by Drupal. But because of Drupal's emphasis on server-side rendering rather than client-side rendering, Drupal alone does not yet satisfy our envisioned user experience. Meanwhile, off-the-shelf JavaScript MVC frameworks and native application frameworks are ill-suited for managing large amounts of content, due to the need to rebuild various content management and site building tools, and they can pay a performance penalty.

In a fully decoupled architecture, our losses from having to rebuild what Drupal gives us for free outweigh the wins from client-side frameworks. With a fully decoupled front end, we lose important aspects of the theme layer (which is tightly intertwined with Drupal APIs) such as theme hook suggestions, Twig templating, and, to varying degrees, the render pipeline. We lose content preview and nuances of creating, curating, and composing content such as layout management tools. We lose all of the advancements in accessibility and user experience that make Drupal 8 websites a great tool for both end users and site builders, like ARIA roles, improved system messages, and, most visibly, the fully integrated Drupal toolbar on non-administrative pages. Moreover, where there is elaborate interactivity, security vulnerabilities are easily introduced.

Progressive rendering with BigPipe

Fully decoupled architectures can also have a performance disadvantage. We want users to see the information they want as soon as possible (time to first paint) and be able to perform a desired action as soon as possible (time to interaction). A well-architected CMS will have the advantage over a client-side framework.

Much of the content we want to send for our music website is cacheable and mostly static, such as a navigation bar, recurring footer, and other unchanging components. Under a traditional page serving model, however, operations taking longer to execute can hold up simpler parts of the page in the pipeline and significantly delay the response to the client. In our music site example, this could be blocks containing songs a user listened to most in the last week and the song currently playing in a music player.

What we really need is a means of selectively processing those expensive components later and sending the less intensive bits earlier. With BigPipe, an approach for client-side dynamic content substitution, we can render our pages progressively, where the skeleton of the page loads first, then expensive components such as "songs I listened to most in the last week" or "currently playing" are sent to the browser later and fill out placeholders. This component-driven approach gives us the best of both worlds: non-blocking user interfaces with a brisk time to first interaction and rapid piecemeal loading of complete Drupal pages that leverage the theme layer.
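In Drupal 8's render API, a component opts into this deferred treatment through the #lazy_builder property, which lets BigPipe send a placeholder in the skeleton and stream the real content afterwards. A minimal sketch, assuming a hypothetical music.builder service with a mostPlayed() method and an $account variable holding the current user:

$build['most_played'] = [
  // Defer this expensive, personalized component: the callback runs after
  // the page skeleton has been sent, and its output replaces a placeholder.
  '#lazy_builder' => ['music.builder:mostPlayed', [$account->id()]],
  '#create_placeholder' => TRUE,
];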

Currently, Drupal 8 is the only CMS with BigPipe deeply integrated across the board for both core and contributed modules—they merely have to provide some cacheability metadata and need no awareness of the technical minutiae. Drupal 8's Dynamic Page Cache module ensures that the page skeleton is already cached and can thus be sent immediately. For example, as menus are identical for many users, we can reuse the menu for those users that can access the same menu links, so Dynamic Page Cache is able to cache those as part of the page skeleton. On the other hand, a personalized block with a user's most played songs is less effective to cache and will therefore be rendered after the page skeleton is sent. This cacheability is built into core, and every contributed module is required to provide the necessary metadata for it.
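That metadata is ordinary render-array data. Here is a sketch of the kind of cacheability information a module attaches so that Dynamic Page Cache can keep a menu in the page skeleton ($menu_markup is a stand-in variable; the context and tag shown follow core's naming conventions):

use Drupal\Core\Cache\Cache;

$build['main_menu'] = [
  '#markup' => $menu_markup,
  '#cache' => [
    // Users with the same permissions share one cached copy of the menu.
    'contexts' => ['user.permissions'],
    // Invalidated automatically when the menu configuration changes.
    'tags' => ['config:system.menu.main'],
    'max-age' => Cache::PERMANENT,
  ],
];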

For a fully decoupled site to load more rapidly than a highly personalized Drupal 8 site using BigPipe, you would need to reconstruct a great deal of Drupal's smart caching logic, store the cache on the client, and continuously synchronize the client cache with server data. In addition, parsing and executing JavaScript to generate HTML takes longer than simply downloading HTML, especially on mobile devices. As a result, you will need extremely well-tuned JavaScript to overcome this challenge.

Client-side frameworks encounter some critical and inescapable drawbacks of conducting all rendering on the client side. On slow connections such as on mobile devices, client-side rendering slows down performance, depletes batteries faster, and forces the user to wait. Because most developers test locally and not in real-world network conditions on actual devices, it's easy to forget that real risks of sluggishness and unreliability, especially due to spotty connectivity, continue to confront any fully decoupled site — Drupal or otherwise.

Progressive decoupling is the future

Under progressive decoupling, the CMS renderer still outputs the skeleton of the page.

As we've seen, fully decoupling eliminates crucial CMS functionality and BigPipe rendering. But what if we could decouple while still having both? What if we could keep things like layout management, security, and accessibility when decoupling while still enjoying all the benefits an interaction-rich single-page application would give us? More importantly, what if we could take advantage of BigPipe to leverage shortened times to interaction and lowered obstacles for heavily personalized content? The answer lies in decoupled components, or progressive decoupling. Instead of decoupling the entire page, why not decouple only portions of it, like individual blocks?

This component-driven decoupled architecture comes with big benefits over a fully decoupled architecture. Namely, traditional content management and workflow, display and layout management are still available to site builders. To return to our music website example, we can drag a block containing the songs a user listened to most in the past week into an arbitrarily located region on the page; a front-end developer can then infuse interactivity such that an account holder can play a song from that list or add it to a favorites list on the fly. Meanwhile, if a content creator wants to move the "most listened to in the past week" block from the right sidebar of the page to the left side of the page, she can do that with a few mouse clicks, rather than having to get a front-end developer involved.
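In code, such a component can be an ordinary block plugin that renders a server-side container and attaches a client-side application to it. A sketch, assuming a hypothetical music module whose most-played-app library is defined in its libraries file:

namespace Drupal\music\Plugin\Block;

use Drupal\Core\Block\BlockBase;

/**
 * A block whose markup is taken over by a client-side application.
 *
 * @Block(
 *   id = "most_played_block",
 *   admin_label = @Translation("Most played this week")
 * )
 */
class MostPlayedBlock extends BlockBase {

  public function build() {
    return [
      // Server-rendered container; the attached JavaScript adds the
      // play-and-favorite interactivity on the client.
      '#markup' => '<div id="most-played-app"></div>',
      '#attached' => [
        'library' => ['music/most-played-app'],
      ],
      '#cache' => [
        // The list is personal, so vary the cached copy per user.
        'contexts' => ['user'],
      ],
    ];
  }

}

Because it is a regular block, site builders keep their placement tools; only what happens inside the block's container is decoupled.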

We call this approach progressive decoupling, because you decide how much of the page and which components to decouple from Drupal's front end or dedicated Drupal renderings. For this reason, progressively decoupled Drupal brings decoupling to the assembled web, something I'm very passionate about, because empowering site builders and administrators to build great websites without (much) programming is important. Drupal is uniquely capable for this, precisely because it allows for varying degrees of decoupledness.

Drupal 8 is ahead of the competition and is your go-to platform for building decoupled websites. It comes with a built-in REST API for all content, a system to query data from Drupal (e.g. REST exports in the Views module), and a BigPipe implementation. It will be even better once the full range of contributed modules dealing with REST is available to developers.
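To illustrate the built-in REST API: with the core REST module enabled and the json format allowed for content, a client can fetch a node as structured data. The host and node ID below are made up, and this sketch skips authentication and error handling:

$json = file_get_contents('https://example.com/node/1?_format=json');
$node = json_decode($json, TRUE);
// Core serializes each field as an array of value items.
print $node['title'][0]['value'];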

With Drupal 8, an exciting spectrum opens up beyond just the two extremes of fully decoupling and traditional Drupal. As we've seen, fully decoupling opens the door to a Drupal implementation providing content to a broad range of clients ("many-headed" Drupal), such as mobile applications, interactive kiosks, and television sets. But progressive decoupling goes a step further, since you can fully decouple and progressively decouple a single Drupal site with all the tools to assemble content still intact.

What is next for decoupling in Drupal?

In the case of many-headed Drupal, fully decoupled applications can live alongside progressively decoupled pages, whose skeletons are rendered through the CMS.

Although Drupal has made huge strides in the last few years and is ahead of the competition, there is still work to do. Traditional Drupal, fully decoupled Drupal, and progressively decoupled Drupal can all coexist. With improved decoupling tools, Drupal can be an even better hub for many heads: a collection of applications and sites linked and supported by a single backend.

One of the most difficult issues facing front-end developers is network performance with REST. For example, a REST query fetching multiple content entities requires a round trip to the server for each individual entity, each of which may also depend on relational data such as referenced entities that require their own individual requests to the server. Then, to gather the required data, each REST query needs a corresponding back-end bootstrap, which can be quite hefty. Such a large stack of round trips and bootstraps can be prohibitively expensive.

Currently, the only way to solve this problem is to create custom API endpoints (e.g. in Views) that comprehensively provide all the data needed by the client in a single response to minimize round trips. However, managing endpoints for each respective client can quickly spiral out of control given the range of different information each client might demand and number of deployed versions in the wild. On the other hand, if you are relying on a single endpoint, updating it requires modifying all the JavaScript that relies on that endpoint and ingests its response. These endpoint management issues can force the front-end developer to depend on work the back-end developer must complete, which creates bottlenecks in decoupled teams.

Beyond performance and endpoint management, developer experience also suffers under a RESTfully decoupled Drupal architecture. For each individual request, there is a single corresponding schema for the response that is immutable. In order to retrieve differently formatted or filtered data according to your needs, you must create a second variant of a given endpoint or provision a new endpoint altogether. Front-end developers concerned about performance limitations desire full control from the client side over what schema comes back and want to avoid working with the server side.

Decoupling thus reveals the need to investigate better ways of exposing data to the client side. As our pages and applications become ever more component-driven, the complexity of the queries we must perform and our demands on their performance increase. What if we could extract only the data we need by writing queries that are efficient and performant by default? Sebastian Siemssen proposes using Facebook's GraphQL (see demo video and project on drupal.org) due to the client's explicit definition of what schema to return and the use of consolidated queries which break apart into smaller calls and recombine for the response, thereby minimizing round trips.
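To make the contrast concrete, here is what issuing a consolidated query might look like from a client; the /graphql endpoint, field names, and schema are invented for illustration, but the point stands: the client states the exact shape it needs, and one round trip replaces several per-entity REST requests.

$query = '{
  review(id: 42) {
    title
    author { name }
    relatedShows { venue date }
  }
}';
// One request returns exactly the requested shape, nothing more.
$json = file_get_contents('https://example.com/graphql?query=' . urlencode($query));
$data = json_decode($json, TRUE);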

I like GraphQL's approach for both fully and progressively decoupled front-ends. It means that decoupled sites will enjoy better overall performance and give front-end developers a better experience: very few round trips to the server, no need for custom endpoints, no need for versioning, and no dependence on back-end developers. (I even wonder if GraphQL could be the basis for a future version of Views.)

In addition, work on rendering improvements such as BigPipe should continue in order to explore the possibilities of a more progressive rendering system. Work is currently in progress to accelerate Drupal's perceived page loads where users consume expensive or personalized content. As for tools within the administration layer and in the contributed space, further progress in Drupal 8 is also necessary on layout management tools such as Panels and block placement that would make decoupling at a granular level much easier.

A great deal is being done, but we could always use more help; please get in touch if you're interested in contributing code or funding our work. After all, the potential impact of progressive rendering on the future of decoupled Drupal is huge.


Conclusion

Decoupled content management is a rapidly evolving trend that has the potential to upend existing architectural paradigms. Nonetheless, we need to be cautious when approaching decoupled projects, given the potential loss of functionality and performance.

With Drupal 8, we can use progressive rendering to deliver a Drupal-driven skeleton of the page first and fill out the increasingly expensive portions of the page afterwards. We can then selectively delineate which parts of the page to decouple once the page load is complete. This concept of progressive decoupling offers the best of both worlds. Layout management, security, and content previews are unaffected, and within page components, front-end application logic can work its magic.

Drupal 8 is now a state-of-the-art platform for building projects that lie at the nexus between traditional content-rich websites and modern interaction-rich web applications. As always, while remarkable strides have been made with Drupal 8, more work remains.

Special thanks to Preston So and Wim Leers at Acquia for contributions to this blog post and to Moshe Weitzman, Kevin O'Leary, Christian Yates, David Hwang, Michael Pezzi and Erik Baldwin for their feedback during its writing.

Categories: Elsewhere

Petter Reinholdtsen: The life and death of a laptop battery

Planet Debian - Thu, 24/09/2015 - 16:00

When I get a new laptop, the battery life time at the start is OK. But this does not last. The last few laptops gave me the feeling that within a year, the life time is just a fraction of what it used to be, and it slowly becomes painful to use the laptop without power connected all the time. Because of this, when I got a new Thinkpad X230 laptop about two years ago, I decided to monitor its battery state to have more hard facts when the battery started to fail.

First I tried to find a sensible Debian package to record the battery status, assuming that this must be a problem already handled by someone else. I found battery-stats, which collects statistics from the battery, but it was completely broken. I sent a few suggestions to the maintainer, but decided to write my own collector as a shell script while I waited for feedback from him. Via a blog post about the battery development on a MacBook Air I also discovered batlog, not available in Debian.

I started my collector 2013-07-15, and it has been collecting battery stats ever since. Now my /var/log/hjemmenett-battery-status.log file contains around 115,000 measurements, from the time the battery was working great until now, when it is unable to charge above 7% of original capacity. My collector shell script is quite simple and looks like this:

#!/bin/sh
# Inspired by
# http://www.ifweassume.com/2013/08/the-de-evolution-of-my-laptop-battery.html
# See also
# http://blog.sleeplessbeastie.eu/2013/01/02/debian-how-to-monitor-battery-capacity/
logfile=/var/log/hjemmenett-battery-status.log

files="manufacturer model_name technology serial_number \
  energy_full energy_full_design energy_now cycle_count status"

if [ ! -e "$logfile" ] ; then
    (
      printf "timestamp,"
      for f in $files; do
        printf "%s," $f
      done
      echo
    ) > "$logfile"
fi

log_battery() {
    # Print complete message in one echo call, to avoid race condition
    # when several log processes run in parallel.
    msg=$(printf "%s," $(date +%s); \
      for f in $files; do \
        printf "%s," $(cat $f); \
      done)
    echo "$msg"
}

cd /sys/class/power_supply

for bat in BAT*; do
    (cd $bat && log_battery >> "$logfile")
done

The script is called when the power management system detects a change in the power status (power plug in or out), and when going into and out of hibernation and suspend. In addition, it collects a value every 10 minutes. This makes it possible for me to know when the battery is discharging, when it is charging, and how the maximum charge changes over time. The code for the Debian package is now available on github.

The collected log file looks like this:

timestamp,manufacturer,model_name,technology,serial_number,energy_full,energy_full_design,energy_now,cycle_count,status,
1376591133,LGC,45N1025,Li-ion,974,62800000,62160000,39050000,0,Discharging,
[...]
1443090528,LGC,45N1025,Li-ion,974,4900000,62160000,4900000,0,Full,
1443090601,LGC,45N1025,Li-ion,974,4900000,62160000,4900000,0,Full,

I wrote a small script to create a graph of the charge development over time. The graph depicted above shows the slow death of my laptop battery.

But why is this happening? Why are my laptop batteries always dying after a year or two, while the batteries of space probes and satellites keep working year after year? If we are to believe Battery University, the cause is me charging the battery whenever I have a chance, and the fix is to not charge the Lithium-ion batteries to 100% all the time, but to stay below 90% of full charge most of the time. I've been told that the Tesla electric cars limit the charge of their batteries to 80%, with the option to charge to 100% when preparing for a longer trip (not that I would want a car like a Tesla where the right to privacy is abandoned, but that is another story), which I guess is the option we should have for laptops on Linux too.

Is there a good and generic way with Linux to tell the battery to stop charging at 80%, unless requested to charge to 100% once in preparation for a longer trip? I found one recipe on askubuntu for Ubuntu to limit charging on Thinkpad to 80%, but could not get it to work (kernel module refused to load).

I wonder why the battery capacity was reported to be more than 100% at the start. I also wonder why the "full capacity" sometimes increases, and if it is possible to repeat the process to get the battery back to design capacity. And I wonder if the discharge and charge speeds change over time, or if they stay the same. I have not yet tried to write a tool to calculate the derivative values of the battery level, but suspect some interesting insights might be learned from those.

Update 2015-09-24: I got a tip to install the acpi-call-dkms and tlp (unfortunately missing in Debian stable) packages instead of the tp-smapi-dkms package I had tried to use initially, and to use 'tlp setcharge 40 80' to change when charging starts and stops. I've done so now, but expect my existing battery is toast and needs to be replaced. The proposal is unfortunately Thinkpad-specific.

Categories: Elsewhere

Lullabot: What Happened to Hook_Menu in Drupal 8?

Planet Drupal - Thu, 24/09/2015 - 15:46

In Drupal 7 and earlier versions hook_menu has been the Swiss Army knife of hooks. It does a little bit of everything: page paths, menu callbacks, tabs and local tasks, contextual links, access control, arguments and parameters, form callbacks, and on top of all that it even sets up menu items! In my book it’s probably the most-used hook of all. I don’t know if I’ve ever written a module that didn’t implement hook_menu.

But things have changed in Drupal 8. Hook_menu is gone and now all these tasks are managed separately using a system of YAML files that provide metadata about each item and corresponding PHP classes that provide the underlying logic.

The new system makes lots of sense, but figuring out how to make the switch can be confusing. To make things worse, the API has changed a few times over the long cycle of Drupal 8 development, so there is documentation out in the wild that is now incorrect. This article explains how things work now, and it shouldn't change any more.

I’m going to list some of the situations I ran into while porting a custom module to Drupal 8 and show before and after code examples of what happened to my old hook_menu items.

Custom Pages

One of the simplest uses of hook_menu is to set up a custom page at a given path. You'd use this for a classic "Hello World" module. In Drupal 8, paths are managed using a MODULE.routing.yml file to describe each path (or ‘route’) and a corresponding controller class that extends a base controller, which contains the logic of what happens on that path. Each controller class lives in its own file, where the file is named to match the class name. This controller logic might have lived in a separate MODULE.pages.inc file in Drupal 7.

In Drupal 7 the code might look like this:

function example_menu() {
  $items = array();
  $items['main'] = array(
    'title' => 'Main Page',
    'page callback' => 'example_main_page',
    'access arguments' => array('access content'),
    'type' => MENU_NORMAL_ITEM,
    'file' => 'MODULE.pages.inc',
  );
  return $items;
}

function example_main_page() {
  return t('Something goes here');
}

In Drupal 8 we put the route information into a file called MODULE.routing.yml. Routes have names that don't necessarily have anything to do with their paths. They are just unique identifiers. They should be prefixed with your module name to avoid name clashes. You may see documentation that talks about using _content or _form instead of _controller in this YAML file, but that was later changed. You should now always use _controller to identify the related controller.

example.main_page_controller:
  path: '/main'
  defaults:
    _controller: '\Drupal\example\Controller\MainPageController::mainPage'
    _title: 'Main Page'
  requirements:
    _permission: 'access content'

Note that we now use a preceding slash on paths! In Drupal 7 the path would have been main, and in Drupal 8 it is /main! I keep forgetting that and it is a common source of problems as I make the transition. It’s the first thing to check if your new code isn’t working!

The page callback goes into a controller class. In this example the controller class is named MainPageController.php, and is located at MODULE/src/Controller/MainPageController.php. The file name should match the class name of the controller, and all your module’s controllers should be in that /src/Controller directory. That location is dictated by the PSR-4 standard that Drupal has adopted. Basically, anything that is located in the expected place in the ‘/src’ directory will be autoloaded when needed without using module_load_include() or listing file locations in the .info file, as we had to do in Drupal 7.

The method used inside the controller to manage this route can have any name; mainPage is an arbitrary choice in this example. The method name in the controller file should match the YAML file, where it is described as CLASS_NAME::METHOD. Note that the Contains line in the class @file documentation matches the _controller entry in the YAML file above.

A controller can manage one or more routes, as long as each has a method for its callback and its own entry in the YAML file. For instance, the core NodeController manages four of the routes listed in node.routing.yml.

The controller should always return a render array, not text or HTML, another change from Drupal 7.

Translation is available within the controller as $this->t() instead of t(). This works because ControllerBase has added the StringTranslationTrait. There's a good article about how PHP Traits like translation work in Drupal 8 on Drupalize.Me.

/**
 * @file
 * Contains \Drupal\example\Controller\MainPageController.
 */

namespace Drupal\example\Controller;

use Drupal\Core\Controller\ControllerBase;

class MainPageController extends ControllerBase {

  public function mainPage() {
    return [
      '#markup' => $this->t('Something goes here!'),
    ];
  }

}

Paths With Arguments

Some paths need additional arguments or parameters. If my page had a couple extra parameters it would look like this in Drupal 7:

function example_menu() {
  $items = array();
  $items['main/first/second'] = array(
    'title' => 'Main Page',
    'page callback' => 'example_main_page',
    'page arguments' => array(1, 2),
    'access arguments' => array('access content'),
    'type' => MENU_NORMAL_ITEM,
  );
  return $items;
}

function example_main_page($first, $second) {
  return t('Something goes here');
}

In Drupal 8 the YAML file would be adjusted to look like this (adding the parameters to the path):

example.main_page_controller:
  path: '/main/{first}/{second}'
  defaults:
    _controller: '\Drupal\example\Controller\MainPageController::mainPage'
    _title: 'Main Page'
  requirements:
    _permission: 'access content'

The controller then looks like this (showing the parameters in the function signature):

/**
 * @file
 * Contains \Drupal\example\Controller\MainPageController.
 */

namespace Drupal\example\Controller;

use Drupal\Core\Controller\ControllerBase;

class MainPageController extends ControllerBase {

  public function mainPage($first, $second) {
    // Do something with $first and $second.
    return [
      '#markup' => $this->t('Something goes here!'),
    ];
  }

}

Obviously anything in the path could be altered by a user so you’ll want to test for valid values and otherwise ensure that these values are safe to use. I can’t tell if the system does any sanitization of these values or if this is a straight pass-through of whatever is in the url, so I’d probably assume that I need to type hint and sanitize these values as necessary for my code to work.

Paths With Optional Arguments

The above code will work correctly only for that specific path, with both parameters. Neither the path /main, nor /main/first will work, only /main/first/second. If you want the parameters to be optional, so /main, /main/first, and /main/first/second are all valid paths, you need to make some changes to the YAML file.

By adding the arguments to the defaults section you are telling the controller to treat the base path as the main route and the two additional parameters as path alternatives. You are also setting the default value for the parameters. The empty value says they are optional, or you could give them a fixed default value to be used if they are not present in the url.

example.main_page_controller:
  path: '/main/{first}/{second}'
  defaults:
    _controller: '\Drupal\example\Controller\MainPageController::mainPage'
    _title: 'Main Page'
    first: ''
    second: ''
  requirements:
    _permission: 'access content'

Restricting Parameters

Once you set up parameters you probably should also provide information about what values will be allowed for them. You can do this by adding some more information to the YAML file. The example below indicates that $first can only contain the values ‘Y’ or ‘N’, and $second must be a number. Any parameters that don’t match these rules will return a 404. Basically the code is expecting to evaluate a regular expression to determine if the path is valid.

See Symfony documentation for lots more information about configuring routes and route requirements.

example.main_page_controller:
  path: '/main/{first}/{second}'
  defaults:
    _controller: '\Drupal\example\Controller\MainPageController::mainPage'
    _title: 'Main Page'
    first: ''
    second: ''
  requirements:
    _permission: 'access content'
    first: Y|N
    second: \d+

Entity Parameters

As in Drupal 7, when creating a route that has an entity id you can set it up so the system will automatically pass the entity object to the callback instead of just the id. This is called ‘upcasting’. In Drupal 7 we did this by using %node instead of %. In Drupal 8 you just need to use the name of the entity type as the parameter name, for instance {node} or {user}.

example.main_page_controller:
  path: '/node/{node}'
  defaults:
    _controller: '\Drupal\example\Controller\MainPageController::mainPage'
    _title: 'Node Page'
  requirements:
    _permission: 'access content'
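The matching controller method then receives the loaded entity rather than the raw ID. A sketch building on the earlier controller (the type hint is optional but documents the upcasting):

namespace Drupal\example\Controller;

use Drupal\Core\Controller\ControllerBase;
use Drupal\node\NodeInterface;

class MainPageController extends ControllerBase {

  // Because the route placeholder is named {node}, Drupal passes a fully
  // loaded node object instead of the numeric ID from the URL.
  public function mainPage(NodeInterface $node) {
    return [
      '#markup' => $this->t('Viewing: @title', ['@title' => $node->getTitle()]),
    ];
  }

}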

This obviously means you should be careful how you name your custom parameters to avoid accidentally getting an object when you didn’t expect it. Treat entity type names as reserved words that should not be used for other parameters. Or maybe even add a prefix to custom parameters to ensure they won’t collide with current or future entity types or other automatic mapping.

JSON Callbacks

All the above code will create HTML at the specified path. Your render array will be converted to HTML automatically by the system. But what if you wanted that path to display JSON instead? I had trouble finding any documentation about how to do that. There is some old documentation that indicates you need to add _format: json to the YAML file in the requirements section, but that is not required unless you want to provide alternate formats at the same path.

Create the array of values you want to return and then return it as a JsonResponse object. Be sure to add "use Symfony\Component\HttpFoundation\JsonResponse;" at the top of your controller file so it will be available.

/**
 * @file
 * Contains \Drupal\example\Controller\MainPageController.
 */

namespace Drupal\example\Controller;

use Drupal\Core\Controller\ControllerBase;
use Symfony\Component\HttpFoundation\JsonResponse;

class MainPageController extends ControllerBase {

  public function mainPage() {
    // Create key/value array.
    $return = array();
    return new JsonResponse($return);
  }

}

Access Control

Hook_menu() also manages access control. Access control is now handled by the MODULE.routing.yml file. There are various ways to control access:

Allow access by anyone to this path:

example.main_page_controller:
  path: '/main'
  requirements:
    _access: 'TRUE'

Limit access to users with ‘access content’ permission:

example.main_page_controller:
  path: '/main'
  requirements:
    _permission: 'access content'

Limit access to users with the ‘admin’ role:

example.main_page_controller:
  path: '/main'
  requirements:
    _role: 'admin'

Limit access to users who have ‘edit’ permission on an entity (when the entity is provided in the path):

example.main_page_controller:
  path: '/node/{node}'
  requirements:
    _entity_access: 'node.edit'

See Drupal.org documentation for more details about setting up access control in your MODULE.routing.yml file.


Altering Existing Routes

So what if a route already exists (created by core or some other module) and you want to alter something about it? In Drupal 7 that was done with hook_menu_alter, but that hook is also removed in Drupal 8. It's a little more complicated now. The simplest example in core I could find was in the Node module, which alters a route created by the System module.

A class file at MODULE/src/Routing/CLASSNAME.php extends RouteSubscriberBase and looks like the following. It finds the route it wants to alter using the alterRoutes() method and changes it as necessary. You can see that the values that are being altered map to lines in the original MODULE.routing.yml file for this entry.

/**
 * @file
 * Contains \Drupal\node\Routing\RouteSubscriber.
 */

namespace Drupal\node\Routing;

use Drupal\Core\Routing\RouteSubscriberBase;
use Symfony\Component\Routing\RouteCollection;

/**
 * Listens to the dynamic route events.
 */
class RouteSubscriber extends RouteSubscriberBase {

  /**
   * {@inheritdoc}
   */
  protected function alterRoutes(RouteCollection $collection) {
    // As nodes are the primary type of content, the node listing should be
    // easily available. In order to do that, override admin/content to show
    // a node listing instead of the path's child links.
    $route = $collection->get('system.admin_content');
    if ($route) {
      $route->setDefaults(array(
        '_title' => 'Content',
        '_entity_list' => 'node',
      ));
      $route->setRequirements(array(
        '_permission' => 'access content overview',
      ));
    }
  }

}

To wire up the menu_alter there is also a MODULE.services.yml file with an entry that points to the class that does the work:

services:
  node.route_subscriber:
    class: Drupal\node\Routing\RouteSubscriber
    tags:
      - { name: event_subscriber }

Many core modules put their RouteSubscriber class in a different location: MODULE/src/EventSubscriber/CLASSNAME.php instead of MODULE/src/Routing/CLASSNAME.php. I haven’t been able to figure out why you would use one location over the other.

Altering routes and creating dynamic routes are complicated topics that are really beyond the scope of this article. There are more complex examples in the Field UI and Views modules in core.

And More!

And these are still only some of the things that are done in hook_menu in Drupal 7 that need to be transformed to Drupal 8. Hook_menu is also used for creating menu items, local tasks (tabs), contextual links, and form callbacks. I’ll dive into the Drupal 8 versions of some of those in a later article.

Categories: Elsewhere

InternetDevels: Drupal 8 development: useful tips

Planet Drupal - Thu, 24/09/2015 - 15:34

Hello, everyone! The release of Drupal 8 is almost here, but its beta version is already available for use. So let's explore Drupal 8 together.

Read more
Categories: Elsewhere

Matt Glaman: Fixing rotated images uploaded to Drupal from an iPhone

Planet Drupal - Thu, 24/09/2015 - 15:16

iPhones 4 and up store images in landscape mode and use EXIF data to provide proper rotation when viewed. This is a bit quirky as not all desktop browsers provide fixes, or they may not be streamlined. I remember my old project manager telling me their images were showing up "flipped to the side" during mobile QA testing. Sure enough, when the image was embedded in HTML it was cocked to the side - however, when viewed directly in the browser or on the desktop it was fine. What? Luckily, through some Google-fu I stumbled upon a great blog post detailing how this happens.

I am guessing you landed here from a Google search and want to solve this problem. You are in luck - check out the project. Originally it spawned from File Entity, but Dave Reid suggested it's better as a standalone module so all files can be fixed - not just sites using File Entity. Typically Drupal takes a "manipulate on render" approach. This module does not. The image will be manipulated on upload to the proper rotation. Here is the reason: what if you want to display the original file and not a derivative? That is going to be embedded in an image tag and probably not render right. Secondly, one would have to make sure every single image style added this filter. There is enough button clicking when setting up a Drupal site.
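For the curious, the underlying technique is straightforward with PHP's exif and GD extensions. This is only an illustrative sketch of the approach, not the module's actual code, and it handles only JPEGs and the three common orientation values:

function example_fix_exif_orientation($path) {
  $exif = @exif_read_data($path);
  if (empty($exif['Orientation'])) {
    return;
  }
  $image = imagecreatefromjpeg($path);
  switch ($exif['Orientation']) {
    case 3:
      $image = imagerotate($image, 180, 0);
      break;
    case 6:
      // imagerotate() turns counter-clockwise, so -90 gives a 90° clockwise turn.
      $image = imagerotate($image, -90, 0);
      break;
    case 8:
      $image = imagerotate($image, 90, 0);
      break;
  }
  // Overwrite the upload with a physically rotated copy, so every consumer
  // of the original file sees the correct orientation.
  imagejpeg($image, $path, 90);
  imagedestroy($image);
}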

If you would like to give it a test, check out the example files repository from the aforementioned blog: https://github.com/recurser/exif-orientation-examples.

I also would like to note that ImageCache Actions provides this functionality as well, but as a submodule and as an image filter. I wish I could remember who pointed this out; it was discovered a few months after the project. But, again, with my previous arguments, a filter does not cut it.

Categories: Elsewhere

Joachim Breitner: The Incredible Proof Machine

Planet Debian - Thu, 24/09/2015 - 14:14

In a few weeks, I will have the opportunity to offer a weekend workshop to selected and motivated high school students1 on a topic of my choice. My idea is to tell them something about logic, proofs, and the joy of searching and finding proofs, and the gratification of irrevocable truths.

While proving things on paper is already quite nice, it is much more fun to use an interactive theorem prover, such as Isabelle, Coq or Agda: You get immediate feedback, you can experiment and play around if you are stuck, and you get lots of small successes. Someone2 once called interactive theorem proving “the worlds most geekiest videogame”.

Unfortunately, I don’t think one can get high school students without any prior knowledge in logic, or programming, or fancy mathematical symbols, to do something meaningful with a system like Isabelle, so I need something that is (much) easier to use. I always had this idea in the back of my head that proving is not so much about writing text (as in “normally written” proofs) or programs (as in Agda) or labeled statements (as in Hilbert-style proofs), but rather something involving facts that I have proven so far floating around freely, and ways to combine these facts into new facts, without the need to name them, or put them in a particular order or sequence. In a way, I’m looking for LabVIEW wrestled through the Curry-Howard isomorphism. Something like this:

A proof of implication currying

So I set out, rounded up a few contributors (Thanks!), implemented this, and now I proudly present: The Incredible Proof Machine3

This interactive theorem prover allows you to perform proofs purely by dragging blocks (representing proof steps) onto the paper and connecting them properly. There is no need to learn syntax, and hence no frustration about getting that wrong. Furthermore, it comes with a number of example tasks to experiment with, so you can simply see it as a challenging computer game and work through them one by one, learning something about the logical connectives and how they work as you go.

For the actual workshop, my plan is to let the students first try to solve the tasks of one session on their own, let them draw their own conclusions and come up with an idea of what they just did, and then deliver an explanation of the logical meaning of what they did.

The implementation is heavily influenced by Isabelle: The software does not know anything about, say, conjunction (∧) and implication (→). To the core, everything is but an untyped lambda expression, and when two blocks are connected, it does unification4 of the proposition present on either side. This general framework is then instantiated by specifying the basic rules (or axioms) in a descriptive manner. It is quite feasible to implement other logics or formal systems on top of this as well.

Another influence of Isabelle is the non-linear editing: You neither have to create the proof in a particular order nor have to manually manage a “proof focus”. Instead, you can edit any bit of the proof at any time, and the system checks all of it continuously.

As always, I am keen on feedback. Also, if you want to use this for your own teaching or experimenting needs, let me know. We have a mailing list for the project, the code is on GitHub, where you can also file bug reports and feature requests. Contributions are welcome! All aspects of the logic are implemented in Haskell and compiled to JavaScript using GHCJS, the UI is plain hand-written and messy JavaScript code, using JointJS to handle the graph interaction.

Obviously, there is still plenty that can be done to improve the machine. In particular, the ability to create your own proof blocks, such as proof by contradiction, prove them to be valid and then use them in further proofs, is currently being worked on. And while the page will store your current progress, including all proofs you create, in your browser, it needs better ways to save, load and share tasks, blocks and proofs. Also, we’d like to add some gamification, i.e. achievements (“First proof by contradiction”, “50 theorems proven”), statistics, maybe a “share theorem on twitter” button. As the UI becomes more complicated, I’d like to investigate moving more of it into the Haskell world and use Functional Reactive Programming, i.e. Ryan Trinkle’s reflex, to stay sane.

Customers who liked The Incredible Proof Machine might also like these artifacts, that I found while looking whether something like this exists:

  • Easyprove, an interactive tool to create textual proofs by clicking on rules.
  • Domino On Acid represents natural deduction rules in propositional logic with → and ⊥ as a game of dominoes.
  • Proofscape visualizes the dependencies between proofs as graphs, i.e. it operates on a higher level than The Incredible Proof Machine.
  • Proofmood is a nice interactive interface to conduct proofs in Fitch-style.
  • Proof-Game represents proofs trees in a sequent calculus with boxes with different shapes that have to match.
  • JAPE is an editor for proofs in a number of traditional proof styles. (Thanks to Alfio Martini for the pointer.)
  • Logitext, written by Edward Z. Yang, is an online tool to create proof trees in sequent style, with a slick interface, and is even backed by Coq! (Thanks to Lev Lamberov for the pointer.)
  • Carnap is similar in implementation to The Incredible Proof Machine (logical core in Haskell, generic unification-based solver). It currently lets you edit proof trees, but there are plans to create something more visual.
  1. Students with migration background supported by the START scholarship

  2. Does anyone know the reference?

  3. We almost named it “Proofcraft”, which would be a name our current Minecraft-wild youth would appreciate, but it is already taken by Gerwin Klein’s blog. Also, the irony of a theorem prover being in-credible is worth something.

  4. Luckily, two decades ago, Tobias Nipkow published a nice implementation of higher order pattern unification as ML code, which I transliterated to Haskell for this project.

Categories: Elsewhere

Amazee Labs: DrupalCon Barcelona 2015 - Day 3

Planet Drupal - Thu, 24/09/2015 - 13:23
By Bastian Widmer

It’s Wednesday, which is awesome because I delivered my talk yesterday (please find all the resources here) and I don’t need to get up as early as on the first day of the conference.

The day started off with a short stroll to the conference center, picking up some Amazees along the way.

Then the day continued with a Keynote, this time by Natalie Nahai. She spoke about the 3 xxx brain.

After the Keynote I dropped by our amazing booth and got myself some coffee (yes, we have a coffee machine at our booth! #CoffeeOps). Since I was chairing the DevOps track, I tried to attend most of the sessions to see if the guidelines the team created are being met in real life (yes, they are).

Claudine Brändle and Anna Hanchar - Site building great editorial experience

Claudine and Anna talked about how to build Drupal sites that offer a much better editorial experience with a few simple tricks and guidelines.

It wasn’t just Claudine's first presentation at a DrupalCon - it was also her birthday, and the Amazees organized a very special surprise for her with some help from Jeffrey "Jam" McGuire.

Zequi Vázquez - Drupal Extreme Scaling

A multisite with 30’000 sites, availability close to 99.999%, high performance, and the lowest possible cost - and those requirements need to be met by a team of only 3 people. At first that sounds like a nightmare for anyone working in operations.

Zequi and his team went to great lengths in turning this nightmare into a DevOps success story by leveraging AWS, Mesos, Marathon and other tools found in highly sophisticated setups.

Zequi’s slides can be found on Slideshare and on YouTube.

In the meantime our fellow friend Adam Juran was having some drush problems, which we fixed together over coffee.

Jon Pugh - Hassle-free Hosting and Testing with DevShop & Behat

Jon showed how easy it is to set up hosting environments with DevShop, which is based on Aegir. I didn’t know much about it before and was amazed at how complete and easy to set up the whole system is.

Mark Sonnabaum - Introduction to R and Exploratory Graphics

If you are working with numbers, plots and statistics, chances are high that you have heard about the R language already. Mark works as a performance engineer and has deep knowledge of systems (you can discuss optimizing CPU caches with him, which few people can talk about). He explained the basics of the R language, and I saw that I probably need to look into it again because it has evolved a lot since I last used it around two years ago. It’s definitely worth the time.

DevOps Meetup

DevOps is about breaking down silos and removing borders, which makes it clear to me that we can’t hold a conference of specialists in a foreign country without trying to get in touch with the local community - something we have tried to do for quite a few conferences now. After the last session, a group of around 20 Drupalistas headed out to the offices of InfoJob to meet the local DevOps community of Barcelona. We had one short session held by Kristof van Tomme, where he talked about his approach to applying Lean and DevOps principles to a whole organisation. We then switched over to an open discussion and ended up with a lot of deep-reaching topics like database optimisation and the CAP theorem. After the discussions we headed out for dinner together and networked.

I’d like to thank all the attendees of the meetup and our local contact Ignasi Fosch for making this happen. Seeing that there are like-minded people in pretty much every city you visit is very empowering.


Categories: Elsewhere

Matthew Garrett: Filling in the holes in Linux boot chain measurement, and the TPM measurement log

Planet Debian - Thu, 24/09/2015 - 03:21
When I wrote about TPM attestation via 2FA, I mentioned that you needed a bootloader that actually performed measurement. I've now written some patches for Shim and Grub that do so.

The Shim code does a couple of things. The obvious one is to measure the second-stage bootloader into PCR 9. The perhaps less expected one is to measure the contents of the MokList and MokSBState UEFI variables into PCR 14. This means that if you're happy simply running a system with your own set of signing keys and just want to ensure that your secure boot configuration hasn't been compromised, you can simply seal to PCR 7 (which will contain the UEFI Secure Boot state as defined by the UEFI spec) and PCR 14 (which will contain the additional state used by Shim) and ignore all the others.

The grub code is a little more complicated because there are more ways to get it to execute code. Right now I've gone for a fairly extreme implementation. On BIOS systems, the grub stage 1 and 2 will be measured into PCR 9[1]. That's the only BIOS-specific part of things. From then on, any grub modules that are loaded will also be measured into PCR 9. The full kernel image will be measured into PCR 10, and the full initramfs will be measured into PCR 11. The command line passed to the kernel is in PCR 12. Finally, each command executed by grub (including those in the config file) is measured into PCR 13.

That's quite a lot of measurement, and there are probably fairly reasonable circumstances under which you won't want to pay attention to all of those PCRs. But you've probably also noticed that several different things may be measured into the same PCR, and that makes it more difficult to figure out what's going on. Thankfully, the spec designers have a solution to this in the form of the TPM measurement log.

Rather than merely extending a PCR with a new hash, software can extend the measurement log at the same time. This is stored outside the TPM and so isn't directly cryptographically protected. In the simplest form, it contains a hash and some form of description of the event associated with that hash. If you replay those hashes you should end up with the same value that's in the TPM, so for attestation purposes you can perform that verification and then merely check that specific log values you care about are correct. This makes it possible to have a system perform an attestation to a remote server that contains a full list of the grub commands that it ran and for that server to make its attestation decision based on a subset of those.
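As a toy sketch of that replay (in PHP for brevity; the $event_log list of raw SHA-1 digests from the log and the quoted $pcr_from_tpm value are assumed inputs):

// A SHA-1 PCR starts as 20 zero bytes; each event is folded in with
// extend: PCR := SHA1(PCR || digest).
$pcr = str_repeat("\0", 20);
foreach ($event_log as $digest) {
  $pcr = sha1($pcr . $digest, TRUE);
}
// If the replayed value matches the TPM's, the log faithfully records what
// was measured, and individual entries can then be checked against policy.
$log_is_consistent = hash_equals($pcr_from_tpm, $pcr);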

No promises as yet about PCR allocation being final or these patches ever going anywhere in their current form, but it seems reasonable to get them out there so people can play. Let me know if you end up using them!

[1] The code for this is derived from the old Trusted Grub patchset, by way of Sirrix AG's Trusted Grub 2 tree.

Categories: Elsewhere

Simon Josefsson: Cosmos – A Simple Configuration Management System

Planet Debian - Thu, 24/09/2015 - 00:38

Back in early 2012 I had been helping with system administration of a number of Debian/Ubuntu-based machines, and the odd Solaris machine, for a couple of years at $DAYJOB. We had a combination of hand-written scripts, documentation notes that we cut’n’paste’d from during installation, and some locally maintained Debian packages for pulling in dependencies and providing some configuration files. As the number of people and machines involved grew, I realized that I wasn’t happy with how these machines were being administrated. If one of these machines would disappear in flames, it would take time (and more importantly, non-trivial manual labor) to get its services up and running again. I wanted a system that could automate the complete configuration of any Unix-like machine. It should require minimal human interaction. I wanted the configuration files to be version controlled. I wanted good security properties. I did not want to rely on a centralized server that would be a single point of failure. It had to be portable and be easy to get to work on new (and very old) platforms. It should be easy to modify a configuration file and get it deployed. I wanted it to be easy to start to use on an existing server. I wanted it to allow for incremental adoption. Surely this must exist, I thought.

During January 2012 I evaluated the existing configuration management systems around, like CFEngine, Chef, and Puppet. I don’t recall my exact reasons for rejecting each individual project, but needless to say I did not find what I was looking for. The reasons ranged from centralization concerns (single-point-of-failure central servers) and bad security (no OpenPGP signing integration) to the feeling that the projects were too complex and hence fragile. I’m sure there were other reasons too.

In February I started going back to my original needs and tried to see if I could abstract something from the knowledge that was in all these notes, script snippets and local dpkg packages. I realized that the essence of what I wanted was one shell script per machine, OpenPGP signed, in a Git repository. I could check out that Git repository on every new machine that I wanted to configure, verify the OpenPGP signature of the shell script, and invoke the script. The script would do everything needed to get the machine up into an operational state again, including package installation and configuration file changes. Since I would usually want to modify configuration files on a system even after its initial installation (hey, not everyone is perfect), it was natural to extend this idea to a cron job that did ‘git pull’, verified the OpenPGP signature, and ran the script. The script would then have to be a bit more clever and not redo everything every time.

Since we had many machines, it was obvious that there would be huge code duplication between scripts. It felt natural to think of splitting up the shell script into a directory with many smaller shell scripts, invoking each in turn. Think of the /etc/init.d/ hierarchy and how it worked with System V init. This would allow re-use of useful snippets across several machines. The next realization was that large parts of the shell script would exist to create configuration files, such as /etc/network/interfaces. It would be easier to modify the content of those files if they were stored as files in a separate directory, an “overlay” stored in a sub-directory overlay/, and copied into the file system’s hierarchy with rsync. The final realization was that it made some sense to run one set of scripts before rsync’ing in the configuration files (to be able to install packages or set things up for the configuration files to make sense), and one set of scripts after the rsync (to perform tasks that require some package to be installed and configured). These sets of scripts were called the “pre-tasks” and “post-tasks” respectively, and stored in sub-directories called pre-tasks.d/ and post-tasks.d/.
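
Putting those pieces together, the update cycle described above looks roughly like the following Python sketch. Cosmos itself is implemented as shell scripts, so this is only an illustration of the model; the repository path, the per-host model directory and the 'git verify-commit' call are assumptions made for this example, not Cosmos's actual interface:

import os
import subprocess

REPO = "/var/cache/cosmos/repo"  # assumed location of the checked-out repo

def run(cmd):
    subprocess.run(cmd, check=True)

def run_parts(directory):
    # Run each executable script in a directory in sorted order,
    # like the /etc/init.d/ model mentioned above.
    if not os.path.isdir(directory):
        return
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if os.access(path, os.X_OK):
            run([path])

def update_and_apply():
    run(["git", "-C", REPO, "pull"])
    # Refuse to continue unless the latest commit carries a valid
    # OpenPGP signature from a trusted key.
    run(["git", "-C", REPO, "verify-commit", "HEAD"])

    model = os.path.join(REPO, os.uname().nodename)
    # 1. pre-tasks: install packages etc. so the overlay makes sense.
    run_parts(os.path.join(model, "pre-tasks.d"))
    # 2. overlay: rsync the configuration files into the filesystem.
    run(["rsync", "-a", os.path.join(model, "overlay") + "/", "/"])
    # 3. post-tasks: steps that need the packages and config in place.
    run_parts(os.path.join(model, "post-tasks.d"))

if __name__ == "__main__":
    update_and_apply()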

I started putting what would become Cosmos together during February 2012. Incidentally, I had been using etckeeper on our machines, and I had been reading its source code, and it greatly inspired the internal design of Cosmos. The git history shows well how the ideas evolved — even that Cosmos was initially called Eve, though in retrospect I didn’t like the religious connotations — and there were a couple of rewrites on the way, but on the 28th of February I pushed out version 1.0. It was in total 778 lines of code, with at least 200 of those lines being the license boilerplate at the top of each file. Version 1.0 had a debian/ directory and I built the dpkg file and started to deploy it on some machines. There were a couple of small fixes in the next few days, but development stopped on March 5th 2012. We started to use Cosmos, and converted more and more machines to it, and I quickly also converted all of my home servers to use it. And even my laptops. It took until September 2014 to discover the first bug (the fix is a one-liner). Since then there haven’t been any real changes to the source code. It is in daily use today.

The README that comes with Cosmos gives a more hands-on introduction to using it, which I hope will serve as a starting point if the above introduction sparked some interest. I hope to cover more about how to use Cosmos in a later blog post. Since Cosmos does so little on its own, to make sense of how to use it you want to see a Git repository with machine models. If you want to see how the Git repository for my own machines looks, you can see the sjd-cosmos repository. Don’t miss its README at the bottom. In particular, its global/ sub-directory contains some of the foundation, such as OpenPGP key trust handling.

Categories: Elsewhere

Steve Purkiss: Remote DrupalCon - Day 2

Planet Drupal - Wed, 23/09/2015 - 22:15
Wednesday, 23rd September 2015

Let's never do that again

Unlike when I was watching yesterday's Driesnote, I actually quite expected these sorts of words to come out of the mouth of Larry Garfield, aka @crell, long-time Drupal contributor, and the reason I stayed up way too late last night after blogging. So it's not strictly Day 2, but it deserves a mention as it was a superb, insightful session: "Drupal in 2020".

The "never do that again" refers to the four-or-so years spent developing Drupal 8, with most of that time spent not developing new stuff but just barely catching up with modern technology trends. To be relevant even with today's technologies we need to be looking at what we could be doing, and Larry showed off a number of impressive development projects which enable PHP to run in a similar way to node.js - even faster in many cases. Well worth a watch!

I ended the night with Ken Rickard's 2020 Vision, an entertaining session from a highly experienced professional reminding us that we are implementing a content management system, not a web publishing tool (a concept that comes from the print era). There are thus many different considerations, and often the non-technical ones are overlooked, even though they can prove to be the biggest obstacles.

Day 2 Keynote - Web Psychologist Nathalie Nahai

I'd seen Nathalie talk before, so I must admit I wasn't paying much attention until I saw a question pop up on Twitter asking how this session, mostly on marketing manipulation techniques, was relevant to our community. Nathalie quickly focused on how we could use some of the techniques to help our current community, as well as attract new people in, by simply telling our story. A well-deserved round of applause came when Nathalie remarked:

"This is such a vibrant community it needs to be expressed online much more"

This is a big area of interest to me, as I see so many wonderful stories from around the Drupal world, yet currently the loudest voices being heard are the ones with funding. I've no issue with that per se, but I believe we could do more by collaborating together on strong marketing messaging.

I know the DA are doing as much as they can with the resources they have available; however, I believe there is a place in the market for an organisation which markets the community as a whole. I envisage trucks that turn into training rooms / 24h coder lounges with schwag stores on board, so they can rock up to camps all over the place ;) But I guess that's another blog for another time. All I know is I'd love to go round the world interviewing the community for all to see, and potentially training up many more people in unexplored areas of our community values of ownership!

Making the Leap: Successful Products as a Web Agency

Drawing from his own experience with the Drupal offsite backup service NodeSquirrel, Drew Gorton from managed hosting provider Pantheon gave an interesting talk covering how quite a few product businesses had managed the uncommon feat of being successfully born out of an agency. Drew provided useful insights I empathise with, as I much prefer working in the product world; however, what with my bootstrapping and co-operative ideals, it's taking a little longer than I'd hoped for ;)

Self-Managing Organizations: Teal is the new Orange

This was a really interesting session from a company I hadn't heard of before: Liip. Their organisation is around the 120-people mark and they have a self-organising way of working, with a ratio between the highest and lowest pay of 3:1. I believe the company is also owned by the staff, though I don't think the percentages were detailed; I will have to watch it again. They said they had no plans and let teams decide their own projects, strategies, etc. Obviously it's not all plain sailing, but it provided a great case study in doing things a better way in terms of fairer working environments, enabling human beings to grow rather than be stunted by job roles.

I watched a little of Shut up and take my money!, which was about integrating the Stripe payment system with Drupal 8. I've done this previously and nothing much seemed to be different on the Stripe side, so I moved on - the videos are pouring in thick and fast!

I then watched Expose Drupal with RESTful for a short while until I realised it was Drupal 7, so I moved on to PhpStorm for Drupal Development, a fairly short session clocking in at 15 minutes but very useful, even pointing out a feature which shows you what features you have and haven't been using. I'm no fan of the licensing on PhpStorm, but it does make life much easier, so it will be harder to give up than my MacBook; I guess it will have to be done at some point if I'm going to achieve complete Freedom!

Headless D8 in HHVM plus Angular.js

It was noted from the outset that this was a sponsored session from platform.sh, so they would be showing off their product, which I've had the pleasure of playing around with a little on a time-limited trial. However, I was suckered in by the buzzwords, so I stuck it out. Being at home it was even easier for me to just click the mouse than to suffer the potential slight embarrassment of walking out of the session room; in reality that rarely happens, and I end up sitting right through the session continually questioning myself, as if I were watching the fifth instalment of Jaws, wondering whether an incident with a fish will happen at some point.

Suffice to say platform.sh works with HHVM and Angular.js. I've nothing against sponsor talks or platform.sh - I think they are both good things - just not this session, for me at least. I guess I wanted to see something shiny, not just a product demo; I feel they could've made a lot more out of the title than they did without being so focused on the continual sales pitch. I know that's what it was, but it felt more like something that should've been out in the exhibit hall. I guess that doesn't get videoed and put into the stream though.

I started to watch Altering, Extending, and Enhancing Drupal 8 by Joe Shindelar (@eojthebrave), whom I've had the pleasure of meeting at a number of Drupal events here & in the US. Joe's a great teacher, but as I've been playing with Drupal 8 for a while now I decided to skip on, especially when he said "Don't hack core" - which I know is the rule, but in Drupal 8 I plan to 'hack' core by simply using its interfaces... it's made for that this time. Properly hacking, that is, of course! I realise this presentation wasn't for me though.

Then I watched a little of Building amazing searches with Search API, but it all looked pretty similar to Drupal 7, so I thought I'd put that one on the "watch when I have a specific need for it" list. Then along came a truly awesome session...

Avoiding and surviving contribution burnout

As someone who has suffered from depression, I am particularly proud of the fact our community can have sessions that cover topics like this. I feel like I'm coming from a different angle, as I'm spending most of my time working out how and where I can be of help, and it's the client work, if anything, that's burning me out, due to my complete lack of wanting to do anything other than write beautiful code - and I've not yet met a client with the desire or budget to pay me to do that. Sarcasm aside, burnout is a big issue, and I have an issue with the current business/community balance, as I believe one side is gaining far more benefit from the other than it should; I don't think it's anything that can't be solved by putting more balance back into the situation. That, of course, is not to make light of anyone's situation - it's just how I see things from my many travels around camps and to CXO meetups, and my experience in the world up until now.

Pain Points of Contribution in the Drupal Community

In a similar vein to the previous session, Kalpana Goel delivered another important session, trying to untangle the issues surrounding contributing to the community and how we can potentially go about solving them.

Then I watched around half of Hassle-free Hosting and Testing with DevShop & Behat, which looks like an interesting, open option for self-hosting your own sites. Being a little tired, I thought I'd come back to that one weekend when I'm more awake.

Last one for the day was The future of Groups on Drupal.org, which gave an interesting insight into forthcoming changes on drupal.org, much of it powered by the persona work done previously - so it should be interesting when I log in and tailored content appears for me! It's great to finally see movement here, but I agree with Dries when he said previously that it really needs perhaps ten million dollars of investment. At the end of the day, if you don't look after your tools you won't be able to make a decent product. It's always been my hope that as we talk about Drupal more, about the Why, and show people around the world what we're building, the community will organically scale as people will want to be part of it. I think we have a number of issues in the way of that at the moment - perception, our current fear-driven non-sharing society, and the state of internal systems. It's good to see a little focus going on the things we can fix now; hopefully we can scale it up soon so we don't get more fractured across different proprietary community silos just because they're 'easy'.


Well, I may not be in Barcelona, but I'm certainly ranting like I'm at DrupalCon, just on the record lol! With all the tweets and session-watching I'm certainly getting DrupalCon tired, so I'm signing off for the night, looking forward to the final day of sessions tomorrow with another important keynote, and of course looking forward to finding out where next year's European DrupalCon will be - hopefully I'll plan a little better and build a little buffer so I don't miss out!

Unfortunately comments are broken on this site, so whilst I'm migrating to Drupal 8, do please tweet me @stevepurkiss or get in touch via my contact form.

tags: drupalcon, remote, Drupal Planet, Planet Drupal
Categories: Elsewhere

Annertech: DrupalCon Barcelona 2015 Day 3

Planet Drupal - Wed, 23/09/2015 - 21:06
Wow! What a day we had at DrupalCon Barcelona 2015. I know, personally, I had the best day I've ever had at a DrupalCon: attending a great keynote on web psychology, a talk that validated my thoughts on design in the browser, an awesome presentation on linked data and the semantic web, and that's without mentioning the BoFs on web apps versus websites and Twitter Bootstrap, and then ... oh man - that was a lot.

So, today's best bits:


Categories: Elsewhere

Modules Unraveled: 149 Using Panopoly and its Drupal 8 Future with David Snopek - Modules Unraveled Podcast

Planet Drupal - Wed, 23/09/2015 - 19:54
Published: Wed, 09/23/15
Download this episode
Project
  • For people who might not know, what is a Drupal distribution?

    • Out of the box, vanilla Drupal doesn’t do much - you have to install modules and mold it into what you want
    • A distribution is Drupal prepackaged with contrib modules and themes, pre-configured for a specific use case (OOB, X+Y)
  • What is Panopoly?

    • A “starter site” distribution (replacement for vanilla Drupal)
    • A “base distribution” on which to build other distributions
    • A set of Features modules that can be used outside of Panopoly
  • Why would someone want to use Panopoly instead of vanilla Drupal?

    • Improved blank slate
    • Includes some of the most popular modules and configuration that almost everyone is using anyway
    • Hide Drupal-isms from site managers and users
    • WYSIWYG, Media, responsive layouts, edit in place, live previews, improved search, UX improvements, a11y improvements
    • Include a bunch of stuff backported from D8: toolbar, responsive bartik, etc
    • Unified content/layout management system built on Panels eco-system
    • Rather than re-learning all that community knowledge, re-use a well-thought-out, tested approach to doing Drupal
  • Some people love Panels, but others hate it. Why would someone who isn’t a “Panels lover” want to use Panopoly?

    • Best of Panels eco-system
    • You build with Views, Entities/Fields, custom code, whatever - the Panels bits tie those things together and allow users to customize them
    • We hide the nastiest bits (page_manager UI) from users and site managers
  • Why would someone want to create their own distro?

    • Boilerplate, build once / deploy lots, maintenance of lots of sites
    • Even small organizations can benefit
  • What advantages do you get by building your distro on Panopoly?

    • Like a “base theme”: shared work (WYSIWYG, responsive, etc.) and a defined approach
    • Focus only on the unique stuff in your distro (by fitting into Panopoly’s architecture)
  • Why would someone want to use one of Panopoly’s Features modules outside of Panopoly?

    • [Quick background on Feature]
    • Dozen or so features
    • If you like just a piece of Panopoly (e.g. panopoly_wysiwyg) you can steal it!
    • Lots of thought into buttons to enable, filtering for control/security, additional features like Media/Linkit
  • Updating distributions can be hard. What does Panopoly do to help with this?

    • [explain why hard]
    • Docs
    • CI
    • distro_update
  • Security updates in particular can be hard, because you have to wait for the distro to make its own update. How does Panopoly handle them?

    • [mention how handled in the past / security team]
  • What are the plans for Panopoly in Drupal 8?

Episode Links: David on drupal.org | David on Twitter (@dsnopek) | David on Twitter (@mydropninja) | David on web | Panopoly project page | Panopoly demo from DrupalCon Austin (demo starts at 18:15) | #drupal-scotch on IRC | #Panopoly on IRC | Panopoly group
Tags: planet-drupal
Categories: Elsewhere

InternetDevels: The 10 Commandments of User Interface Design

Planet Drupal - Wed, 23/09/2015 - 17:15

Design is not just what it looks like and feels like. Design is how it works.

Read more
Categories: Elsewhere

KnackForge: TRANSLATION in Drupal 7: How to work with it

Planet Drupal - Wed, 23/09/2015 - 14:48

In the previous part, we discussed how translation in Drupal 7 works, with a few screenshots and walkthroughs. Now, let's discuss how to work with translation to translate content, field values and entity items.

1) Translating Menus

With Drupal core alone, user-defined menu items are not translatable. The Menu translation module, part of the Internationalization (i18n) package, allows users to select a translation mode for each menu.

The following modes are available:

  • No Multilingual Options

  • Translate and Localize

  • Fixed Language

Translate and Localize Menus

There are two ways that menu items will be translated:

  • You can set a language when creating a custom menu item so that the menu item will only show up for that language. Menu items that link to nodes in a particular language will be treated this way.

  • You can localize other custom menu items without a language (for example, menu items linking to views pages). Use the Translate tab to translate the menu item title and description. Translators can also use the 'Translate interface' pages to translate these menu items.


Categories: Elsewhere

Wim Leers: Making Drupal fly — The fastest Drupal ever is here!

Planet Drupal - Wed, 23/09/2015 - 13:59

Together with Fabian Franz from Tag1 Consulting, I had a session about Big Pipe in Drupal 8, as well as related performance/cacheability improvements.

I’ll let the session description speak for itself:

With placeholders (https://www.drupal.org/node/2478483) having just gone into Drupal 8 Core, BigPipe being unblocked now and actively making its way in, Render Strategies around the corner, and out-of-the-box auth-caching in CDNs + Varnish a true possibility on the horizon, these are really exciting times for Drupal 8 performance. But there is even more …

Come and join us for a wild ride into the depths of Render Caching and how it enables Drupal to be faster than ever.

The Masterplan of Drupal Performance (Next steps)

Here we will reveal the next steps of the TRUE MASTERPLAN of Drupal Performance — the plan we have secretly (not really!) been implementing for years and are now finally “sharing” with all of you! (Well, you could look at the issue queue or this public Google doc too, but this session will be more fun!)

Learn what we have in store for the future and what has changed since we last talked about this topic in Los Angeles and Amsterdam and why Drupal 8 will even be more awesome than what you have seen so far.

Also see a prototype of render_cache using the exact same Drupal 8 code within Drupal 7 and empowering you to do some of this in Drupal 7 as well.

Get the edge advantage of knowing more

Learn how to utilize cache contexts to vary the content of your site, cache tags to know exactly when items have expired, and cache keys to identify the objects - and what the difference between them is.

Learn how powerful ‘#lazy_builders’ will allow the perfect ESI caching you always wanted, how it will all be very transparent, and how you can make your modules ready for the placeholder future.

See with your own eyes how you can utilize all of that functionality now on your Drupal 7 and 8 sites.

Get ready for a new era of performance

We will show you:

  • How to take advantage of #lazy_builders
  • How to tweak the auto-placeholdering strategies (depending on state of issue at time of session)
  • The biggest Do’s and Don’ts when creating render-cache enabled modules and sites
  • Common scenarios and how to solve them (mobile sites variation, cookie variation, etc.)
  • Drupal using an intelligent BigPipe approach (but a different one, one that is much more like the way Facebook does it …)
Get to know the presenters

This session will be presented by Wim Leers and Fabian Franz. Wim implemented a lot of what we show here in Drupal 8, made the APIs easy and simple to use, and made cache tags and #lazy_builders a very powerful concept. Fabian has prototyped a lot of these concepts in his render_cache module, introduced powerful Drupal 8 concepts into Drupal 7, and is always one step ahead in making the next big thing. Together they have set out on a crusade to rule the Drupal performance world, to bring you the fastest Drupal ever and, with that, to make the whole Web fast!

Frequently Asked Questions
  • I have already seen the session in Amsterdam and Los Angeles, will I learn something new?

Yes, absolutely. While the previous sessions focused more on the basics, this session will also cover how to use #lazy_builders and custom render strategies to empower your Drupal to be fast.

  • Will there again be a demo?

Yes, there will again be a nice demo :). You’ll love it!

  • Is it even possible to make it even faster than what we have seen?

Yes :)

Slides: Making Drupal fly — The fastest Drupal ever is here!
Conference: DrupalCon Barcelona
Location: Barcelona
Date: Sep 23 2015 - 14:15
Duration: 60 minutes
Extra information: 

See https://events.drupal.org/barcelona2015/sessions/making-drupal-fly-fastest-drupal-ever-here.

Categories: Elsewhere

