Web Wash: Build a Blog in Drupal 8: Create and Manage Menus

Planet Drupal - Mon, 23/11/2015 - 21:22

A website's navigation plays an important part in how easy a site is to use. It's essential that you spend time fleshing out the overall IA (information architecture), or you'll end up with a site that's hard to use and difficult to navigate.

Previous versions of Drupal have always offered a simple interface for managing menus, and Drupal 8 is no exception.

In this tutorial, we'll continue building our site by adding some menus. We'll create a custom menu which will be used to display links to popular categories, then create an "About us" page and add a menu link to the footer.

Categories: Elsewhere

Riku Voipio: Using ser2net for serial access.

Planet Debian - Mon, 23/11/2015 - 20:55
Is your table a mess of wires? Do you have multiple devices connected via serial and can't remember which /dev/ttyUSBX is connected to which board? Unless you are an embedded developer, you are unlikely to deal with serial much anymore - in that case you can just jump to the next post in your news feed.

Introducing ser2net

Usually people start with minicom for serial access. There are better tools - picocom, screen, etc. But to easily map multiple serial ports, use ser2net. Ser2net makes serial ports available over telnet.

Persistent usb device names and ser2net

To remember which usb-serial adapter is connected to what, we use the /dev/serial tree created by udev, in /etc/ser2net.conf:
# arndale
7004:telnet:0:'/dev/serial/by-path/pci-0000:00:1d.0-usb-0:1.8.1:1.0-port0':115200 8DATABITS NONE 1STOPBIT
# cubox
7005:telnet:0:/dev/serial/by-id/usb-Prolific_Technology_Inc._USB-Serial_Controller_D-if00-port0:115200 8DATABITS NONE 1STOPBIT
# sonic-screwdriver
7006:telnet:0:/dev/serial/by-id/usb-FTDI_FT230X_96Boards_Console_DAZ0KA02-if00-port0:115200 8DATABITS NONE 1STOPBIT
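Each entry above follows the same port:protocol:timeout:device:settings pattern. As a minimal sketch (the port number and device path below are made-up examples, not from my setup), generating such a line is just string assembly:

```shell
# Illustrative only: emit a ser2net.conf entry for a given TCP port and a
# stable by-id device path (both values here are hypothetical examples).
port=7007
dev=/dev/serial/by-id/usb-FTDI_FT230X_Example_Console-if00-port0
printf '%s:telnet:0:%s:115200 8DATABITS NONE 1STOPBIT\n' "$port" "$dev"
# prints: 7007:telnet:0:/dev/serial/by-id/usb-FTDI_FT230X_Example_Console-if00-port0:115200 8DATABITS NONE 1STOPBIT
```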
The by-path syntax is needed if you have many identical usb-to-serial adapters. In that case a patch from the BTS is needed to support quoting in the serial path. Ser2net doesn't seem very actively maintained upstream - a sure sign that a project is stagnant is a homepage still at sourceforge.net... This patch, among other interesting features, can also be found in various ser2net forks on github.

Setting easy to remember names

Finally, unless you want to memorize the port numbers, set TCP port to name mappings in /etc/services:
# Local services
arndale 7004/tcp
cubox 7005/tcp
sonic-screwdriver 7006/tcp
Now finally: telnet localhost sonic-screwdriver

Mandatory picture of serial port connection in action
Categories: Elsewhere

Axelerant Blog: How To Set Up Drupal RESTful Caching

Planet Drupal - Mon, 23/11/2015 - 20:00

The Drupal RESTful module has a multitude of caching options and sorting through them on your own for the first time can be slow. This article will help you get started with Drupal RESTful caching.

NOTE: The RESTful 2.x module was recently released. This article focuses on the 1.x version of the RESTful module, and the techniques mentioned below may not work with any other release.

Your caching options can be controlled at various levels in code. Knowing which layer your application needs is just as important as knowing how to execute, so we'll start off with the how and move on to the why later.

Start with Drupal RESTful Caching

To start caching your endpoint, the initial configuration step is to set render to TRUE under the render_cache key in the plugin file.

RESTful skips caching your endpoint if this setting is FALSE, which is the default value. In addition to this, Drupal RESTful also ships with support for the entitycache module for entity based endpoints.

Here’s what a typical flow looks like for an endpoint:

function viewEntity($id) {
  $cached_data = $this->getRenderedCache($context);
  if (!empty($cached_data->data)) {
    return $cached_data->data;
  }
  // Perform expensive stuff and construct the payload.
  $values = construct_payload();
  $this->setRenderedCache($values, $context);
  return $values;
}

$context is the entity context, like the bundle name, entity ID and any other metadata you might find to be relevant to constructing your cache key. In most cases, just the bundle name, entity type and ID would suffice. RESTful fills in other contextual data like endpoint name, GET parameters, etc. RESTful builds your cache keys in a crafty way so that it is easy to do CRUD operations in bulk. For instance, clearing all caches for the “articles” endpoint would be something like clear("articles::*").

Within the RESTful project, RestfulBase.php houses all the caching primitives, like getRenderedCache, setRenderedCache, clearRenderedCache and generateCacheId. The last function, generateCacheId, constructs the cache key based on the $context supplied to that endpoint.

Preventing Cache-Busting

It is also worth noting that Drupal RESTful caching allows you to override the key generation logic on a per-endpoint basis. This is especially useful when you want to build a custom cache key.

While working on Legacy.com, we had to build a cache key that is agnostic of specific GET parameters. By default, generateCacheId builds a different key for each of the following endpoints:

  • articles/23?foo=123456
  • articles/23?foo=567898
  • articles/23?foo=986543

Though a different key for each of these calls makes sense in most cases, it is redundant in others - e.g. we return the same payload for all three of the above. To change this behavior, we ended up overriding generateCacheId.
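As a sketch of the idea (the key format and values below are simplified and made up, not RESTful's exact output), making the key agnostic of a volatile GET parameter amounts to dropping it before the key is used:

```shell
# Hypothetical cache key for articles/23?foo=123456 (illustrative format).
key='v1.0::articles::ur:4::pa:node::ei:23::foo=123456'

# Strip the volatile "foo" section so all three calls map to one entry.
echo "$key" | sed 's/::foo=[0-9]*//'
# prints: v1.0::articles::ur:4::pa:node::ei:23
```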

setRenderedCache, getRenderedCache, and clearRenderedCache operate on the default cache controller, which can be specified in the plugin using the class key inside render_cache. This value defaults to DrupalDatabaseCache.

This default value can also be explicitly set to your favorite caching backend. In our case, we use the memcache module and set this value to MemCacheDrupal. Again, Drupal RESTful allows you to configure caching backends on a per-endpoint basis.

Managing Caching Bins

Cache backends have a concept of bins, an abstraction for grouping similar data together. Examples from Drupal core are cache_filter and cache_variable.

Every endpoint has a bin setting in the plugin file, which is cache_restful unless we explicitly specify otherwise. It is advisable to store high-traffic endpoints in exclusive bins.

There is an expire setting for each endpoint, which dictates the cache expiration for that endpoint. This defaults to CACHE_PERMANENT, which means that the entry will never be wiped off until it is explicitly selected for clearing.

The alternative is CACHE_TEMPORARY, which indicates that the entry will be removed in the next round of cache clearing.

These are the very same constants used in the cache_* calls of Drupal's cache interface. There is an undocumented middle ground too: the expire value can be set in seconds. This is a deviation from Drupal's convention of specifying it as a timestamp.

Varying Caching by Role or User

Some endpoints need to be cached for each role, and some for each user. This granularity can be controlled by the granularity setting, which takes either DRUPAL_CACHE_PER_USER or DRUPAL_CACHE_PER_ROLE. This depends to some extent on your authentication mechanism too.

We wrote our own authentication mechanism and had a user created exclusively for the API and serving the endpoints. We gave this user an exclusive role and configured per-role caching for all the endpoints.

Here’s how the plugin configuration looks for one of our endpoints:

$plugin = array(
  'label' => t('Recommended Videos'),
  'resource' => 'recommended_videos',
  'name' => 'recommended_videos__1_1',
  'entity_type' => 'node',
  'bundle' => 'video',
  'description' => t('Get all recommended videos for a given article.'),
  'class' => 'RecommendedVideosResource__1_1',
  'authentication_types' => array(
    'my_custom_token',
  ),
  'minor_version' => 1,
  'render_cache' => array(
    'render' => TRUE,
    'expire' => CACHE_TEMPORARY,
    'granularity' => DRUPAL_CACHE_PER_ROLE,
  ),
  // Custom settings.
  'video_sources' => array('youtube', 'vimeo'),
);

The Anatomy of a Cache Key

A cache key using the default key generation logic looks like this:


The corresponding endpoint URL looks like this:


The first part is the API version, followed by the resource name, which is “recommended_videos”. The next part is either a “uu” or “ur” depending on whether it is user level or role level granularity. Next is the entity type (e.g. node) with a prefix “pa”. This is followed by the entity ID part, which is “ei:105486” in this case.

The last part is the truncated key-value list of the GET params foo and bar. Each logical section is separated by a “::” so that it is easy to do a selective purge; for instance, wiping out all endpoints for v1.0 of the API would be a call to clear("v1.0::*").
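Putting the sections described above together (with made-up values for the role ID and GET params - these are illustrative, not an actual key from our site), the default layout can be sketched like this:

```shell
# Assemble a cache key from the logical sections described in the post.
version='v1.0'
resource='recommended_videos'
granularity='ur:4'          # role-level; user-level would be uu:<uid>
entity='pa:node'            # entity type with the "pa" prefix
id='ei:105486'              # entity ID part
params='foo=123::bar=456'   # truncated key-value list of GET params
key="${version}::${resource}::${granularity}::${entity}::${id}::${params}"
echo "$key"
# prints: v1.0::recommended_videos::ur:4::pa:node::ei:105486::foo=123::bar=456
```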

Note that a GET for a collection of resources like latest comments results in a viewEntity for each item in the collection and as many cache entries. If you want a single cache entry for the whole collection, you have to custom build your payload and call setRenderedCache as shown in the initial endpoint workflow code snippet.

Other Considerations

Be Diligent, Validate Cache Strategies Early

RESTful is designed to be very modular from the ground up and provides control over caching settings for every endpoint. Such a high level of control is both good and bad. Digging through an issue for hours because some settings for an endpoint are misconfigured isn't fun for anyone; unless the settings are clear and explicit, issues become hard to debug and sort out.

Be diligent and validate your caching strategy from the beginning.

Memcache Stampeding

Another thing to look out for is memcache stampeding. Memcache stampeding occurs when a missing key results in simultaneous fetches from the database server, resulting in a high load. Memcache is designed to prevent too many requests from piling up.

In our work with Legacy.com, we mitigated the need to pass these requests to Memcache by properly managing our Varnish layer. We will detail how we fixed the stampeding issue and constructed a Drupal RESTful caching strategy in a later post. Stay tuned!


The post How To Set Up Drupal RESTful Caching first appeared on Axelerant.

Categories: Elsewhere

C.J. Adams-Collier: Regarding fdupes

Planet Debian - Mon, 23/11/2015 - 19:04

Dear readers,

There is a very useful tool for finding duplicate files and merging them so they share permanent storage, and its name is fdupes. There was a terrible occurrence in the software after version 1.51, however: the -L argument was removed, because too many people were complaining about lost data. That sounds like user error to me, and so I continue to use version 1.51. I have to build it from source, since the newer versions do not have the -L option.
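For illustration, this is essentially what fdupes -rL automates, done by hand on two duplicate files in a scratch directory (paths are examples; stat -c is the GNU coreutils form):

```shell
# Create two files with identical content in a temporary directory.
tmp=$(mktemp -d)
printf 'same bytes' > "$tmp/a"
printf 'same bytes' > "$tmp/b"

# If the contents match, replace the duplicate with a hard link - the
# "merge" step that fdupes -L performs for every duplicate it finds.
cmp -s "$tmp/a" "$tmp/b" && ln -f "$tmp/a" "$tmp/b"

stat -c %h "$tmp/a"   # prints 2: both names now point at one inode
```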


And so there you are. I recommend using it, even though this most useful feature has been deprecated and removed from the software. Perhaps there should be an fdupes-danger package in Debian?

Categories: Elsewhere

Acquia Developer Center Blog: Integrating Drupal with a Proprietary Analytics Platform: How Parse.ly Did it.

Planet Drupal - Mon, 23/11/2015 - 18:07
Stefan Deeran

One of the great things about Drupal is its flexible system of nodes and taxonomies. This allows for bespoke categorization of many types of content.

At Parse.ly, we wanted to take advantage of that. Parse.ly, which has an alliance with Acquia to bring joint tech solutions to the world's biggest media companies, works with hundreds of digital publishers to provide audience insights through an intuitive analytics platform.

Tags: acquia drupal planet
Categories: Elsewhere

Lunar: Reproducible builds: week 30 in Stretch cycle

Planet Debian - Mon, 23/11/2015 - 17:43

What happened in the reproducible builds effort this week:

Toolchain fixes
  • Markus Koschany uploaded antlr3/3.5.2-3 which includes a fix by Emmanuel Bourg to make the generated parser reproducible.
  • Markus Koschany uploaded maven-bundle-plugin/2.4.0-2 which includes a fix by Emmanuel Bourg to use the date in the DEB_CHANGELOG_DATETIME variable in the pom.properties file embedded in the jar files.
  • Niels Thykier uploaded debhelper/9.20151116 which makes the timestamp of directories created by dh_install, dh_installdocs, and dh_installexamples reproducible. Patch by Niko Tyni.

Mattia Rizzolo uploaded a version of perl to the “reproducible” repository including the patch written by Niko Tyni to add support for SOURCE_DATE_EPOCH in Pod::Man.

Dhole sent an updated version of his patch adding support for SOURCE_DATE_EPOCH in GCC to the upstream mailing list. Several comments have been made in response which have been quickly addressed by Dhole.

Dhole also forwarded his patch adding support for SOURCE_DATE_EPOCH in libxslt upstream.

Packages fixed

The following packages have become reproducible due to changes in their build dependencies: antlr3/3.5.2-3, clusterssh, cme, libdatetime-set-perl, libgraphviz-perl, liblingua-translit-perl, libparse-cpan-packages-perl, libsgmls-perl, license-reconcile, maven-bundle-plugin/2.4.0-2, siggen, stunnel4, systemd, x11proto-kb.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues, but not all of them:


Vagrant Cascadian has set up a new armhf node using a Raspberry Pi 2. It should soon be added to the Jenkins infrastructure.

diffoscope development

diffoscope version 42 was released on November 20th. It adds a missing dependency on python3-pkg-resources and, to prevent similar regressions, another autopkgtest to ensure that the command line is functional when Recommends are not installed. Two more encoding related problems have been fixed (#804061, #805418). A missing Build-Depends on binutils-multiarch has also been added to make the test suite pass on architectures other than amd64.

Package reviews

180 reviews have been removed, 268 added and 59 updated this week.

70 new “fail to build from source” bugs have been reported by Chris West, Chris Lamb and Niko Tyni.

New issue this week: randomness_in_ocaml_preprocessed_files.


Jim MacArthur started to work on a system to rebuild and compare packages built on reproducible.debian.net using .buildinfo and snapshot.debian.org.

On December 1-3rd 2015, a meeting of about 40 participants from 18 different free software projects will be held in Athens, Greece with the intent of improving the collaboration between projects, helping new efforts to be started, and brainstorming on end-user aspects of reproducible builds.

Categories: Elsewhere

Jonathan Dowland: CDs should come with download codes

Planet Debian - Mon, 23/11/2015 - 17:06

boxes of CDs & the same data on MicroSD

There's a Vinyl resurgence going on, with vinyl record sales growing year-on-year. Many of the people buying records don't have record players. Many records are sold including a download code, granting the owner an (often one-time) opportunity to download a digital copy of the album they just bought.

Some may be tempted to look down upon those buying vinyl records, especially those who don't have a means to play them. The record itself is, now more than ever, a physical totem rather than a medium for the music. But is this really that different from how we've treated audio CDs this century?

For at least 15 years, I've ripped every CD I've bought and then stored it in a shoebox. (I'm up to 10 shoeboxes). The ripped copy is the only thing I listen to. The CD is little more than a totem, albeit one which I have to use in a relatively inconvenient ritual in order to get something I can conveniently listen to.

The process of ripping CDs has improved a lot in this time, but it's still a pain. CD-ROM drives are also becoming a lot more scarce. Ripping is not necessarily reliable, either. The best tool to verify a rip is AccurateRip, a privately-owned database of track checksums. The private status is a problem for the community (remember what happened to CDDB?), and AccurateRip is only useful if other people using an AccurateRip-supported ripper have already successfully ripped the CD.

Then there are things like CD pre-emphasis. It turns out that the Red Book standard defines a rarely-used flag indicating that the CD (or individual tracks) has had pre-emphasis applied to the treble end of the frequency spectrum. The CD player is supposed to apply de-emphasis on playback. This doesn't happen if you fetch the audio data digitally, so it becomes the CD ripper's responsibility to handle this. CD rippers have only relatively recently grown support for it. Awareness has been so low that nobody has a good idea of how many CDs actually have pre-emphasis set: it's thought to be very rare, but (as far as I know) MusicBrainz doesn't (yet) track it.

So some proportion of my already-ripped CDs may have actually been ripped incorrectly, and I can't easily determine which ones without re-ripping them all. I know that at least my Quake computer game CD has it set, and I have suspicions about some other releases.

Going forward, this could be avoided entirely if CDs were treated more like totems, as vinyl records are, than the media delivering the music itself, and if record labels routinely included download cards with audio CDs. For just about anyone, no matter how the music was obtained, media-less digital is the canonical form for engaging with it. Attention should also be paid to make sure that digital releases are of a high quality: but that's a topic for another blog post.

Categories: Elsewhere

Cryptic.Zone: Where is hook_page_alter in Drupal 8?

Planet Drupal - Mon, 23/11/2015 - 16:11

In Drupal 7, hook_page_alter was a convenient way to go when we needed to modify page elements that were added by other modules. Drupal 8 does away with this hook - hopefully for the better. To fill the void created by hook_page_alter’s dismissal, the following hooks have been introduced.

Categories: Elsewhere

ERPAL: Automation is the key to support SLAs

Planet Drupal - Mon, 23/11/2015 - 14:15

If you want to grow recurring revenue by providing SLAs for your Drupal projects, automation is THE key to offering a reliable response time. Of course, you could build a dedicated 24/7 support team - but the cost will be exorbitant!

There are many tools out there for digitizing your support and automating some of the processes. A well-defined support concept is the key to success. In this blog post you'll get an introduction to the four foundations of automated support systems. Read more

Categories: Elsewhere

Red Crackle: Express checkout to increase conversion

Planet Drupal - Mon, 23/11/2015 - 12:40
In this post, you will learn how to enable express checkout to increase conversions on your site. It's a fairly simple procedure that allows the shopper to proceed directly to the checkout without visiting the Cart. This saves time and ensures that you are able to win the shopper's business quicker.
Categories: Elsewhere

ThinkShout: Up and Theming with Drupal 8

Planet Drupal - Mon, 23/11/2015 - 12:00

Drupal 8 is finally here! We’ve been digging into the code and learning how to install D8 in a way that allows us to sync sites and use it for production work. A lot of things have changed, which we covered in our previous article, Up and Running with Drupal 8. The next step is to see what’s changed in the theming layer, install a basic theme, and work with the new Twig templating system. There’s a good deal to cover, so let’s jump in!

Creating a Theme

The steps for setting up a basic site theme are fairly simple: create a custom/THEMENAME directory in web/themes, and then add a THEMENAME.info.yml file with the following:

name: THEMENAME Theme
description: 'D8 theme for THEMENAME site.'
package: Custom
# base theme: classy
type: theme
version: 1.0
core: 8.x
regions:
  header: Header
  content: Content # required!
  sidebar_first: 'Sidebar first'
  footer: Footer

Then you can enable your theme (administer » themes) in the interface. Note that uncommenting base theme: classy will set Classy as the base theme. We feel that Classy is great if you want a lot of useful examples, but it really clutters up the markup, so use it at your own discretion. After rc1, the default base theme will be ‘stable,’ and you may want to pull all of the core templates into your theme to ensure you’re working from the latest updated template code.

Also, the theme name must not contain hyphens. So /theme-name/ is invalid (it won’t even show up!), but /theme_name/ is fine.
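As a quick illustrative check (this is my own sketch, not a Drupal tool), a theme machine name should contain only lowercase letters, digits, and underscores:

```shell
# valid_name: report whether a string is a usable machine name
# (lowercase letters, digits, and underscores only - no hyphens).
valid_name() {
  case "$1" in
    *[!a-z0-9_]*|'') echo invalid ;;
    *)               echo valid ;;
  esac
}

valid_name theme_name   # prints: valid
valid_name theme-name   # prints: invalid
```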

Now we’ll want to start customizing our theme. Let’s say we have a content type called ‘blog’ (machine name: blog), with a field called ‘Publish Date’ (machine name: field_publish_date).

Despite setting the label of field_publish_date to ‘inline,’ it’s wrapping to a new line because it’s a simple, unstyled <div>.

Worse, it has no classes to specifically style it. Let’s set ourselves some goals:

  1. Add the inline styling class(es).
  2. Change the markup for this field, so that we have a class for the label.
  3. Add CSS to style the label, but ONLY for the ‘Blog’ content type.

The documentation for this seemingly simple task is obfuscated and evolving right now, but we were able to get it working correctly using the following steps:

Step 1: Turn on twig debug mode. We also found it helpful at this point to make a copy of web/sites/example.settings.local.php in web/sites/default/ and uncomment the following in settings.php:

if (file_exists(__DIR__ . '/settings.local.php')) {
  include __DIR__ . '/settings.local.php';
}

This will allow you to disable caching during development, which is no longer a simple checkbox in the performance section. Note that disabling caching can be tricky; the drush cr (cache rebuild) command is the most reliable way to ensure the cache is really cleared. You’ll also have to rebuild the cache at least once after turning caching off, so the new cache settings are applied.

Step 2: Make a custom field template.

In this case, the suggested debug fields are:

<!-- FILE NAME SUGGESTIONS:
   * field--node--field-publish-date--blog.html.twig
   * field--node--field-publish-date.html.twig
   * field--node--blog.html.twig
   * field--field-publish-date.html.twig
   * field--datetime.html.twig
   x field.html.twig
-->
<!-- BEGIN OUTPUT from 'core/modules/system/templates/field.html.twig' -->

The highlighted line above shows the template currently being used, suggestions for increased specificity, and the file location (core/modules/system/templates/).

We want to update field_publish_date globally, so we’ll create a template called field--field-publish-date.html.twig

To do this, we copy field.html.twig from the core theme (see the ‘BEGIN OUTPUT’ line above for the path), and rename it in our theme’s folder to field--field-publish-date.html.twig. Now when we reload, we see the following (if your cache is disabled, of course, otherwise drush cr will clear the cache):

<!-- FILE NAME SUGGESTIONS:
   * field--node--field-publish-date--blog.html.twig
   * field--node--field-publish-date.html.twig
   * field--node--blog.html.twig
   x field--field-publish-date.html.twig
   * field--datetime.html.twig
   * field.html.twig
-->
<!-- BEGIN OUTPUT from 'themes/custom/THEMENAME/templates/field--field-publish-date.html.twig' -->

Now we can begin to update the markup. The relevant code is:

{% if label_hidden %}
  ... (we don’t care about the label_hidden stuff)
{% else %}
  <div{{ attributes }}>
    <div{{ title_attributes }}>{{ label }}</div>
    ...
{% endif %}

To add the inline styling class, we add the following to the top of the template (below the comments):

{% set classes = [
  'field--label-' ~ label_display,
] %}

And then update the label’s parent div attributes:

before: <div{{ attributes }}>
after: <div{{ attributes.addClass(classes) }}>

Now the correct class is in place, but we see no change yet - because the label <div> isn’t populating any classes. To fix that, we add the following, again at the top of the template:

{% set title_classes = [
  'field__label',
  'field__publish-date-label',
  label_display == 'visually_hidden' ? 'visually-hidden',
] %}

And update the div:

before: <div{{ title_attributes }}>{{ label }}</div>
after: <div{{ title_attributes.addClass(title_classes) }}>{{ label }}</div>

Rebuild the cache (drush cr) and… success! Well, sort of - we still have to add the CSS. Note that we also added a custom class of 'field__publish-date-label' in case we want to style it directly.

Step 3: Add a THEMENAME.libraries.yml file to hold attachment library definitions.

This is pretty simple; it’s a file with the following:

blog:
  version: 1.x
  css:
    theme:
      css/blog.css: {}
  js:
    js/blog.js: {}
  dependencies:
    - core/jquery

We then add the directories (/css and /js) and files (blog.css/js). We’ve also added a jQuery dependency, just so you can see how that’s done. If we had something simple that could be done with Vanilla JS we could leave it off. Note that this won’t actually do anything until we follow step 4 below.

Step 4: Add a THEMENAME.theme file to hold theme hooks (this is actually a PHP file, so start it with <?php).

This is the code that appends the library based on the content type. The trickiest part of this is figuring out the correct format of hook_preprocess_HOOK():

function THEMENAME_preprocess_node__blog(&$variables) {
  $variables['#attached']['library'][] = 'THEMENAME/blog';
}

The theme hook format for content types is to use node__MACHINENAME format - two underscores.

After that, rebuild your cache (drush cr), and your CSS and JS files should load on every instance of that content type, regardless of the page (full or teaser view mode).

And that’s it! Note that we could have changed the markup in any number of ways to suit our designs, or even made the template specific to the content type as well as the field.


The post was written at the end of 2015 while Drupal 8 was still in a Release Candidate stage. While some effort will be made to keep the post up-to-date, if it’s after 2016, you should probably add the current year to your Google search, or better yet, check the docs on Drupal.org.

Categories: Elsewhere

Gergely Nagy: Keyboard updates

Planet Debian - Mon, 23/11/2015 - 11:53

Last Friday, I compiled a list of keyboards I'm interested in, and received a lot of incredible feedback, thank you all! This allowed me to shorten the list considerably, to basically two pieces. I'm reasonably sure by now which one I want to buy (both), but will spend this week calming down to avoid impulse-buying. My attention was also brought to a few keyboards originally not on my list, and I'll take this opportunity to present my thoughts on those too.

The Finalists

ErgoDox


  • Great design, by the looks of it.
  • Mechanical keys.
  • Open source hardware and firmware, thus programmable.
  • Thumb keys.
  • Available as an assembled product, from multiple sources.
  • Primarily a kit, but assembled versions are available.
  • Assembled versions aren't as nice as home-made variants.

The keyboard looks interesting, primarily due to the thumb keys. From the ErgoDox EZ campaign, I'm looking at $270. That's friendly, and makes ErgoDox a viable option! (Thanks @miffe!)

There's also another option, FalbaTech, which ships sooner, lets me customize the keyboard to some extent, and is based in Poland, which is much closer to Hungary than the US. With this option, I'm looking at $205 + shipping, a very low price for what the keyboard has to offer. (Thanks @pkkolos for the suggestion!)

Keyboardio M01

  • Mechanical keyboard.
  • Hardwood body.
  • Blank and dot-only keycaps option.
  • Open source: firmware, hardware, and so on. Comes with a screwdriver.
  • The physical key layout has much in common with my TypeMatrix.
  • Numerous thumb-accessible keys.
  • A palm key, that allows me to use the keyboard as a mouse.
  • Fully programmable LEDs.
  • Custom macros, per-application even.
  • Fairly expensive.
  • Custom keycap design, thus rearranging them physically is not an option, which leaves me with the blank or dot-only keycap options only.
  • Available late summer, 2016.

With shipping cost and whatnot, I'm looking at something in the $370 ballpark, which is on the more expensive side. On the other hand, I get a whole lot of bang for my buck: LEDs, two center bars (tripod mounting sounds really awesome!), hardwood body, and a key layout that is very similar to what I came to love on the TypeMatrix.

I also have a thing for wooden stuff. I like the look of it, the feel of it.

The Verdict

Right now, I'm seriously considering the Model 01, because even if it is about twice the price of the ErgoDox, it also offers a lot more: hardwood body (I love wood), LEDs, palm key. I also prefer the layout of the thumb keys on the Model 01.

The Model 01 also comes pre-assembled and looks stunning, while the ErgoDox pales a little in comparison. I know I could make it look stunning too, but I do not want to build things. I'm not good at it, I don't want to be good at it, I don't want to learn it. I hate putting things together. I'm the kind of guy who needs three tries to put together a set of IKEA shelves, and I'm not exaggerating. I also like the shape of the keys better on the Model 01.

Nevertheless, the ErgoDox is still an option, due to the price. I'd love to buy both, if I could. Which means that once I'm ready to replace my keyboard at work, I will likely buy an ErgoDox. But for home, Model 01 it is, unless something even better comes along before my next pay.

The Kinesis Advantage was also a strong contender, but I ended up removing it from my preferred options, because it doesn't come with blank keys, and is not a split keyboard. And similar to the ErgoDox, I prefer the Model 01's thumb-key layout. Despite all this, I'm very curious about the key wells, and want to try it someday.

Suggested options

Yogitype


Suggested by Andred Carter, a very interesting keyboard with a unique design.

  • Portable, foldable.
  • Active support for forearm and hand.
  • Hands never obstruct the view.
  • Not mechanical.
  • Needs a special inlay.
  • Best used for word processing, programmers may run into limitations.

I like the idea of the keyboard, and if it didn't need a special inlay but used a small screen or something to show the keys, I'd like it even more. Nevertheless, I'm looking for a mechanical keyboard right now, which I can also use for coding.

But I will definitely keep the Yogitype in mind for later!

Matias Ergo Pro

  • Mechanical keys.
  • Simple design.
  • Split keyboard.
  • Doesn't seem to come with a blank keys option, nor in Dvorak.
  • No thumb key area.
  • Neither open source, nor open hardware.
  • I have no need for the dedicated undo, cut, paste keys.
  • Does not appear to be programmable.

This keyboard hardly meets any of my desired properties, and doesn't have anything that stands out in comparison with the others. I had a quick look at it when compiling my original list, but quickly discarded it. Nevertheless, people asked me why, so I'm including my reasoning here:

While it is a split keyboard, with a fairly simple design, it doesn't come in the layout I'd prefer, nor with blank keys. It lacks the thumb key area that ErgoDox and the Model 01 have, and which I developed an affection for.

Microsoft Sculpt Ergonomic Keyboard

  • Numpad is a separate unit.
  • Reverse tilt.
  • Well positioned, big Alt keys.
  • Cheap.
  • Not a split keyboard.
  • Not mechanical.
  • No blank or Dvorak option as far as I see.

This keyboard does not buy me much over my current TypeMatrix 2030. If I were looking for the cheapest option among ergonomic keyboards, this would be my choice - but only because of the price.

Truly Ergonomic Keyboard

  • Mechanical.
  • Detachable palm rest.
  • Programmable firmware.
  • Not a split keyboard.
  • Layouts are virtual only, the printed keycaps stay QWERTY, as far as I see.
  • Terrible navigation key setup.

Two important factors for me are physical layout and splittability. This keyboard fails both. While it is a portable device, that's not a priority for me at this time.

Categories: Elsewhere

Drupal Aid: Our First Site Built with Drupal 8 - How we did it.

Planet Drupal - Mon, 23/11/2015 - 11:04

I'm happy to say that we relaunched our parent agency's site on Drupal 8 within one day of Drupal 8's release. In this write-up, I'll cover how we did it, the highlights of our experience, and even throw in a few mini-tutorials: How to do a Custom Panel Layout and How to Add Meta Tags in Drupal 8.

Read more

Categories: Elsewhere

Merge: Drupal 8 + Gulp + BrowserSync

Planet Drupal - Mon, 23/11/2015 - 10:09

You might wonder whether you need automation task runners like Grunt or Gulp with Drupal. Common use cases for these tools are CSS/JS aggregation and minification, but there are at least two ways in which a task runner can help you out in Drupal.

Categories: Elsewhere

Thomas Goirand: OpenStack Liberty and Debian

Planet Debian - Mon, 23/11/2015 - 09:30
Long overdue post

It’s been a long time since I last wrote here, and a lot has happened on the OpenStack planet. As a full-time employee with the mission to package OpenStack in Debian, I feel it is my duty to tell everyone what’s going on.

Liberty is out, uploaded to Debian

Since my last post, OpenStack Liberty, the 12th release of OpenStack, has been released. In late August, Debian was the first platform to include Liberty, as I proudly outran both RDO and Canonical. So I was the first to announce that Liberty passed most of the Tempest tests with the beta 3 release of Liberty (beta 3 is always the first real pre-release, as this is when feature freeze happens). Though I never announced that Liberty final was uploaded to Debian, it was done just a single day after the official release.

Before the release, all of Liberty was living in Debian Experimental. Following the upload of the final packages in Experimental, I uploaded all of it to Sid. This represented 102 packages, so it took me about 3 days to do it all.

Tokyo summit

I had the pleasure to be in Tokyo for the Mitaka summit. I was very pleased with the cross-project sessions during the first day. Lots of these sessions were very interesting for me. In fact, I wish I could have attended them all, but of course, I can’t split myself in 3 to follow all of the 3 tracks.

Then there were the two sessions about Debian packaging on the upstream OpenStack infrastructure. The goal is to set up the OpenStack upstream infrastructure to allow packaging using Gerrit, gating each git commit with the usual tools: building the package and checking there’s no FTBFS, and running checks like lintian, piuparts and so on. I already knew the overview of what was needed to make it happen. What I didn’t know were the implementation details, which I hoped we could figure out during the 1:30 slot. Unfortunately, this didn’t happen as I expected, and we discussed more general things than I wished. I was told that just reading the docs from the infra team would be enough, but in reality, it was not. What currently needs to happen is building a Debian-based image, using disk-image-builder, which would include the usual packaging tools: git-buildpackage, sbuild, and so on. I’m still stuck at this stage, which would be trivial if I knew a bit more about how the upstream infra works, since I already know how to set all of that up on a local machine.

I’ve been told by Monty Taylor that he would help. But he’s always a very busy man, and to date he still hasn’t found enough time to give me a hand. Nobody replied to my request for help on the openstack-dev list either. Hopefully, with a bit of insistence, someone will help.

Keystone migration to Testing (aka: Debian Stretch) blocked by python-repoze.who

Absolutely all of OpenStack Liberty, as of today, has migrated to Stretch. All? No. Keystone is blocked by a chain of dependencies: Keystone depends on python-pysaml2, itself blocked by python-repoze.who. The latter I upgraded to version 2.2, though python-repoze.what depends on version <= 1.9, which is blocking the migration. Since python-repoze.who-plugins, python-repoze.what and python-repoze.what-plugins aren’t used by any package anymore, I asked for them to be removed from Debian (see #805407). Until this request is processed by the FTP masters, Keystone, which is the most important piece of OpenStack (it does the authentication), will be blocked from migrating to Stretch.

New OpenStack server packages available

In my presentation at DebConf 15, I quickly introduced the new services released upstream. I have since packaged them all:

  • Barbican (Key management as a Service)
  • Congress (Policy as a Service)
  • Magnum (Container as a Service)
  • Manila (Filesystem share as a Service)
  • Mistral (Workflow as a Service)
  • Zaqar (Queuing as a Service)

Congress, unfortunately, has not been accepted into Sid yet, because of some licensing issues, especially with the documentation of python-pulp. I will correct this (remove the non-free files) and reattempt an upload.

I hope to make them all available in jessie-backports (see below). For the previous release of OpenStack (i.e. Kilo), I skipped the uploads of services which I thought were not really critical (like Ironic, Designate and more). But the feedback from users is that they would really like to have them all available. So this time, I will upload them all to the official jessie-backports repository.

Keystone v3 support

For those who don’t know about it, Keystone API v3 means that, on top of users and tenants, there is a new entity called a “domain”. All of Liberty now comes with Keystone v3 support. This includes the automated Keystone catalog registration done using debconf for all *-api packages. As far as I can tell from running Tempest on my CI, everything still works pretty well. In fact, Liberty is, in my experience, the first release of OpenStack to support Keystone API v3.
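For those curious what the v3 API change looks like in practice, here is a minimal sketch of a v3 token-request body, with the new domain entity in place. The user name, password and the commented-out endpoint URL are placeholders of mine, not anything taken from this post.

```shell
# Build a Keystone v3 password-authentication request body. Note the
# "domain" object inside "user": that is the entity v3 adds on top of
# users and tenants. Credentials here are illustrative placeholders.
auth_json=$(cat <<'EOF'
{"auth": {"identity": {"methods": ["password"],
  "password": {"user": {"name": "demo",
    "domain": {"name": "Default"},
    "password": "secret"}}}}}
EOF
)
printf '%s\n' "$auth_json"

# Against a real cloud, this body would be POSTed to the v3 endpoint:
#   curl -s -H "Content-Type: application/json" -d "$auth_json" \
#        http://keystone.example.org:5000/v3/auth/tokens
```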

Uploading Liberty to jessie-backports

I have rebuilt all of Liberty for jessie-backports on my laptop using sbuild. This is more than 150 packages (166 currently). It took me about 3 days to rebuild them all, including the unit tests run at build time. As soon as #805407 is closed by the FTP masters, everything remaining (mostly Keystone) will be available in Stretch, and the upload will be possible. As there will be a lot of NEW packages (from the point of view of backports), I expect the approval to take some time. Also, I have to warn the original maintainers of the packages that I don’t maintain (for example, those maintained within the DPMT) that, because of the big number of packages, I will not be able to do the usual communication to tell them I’m uploading to backports. If you see a package you maintain in the list below and wish to upload the backport yourself, please let me know. Here’s the hopefully exhaustive list of packages that I will upload to jessie-backports and that I don’t maintain myself:

alabaster contextlib2 kazoo python-cachetools python-cffi python-cliff python-crank python-ddt python-docker python-eventlet python-git python-gitdb python-hypothesis python-ldap3 python-mock python-mysqldb python-pathlib python-repoze.who python-setuptools python-smmap python-unicodecsv python-urllib3 requests routes ryu sphinx sqlalchemy turbogears2 unittest2 zzzeeksphinx.
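As a rough illustration of how such a mass rebuild can be scripted, here is a minimal sketch; the chroot/distribution name "jessie-backports" and the package arguments are my assumptions, since the post does not show the actual commands used.

```shell
# Sketch: rebuild a set of Debian source packages with sbuild,
# continuing past failures so one broken package does not stop the run.
build_all() {
    dist="$1"; shift
    for dsc in "$@"; do
        echo "building $dsc for $dist"
        sbuild --dist "$dist" "$dsc" || echo "FAILED: $dsc" >&2
    done
}

# Usage (assuming the .dsc files are in the current directory):
#   build_all jessie-backports ./*.dsc
```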

More than ever, I wish I could just upload these to a PPA^W Bikeshed, to minimize the disruption for the backports FTP masters, other maintainers, and our OpenStack users. Hopefully, Bikesheds will be available soon. I am sorry to give that much approval work to the backports FTP masters; however, running the latest stable system with the latest OpenStack release is what most OpenStack users really want to do. All other major distributions have specific repositories too (i.e. RDO for CentOS / Red Hat, and the cloud archive for Ubuntu), and stable-backports is currently the only place where I can upload support for the stable release.

Debian listed as supported distribution on openstack.org

Good news! If you go to http://www.openstack.org/marketplace/distros/ you will see a list of supported distributions, and I am proud to be able to tell you that, after 6 months of lobbying from my side, Debian is now listed there. The process of getting Debian there included talking with folks from the OpenStack Foundation, and having Bdale sign an agreement so that the Debian logo could be reproduced on openstack.org. Thanks to Bdale Garbee, Neil McGovern, Jonathan Brice, and Danny Carreno, without whom this wouldn’t have happened.

Categories: Elsewhere

Out & About On The Third Rock: DPSX – lessons from Australia for a local government distribution in the UK

Planet Drupal - Mon, 23/11/2015 - 09:12
As a tax payer I want companies who provide frontline public services to make the data they gather in the provision of such services available through Open APIs to other actors in the ecosystem, so that such actors can utilise that data to provide new and innovative services to the public. News from Australia shared […]
Categories: Elsewhere

Sooper Drupal Themes: 9 Ways to Keep Your Drupalistas Engaged at Work

Planet Drupal - Mon, 23/11/2015 - 05:00

If you want to grow your Drupal business, an important part of this is going to be how you engage with your employees so they stay interested in what you do. As a manager, your employees are just as important as your clients, as you may need those employees at some point to keep your customers happy.

You may not have the current skills to keep employees engaged with your...

Categories: Elsewhere

Bálint Réczey: Wireshark 2.0 switched default UI to Qt in unstable

Planet Debian - Sat, 21/11/2015 - 23:54

With the latest release, the Wireshark project decided to make the Qt GUI the default interface. In line with Debian’s Policy, the packages shipped by Debian have also switched their default GUI to minimize the difference from upstream. The GTK+ interface, which was the previous default, is still available from the wireshark-gtk package.

You can read more about the new 2.0.0 release in the release notes or on the Wireshark Blog featuring some of the improvements.

Happy sniffing!

Categories: Elsewhere

Jonathan McDowell: Updating a Brother HL-3040CN firmware from Linux

Planet Debian - Sat, 21/11/2015 - 14:27

I have a Brother HL-3040CN networked colour laser printer. I bought it 5 years ago and I kinda wish I hadn’t. I’d done the appropriate research to confirm it worked with Linux, but I didn’t realise it only worked via a 32-bit binary driver. It’s the only reason I have 32-bit support enabled on my house server, and I really wish I’d either bought a GDI printer with an open driver (Samsung were great for this in the past) or something that did PCL or PostScript (my parents have a Xerox Phaser that Just Works). However, I don’t print much (I’m still on my first set of toner) and, once set up, the driver hasn’t needed much kicking.

A bigger problem comes with firmware updates. Brother only ship update software for Windows and OS X. I have a Windows VM, but the updater wants the full printer driver setup installed, and that seems like overkill. I did a bit of poking around and found a reference in the service manual to the ability to do an update via USB with a firmware file. Further digging led me to a page on resurrecting a Brother HL-2250DN, which discusses recovering from a failed firmware flash. It provided a way of asking the Brother site for the firmware information.

First I queried my printer details:

$ snmpwalk -v 2c -c public hl3040cn.local iso.
iso. = STRING: "MODEL=\"HL-3040CN series\""
iso. = STRING: "SERIAL=\"G0JXXXXXX\""
iso. = STRING: "SPEC=\"0001\""
iso. = STRING: "FIRMID=\"MAIN\""
iso. = STRING: "FIRMVER=\"1.11\""
iso. = STRING: "FIRMID=\"PCLPS\""
iso. = STRING: "FIRMVER=\"1.02\""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""

I used that to craft an update file which I sent to Brother via curl:

curl -X POST -d @hl3040cn-update.xml https://firmverup.brother.co.jp/kne_bh7_update_nt_ssl/ifax2.asmx/fileUpdate -H "Content-Type:text/xml" --sslv3

This gave me back some XML with a URL for the latest main firmware, version 1.19, filename LZ2599_N.djf. I downloaded that and took a look at it, discovering it looked like a PJL file. I figured I’d see what happened if I sent it to the printer:

cat LZ2599_N.djf | nc hl3040cn.local 9100

The LCD on the front of the printer proceeded to display something like “Updating Program” and eventually the printer re-DHCPed and indicated the main firmware had gone from 1.11 to 1.19. Great! However, the PCLPS firmware was still at 1.02 and I’d got the impression that 1.04 was out. I didn’t manage to figure out how to get the Brother update website to give me the 1.04 firmware, but I did manage to find a copy of LZ2600_D.djf, which I was then able to send to the printer in the same way. This led to:

$ snmpwalk -v 2c -c public hl3040cn.local iso.
iso. = STRING: "MODEL=\"HL-3040CN series\""
iso. = STRING: "SERIAL=\"G0JXXXXXX\""
iso. = STRING: "SPEC=\"0001\""
iso. = STRING: "FIRMID=\"MAIN\""
iso. = STRING: "FIRMVER=\"1.19\""
iso. = STRING: "FIRMID=\"PCLPS\""
iso. = STRING: "FIRMVER=\"1.04\""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""

Cool, eh?
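If I were doing this again, I would guard the raw send to port 9100 with a quick sanity check that the file at least looks like PJL. The "@PJL" marker test below is my own precaution, not something from the original procedure (PJL jobs begin with an ESC%-12345X@PJL prologue).

```shell
# Refuse to stream a file to the printer's raw port unless it carries
# the '@PJL' marker near the start of the file.
looks_like_pjl() {
    head -c 64 "$1" | grep -q '@PJL'
}

send_firmware() {
    file="$1" printer="$2"
    if looks_like_pjl "$file"; then
        nc "$printer" 9100 < "$file"
    else
        echo "refusing: $file does not look like a PJL job" >&2
        return 1
    fi
}

# Usage: send_firmware LZ2599_N.djf hl3040cn.local
```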

[Disclaimer: This worked for me. I’ve no idea if it’ll work for anyone else. Don’t come running to me if you brick your printer.]

Categories: Elsewhere
