Feed aggregator

Dries Buytaert: Always be shippable

Planet Drupal - Tue, 22/09/2015 - 10:26

Drupal will soon be 15 years old, and 5 of those years will have been spent building Drupal 8 -- a third of Drupal's life. We started work on Drupal 8 early in 2011 and targeted December 1, 2012 as the original code freeze date. Now, almost three years later, we still haven't released Drupal 8. While we are close to the release, I'm sure many of you are wondering why it took three years to stabilize. It is not like we didn't work hard or that we aren't smart people. Quite the contrary: the Drupal community has some of the most dedicated, hardest working and smartest people I know. Many spent evenings and weekends pushing to get Drupal 8 across the finish line. No one individual or group is to blame for the delay -- except maybe me, as the project lead, for not having learned fast enough from previous release cycles.

Trunk-based development

For the past 15 years we used "trunk-based development": we built new features in incremental steps, maintained in a single branch called "trunk". We'd receive the feature's first patch, commit it to our main branch, put it behind us, and move on to the next patch. Trunk-based development requires a lot of discipline, and as a community we have mostly mastered this style of development. We invested heavily in our testing infrastructure and established a lot of processes. For all patches, we had various people reviewing the work to make sure it was solid. We also had different people figure out and review all the required follow-up work to complete the feature. The next steps were carefully planned and laid out for everyone to see in what we call "meta" issues. Splitting one large feature into smaller tasks is not a bad idea; it helps make things manageable for everyone involved.

Given all this rigor, how do you explain the delays? The problem is that once these features and plans meet reality, they fall apart. Some features, such as Drupal 8's configuration management system, had to be rewritten multiple times based on our experience using them, despite passing a rigorous review process. Other features, such as our work on URL routing, entity base fields and Twig templating, required much more follow-up work than was initially estimated. It turns out that breaking up a large task into smaller ones requires a lot of knowledge and vision. It's often impossible to estimate the total impact of a larger feature on other subsystems, overall performance, etc. In other cases, the people working on the feature lacked the time or interest to do the follow-up work, leaving it to others to complete. What we should realize is that this is how things work in a complex world, and not something we are likely to change.

The real problem is the fact that our main branch isn't kept in a shippable state. A lot of patches get committed that require follow-up work, and until that follow-up work is completed, we can't release a new version of Drupal. We can only release as fast as the slowest feature, and this is the key reason why the Drupal 8 release is delayed by years.

Trunk-based development; all development is done on a single main branch and as a result we can only release as fast as the slowest feature.

We need a better way of working -- one that conforms to the realities of the world we live in -- and we need to start using it the day Drupal 8.0.0 is released. Instead of ignoring reality and killing ourselves trying to meet unrealistic release goals, we need to change the process.

Feature branch workflow

The most important thing we have to do is keep our main branch in a shippable state. In an ideal world, each commit or merge into the main branch gives birth to a release candidate — it should be safe to release after each commit. This means we have to stop committing patches that put our main branch in an unshippable state.

While this can be achieved using a trunk-based workflow, a newer and better workflow, the "feature branch workflow", has become popular. The idea is that (1) each new feature is developed in its own branch instead of the main branch and that (2) the main branch only contains shippable code.

Keeping the main branch shippable at all times enables us to do frequent date-based releases. If a specific feature takes too long, development can continue in the feature branch, and we can release without it. And when we are uncertain about a feature's robustness or performance, rather than delaying the release, the feature simply waits until the next one. The maintainers decide to merge in a feature branch based on objective and subjective criteria. Objectively, the test suite must pass, the git history must be clean, etc. Subjectively, the feature must deliver value to users while maintaining desirable characteristics like consistency (code, API, UX), high performance, etc.

Feature branching; each feature is developed in a dedicated branch. A feature branch is only merged into the main branch when it is "shippable". We no longer have to wait for the slowest feature before we can create a new release.

Date-based releases are widely adopted in the Open Source community (Ubuntu, OpenStack, Android) and are healthy for Open Source projects; they reduce the time it takes for a given feature to become available to the public. This encourages contribution and is in line with the "release early, release often" mantra. We agreed on the benefits and committed to date-based releases following 8.0.0, so this simply aligns the tooling to make it happen.

Feature branch workflows have their own challenges. Reviewing a feature branch late in its development cycle can be difficult: a lot of change and discussion has already been incorporated. When a feature does finally integrate into main, a lot of change hits all at once. This can be psychologically uncomfortable. In addition, it can be disruptive to the other feature branches in progress. There is no way to avoid this disruption entirely - someone has to integrate first. Release managers minimize the disruption by prioritizing high-priority or low-disruption feature branches over others.

Here is a workflow that could give us the best of both worlds. We create a feature branch for each major feature and only core committers can commit to feature branches. A team working on a feature would work in a sandbox or submit patches like we do today. Instead of committing patches to the main branch, core committers would commit patches to the corresponding feature branch. This ensures that we maintain our code review process with smaller changes that might not be shippable in isolation. Once we believe a feature branch to be in a shippable state, and it has received sufficient testing, we merge the feature branch into the main branch. A merge like this wouldn't require detailed code review.
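In plain git terms, the workflow described above boils down to a few commands. The following is a runnable sketch; the repository, committer identity, and branch names are invented for illustration and are not Drupal's actual setup.

```shell
# Minimal sketch of the feature-branch workflow using plain git.
# Repo and branch names are illustrative, not Drupal's actual layout.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo 'core' > core.txt
git add core.txt
git commit -qm 'main branch: shippable state'
main=$(git rev-parse --abbrev-ref HEAD)

# (1) Develop the feature in its own branch; commit reviewed patches there.
git checkout -qb feature/config-management
echo 'new config system' > config.txt
git add config.txt
git commit -qm 'feature work, not necessarily shippable yet'

# (2) Merge into the main branch only once the feature is shippable;
# the main branch stays releasable the whole time.
git checkout -q "$main"
git merge -q --no-ff -m 'merge shippable feature' feature/config-management
git log --oneline --merges
```

The `--no-ff` merge keeps the feature's history grouped behind a single merge commit, which is what lets release managers treat "merge the branch" as one reviewable, revertable event.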

Feature branches are not a silver bullet for all the problems we encountered during the Drupal 8 release cycle. We should keep looking for improvements and build them into our workflows to make life easier for ourselves and for those we are implementing Drupal for. More on those in future posts.

Categories: Elsewhere

Matt Glaman: Using Features Override to manage changes to a distribution

Planet Drupal - Tue, 22/09/2015 - 06:14

The Features module has become the de facto tool for configuration management in Drupal 7. In fact, most Drupal distributions now utilize Features to export configuration and handle updates to it. There is one pitfall: Features was meant to be a way to export a feature set. Features takes a set of configurations, and its job is to ensure those are in place. That means customizations to the defaults are at risk of blocking incoming changes, or of being lost when updating. That's not good! You're using a Drupal distribution to save time, but now you have headaches because customizations disappear.

What is Features Override, then?

That's where Features Override comes in: it allows you to revert your feature modules and still ensure your configuration sticks. The name of the module speaks to its purpose. It allows you to override a feature and export those changes. But how? Chaos Tools provides its own alter hooks for its object CRUD API. That means Views, Page Manager, Panels, and any item implementing this CRUD can be exported as a glob of PHP code representing an associative array of objects. Entity API has its own CRUD, too; without knowing much of the history, I'd assume Entity API's implementation was modeled after Chaos Tools. The biggest difference is that Entity API utilizes JSON for moving items around. Features itself provides its own alters for items that don't have an import/export definition - such as field bases, field instances, etc. Features Override is a UI for writing alters in your feature module.
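Under the hood, those exported overrides are ordinary Drupal 7 alter hook implementations. Here is a minimal sketch of what such an alter might look like for a Chaos Tools exportable; the module name, view name, and the property being changed are invented for illustration.

```php
<?php
/**
 * Implements hook_views_default_views_alter().
 *
 * A sketch of the kind of alter an overrides module contains for a
 * Chaos Tools exportable (Views). Names here are made up.
 */
function ck2_overrides_views_default_views_alter(&$views) {
  if (isset($views['blog_listing'])) {
    // Tweak a single key of the exported object instead of re-exporting
    // the whole view, so upstream updates to the rest still apply.
    $views['blog_listing']->display['default']->display_options['title'] = 'News';
  }
}
```

Because only the altered keys live in the overrides module, reverting the base feature reapplies upstream's export, and the alter then reapplies your customization on top.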

Let’s Do This!

We'll run through customizing Commerce Kickstart 2 - the most downloaded distribution, and one that tends to get customized heavily. As a disclaimer, I am a maintainer of the project. If you were to install Commerce Kickstart 2 and navigate to Structure -> Features (admin/structure/features), you'd see something like the following:

"Default" means the feature is in its expected state. If we add a new config export or tweak something, you'll receive it on update. Let's change that, because we want the Blog Post content type to hide the category and tag fields when we're displaying them as a teaser.

In the above we've marked Category and Tags as hidden. In fact, since CK 2.27 there has been an automated test checking this change with Features Override. That's right - we have automated testing to ensure your changes live on using Features Override. Let's look back at our Features.

Now we’re going to create a new Feature called “CK2 Overrides” that will contain our specific changes. That will allow us to upgrade Commerce Kickstart 2 and revert its feature modules and keep our own changes. From the Manage Features page, we will select “Create Feature.” The next step is to pick your “Feature Overrides” component.

This is where it can get confusing. As you can see, there are two component types: "Feature Overrides" and "Feature Overrides (individual — advanced)". Unless you know what you're doing, just pick the former. Features' magic will auto-populate the specific individual overrides that need to be exported. Think of it as nitpicking the specific keys in an array export that need to be altered (because that's what it is!). Generate or download your feature and enable it!

And now we have nothing overridden!

Tips and Tricks

It's not perfect - nothing is. So here are a few tips and tricks I've learned:

  • Not all overrides are caught, not all overrides want to stick.
  • Enabling an overrides feature sometimes needs caches cleared to kick off a new features component rebuild. Manually reverting won't clear the "overridden" status.
  • An override feature will show up as overridden if the base feature is overridden - sometimes you just need to revert that one.
Categories: Elsewhere

Russ Allbery: INN 2.6.0

Planet Debian - Tue, 22/09/2015 - 05:30

This is the first major release of INN in several years, and incorporates lots of updates to the build system, protocol support, and the test suite. Major highlights and things to watch out for:

  • Cleanups of header handling in nnrpd, including using the new standardized headers for information about the post origin and rejecting many other obsolete headers.

  • nnrpd now treats IHAVE identically to POST when configured to accept IHAVE (a compatibility hack that's only necessary when dealing with some weird news implementations that can only do IHAVE).

  • innd authentication now requires a username be sent first, matching the NNTP protocol.

There are also tons of accumulated bug fixes, including (of course) all the fixes that were in INN 2.5.5. There are a lot of other changes, so I won't go into all the details here. If you're curious, take a look at the NEWS file.

As always, thanks to Julien ÉLIE for preparing this release and doing most of the maintenance work on INN!

You can get the latest version from the official ISC download page or from my personal INN pages. The latter also has links to the full changelog and the other INN documentation.

Categories: Elsewhere

Russ Allbery: Review: Half Life

Planet Debian - Tue, 22/09/2015 - 05:07

Review: Half Life, by S.L. Huang

Series: Russell's Attic #2
Publisher: S.L. Huang
Copyright: 2014
ISBN: 0-9960700-5-2
Format: Kindle
Pages: 314

This is a sequel to Zero Sum Game and the second book about Cas Russell, a mercenary superhero (in a world without the concept of superheroes) with a preternatural ability to analyze anything about her surroundings with mathematics. While it reuses some personal relationships from the first book and makes a few references to the villains, it's a disconnected story. It would be possible to start here if you wanted to.

Cas is now in the strange and unexpected situation of having friends, and they're starting to complicate her life. First, Arthur has managed to trigger some unexpected storehouse of morals and gotten her to try to stop killing people on jobs. That conscience may have something to do with her willingness to take a job from an apparently crazy man who claims a corporation has stolen his daughter, a daughter who appears nowhere in any official records. And when her other friend, Checker, gets in trouble with the mob, Cas tries to protect him in her own inimitable way, which poses a serious risk of shortening her lifespan.

Even more than the first book, the story in Half Life is a mix of the slightly ridiculous world of superheroes with gritty (and bloody) danger. It features hit men, armed guards, lots of guns, and quite a lot of physical injury and blood. A nasty corporation that's obviously hiding serious secrets shares pages with the matriarch of a mob family who considers Checker sleeping with her daughter to be an abuse of her honor. The story eventually escalates into more outlandish bits of technology, an uncanny little girl, and a plot that would feel at home in a Batman comic. I like books that don't take themselves too seriously, but the contrast between the brutal treatment Cas struggles through and the outrageous mad scientist villain provokes a bit of cognitive whiplash.

That said, the villains of Half Life are neither as freakish nor as disturbing as those in Zero Sum Game, which I appreciated. Huang packs in several plot twists, some inobvious decisions and disagreements between Russell and her friends about appropriate strategy, and Cas's discovery that there are certain things she cares very strongly about other than money and having jobs. Cas goes from a barely moral, very dark hero in the first book to something closer to a very grumbly chaotic good who insists she's not as good as she actually is. It's a standard character type, but Huang does a good job with it.

Huang also adds a couple of supporting cast members in this book that I hope will stick around. Pilar starts as a receptionist at one of the companies Cas breaks into, and at first seems like she might be comic relief. But she ends up being considerably more competent than she first appears (or that she seems to realize); by the end of the book, I had a lot of respect for her. And Miri makes only a few appearances, but her unflappable attitude is a delight. I hope to see more of her.

The biggest drawback to this book for me is that Cas gets hurt a lot. At times, the story falls into one of the patterns of urban fantasy: the protagonist gets repeatedly beaten up and abused until they figure out what's going on, and spends most of the story up against impossible odds and feeling helpless. That's not a plot pattern I'm fond of. I don't enjoy reading about physical pain, and I had trouble at some points in the story with the constant feeling of dread. Parts of the book I read in short bursts, putting it aside to look at something else. But the sense of dread falls off towards the end of the book, as Cas figures out what's actually going on, and none of it is as horrible as it felt it could be. If you have a similar problem with some urban fantasy tropes, I think it's safe to stick with the story.

This was a fun story, but it doesn't develop much in the way of deeper themes in the series. There's essentially no Rio, no further discoveries about the villains of the first book, and no further details on what makes Cas tick or why she seems to be the only, or at least one of the few, super-powered people in this world. The advance publicity for the third book seems to indicate that's coming next. I'm curious enough now that I'll keep reading this series.

Recommended if you liked the first book. Half Life is very similar, but I think slightly better.

Followed by Root of Unity.

Rating: 7 out of 10

Categories: Elsewhere

OSTraining: How to Find Your Site's .htaccess File in cPanel

Planet Drupal - Tue, 22/09/2015 - 02:16

"Where is my .htaccess file?"

This is a problem that we've helped resolve over and over again at OSTraining.

The .htaccess file is absolutely crucial for the correct operation of many sites, whether they're running WordPress, Drupal, Joomla or similar platforms. The .htaccess file controls the URLs for sites and also adds many important security features.
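To make that concrete, here is the kind of rule set such a file typically contains. This is an illustrative sketch, not the exact file shipped by any particular CMS: a clean-URL rewrite plus a small hardening rule.

```apache
# Illustrative .htaccess sketch (not any CMS's exact shipped file).

# Clean URLs: route requests for non-existent files and directories
# through the front controller, index.php.
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^ index.php [L]

# Security: deny direct access to hidden files, such as .htaccess itself.
<FilesMatch "^\.">
  Require all denied
</FilesMatch>
```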

Today, one more user was having trouble finding their .htaccess file, so we created this tutorial for her.

Categories: Elsewhere

frobiovox.com: Modern Drupal7 Site Building Tools

Planet Drupal - Tue, 22/09/2015 - 02:00
Why build a site with Drupal 7?

Drupal 8 is nearly out, making Drupal 7 look like it isn't an appealing choice. However, Drupal 7 is still a contender. The module ecosystem for Drupal 7 is mature and, specifically for site...
Categories: Elsewhere

Sylvain Beucler: Rebuilding Android proprietary SDK binaries

Planet Debian - Tue, 22/09/2015 - 00:03

Going back to Android recently, I saw that all tools binaries from the Android project are now click-wrapped by a quite ugly proprietary license, among others an anti-fork clause (details below). Apparently those T&C are years old, but the click-wrapping is newer.

This applies to the SDK, the NDK, Android Studio, and all the essentials you download through the Android SDK Manager.

Since I keep my hands clean of smelly EULAs, I'm working on rebuilding the Android tools I need.
We're talking about hours-long, quad-core + 8GB-RAM + 100GB-disk-eating builds here, so I'd like to publish them as part of a project that cares.

As a proof-of-concept, the Replicant project ships a 4.2 SDK and I contributed build instructions for ADT and NDK (which I now use daily).

(Replicant is currently stuck to a 2013 code base though.)

I also have in-progress instructions on my hard drive to rebuild various newer versions of the SDK/API levels, and for the NDK, whose releases are quite hard to reproduce (no git tags, requires fixes committed after the release, updates are partial rebuilds, etc.) - not to mention that Google doesn't publish the source code until after the official release (closed development). And in some cases, like the Android Support Repository [not Library], I didn't even find the proper source code, only an old prebuilt.

Would you be interested in contributing, and would you recommend a structure that would promote Free, rebuilt Android *DK?

The legalese

Anti-fork clause:

3.4 You agree that you will not take any actions that may cause or result in the fragmentation of Android, including but not limited to distributing, participating in the creation of, or promoting in any way a software development kit derived from the SDK.

So basically the source is Apache 2 + GPL, but the binaries are non-free. By the way this is not a GPL violation because right after:

3.5 Use, reproduction and distribution of components of the SDK licensed under an open source software license are governed solely by the terms of that open source software license and not this License Agreement.

Still, AFAIU by clicking "Accept" to get the binary you still accept the non-free "Terms and Conditions".

(Incidentally, if Google wanted SDK forks to spread and increase fragmentation, introducing an obnoxious EULA is probably the first thing I'd have recommended. What was its legal team thinking?)

Indemnification clause:

12.1 To the maximum extent permitted by law, you agree to defend, indemnify and hold harmless Google, its affiliates and their respective directors, officers, employees and agents from and against any and all claims, actions, suits or proceedings, as well as any and all losses, liabilities, damages, costs and expenses (including reasonable attorneys fees) arising out of or accruing from (a) your use of the SDK, (b) any application you develop on the SDK that infringes any copyright, trademark, trade secret, trade dress, patent or other intellectual property right of any person or defames any person or violates their rights of publicity or privacy, and (c) any non-compliance by you with this License Agreement.

Usage restriction:

3.1 Subject to the terms of this License Agreement, Google grants you a limited, worldwide, royalty-free, non-assignable and non-exclusive license to use the SDK solely to develop applications to run on the Android platform.

3.3 You may not use the SDK for any purpose not expressly permitted by this License Agreement. Except to the extent required by applicable third party licenses, you may not: (a) copy (except for backup purposes), modify, adapt, redistribute, decompile, reverse engineer, disassemble, or create derivative works of the SDK or any part of the SDK; or (b) load any part of the SDK onto a mobile handset or any other hardware device except a personal computer, combine any part of the SDK with other software, or distribute any software or device incorporating a part of the SDK.

If you know the URLs, you can still direct-download some of the binaries which don't embed the license, but all this feels fishy. GNU licensing didn't answer me (yet). Maybe debian-legal has an opinion?

In any case, the difficulty to reproduce the *DK builds is worrying enough to warrant an independent rebuild.

Did you notice this?

Categories: Elsewhere

Niels Thykier: With 3 months of automatic decrufting in unstable

Planet Debian - Mon, 21/09/2015 - 22:57

After 3 months of installing an automatic decrufter in DAK, it:

  • has removed 689 cruft items from unstable and experimental
    • average removal rate being just shy of 230 cruft items/month
  • has become the “top 11th remover”.
  • is expected to become top 10 in 6 days from now and top 9 in 10 days.
    • This is assuming a continued average removal rate of 7.6 cruft items per day

On a related note, the FTP masters have removed 28861 items between 2001 and now, an average of 2061 items a year (not accounting for the current year still being open). Intriguingly, though, in 2013 and 2014 the FTP masters removed 3394 and 3342 items respectively. With the (albeit limited) stats from the auto-decrufter, we can estimate that about 2700 of those were cruft items.
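The averages quoted above are easy to double-check. A quick sanity calculation using only the numbers stated in this post:

```python
# Sanity-check of the removal statistics quoted above (figures from the post).
total_removals = 28861        # FTP master removals since 2001
full_years = 2015 - 2001      # the current, still-open year is not counted
avg_per_year = total_removals // full_years
print(avg_per_year)           # 2061 items a year, matching the post

cruft_removed = 689           # auto-decrufter removals in its first 3 months
print(cruft_removed / 3)      # just shy of 230 cruft items/month
```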

One could certainly also check the removal messages and check for the common “tags” used in cruft removals.  I leave that as an exercise to the curious readers, who are not satisfied with my estimate. :)

Filed under: Debian, Release-Team
Categories: Elsewhere

Jonathan McDowell: Getting a Dell E7240 working with a dock + a monitor

Planet Debian - Mon, 21/09/2015 - 22:29

I have a Dell E7240. I'm pretty happy with it - my main complaint is that it has a very shiny screen, and that seems to be because it's the touchscreen variant. While I don't care about that feature, I do care about the fact that it means I get FullHD in 12.5".

Anyway. I've had issues with using a dock and an external monitor with the laptop for some time, including getting as far as mentioning the problems on the appropriate bug tracker. I've also had discussions with a friend who has the same laptop with the same issues, and who has spent some time trying to get it to work reliably. However, up until this week I haven't had a desk I sit at for any length of time to use the laptop, so it's always been low priority for me. Today I sat down to try and figure out if there had been any improvement.

Firstly I knew the dock wasn’t at fault. A Dell E6330 works just fine with multiple monitors on the same dock. The E6330 is Ivybridge, while the E7240 is Haswell, so I thought potentially there might be an issue going on there. Further digging revealed another wrinkle I hadn’t previously been aware of; there is a DisplayPort Multi-Stream Transport (MST) hub in play, in particular a Synaptics VMM2320. Dell have a knowledge base article about Multiple external display issues when docked with a Latitude E7440/E7240 which suggests a BIOS update (I was already on A15) and a firmware update for the MST HUB. Sadly the firmware update is Windows only, so I had to do a suitable dance to be able to try and run it. I then discovered that the A05 update refused to work, complaining I had an invalid product ID. The A04 update did the same. The A01 update thankfully succeeded and told me it was upgrading from 2.00.002 to 2.15.000. After that had completed (and I’d power cycled to switch to the new firmware) I tried A05 again and this time it worked and upgraded me to 2.22.000.

Booting up Linux again I got further than before; it was definitely detecting that there was a monitor but it was very unhappy with lots of [drm:intel_dp_start_link_train] *ERROR* too many full retries, give up errors being logged. This was with 4.2, and as I’d been meaning to try 4.3-rc2 I thought this was a good time to give it a try. Lo and behold, it worked! Even docking and undocking does what I’d expect, with the extra monitor appearing / disappearing as you’d expect.

Now, I’m not going to assume this means it’s all happy, as I’ve seen this sort-of work in the past, but the clue about MST, the upgrade of that firmware (and noticing that it made things better under Windows as well) and the fact that there have been improvements in the kernel’s MST support according to the post 4.2 log gives me some hope that things will be better from here on.

Categories: Elsewhere

Red Crackle: Traits

Planet Drupal - Mon, 21/09/2015 - 21:51
In this blog post, you'll learn what traits are in the PHP language. You will also learn when to use them and how they can help with code reuse.
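As a quick illustration of the concept the post covers, here is a minimal PHP trait: a piece of shared behavior mixed into otherwise unrelated classes. The class and method names are invented for the example.

```php
<?php
// Minimal illustration of a PHP trait: shared behavior mixed into
// otherwise unrelated classes (names are illustrative).
trait Greets {
  public function greet() {
    return 'Hello from ' . get_class($this);
  }
}

class Node {
  use Greets;
}

class Comment {
  use Greets;
}

echo (new Node())->greet(), "\n";     // prints "Hello from Node"
echo (new Comment())->greet(), "\n";  // prints "Hello from Comment"
```

Unlike inheritance, `use Greets;` copies the method into each class, so Node and Comment need no common parent to share the behavior.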
Categories: Elsewhere

Daniel Pocock: Skype outage? reSIProcate to the rescue!

Planet Debian - Mon, 21/09/2015 - 19:19

On Friday, the reSIProcate community released the latest beta of reSIProcate 1.10.0. One of the key features of the 1.10.x release series is support for presence (buddy/status lists) over SIP, the very thing that is currently out of action in Skype. This is just more proof that free software developers are always anticipating users' needs in advance.

reSIProcate 1.10.x also includes other cool things like support for PostgreSQL databases and Perfect Forward Secrecy on TLS.

Real free software has real answers

Unlike Skype, reSIProcate is genuine free software. You are free to run it yourself, on your own domain or corporate network, using the same service levels and support strategies that are important for you. That is real freedom.

Not sure where to start?

If you have deployed web servers and mail servers but you are not quite sure where to start deploying your own real-time communications system, please check out the RTC Quick Start Guide. You can read it online or download the PDF e-book.

Is your community SIP and XMPP enabled?

The Debian community has a federated SIP service, supporting standard SIP and WebRTC at rtc.debian.org for all Debian Developers. XMPP support was tested at DebConf15 and will be officially announced very soon now.

A similar service has been developed for the Fedora community and it is under evaluation at fedrtc.org.

Would you like to extend this concept to other free software and non-profit communities that you are involved in? If so, please feel free to contact me personally for advice about how you can replicate these successful initiatives. If your community has a Drupal web site, then you can install everything using packages and the DruCall module.

Comment and discuss

Please join the Free-RTC mailing list to discuss or comment

Categories: Elsewhere

Drupal.org Featured Case Studies: SooperThemes Drupal Themes

Planet Drupal - Mon, 21/09/2015 - 18:01
Completed Drupal site or project URL: http://www.sooperthemes.com/#-Drupal-Themes

SooperThemes is a theme shop, selling premium Drupal themes. SooperThemes developed their sixth Drupal re-design to go with a completely new product line, based on our Glazed Drag and Drop theme.

As the oldest active Drupal themes shop, SooperThemes has been selling designs and contributing code to the community since 2007. We have used Drupal 6 with Ubercart and Drupal 7 with Commerce. For our newest website, we started with a clean slate and browsed the Drupal ecosystem for the most effective and maintainable tools to build the home for our new themes.

Traditionally, we have always used custom themes for our own website. This time, we built our website entirely with our own product. Our new theme is as much about its Drag and Drop site-building tools as it is about design. We could create a unique design by editing the many settings in the Glazed theme. Thereafter, we could design pages with the integrated visual drag and drop page builder. At no point did we miss Photoshop or our favorite code editor. Photography, text, and responsive design combine intuitively in the page builder, and everything is mobile-friendly out of the box.

Key modules/theme/distribution used: Drupal CMS Bootstrap 3 Profile, Bootstrap, Model Entities, Recurly
Organizations involved: SooperThemes
Team members: JurriaanRoelofs
Categories: Elsewhere

Lunar: Reproducible builds: week 21 in Stretch cycle

Planet Debian - Mon, 21/09/2015 - 17:33

What happened in the reproducible builds effort this week:

Media coverage

Nathan Willis covered our DebConf15 status update in Linux Weekly News. Access for non-LWN subscribers will open on Thursday the 24th.

Linux Journal published a more general piece last Tuesday.

Unexpected praise for reproducible builds appeared this week in the form of several iOS applications identified as including spyware. The malware went undetected by Apple's screening because application developers had simply downloaded a trojaned version of Xcode from an unofficial source. While reproducible builds can't really help users of non-free software, this is exactly the kind of attack that we are trying to prevent in our systems.

Toolchain fixes

Niko Tyni wrote and uploaded a better patch for the source order problem in libmodule-build-perl.

Tristan Seligmann identified how the code generated by python-cffi could be emitted in random order in some cases. Upstream has already fixed the problem.

Packages fixed

The following 24 packages became reproducible due to changes in their build dependencies: apache-curator, checkbox-ng, gant, gnome-clocks, hawtjni, jackrabbit, jersey1, libjsr305-java, mathjax-docs, mlpy, moap, octave-geometry, paste, pdf.js, pyinotify, pytango, python-asyncssh, python-mock, python-openid, python-repoze.who, shadow, swift, tcpwatch-httpproxy, transfig.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues but not all of them:

Patches submitted which have not made their way to the archive yet:


Tests for Coreboot, OpenWrt, NetBSD, and FreeBSD now run weekly (instead of monthly).

diffoscope development

Python 3 offers new features (namely yield from and concurrent.futures) that could help implement parallel processing. The clear separation of bytes and unicode strings is also likely to reduce encoding related issues.

Mattia Rizzolo thus kicked off the effort of porting diffoscope to Python 3. tlsh was the only dependency missing a Python 3 module; this was quickly fixed by a new upload.

The rest of the code has been ported to the point where only incompatibilities between Python 2.7 and Python 3.4 had to be changed. The commit stream still requires some cleanup, but all tests now pass under Python 3.
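As a hedged illustration of why those features matter (this is not diffoscope's actual code, and `compare` is a made-up stand-in), `concurrent.futures` makes it straightforward to run independent comparisons in parallel:

```python
# Sketch of the parallelism concurrent.futures enables; compare() is a
# stand-in function, not diffoscope's real comparison logic.
from concurrent.futures import ThreadPoolExecutor

def compare(pair):
    a, b = pair
    # Pretend "difference" metric: how much the lengths differ.
    return abs(len(a) - len(b))

pairs = [("abc", "a"), ("xy", "xyz"), ("same", "same")]
with ThreadPoolExecutor(max_workers=4) as pool:
    # map() preserves input order even though work runs concurrently.
    results = list(pool.map(compare, pairs))
print(results)  # [2, 1, 0]
```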

Documentation update

The documentation on how to assemble the weekly reports has been updated. (Lunar)

The example on how to use SOURCE_DATE_EPOCH with CMake has been improved. (Ben Boeckel, Daniel Kahn Gillmor)

The solution for timestamps in man pages generated by Sphinx now uses SOURCE_DATE_EPOCH. (Mattia Rizzolo)
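As a hypothetical illustration of the mechanism (the date shown is arbitrary), a build script can pin SOURCE_DATE_EPOCH so that any tool embedding "now" embeds the same timestamp on every rebuild:

```shell
# Hypothetical example: pin SOURCE_DATE_EPOCH to a fixed UTC date so that
# tools embedding the current time embed the same value on every rebuild.
SOURCE_DATE_EPOCH=$(date -u -d "2015-09-21 15:33:00" +%s)
export SOURCE_DATE_EPOCH
echo "$SOURCE_DATE_EPOCH"
```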

Package reviews

45 reviews have been removed, 141 added and 62 updated this week.

67 new FTBFS reports have been filed by Chris Lamb, Niko Tyni, and Lisandro Damián Nicanor Pérez Meyer.

New issues added this week: randomness_in_r_rdb_rds_databases, python-ply_compiled_parse_tables.


The prebuilder script is now properly testing umask variations again.

Santiago Vila started a discussion on debian-devel on how binNMUs would work for reproducible builds.

Categories: Elsewhere

Microserve: Responsive Design: Media Queries And How To Make Them More Friendly

Planet Drupal - Mon, 21/09/2015 - 17:33
Responsive Design: Media Queries And How To Make Them More Friendly

Sep 21st 2015

In building responsive websites, whether Drupal or otherwise, we all need to harness the power of media queries. 
Here is a brief overview of media queries, breakpoints and the basics of the ‘Mobile First’ approach, along with some simple tricks for making media queries friendlier, more semantic and easier to remember and use, with LESS and SASS variables.

Media queries - The basics

A CSS Media Query (originally a ‘media rule’ in CSS2, renamed ‘media query’ in CSS3) is a CSS statement which targets a specific media type and/or size range. Inside a media query statement we can nest a collection of CSS declarations which will style content only when the conditions of that parent media query statement are true.

So, if a media query targets a media type of ‘screen’ with a min-width of 320px, its nested CSS declarations will only take effect on devices with screens (smartphones, tablets, computers etc.), and only if that screen is at least 320px wide.
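Expressed in CSS, such a query might look like this (the selector and declaration are arbitrary examples of my own, not from the article):

```css
/* Applies only on screen media at least 320px wide */
@media screen and (min-width: 320px) {
  .sidebar {
    display: block;
  }
}
```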


Breakpoints

In responsive design, a ‘breakpoint’ is a declared pixel width at which the layout of a web page or entire site should adjust (respond) to better display its content on the screen size of the device it is being viewed on. Commonly designers and developers set breakpoints at 2 or 3 size ranges to separately target mobile, tablet and larger desktop/laptop computers.

There’s no actual limit (other than common sense) to the amount of breakpoints you could add, but these 3 are a pretty good rule of thumb and cover most needs.

The beauty of the system is that it encourages us to target screen size ranges (all screens between ‘these sizes’) and not specific device types, so when a new model of smartphone comes out (for instance), we can be pretty sure the site will appear correctly as the device’s screen will be of a size that falls within an existing range that we are already targeting.

Mobile First? (min vs max)

The first thing to decide is in which direction you want your media queries to work.

By this I mean whether you want to set max-width or min-width breakpoints. 

You can of course use a mixture of both, but here we’re all about making things more straightforward, so I would avoid ‘crossing the streams’ and stick to one strategy.

Max Width breakpoints:

Everything from zero (or the last, narrower breakpoint) up to ‘this width’.
Used mostly in ‘Desktop First’ and ‘Retro-Fitted Mobile’ builds.

Min Width breakpoints:

Everything above ‘this width’ to infinity, or until overridden by another, wider breakpoint.
Used mostly in modern ‘Mobile First’ builds.

‘Mobile First’ is widely accepted as the best way to approach responsive design, so I would advise the use of Min Width breakpoints.

You can find much more in-depth information about Mobile First from the person regarded as its original architect, Luke Wroblewski.

In Mobile First theory we start without any breakpoints or media queries, in essence targeting all screen sizes from 1px wide to infinity. Here we set all default, site-wide styles as well as any mobile-specific styles, so that all screens, no matter how small, are covered. We then create media query breakpoints at which we override the default styles for layouts or elements which need to appear differently on larger screens. (As opposed to styling the whole site for desktop and then overriding for smaller screens.)
This can take a little getting used to, and can feel a bit ‘backwards’ for developers used to building desktop sites who may still consider mobile a secondary, ‘bolt-on’ consideration, but it’s surprising how easy the switch in thinking can be.

Example of a Mobile First media query:

body {
  background: white;
}

@media screen and (min-width: 768px) {
  body {
    background: black;
  }
}

In this example, the body of the document will have a white background on all screens narrower than 768px - this is its default ‘Mobile First’ setting.
Then on all screens 768px and wider, the body will have a black background.

Which and how many breakpoints to choose?

You could in theory, create a mobile first website without any media queries or breakpoints.

You could serve the same mobile layout to all screen sizes, but because in most cases a mobile layout consists of 100%-width, single-column elements, it would become very difficult to use and read on larger screens. So the bare minimum we would usually consider is one breakpoint to distinguish between mobile and more traditional desktop layouts.

The size of 768px used in the example above would work well for this, as our site would show mobile layout on mobiles and many tablets in portrait orientation, then on tablets in landscape orientation, laptops and desktops, another layout could be displayed.

I use Bootstrap 3 as my main front-end framework and ‘out of the box’ Bootstrap 3 has 3 defined breakpoints:

sm - 768px and up to...
md - 992px  and up to...
lg - 1200px  and up to infinity

This covers our Mobile First defaults up to 767px, a tablet aimed size range of 768px-991px, a traditional desktop width of 992px-1199px and a more modern, wider desktop width of 1200px and above.
*Personally, I usually add a smaller, custom breakpoint of around 480px, because in practice I find the 0-767px range, before the first ‘sm’ breakpoint kicks in, can sometimes be a bit wide for every part of my layout. Interestingly, one of the promised updates in the imminent release of Bootstrap 4 is that it will come with a similar ‘extra small’ breakpoint as standard.
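Such a custom extra-small breakpoint might look like this in LESS (the 480px value, the @mobile-lg name and the .hero-title selector are my own illustrative choices, not part of Bootstrap 3):

```less
// Hypothetical custom breakpoint, sitting below Bootstrap's 'sm'
@mobile-lg: ~"screen and (min-width: 480px)";

.hero-title {
  font-size: 18px;    // mobile-first default for the narrowest screens
  @media @mobile-lg {
    font-size: 24px;  // larger phones (480px and up)
  }
}
```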


LESS and SASS Variables - Making media queries more user friendly

Although not the most complicated coding syntax in the world, media queries can be hard to keep track of once you’ve got 3 or 4 different sizes to contend with. Having to remember the precise pixel width you are targeting, and in which direction, can become surprisingly complicated, especially when we start using inline queries (more on that later). This is where LESS and SASS variables can be a godsend.

Here’s a LESS example:

In variables.less (or similar) create a variable called ‘@tablet’. *Notice we use a tilde (~) in front of our value string so that the quote marks are not included when compiled into CSS. We need quotes around the variable value because LESS syntax requires this when a value contains a string of different tokens, but once compiled to CSS they are not needed.

@tablet: ~"screen and (min-width: 768px)";

Now instead of using

@media screen and (min-width: 768px) {
  body {
    background: black;
  }
}

We can use the variable to target 768px and above.

@media @tablet {
  body {
    background: black;
  }
}

In SASS the same principle applies, but the syntax is slightly different. The variable is denoted by ‘$’ instead of ‘@’, and rather than adding a tilde to interpolate the value string at the point where the variable is declared, we wrap the variable name in #{} at the point where it is referenced in our code to achieve the same thing:

$tablet: "only screen and (min-width: 768px)";

We can then use:

@media #{$tablet} {
  body {
    background: black;
  }
}

‘Nested media queries’

So that gives us a method of creating more memorably named variables to use in place of full-length media queries, but it can still feel a bit disjointed and counter-intuitive to set initial global declarations at one point in a file and then keep jumping to another point, or to another file altogether, to set our media query overrides. Also imagine you only want to override an attribute value a few levels down within a LESS or SASS nest: for our media query override to work, we would have to include the entire nest up to the point at which the value we want to change appears.

Say we want to change the color of all links on screens above 768px wide, but only if they are inside paragraphs, which are inside a div with the class of ‘thisdiv’, which are in turn inside a div with an ID of ‘container’.

The original global code might look like this:

#container {
  .thisdiv {
    p {
      a {
        color: red;
      }
    }
  }
}

Our media query override would need to include this whole nest (LESS version shown):

@media @tablet {
  #container {
    .thisdiv {
      p {
        a {
          color: blue;
        }
      }
    }
  }
}

That’s a heck of a lot of nesting to keep on top of, and if we add a new nesting level in our original declaration, say between '#container' and '.thisdiv', we would also have to update the nesting in our media query.
Luckily, current versions of both LESS and SASS allow media queries to be nested, so that they can essentially be used inline within existing declarations.

So with the above example instead of having to re-write the entire nest in a separated part of our file or in a separate file, we can just specify the exact element we want to override from within the original global declaration.

So instead of having to write:

#container {
  .thisdiv {
    p {
      a {
        color: red;
      }
    }
  }
}

@media @tablet {
  #container {
    .thisdiv {
      p {
        a {
          color: blue;
        }
      }
    }
  }
}

We can just write:

#container {
  .thisdiv {
    p {
      a {
        color: red;

        @media @tablet {
          color: blue;
        }
      }
    }
  }
}

*Notice the media query is nested inside the ‘a’ selector itself, after its initial, global value.

Bootstrap 3 specific queries

In Bootstrap 3 we can go a little further to make sure we fully integrate our media query variables into the framework. Bootstrap depends on a grid system which is based around a collection of pre-existing breakpoints.

Although most users leave these existing breakpoints where they are, if you do need or want to change them to custom values of your own, you can do so by identifying and changing the corresponding variables in variables.less. For this reason it makes sense for our media queries to use these existing variables as their values.

Why? Because if we (or anyone else) updates the value of the grid breakpoint widths, our media query variables will then in turn automatically receive these new values.

Here are our friendly-named Bootstrap LESS variables:

@mobile-lg: ~"screen and (min-width: @{screen-xs-min})";
@tablet: ~"screen and (min-width: @{screen-sm-min})";
@normal: ~"screen and (min-width: @{screen-md-min})";
@wide: ~"screen and (min-width: @{screen-lg-min})";

I usually place these at the top of variables.less for easy reference.

So to reiterate: instead of explicitly stating ‘768px’ as the value for min-width in the @tablet variable, we reference the existing Bootstrap variable ‘@{screen-sm-min}’. This means that if the integral Bootstrap grid variable is updated, our media query variable and all dependent references will use the new value automatically.

Other Front End Frameworks Are Available!

There are of course front-end frameworks available other than Bootstrap, such as Foundation by Zurb, as well as some Drupal-specific responsive grid-based systems like AdaptiveTheme and Omega. I only cover Bootstrap here as an example, because it is currently the most popular framework around, and not least because it's what I personally use most and know best. All of the principles up to the Bootstrap-specific queries should be just as relevant whichever framework you use, or indeed whether you use one at all.

Further reading

The above is intended as a top-level overview of media queries and their application. For more in-depth information on the technologies and concepts used, I’d recommend the following further reading:

W3Schools: Media Queries
Luke Wroblewski - Architect Of Mobile First
Bootstrap 3

Written by: Martin White, Drupal Themer

Microserve is a Drupal Agency based in Bristol, UK. We specialise in Drupal Development, Drupal Site Audits and Health Checks, and Drupal Support and Maintenance. Contact us for further information.

Categories: Elsewhere

Drupalize.Me: Meet Front-end Developer Kris Bulman

Planet Drupal - Mon, 21/09/2015 - 15:00

We interview Kris Bulman about what it means to be a front-end developer and share advice from his experience. Kris has been theming Drupal sites and working on the front-end since 2007. He's done a lot of work in the open source world including being a co-maintainer of Drupal’s Zen theme and building his own Sass grid system.

Categories: Elsewhere

Wunderkraut blog: How to make DrupalCon even better?

Planet Drupal - Mon, 21/09/2015 - 11:46

You may have noticed that after three years of being the primary sponsor for DrupalCon Europe we've chosen to skip the traditional sponsorship this year. Many people have asked why, so here’s the answer.

For a long time we treated DrupalCon in Europe as a marketing investment. We did this even though it didn't create nearly enough leads to justify the investment of time and money. In real life it was an HR investment: our staff liked it and it helped in our hiring. And DrupalCon is an awesome event that we are proud to support, so could we support it in a better way?

There is a real difference between DrupalCons, and in fact technical conferences in general, in Europe and the US. Customers don't go to technical events in Europe; the participant profile there is much more technical. This makes DrupalCon in Europe a very attractive marketing opportunity for hosting providers, ISVs and other parties who market to developers and Drupal shops, but less so for Drupal shops themselves.

After DrupalCon Amsterdam we sat down with the DA in order to come up with a better way to support the event. Traditional sponsorship wasn't working for us and we wanted to help improve the event. Our own staff is a big part of the Drupal community, so naturally we asked them what would make DrupalCon even better. After considering quite a few different ideas, and learning a lot about the strange limitations conference venues have in the process, we decided to make contributing to Drupal more comfy. We do this by bringing loads of beanbag chairs to the venue.

Naturally we paid the DA for the privilege, and in order to keep our financial support on the same level as before, we also decided to be the first company to sign up for the new signature-level supporting partnership. We have more ideas in store for future years, and I would also like to hear your take on how to make DrupalCon even better. Is there something we could do to help?
Categories: Elsewhere

Norbert Preining: International Sad Hits Volume 1

Planet Debian - Mon, 21/09/2015 - 07:22

I stumbled over this CD a few weeks ago, and immediately ordered it from a second-hand dealer in the US: International Sad Hits – Volume 1: Altaic Group. Four artists from different countries (2x Japan, Korea, Turkey) and very different musical styles, but connected in one thing: they don't fit into the happy-peppy culture of AKB48, JPOP, KPOP and the like, being singers and songwriters who probe the depths of sadness. With two of my favorite Japanese artists appearing in the list (Tomokawa Kazuki and Mikami Kan), there was no way I could not buy this CD.

The four artists combined on this excellent CD are as follows. The first is Fikret Kızılok, a Turkish musician, singer and songwriter. Quoting from the pamphlet:

However, in 1983 Kızılok returned with one of his most important and best albums: Zaman Zaman (Time to time). […] These albums presented an alternative to the horrible pop scene emerging in Turkey — they criticized the political situation, the so-called intellectuals, and the pop stars.

The Korean artist is 김두수 (Kim Doo Soo), who was the great surprise of the CD for me. The sadness and beauty transmitted through his music are special. The pamphlet of the CD states:

The strictness of his home atmosphere suffocated him, and in defiance against his father he dropped out, and walked along the roads. He said later that, “Hatred towards the world, and the emptiness of life overwhelmed me. I lived my life with alcohol every day.”

Problems during a political crisis, and fierce reactions to his song “Bohemian” (included here), made him disappear into the mountains for 10 years, only to return with even better music.

The third artist is one of my big favorites, Tomokawa Kazuki (Official site, Wikipedia) from Japan. I have written often about Tomokawa, but found a very fitting description in the booklet:

Author Tatematsu Wahei has described Tomokawa as “a man standing naked, his sensibility utterly exposed and tingling.” It’s an accurate response to a creativity that seems unmediated by embarrassment, voraciously feeding off the artist’s personal concern.

The fourth artist is again from Japan: Mikami Kan, a well-known wild man of the Japanese music scene. After his debut at the Nakatsugawa All-Japan Folk Jamboree in 1971, he was something like a star for some years, but without a record deal his popularity steadily decreased.

During this period his songwriting gradually altered, becoming more dense, surreal and uncompromisingly personal.

Every song on this CD is a masterpiece in itself, but despite my being a great fan of Tomokawa, my favorite here is Kim Doo Soo, with songs that grip your heart and soul, stunningly beautiful and sad at the same time.

Categories: Elsewhere

Matthew Garrett: The Internet of Incompatible Things

Planet Debian - Sun, 20/09/2015 - 23:22
I have an Amazon Echo. I also have a LIFX Smart Bulb. The Echo can integrate with Philips Hue devices, letting you control your lights by voice. It has no integration with LIFX. Worse, the Echo developer program is fairly limited - while the device's built in code supports communicating with devices on your local network, the third party developer interface only allows you to make calls to remote sites[1]. It seemed like I was going to have to put up with either controlling my bedroom light by phone or actually getting out of bed to hit the switch.

Then I found this article describing the implementation of a bridge between the Echo and Belkin Wemo switches, cunningly called Fauxmo. The Echo already supports controlling Wemo switches, and the code in question simply implements enough of the Wemo API to convince the Echo that there's a bunch of Wemo switches on your network. When the Echo sends a command to them asking them to turn on or off, the code executes an arbitrary callback that integrates with whatever API you want.
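As a rough sketch (my own illustration, not the actual Fauxmo code, which also implements UPnP discovery and the Wemo wire protocol), the bridge idea boils down to mapping each fake switch's on/off requests to arbitrary callbacks:

```python
# Illustrative sketch of the Fauxmo bridge idea: each fake "Wemo" switch
# dispatches the Echo's on/off commands to an arbitrary callback.
# (Hypothetical code; names and structure are my own.)

class FakeSwitch:
    def __init__(self, name, on_cb, off_cb):
        self.name = name
        self._callbacks = {"on": on_cb, "off": off_cb}

    def handle(self, action):
        # Called when the Echo asks this "switch" to turn on or off.
        return self._callbacks[action]()

# Pretend backend state for a LIFX-style bulb.
bulb_state = {"power": False}
bedroom = FakeSwitch(
    "bedroom light",
    on_cb=lambda: bulb_state.update(power=True),
    off_cb=lambda: bulb_state.update(power=False),
)

bedroom.handle("on")
print(bulb_state["power"])  # True
```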

This seemed like a good starting point. There's a free implementation of the LIFX bulb API called Lazylights, and with a quick bit of hacking I could use the Echo to turn my bulb on or off. But the Echo's Hue support also allows dimming of lights, and that seemed like a nice feature to have. Tcpdump showed that asking the Echo to look for Hue devices triggered UPnP discovery requests similar to those it sends when looking for Wemo devices, so extending the Fauxmo code seemed plausible. I signed up for the Philips developer program and then discovered that the terms and conditions explicitly forbade using any information on their site to implement any kind of Hue-compatible endpoint. So that was out. Thankfully enough people have written their own Hue code at various points that I could figure out enough of the protocol by searching Github instead, and now I have a branch of Fauxmo that supports searching for LIFX bulbs and presenting them as Hues[2].

Running this on a machine on my local network is enough to keep the Echo happy, and I can now dim my bedroom light in addition to turning it on or off. But it demonstrates a somewhat awkward situation. Right now vendors have no real incentive to offer any kind of compatibility with each other. Instead they're all trying to define their own ecosystems with their own incompatible protocols with the aim of forcing users to continue buying from them. Worse, they attempt to restrict developers from implementing any kind of compatibility layers. The inevitable outcome is going to be either stacks of discarded devices speaking abandoned protocols or a cottage industry of developers writing bridge code and trying to avoid DMCA takedowns.

The dystopian future we're heading towards isn't Gibsonian giant megacorporations engaging in physical warfare, it's one where buying a new toaster means replacing all your lightbulbs or discovering that the code making your home alarm system work is now considered a copyright infringement. Is there a market where I can invest in IP lawyers?

[1] It also requires an additional phrase at the beginning of a request to indicate which third party app you want your query to go to, so it's much more clumsy to make those requests compared to using a built-in app.
[2] I only have one bulb, so as yet I haven't added any support for groups.

Categories: Elsewhere

Russ Allbery: Review: Shady Characters

Planet Debian - Sun, 20/09/2015 - 23:06

Review: Shady Characters, by Keith Houston

Publisher: W.W. Norton Copyright: 2013 ISBN: 0-393-34972-1 Format: Trade paperback Pages: 250

Subtitled The Secret Life of Punctuation, Symbols & Other Typographical Marks, Shady Characters is one of those delightfully quirky books that examines the history of something you would not normally connect to history. It's an attempt to document, and in some cases reconstruct, the history of several specific punctuation marks. If you've read and enjoyed Lynne Truss's Eats, Shoots & Leaves, this is a near-perfect complement, focusing on more obscure marks and adding in a more detailed and coherent history.

Houston has some interest in the common and quotidian punctuation marks, with chapters on the hyphen, dash, and quotation mark, but he avoids giving a full-chapter treatment to the most obvious periods and commas. (Although one learns quite a bit about them as asides in other histories.) The rest of the book focuses on the newly-popular (the at symbol), the recognizable but less common (the hash mark, the asterisk and dagger symbols used for footnotes, and the ampersand), and the historical but now obscure (the pilcrow or paragraph mark, and the manicule or pointing finger). He even devotes two chapters to unsuccessful invented punctuation: the interrobang and the long, failed history of irony and sarcasm punctuation.

And this is an actual history, not just a collection of anecdotes and curious facts. Houston does a great job of keeping the text light, engaging, and readable, but he's serious about his topic. There are many reproductions of ancient documents showing early forms of punctuation, several serious attempts to adjudicate between competing theories of origin, a few well-marked and tentative forays into guesswork, and an open acknowledgment of several areas where we simply don't know. Along the way, the reader gets a deeper feel for the chaos and arbitrary personal stylistic choices of the days of hand-written manuscripts and the transformation of punctuation by technology. So much of what we now use in punctuation was shaped and narrowed by the practicalities of typesetting. And then modern technology revived some punctuation, such as the now-ubiquitous at sign, or the hash mark that graces every telephone touchpad.

I think my favorite part of this history was using punctuation as perspective from which to track the changing relationship between people and written material. It's striking how much early punctuation was primarily used for annotations and commentary on the text, as opposed to making the text itself more readable. Much early punctuation was added after the fact, and then slowly was incorporated into the original manuscript, first via recopying and then via intentional authorial choice. Texts started including their own pre-emptive commentary, and brought the corresponding marks along with the notes. And then printing forced a vast simplification and standardization of punctuation conventions.

Full compliments to W.W. Norton, as well, for putting the time and effort into making the paper version of this book a beautiful artifact. Punctuation is displayed in red throughout when it is the focus of the text. Endnotes are properly split from footnotes, with asides and digressions presented properly at the bottom of the page, and numbered endnotes properly reserved solely for citations. There is a comprehensive index, list of figures, and a short and curated list of further readings. I'm curious how much of the typesetting care was preserved in the ebook version, and dubious that all of it would have been possible given the current state of ebook formatting.

Typesetting, obscure punctuation, and this sort of popular history of something quotidian are all personal interests of mine, so it's unsurprising I found this book delightful. But it's so well-written and engaging that I can recommend it even to people less interested in those topics. The next time you're in the mood for learning more about a corner of human history that few people consider, I recommend Shady Characters to your attention.

Rating: 8 out of 10

Categories: Elsewhere

