Elsewhere

Sylvestre Ledru: Rebuild of Debian using Clang 3.5.0

Planet Debian - Thu, 11/09/2014 - 14:17

Clang 3.5.0 has just been released. A new rebuild has been done to highlight the progress toward getting Debian built with clang.

tl;dr: Great progress. The failure rate dropped from 9.5% to 5.7%. Full results are available on http://clang.debian.net

At the time of the rebuild with 3.4.2, we had 2040 packages failing to build with clang. With 3.5.0, this dropped to 1261 packages.

Fixes

With Arthur Marble and Alexander Ovchinnikov, both GSoC students, we worked on various ways to decrease the number of errors.

Upstream fixes

First, the most obvious way: we fixed programming bugs/mistakes in upstream sources. Basically, we took categories of failures and fixed issues one after the other. We started with simple bugs like 'Wrong main declaration', 'non-void function should return a value' or 'void function should not return a value'.

These are trivial to fix; a sketch follows below. We continued with harder fixes like 'Undefined reference' or 'Variable length array for a non-POD (plain old data) element'.
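
As a minimal illustration of the first two categories (a hypothetical file, not taken from the rebuild logs), both lines below are accepted by gcc with warnings by default, while clang is stricter and, depending on version and flags, rejects at least the second outright:

cat > returns.c <<'EOF'
int f(void) { }            /* non-void function should return a value */
void g(void) { return 1; } /* void function should not return a value */
EOF
gcc -c returns.c    # builds, with warnings
clang -c returns.c  # stricter diagnostics; may fail the build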

Besides these, we also worked on:


In total, we reported 295 bugs with patches. 85 of them have been fixed (meaning that the Debian maintainer uploaded a new version with the fix).

In parallel, I think that the switch to Clang by FreeBSD and Mac OS X also helped upstreams fix various issues.

Hacking in clang

As a parallel approach, we started to implement a suggestion from Linus Torvalds and a few others: instead of trying to fix everything upstream, we updated clang, where we could, to improve its gcc compatibility.

gcc has many flags to enable or disable optimizations. Some of them are legacy, others make no sense in clang, etc. Instead of failing with an error, we created a new category of warnings in clang (showing "optimization flag '%0' is not supported") and moved all relevant flags into it. Some examples: r212805, r213365, r214906 and r214907.

We also updated clang to silently ignore some unnecessary arguments like -finput_charset=UTF-8 (r212110), since clang is UTF-8 compliant anyway.

Finally, we worked on the forwarding of linker flags. Clang and gcc behave very differently here: when gcc does not recognize an argument, it forwards the argument to the linker; clang, in the same case, rejects the argument and fails with an error. In clang, we have to explicitly declare which arguments are to be forwarded to the linker. Of course, the correct way to pass arguments to the linker is to use -Xlinker or -Wl, but the Debian rebuild proved that these shortcuts are used. Two of these arguments are now forwarded (see the sketch after the list):

  • -z keyword - r213198
  • -u Force symbol to be entered in the output file as an undefined symbol - r211756. This one fixed most of the haskell build failures. It fixed the most common issue that we had (701 occurrences, although this does not mean that all these packages build fine now; some haskell-based packages fail later in the process)
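
To make the difference concrete, here is a hedged sketch (object and output names are hypothetical, and the exact behavior depends on the clang revision in use):

clang -z relro -o app main.o                    # rejected by clang before the forwarding patches
clang -Wl,-z,relro -o app main.o                # explicit forwarding, always correct
clang -Xlinker -z -Xlinker relro -o app main.o  # equivalent to the -Wl form
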
New errors

Just like in previous releases, new warnings have been added to clang. Combined with (bad) usage of -Werror by upstream software, these cause new build failures:

I also took the opportunity to add some further categorizations in the list of errors. Some examples:

Next steps

With the Debile project close to ready thanks to Clément Schreiner's GSoC, we will now have an automatic and transparent way to rebuild packages using clang.

Conclusion

As stated, we can see a huge drop in the number of failures over time:

Hopefully, with Clang getting better and better, and with more and more projects adopting it as their default compiler or as a base for plugin/extension development, this percentage will continue to decrease.
Having some kind of release goal with clang for Jessie+1 can now be considered potentially reachable.

Want to help?

There are several things which can be done to help:

  • Point me to common error patterns in the Not categorized list of errors so that new categories can be created
  • Report and fix packages
  • As an upstream, integrate clang as part of your continuous integration system
  • Hack on cqa-scanlogs, the error-detection tool, to detect more error patterns (example: Undetected error). This tool is also used for the regular rebuilds of the archive.
  • Improve clang.debian.net website
Acknowledgments

Thanks to David Suarez for the rebuilds of the archive, Arthur Marble and Alexander Ovchinnikov for their GSoC works and Nicolas Sévelin-Radiguet for the few fixes.

Original post blogged on b2evolution.

Categories: Elsewhere

Drupal 8 Rules: #d8rules updates, BoF & sprints at DrupalCon Amsterdam

Planet Drupal - Thu, 11/09/2014 - 13:05

Hello everyone!

DrupalCon Amsterdam is coming close and we are working hard to get Milestone 1 for porting the Rules module to Drupal 8 done. Here's an outline of where you can join fago, klausi, nico and many others for updates on the initiative, discussions and hands-on work during the sprints!

Drupal 8 Contrib Module Update (shared session)

Tuesday · 14:15 - 15:15, Room: Auditorium (Wunderkraut)
https://amsterdam2014.drupal.org/session/drupal-8-contrib-module-update

A quick, 12-minute update on the status of Rules for Drupal 8, together with Webform (by quicksketch), Display Suite (by aspilicious), Media (by daveried/slashrsm), Search API (by drunken monkey), Commerce (by bojanz), Redirect, Global Redirect, Token, Pathauto (by berdir), Panels (by japerry) & Simplenews (by miro_dietiker/ifux).

#d8rules initiative meeting (BoF)

Thursday 13:00 - 14:00, Room: Emerald Lounge E
https://amsterdam2014.drupal.org/bof/d8rules-initiative-meeting

Let's get into a deeper discussion about the #d8rules initiative in this birds-of-a-feather session. We'll inform you about the current state of funding & development and prepare you well for the sprints on Friday.

#d8rules sprints

Friday 9:00 - 18:00, Coder lounge in the venue
https://groups.drupal.org/node/427578

Sprinting at DrupalCon is THE way to learn about contributing to Drupal in general. With our training experience from DrupalCamp Alpe-Adria and Drupalaton, we know that working on Rules 8.x is a great way to get started with the new programming paradigms of Drupal 8. Join fago and klausi to port actions, fix integrations for Drupal 8 core, and get Milestone 1 done in general.

#d8rules sprints are focused mainly on Friday, but we will also be around for extended sprints before and after the conference. Check out the information about all the sprints at and around DrupalCon Amsterdam by gabor. And please don't forget to help us estimate the number of people attending by signing up in the sprints spreadsheet - thank you!


#d8rules trainings & sprints at DrupalCamp Alpe-Adria, Slovenia, May 2014

We are looking forward to meeting everyone there. As always, you can find us on IRC in #drupal-rules, and don't forget to use the Twitter hashtag #d8rules.

dasjo on behalf of the #d8rules team

Categories: Elsewhere

Matthias Klumpp: Listaller: Back to the future!

Planet Debian - Thu, 11/09/2014 - 10:14

It is time for another report on Listaller, the cross-distro 3rd-party package installer, which has now been in development for – depending on how you count – 5-6 years. This will become a longer post, so you might want to grab some coffee or tea.

The original idea

The Listaller project was initially started with the goal to make application deployment on Linux distributions as simple as possible, by providing a unified package installation format and tools which make building apps for multiple distributions easier and deployment of updates simple. The key ideas were:

  • Seamless integration of all installation steps into the system – users shouldn’t care about the origin of their application; they just handle all installed apps with the same tool and update all apps with the same interface they use for updating the system.
  • Out-of-the-box sandboxing for all 3rd-party apps
  • Easy signing and key-validation for Listaller packages
  • Simple creation of updates for developers
  • Resource-sharing: It should always be clear which application uses which library, duplicates should be avoided. The distribution-provided software should take priority, since it is often well-maintained and receives security updates.
The current state

The current release of Listaller handles all of this with a plugin for PackageKit, the cross-distro package-management abstraction layer. It hooks into PackageKit, reads the information passing through to the native distributor backend, and if it encounters Listaller software, it handles it appropriately. It can also inject update information. This results in all Listaller software being shown in any PackageKit frontend, and people can work with it just as if the packages were native packages. Listaller package installations are controlled by a machine policy, so the administrator can decide that e.g. only packages from a trusted source (= GPG signature in trusted database) can be installed. Dependencies can be pulled from the distributor’s repositories, or optionally from external sources, like PyPI.

This sounds good on paper, but the current implementation has various problems.

The issues

The current Listaller approach has some problems. The biggest one lies in the future: soon, there will be no PackageKit plugins anymore! PackageKit 1.0 will remove support for them, because they appear to be a major source of crashes; even the in-tree plugins cause problems. Also, the PackageKit service itself is currently being trimmed of unneeded features and less-used code. These changes in PackageKit are great and needed for the project (and I support these efforts), but they cause a pretty huge problem for Listaller: the project relies on the PackageKit plugin – if used without it, you lose the system-integration part, which is one of the key concepts of Listaller, and a primary goal.

But this issue is not the only one. There are more. One huge problem for Listaller is dependency-solving: it needs to know where to get software from in case it isn't installed already. And that has to be done in a cross-distribution way. This is an incredibly complex task, and Listaller contains lots of workarounds for various quirks. It contains so many hacks for distro-specific stuff that it became really hard to understand. The Listaller dependency model also became very complex, because it tried to handle many corner cases. This is bad, of course. But the workarounds weren't added for fun; they were added because that was assumed to be easier than fixing the root cause, which would have required collaboration between distributors and some changes in the stack – something that seemed unlikely to happen at the time the code was written.

The systemd effort

Another thing which affects Listaller is the latest push from the systemd team to allow cross-distro 3rd-party installations. I definitely recommend reading the linked blogpost from Lennart, if you have some spare time! The identified problems are the same as for Listaller, but the solution they propose is completely different, and about three orders of magnitude more invasive than whatever the Listaller project had in mind (I made these numbers up, so don't ask!). There are also a few issues I see with Lennart's approach; I will probably go into detail about that in another blogpost (e.g. it requires multiple copies of a library lying around, where one version might have a security vulnerability and another one doesn't – it's hard to ensure everything is up to date and secure that way, even if you have a top-notch sandbox). I have great respect for the systemd crew and especially Lennart, and I hope they succeed with their efforts. However, I also think Listaller can achieve similar things with a less invasive solution, at least for 3rd-party app installations (Listaller is one of the partial-fix solutions with strict focus, so not a direct competitor to the holistic systemd approach; both solutions could happily live together).

A step into the future

Some might have guessed it already: There are some bigger changes coming to Listaller! The most important one is that there will be no Listaller anymore, at least not in its old form.

Since the current code relies heavily on the PackageKit plugin, and contains some ugly workarounds, it doesn’t make much sense to continue working on it.

Instead, I started the Listaller.NEXT project, which is a rewrite of Listaller in C. There are some goals for the rewrite:

  • No stupid hacks and workarounds: We will not add any workaround. If there is a problem, we will fix it at its source, even if that might be more invasive.
  • Trimmed down project: The new incarnation of Listaller will only support installations of statically linked software at the beginning. We will start with a very small, robust core, and then add more features (like dependency-solving) gradually, but only if they are useful. There will be no feature-creep like in the previous version.
  • Faster development cycle: Releases will happen much faster, not only two or three times a year
  • Integration: Since there is no PackageKit plugin anymore, but integration is still one of Listaller’s key concepts, we will integrate Listaller into downstream tools, ranging from Apper to GNOME-Software. Richard Hughes will help with the integration and user interfaces, so Listaller applications get displayed properly.
  • AppStream-first: AppStream is the ultimate tool for Listaller to detect dependencies. With the 0.6 release, the Listaller component-concept was merged into it, which makes it a very powerful and non-hackish solution for dependency-detection. We will advance the use of its metadata, and probably use it exclusively, which would restrict Listaller to only work properly on distributions which ship AppStream metadata.
  • No desktop-only focus: The previous Listaller was focused only on desktop GUI apps. The new version will be developed with a much larger target audience in mind, including server deployments (“Can I use it to deploy my server app?” is one of the most frequently asked questions about Listaller – with the new version, the answer is yes)
  • We will continue to improve the static-linking and cross-distro development toolchain (libuild, with ligcc, lig++ and binreloc), to make building portable apps easier.

I made a last release of the 0.5.x series of Listaller, to work with PackageKit 0.9.x – the future lies in the C port.

If you are using Listaller (and I know of people who do, for example some deploy statically-linked stuff on internal test setups with it), stay tuned. The packaging format will stay mostly compatible with the current version, so you will not see many changes there (the plan is to freeze it very soon, so that no backwards-incompatible changes are made anymore). The 0.5.x series will receive critical bugfixes if necessary.

Help needed!

As always, there is help needed! Writing C is not that difficult. User feedback is welcome as well, in case you have an idea. The new code will be hosted on GitHub in the new listaller-next branch (currently there is not much to find there). Long-term, we will completely migrate away from Launchpad.

You can expect more blogposts about the Listaller concepts and progress in the next months (as soon as I am done with some AppStream-related things, which take priority).

Categories: Elsewhere

Steve Kemp: A small email utility and other updates.

Planet Debian - Thu, 11/09/2014 - 09:09

Last night I was looking for an image I knew a model had mailed me a few months ago, as we were talking about rescheduling a shoot at the weekend. I couldn't find it, even with my awesome mail client and filing system.

With some free time I figured I could write a little utility to dump all attachments from email folders, and find it that way.

It did cross my mind that there is a simple mail utility for dumping headers, etc., called formail, which is distributed alongside procmail, but it doesn't handle attachments.

I was tempted to write a general purpose script to dump attachments, email header values, etc, etc but given the lack of time I merely solved my own problem.

I suspect there is room for a "mail utilities" package, similar to Joey's "moreutils" and my "". However, I note that there is a GNU Mailutils which does things differently than I'd expect - i.e. it contains a POP3 server.

Still if you want to dump attachments from emails, have GMIME installed, and want to filter by attachment-name, or MIME-type, you might look at my trivial attachment-dump program.

Related to that I spent some time last night updating my photography site, so the animals & pets section has updated images at least.

During the course of that I found a bug in my static-site generator, templer, which stopped it from automatically populating image heights/widths when called with a glob:

Title: Pets & Animals
Images: file_glob( "*.jpg" )
---
This is the page body; it now has access to a variable called 'images', which is an HTML::Template loop-structure containing name/height/width/etc for each image in the current directory.

That should now be resolved, and life should once again be good.

Categories: Elsewhere

InternetDevels: Thanks for the great time at Lviv Euro DrupalCamp 2014!

Planet Drupal - Thu, 11/09/2014 - 08:55

Four months of preparation. Three golden sponsors. Two days. One Lviv Euro Drupal Camp 2014.

Ladies and gentlemen, we can proudly announce: we did it! Our camp became the biggest Drupal event taking place in Ukraine this year, bringing together the most cheerful and friendly Drupalers. We hope that all those 150 people took away a huge pile of positive emotions and impressions. But let's get into the details inch by inch :).

Read more
Categories: Elsewhere

Dirk Eddelbuettel: pkgKitten 0.1.1: Still creating R Packages that purr

Planet Debian - Thu, 11/09/2014 - 03:12

A maintenance release 0.1.1 of pkgKitten is now on CRAN.

It has only one small change: the function playWithPerPackageHelpPage() was factored out of the main function kitten(), as I happened to need something just like it to make packages created by the Rcpp function Rcpp.package.skeleton() a little nicer.
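
For anyone new to the package, a quick usage sketch from the shell (the package name is hypothetical):

Rscript -e 'library(pkgKitten); kitten(name = "myPkg")'  # create a clean package skeleton
R CMD check myPkg                                        # designed to pass without complaints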

We also added a NEWS.Rd file which restates major release features. As it is so short, we include it in its entirety.

Changes in version 0.1.1 (2014-09-09)
  • New (exported) function playWithPerPackageHelpPage() which lets other packages create a non-complaint-generating package help page

Changes in version 0.1.0 (2014-06-13)
  • Initial public version and CRAN upload

More details about the package are at the pkgKitten webpage and the pkgKitten GitHub repo.

Courtesy of CRANberries, there is also a diffstat report for this release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: Elsewhere

Dcycle: An approach to code-driven development in Drupal 8

Planet Drupal - Wed, 10/09/2014 - 22:28
What is code-driven development and why is it done?

Code-driven development is the practice of placing all development in code. How can development not be in code? you ask.

In Drupal, what makes your site unique is often configuration which resides in the database: the current theme, active modules, module-specific configuration, content types, and so on.

For the purpose of this article, our goal will be for all configuration (the current theme, the content types, module-specific config, the active module list...) to be in code, and only content to be in the database. There are several advantages to this approach:

  • Because all our configuration is in code, we can package all of it into a single module, which we'll call a site deployment module. When enabled, this module should provide a fully workable site without any content.
  • When a site deployment module is combined with generated content, it becomes possible to create new instances of a website without cloning the database. Devel's devel_generate module, and Realistic Dummy Content can be used to create realistic dummy content. This makes on-ramping new developers easy and consistent.
  • Because unversioned databases are not required to be cloned to set up new environments, your continuous integration server can set up new instances of your site based on a known good starting point, making tests more robust.
Code-driven development for Drupal 7

Before moving on to D8, let's look at a typical D7 workflow: The technique I use for developing in Drupal 7 is making sure I have one or more features with my content types, views, contexts, and so on; as well as a site deployment module which contains, in its .install file, update hooks which revert my features when needed, enable new modules, and programmatically set configuration which can't be exported via features. That way,

  • incrementally deploying sites is as simple as calling drush updb -y (to run new update hooks).
  • deploying a site for the first time (or redeploying it from scratch) requires creating the database, enabling our site deployment module (which runs all of our update hooks), and optionally generating dummy content if required. For example: drush si -y && drush en mysite_deploy -y && drush en devel_generate && drush generate-content 50.

I have been using this technique for a few years on all my D7 projects and, in this article, I will explore how something similar can be done in D8.

New in Drupal 8: configuration management

If, like me, you are using features to deploy websites (not to bundle generic functionality), config management will replace features in D8. In D7, context is used to provide the ability to export block placement to features, and strongarm exports variables. In D8, variables no longer exist, and block placement is now exportable. All of these modules are thus no longer needed.

They are replaced by the concept of configuration management, a central API for importing and exporting configuration as yml files.

Configuration management and site UUIDs

In Drupal 8, sites are now assigned a UUID on install and configuration can only be synchronized between sites having the same UUID. This is fine if the site has been cloned at some point from one environment to another, but as mentioned above, we are avoiding database cloning: we want it to be possible to install a brand new instance of a site at any time.

We thus need a mechanism to assign the same UUID to all instances of our site, but still allow us to reinstall it without cloning the database.

The solution I am using is to assign a site UUID in the site deployment module. Thus, in Drupal 8, my site deployment module's .module file looks like this:

/**
 * @file
 * site deployment functions
 */
use Drupal\Core\Extension\InfoParser;

/**
 * Updates dependencies based on the site deployment's info file.
 *
 * If during the course of development, you add a dependency to your
 * site deployment module's .info file, increment the update hook
 * (see the .install module) and this function will be called, making
 * sure dependencies are enabled.
 */
function mysite_deploy_update_dependencies() {
  $parser = new InfoParser;
  $info_file = $parser->parse(drupal_get_path('module', 'mysite_deploy') . '/mysite_deploy.info.yml');
  if (isset($info_file['dependencies'])) {
    \Drupal::moduleHandler()->install($info_file['dependencies'], TRUE);
  }
}

/**
 * Set the UUID of this website.
 *
 * By default, reinstalling a site will assign it a new random UUID, making
 * it impossible to sync configuration with other instances. This function
 * is called by the site deployment module's .install hook.
 *
 * @param $uuid
 *   A uuid string, for example 'e732b460-add4-47a7-8c00-e4dedbb42900'.
 */
function mysite_deploy_set_uuid($uuid) {
  \Drupal::config('system.site')
    ->set('uuid', $uuid)
    ->save();
}

And the site deployment module's .install file looks like this:

/**
 * @file
 * site deployment install functions
 */

/**
 * Implements hook_install().
 */
function mysite_deploy_install() {
  // This module is designed to be enabled on a brand new instance of
  // Drupal. Setting its uuid here will tell this instance that it is
  // in fact the same site as any other instance. Therefore, all local
  // instances, continuous integration, testing, dev, and production
  // instances of a codebase will have the same uuid, enabling us to
  // sync these instances via the config management system.
  // See also https://www.drupal.org/node/2133325
  mysite_deploy_set_uuid('e732b460-add4-47a7-8c00-e4dedbb42900');
  for ($i = 7001; $i < 8000; $i++) {
    $candidate = 'mysite_deploy_update_' . $i;
    if (function_exists($candidate)) {
      $candidate();
    }
  }
}

/**
 * Update dependencies and revert features
 */
function mysite_deploy_update_7003() {
  // If you add a new dependency during your development:
  // (1) add your dependency to your .info file
  // (2) increment the number in this function name (example: change
  //     7003 to 7004)
  // (3) now, on each target environment, running drush updb -y
  //     will call the mysite_deploy_update_dependencies() function
  //     which in turn will enable all new dependencies.
  mysite_deploy_update_dependencies();
}

The only real difference between a site deployment module for D7 and D8, thus, is that the D8 version must define a UUID common to all instances of a website (local, dev, prod, testing...).

Configuration management directories: active, staging, deploy

Out of the box, there are two directories which can contain config management yml files:

  • The active directory, which is always empty and unused. It used to store your active configuration, and it is still possible to do so, but I'm not sure how. We can ignore this directory for our purposes.
  • The staging directory, which can contain .yml files to be imported into a target site. (For this to work, as mentioned above, the .yml files need to have been generated by a site having the same UUID as the target site, or else you will get an error message – in the GUI the error message makes sense, but on the command line you get the cryptic "There were errors validating the config synchronization.")

I will propose a workflow which ignores the staging directory as well, for the following reasons:

  • First, the staging directory is placed in sites/default/files/, a directory which contains user data and is explicitly ignored in Drupal's example.gitignore file (which makes sense). In our case, we want this information to reside in our git directory.
  • Second, my team has come to rely heavily on reinstalling Drupal and our site deployment module when things get corrupted locally. When you reinstall Drupal using drush si, the staging directory is deleted, so even if we did have the staging directory in git, we would be prevented from running drush si -y && drush en mysite_deploy -y, which we don't want.
  • Finally, you might want your config directory to be outside of your Drupal root, for security reasons.

For all of these reasons, we will add a new "deploy" configuration directory and put it in our git repo, but outside of our Drupal root.

Our directory hierarchy will now look like this:

mysite
  .git
  deploy
    README.txt
    ...
  drupal_root
    CHANGELOG.txt
    core
    ...

You can also have your deploy directory inside your Drupal root, but keep in mind that certain configuration information is sensitive, containing email addresses and the like. We'll see later on how to tell Drupal where it can find your "deploy" directory.

Getting started: creating your Drupal instance

Let's get started. Make sure you have version 7.x of Drush (compatible with Drupal 8), and create your git repo:

mkdir mysite
cd mysite
mkdir deploy
echo "Contains config meant to be deployed, see http://dcycleproject.org/blog/68" >> deploy/README.txt
drush dl drupal-8.0.x
mv drupal* drupal_root
cp drupal_root/example.gitignore drupal_root/.gitignore
git init
git add .
git commit -am 'initial commit'

Now let's install our first instance of the site:

cd drupal_root
echo 'create database mysite'|mysql -uroot -proot
drush si --db-url=mysql://root:root@localhost/mysite -y

Now create a site deployment module: here is the code that works for me. We'll set the correct site UUID in mysite_deploy.install later. Add this to git:

git add drupal_root/modules/custom
git commit -am 'added site deployment module'

Now let's tell Drupal where our "deploy" config directory is:

  • Open sites/default/settings.php
  • Find the lines beginning with $config_directories
  • Add $config_directories['deploy'] = '../deploy';

We can now perform our first export of our site configuration:

cd drupal_root
drush config-export deploy -y

You will now notice that your "deploy" directory is filled with your site's configuration files, and you can add them to git.

git add .
git commit -am 'added config files'

Now we need to sync the site UUID from the database to the code, to make sure all subsequent instances of this site have the same UUID. Open deploy/system.site.yml and find the uuid property, for example:

uuid: 03821007-701a-4231-8107-7abac53907b1 ...

Now add this same value to your site deployment module's .install file, for example:

...
function mysite_deploy_install() {
  mysite_deploy_set_uuid('03821007-701a-4231-8107-7abac53907b1');
  ...

Let's create a view! A content type! Position a block!

To see how to export configuration, create some views and content types, position some blocks, and change the default theme.

Now let's export our changes:

cd drupal_root
drush config-export deploy -y

Your git repo will be changed accordingly:

cd ..
git status
git add .
git commit -am 'changed theme, blocks, content types, views'

Deploying your Drupal 8 site

At this point you can push your code to a git server, and clone it to a dev server. For testing purposes, we will simply clone it directly:

cd ../
git clone mysite mysite_destination
cd mysite_destination/drupal_root
echo 'create database mysite_destination'|mysql -uroot -proot
drush si --db-url=mysql://root:root@localhost/mysite_destination -y

If you visit mysite_destination/drupal_root with a browser, you will see a plain new Drupal 8 site.

Before continuing, we need to open sites/default/settings.php on mysite_destination and add $config_directories['deploy'] = '../deploy';, as we did on the source site.

Now let the magic happen. Let's enable our site deployment module (to make sure our instance UUID is synched with our source site), and import our configuration from our "deploy" directory:

drush en mysite_deploy -y
drush config-import deploy -y

Now, on your destination site, you will see all your views, content types, block placements, and the default theme.

This deployment technique, which can be combined with generated dummy content, allows one to create new instances very quickly for new developers, testing, demos, continuous integration, and for production.

Incrementally deploying your Drupal 8 site

What about the changes you make to the codebase once everything is already deployed? Let's change a view and run:

cd drupal_root
drush config-export deploy -y
cd ..
git commit -am 'more fields in view'

Let's deploy this now:

cd ../mysite_destination
git pull origin master
cd drupal_root
drush config-import deploy -y

As you can see, incremental deployments are as easy and standardized as initial deployments, reducing the risk of errors, and allowing incremental deployments to be run automatically by a continuous integration server.

Next steps and conclusion

Some aspects of your site's configuration (what makes your site unique) still can't be exported via the config management system, for example enabling new modules; for that we'll use update hooks as in Drupal 7. I'm still having some errors doing this in D8, but I'm working on it!

Also, although a great GUI exists for importing and exporting configuration, I chose to do it on the command line so that I could easily create a Jenkins continuous integration job to deploy code to dev and run tests on each push.

For Drupal projects developed with a dev-stage-prod continuous integration workflow, the new config management system is a great productivity boost.

Tags: blogplanet
Categories: Elsewhere

Lucas Nussbaum: Will the packages you rely on be part of Debian Jessie?

Planet Debian - Wed, 10/09/2014 - 21:28

The start of the jessie freeze is quickly approaching, so now is a good time to ensure that the packages you rely on will be part of the upcoming release. Thanks to automated removals, the number of release-critical bugs has been kept low, but this was achieved by removing many packages from jessie: 841 packages from unstable are not part of jessie, and won't be part of the release if things don't change.

It is actually simple to check if you have packages installed locally that are part of those 841 packages:

  1. apt-get install how-can-i-help (available in backports if you don’t use testing or unstable)
  2. how-can-i-help --old
  3. Look at packages listed under Packages removed from Debian ‘testing’ and Packages going to be removed from Debian ‘testing’

Then, please fix all the bugs :-) Seriously, not all RC bugs are hard to fix. A good starting point to understand why a package is not part of jessie is tracker.d.o.

On my laptop, the two packages that are not part of jessie are the geeqie image viewer (which looks likely to be fixed in time), and josm, the OpenStreetMap editor, due to three RC bugs. It seems much harder to fix… If you fix it in time for jessie, I’ll offer you a $drink!

Categories: Elsewhere

Acquia: Mike Meyers explains – Help Drupal and it will help you: contribute!

Planet Drupal - Wed, 10/09/2014 - 21:20

Michael E. Meyers, VP Large Scale Drupal at Acquia, knows better than most how contributing to the open source project you are relying on to build or improve your business will pay off. He and his team did just that when they successfully built and sold NowPublic.com – the first venture capital funded, Drupal-based startup – while making massive contributions to the Drupal project along the way.

He and I invite you not only to use Drupal, but also to make it better and to get involved with its community. If you're only using it without giving back, you're not getting the full benefit it could be giving you. Come to DrupalCon Amsterdam or an event near you and make a difference!

Categories: Elsewhere

Isovera Ideas & Insights: Drupal Commerce Tips: Implementing SimpleXML to Generate Invoice Files

Planet Drupal - Wed, 10/09/2014 - 20:46

Part 1 - Implementing SimpleXML to generate Invoice files

As part of our work with one of our e-commerce clients, it was a requirement to integrate with the RetailPro back-office tool. While the following post is specific to RetailPro files and systems, the same techniques could easily be re-purposed to help integrate various other systems that require invoice files to be sent to a 3rd-party system.
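
The details follow in the series itself; as a hedged sketch of the general SimpleXML approach (the element names are invented for illustration and are not the actual RetailPro schema), an invoice file can be assembled and written like this:

php -r '
  $invoice = new SimpleXMLElement("<invoice/>");
  $invoice->addChild("number", "INV-1001");       // invoice header field
  $item = $invoice->addChild("items")->addChild("item");
  $item->addChild("sku", "ABC-123");              // one line item
  $item->addChild("qty", "2");
  $invoice->asXML("invoice-1001.xml");            // write the feed file for the 3rd-party system
'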

Categories: Elsewhere

ThinkShout: Nonprofit Website Benchmarks

Planet Drupal - Wed, 10/09/2014 - 18:00

Your website is a unique snowflake with singular requirements. While there’s surely overlap, the make-up of your audience is different from every other site, because your mission – and the content you put on your website to support that mission – is different from every other nonprofit.

But you’ve probably wondered how your site compares to others. I know I’ve always wanted access to website benchmarks – some way to see if the trends I notice in our own dashboards are reflective of larger patterns.

Of course, benchmarks can be dangerous. If you obsess about them, the action becomes whatever the opposite of navel gazing is.

When you analyze the efficacy of your own website, you always need to consider it within the context of your own target audiences and organizational goals instead of worrying that your overall bounce rate is 4% higher than the "average."

After all, if your site has the profile of a news organization, with links to recent articles shared widely through third-party channels, you should expect a higher bounce rate: people come, consume, and leave. That might be okay! The goal of that content may have been to spread some news, not generate donations.

Benchmarks can inform us about larger trends, though, and it’s damned annoying that they’re so hard to find.

It’s been almost four years since Groundwire published its 2010 Website Benchmarks Report. (Fortunately, you can still download the PDF from a third-party site.) Since then, Google has discontinued its Analytics Benchmarks Report – and even the semi-useful newsletter that followed.

You can order a $395 report from Marketing Sherpas (PDF), which may or may not be helpful; I certainly didn’t buy it. KISSmetrics published an infographic about bounce rates a couple of years ago with some interesting data… that may have come from a 2006 post to a Yahoo group.

Pew Research recently released an interesting study comparing news sites, using aggregated data from ComScore. I love how they’ve analyzed visitor loyalty across segments, but rare is the nonprofit that’s going to share the content goals of CNN or NBC News.

On our side, there are a couple of stats of interest in the latest Benchmarks Study from M+R and NTEN and the 2013 Online Marketing Benchmark Study for Nonprofits from Blackbaud, but they’re mostly focused on traffic growth and donation page conversions.

At ThinkShout, because we start almost all of our engagements by exploring how our client’s audiences use their current website, we felt it would be important to have some point of comparison. To facilitate discussion around questions like "How much do we need to worry about mobile?" or “How can we convert the influx of new visitors into engaged users?”, we’ve aggregated the data we have available to us.

It seems only fair to share some of it.

This data is in no way reflective of the industry as whole, and it is very top-level. It represents fifteen organizations with diverse missions and traffic patterns, ranging from a few thousand sessions per month to more than 100,000. These are also generally organizations that have recognized the need to redesign their website.

With those caveats, we hope this data may help you understand that some of what you see in your own analytics may, in fact, be reflective of broader trends.

The following are three trends I see in the 37,000,000 pageviews we have access to.

Search Is the Ultimate Shortcut

Back in 2010, Groundwire found that search engines referred 55% of traffic to the nonprofit websites in their study. While we can’t do a direct comparison since the sites in question aren’t the same, I feel comfortable making the blanket statement that the trend is toward more traffic coming from search.

Here are the mean / median numbers since 2011:

  • 2011: 47.06% / 43.97%

  • 2012: 48.12% / 50.12%

  • 2013: 52.99% / 54.02%

So far in 2014, those numbers have increased to 55% (mean) / 60% (median).

That means, of course, that traffic from the other two legs of the standard triumvirate has dropped:

  • 2011: 22.04% / 22.13%

  • 2012: 23.08% / 20.28%

  • 2013: 18.67% / 17.00%

  • 2014 (to date): 16.18% / 13.98%

  • 2011: 28.87% / 24.76%

  • 2012: 26.81% / 20.75%

  • 2013: 25.36% / 21.42%

  • 2014 (to date): 25.82% / 19.75%

Essentially, the best home page ever created will not solve your problems, as your users are more and more likely to turn to search first to find what they’re looking for. Your information architecture must take into account the fact that the first page your visitors encounter may be deep in your website.

User Experience starts in the first place your users experience you. You can use your data to make some assumptions about where that's most likely to happen and optimize the top landing pages, but, more and more, you need to worry about every piece of content on your site.

Stop Wondering if Mobile Is Important Right Now

It is. Groundwire found that in 2010, the median share of mobile visitors to the sites in their study was just 1%. That's changed over the past few years. Looking just at mobile phones (not tablets), you can see the surge:

  • 2011: 5.70% / 4.97%

  • 2012: 7.68% / 7.67%

  • 2013: 12.84% / 12.04%

  • 2014 (to date): 18.68% / 17.45%

I’ve seen the argument made that mobile doesn’t require your attention quite yet because it still represents, according to this data, just 1 in 5 visits.

Our suggestion would be to get ahead of the curve instead of fighting to catch up. The growth in mobile traffic is having an obvious impact on the most basic measures of engagement.

Mean:

Year   Bounce Rate   Time per Session   Pages per Session
2011   62.30%        2:04               2.30
2012   66.17%        1:58               2.06
2013   70.48%        1:54               1.87

Median:

Year   Bounce Rate   Time per Session   Pages per Session
2011   67.07%        1:57               2.09
2012   67.20%        1:56               1.89
2013   70.10%        1:39               1.84

[I’ve excluded 2014 because I haven’t cleaned out the data from sites that have gone through a responsive redesign. This isn’t meant to be a marketing piece, but I can tell you that redesigning for mobile can have a significant impact.]

My suspicion is that as people grow more comfortable using their smartphones to browse the Internet and visit brands that have implemented a mobile strategy, they're less likely to put up with sites that don't work as well as they expect in their mobile browsers.

By structuring your content properly, you can create ways to put the content mobile users are most interested in front and center. If nothing else, you should take the time to understand what mobile visitors are doing on your site as a first step.

Loyalty Is Hard to Come By (or Even Define)

Perhaps, given the growth in search and mobile traffic, it’s no surprise that the percentage of "New" visitors has increased over the years:

  • 2011: 65.65% / 66.01%

  • 2012: 70.34% / 70.13%

  • 2013: 74.35% / 75.00%

On the flip side, "Loyal" visitors (defined as those with at least three visits in the period under review) have crashed:

  • 2011: 20.86% / 19.00%

  • 2012: 15.64% / 15.66%

  • 2013: 12.16% / 12.26%

And those numbers aren't just relative percentage drops caused by increases in other types of traffic; the absolute numbers have dropped as well:

  • 2011: 44,734 / 33,894

  • 2012: 43,076 / 29,989

  • 2013: 37,757 / 22,781

I’m fairly confident, given the scale of the data, that this isn’t simply because people are getting better at clearing their cookies regularly – there’s wide variation in estimates, much of it from surveys of user behavior, rather than actual data – or browsing the web incognito. Those technologies have been around for years, and I doubt even Mr. Snowden has had that much of an impact on people’s everyday habits. (But I’m willing to be convinced otherwise.)

This loss could also be reflective of device fragmentation. Google and others are working on ways to track users across the many devices they use, but right now, if somebody visits you on August 14th on their laptop, comes back on September 9th on their tablet, and then again on October 21st on their phone, they would be classified as three different New users. As we see mobile traffic grow, it may mean that right now, we don’t have an accurate way to track visitor loyalty.

Perhaps some of that loyalty is being transferred elsewhere, however, from the website to social spaces, third party donation pages, or even mobile apps. In that case, the better sites are at converting website visitors into these different kinds of traffic, the less relevant the website will become to them.

Rare is the constituent who comes once a year to donate. Engagement, even within a multi-channel strategy, has to strike a balance between assuming that visitors will find continued value from the resources we make available on the web and moving new visitors to spaces where they’re more likely to continue to interact with us and each other.

In any case, if visitors are increasingly less likely to return to nonprofit websites, we need to rethink some of our engagement strategies. If they’re turning elsewhere because they aren’t finding what they want, when they want it, we’ve got some serious work to do.

Interested in Helping Out?

I’ve made the spreadsheet with the cleaned data for 2011-2013 publicly available. We hope you’ll check it out and add insights of your own!

But beyond using it as a tool to inform your own website, we could use your help. What other data should we track and share? What do you want to know?

And more importantly, are you willing to share your own?

If you’d like to work with us on a template that can be used to collect data from other nonprofits, just let me know. We’ll keep the org-by-org breakdown anonymous. Once we reach an arbitrarily large threshold – 100,000,000 pageviews? – we can do some follow-up work.

Lev Tsypin, ThinkShout’s Director of Engineering, has suggested that we could go so far as to set up a Google form that feeds an anonymized spreadsheet. We could then make that data available via JSON, for further manipulation.

In the end, I believe that the more we aggregate and share, the better we’ll identify the problem areas we need to focus on as a community.

Categories: Elsewhere

Drupal Watchdog: Extending Drupal 7 Services into an E-commerce App

Planet Drupal - Wed, 10/09/2014 - 17:03

By now, you’ve surely heard that Drupal 8 is pushing boundaries for users, content editors, administrators, and developers – if not, where have you been?

Obviously, not every site has the luxury of being able to migrate and adopt new technologies. If you are in the same position as me, then many of your clients – or indeed, the company you work for – have spent a lot of time and money investing in Drupal 7; at this point, urging a new platform on them is probably going to fall on deaf ears. But all is not lost! Drupal 7 can still provide a powerful RESTful web service API or drive a terrific mobile application.

So the old adage again rings true: “Where there is a Drupal 8 core initiative, there is a Drupal 7 contributed module.”

Having attended the Commerce 2.x sprint in Paris, I can safely say that if you do eventually upgrade to Drupal 8, it won’t be a massive undertaking molding your Drupal 8 Core services output to match that of Drupal 7 contrib services. Not all your work will be lost.

When I first started down the road of mobile apps, I was certain I did not have the time to learn Objective C and Java, as well as the intricacies of each mobile platform. (Sound familiar?) All I wanted was the ability to utilize the Drupal technology I already knew, and then enable PhoneGap without having to learn any device-specific language.

For the purposes of this article, I won't assume you know mobile app technology, nor much about Drupal's Services module, REST, jQuery Mobile, Xcode, or PhoneGap. I do assume that you know Drupal, can work your way around the administration interface, and know some basic PHP and JavaScript. With that in mind, let's take an overview of the technologies you can use for your mobile application.

First, let’s install DrupalGap and its dependencies.

Now we have a core DrupalGap service and the basic service resources we might want for our mobile application. All the generic things we use on most Drupal sites are right there: content resources are available for query through endpoints, and DrupalGap supplies us with a default endpoint.

Categories: Elsewhere

Deeson Online: Deeson: Drupal Association Autologout and Limit Modules webinar

Planet Drupal - Wed, 10/09/2014 - 16:31
By Lizzie Hodgson | 10th September 2014

John Ennew is a solutions architect here at Deeson. He recently ran a webinar focusing on the Drupal Autologout and Limit modules.

Attendees included Drupal developers, solutions architects, and programmers. John took them through a series of steps, explaining how to do the following (a configuration sketch follows the list):

  • Log users out after a period of inactivity
  • Prevent users from having more than one active session at a time
  • Apply your company password policy to your web site
  • Configure flood control settings
  • Check the health of your site to identify potential security problems
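
As a taste of the first item: the Drupal 7 Autologout module drives its timeout from an ordinary variable, so it can be set from the shell. A hedged sketch (the variable name is quoted from memory and 1800 seconds is just an example value; verify both against the module's README):

drush vset autologout_timeout 1800   # log users out after 30 minutes of inactivity
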
The webinar

Going to DrupalCon Amsterdam? Come and find us - tweet to meet @deesonlabs


Categories: Elsewhere

Acquia: Web services in Drupal 8 Core

Planet Drupal - Wed, 10/09/2014 - 16:13

Some of the great news in Drupal 8 development was the introduction of web services directly in core, allowing other applications to interact with Drupal to consume exposed information or services without the need to install contributed modules.

Let’s look at the list of modules that ship with D8 core related with web services:

Categories: Elsewhere

LightSky: Apple Watch is a Great Sign for Drupal

Planet Drupal - Wed, 10/09/2014 - 15:42

Earlier this summer Dries Buytaert, the original creator of Drupal, gave a pretty visionary keynote at DrupalCon, which we talked about in detail in a podcast. One of the things we mentioned in our analysis is that while Dries was looking pretty far into the future, the keynote showed that Drupal is being guided down the right path to be positioned well for the future of the web, and Apple's announcement yesterday of the new Apple Watch supports this position. The Apple Watch is exactly the type of device that Dries is trying to position Drupal to be ready to feed content to.

Screen Size Could Make the Current Web Obsolete

Obviously, anyone who has looked at the pictures or video of Apple's new watch can immediately see that you aren't going to be viewing a website on it. Really, in any form, a website as we know it won't be displayable on this type of device. Even the best designers and front-end developers aren't going to be able to make a site responsive down to this size effectively, and the important thing to note is that this is totally OK. But it is not just small screens that change the way content will have to be consumed; big screens cause just as much of a problem. We aren't going to be displaying whole websites on billboards or 6x4 signs found on the sidewalks of major cities; we have to be able to feed content to them.

Be Ready, Change is Coming

It probably won't quickly eliminate websites as we know them, but agencies and other organizations need to be ready for change in the way their content is consumed. Whether it be how products are displayed to potential buyers, your store hours, or notifications of sales and events, it doesn't matter what the content is: the way it is consumed will be different in just a matter of months. CMS frameworks need to be ready, and the reality is that many just aren't, and aren't headed in the right direction to be ready. Drupal, on the other hand, is on the right track.

For more tips like these, follow us on social media or subscribe for free to our RSS feed and newsletter. You can also contact us directly or request a consultation.
Categories: Elsewhere

4Sitestudios.com Drupal Blog: BuildingRating.org Website Redesign

Planet Drupal - Wed, 10/09/2014 - 15:30
The Challenge

The Institute for Market Transformation (IMT) is a Washington, DC-based nonprofit organization promoting energy efficiency, green building, and environmental protection in the United States and abroad. They approached 4Site to redesign the BuildingRating.org website, an international exchange for information on building rating disclosure policies and programs, from the ground up.

Our Solution

4Site worked with IMT to develop a comprehensive strategy for the BuildingRating.org redesign to ensure that the new site provided as many user-friendly tools as possible to help visitors find the specific information they needed. We built a world map of jurisdictions with sustainability standards, a robust search with filters for locations, topics, and content types, and a policy comparison tool in which visitors can select jurisdictions and topics to compare in a custom populated table format.

The Results

Visit the BuildingRating.org website.
Categories: Elsewhere

Drupal Easy: DrupalEasy Podcast 138: Touch of Gray (Rick Manelius - PCI Compliance)

Planet Drupal - Wed, 10/09/2014 - 15:14
Download Podcast 138

Rick Manelius (rickmanelius), project architect at NEWMEDIA, and one of the leading minds in our community when it comes to PCI Compliance, joins Mike Anello to further demystify PCI Compliance and the role it plays in any site that involves credit card data. We also discuss two-factor authentication, when we might see a Drupal 8 beta, and Drupal’s persistence.

read more

Categories: Elsewhere

Cocomore: Meet us in Amsterdam

Planet Drupal - Wed, 10/09/2014 - 15:13

The European DrupalCon is happening from Sept. 29th to Oct. 3rd in Amsterdam, and a team from Cocomore - as one of the biggest Drupal shops in Germany and Spain - will of course attend.

read more

Categories: Elsewhere

DrupalCon Amsterdam: Training spotlight: Design, Prototype, and Style in Browser

Planet Drupal - Wed, 10/09/2014 - 14:31

Design, Prototype, and Style in Browser (formerly Advanced Sass and Compass for RWD) is back! One of our most popular courses returns, with even more great new content - and now is your chance to attend this training at DrupalCon Amsterdam!

With more mobile device activations per day than human births, and full internet browsers coming to television sets and gaming consoles (both home and portable), the old techniques we used to create pixel-perfect sites for desktop audiences have already become a thing of the past.

We will explore content strategy as a method for designing responsive websites, building separate components and layouts, and will emphasize creating DRY code. We will dive deep into the power of Sass and Compass and a handful of JavaScript tools and how they can be utilized for your growing website. These tools can ease much of the hard work related to creating truly awesome responsive websites.

Meet the Trainers from Four Kitchens

Chris Ruppel (rupl) and Ian Carrico (iamcarrico) are Frontend and Backend developers at Four Kitchens respectively. Both are well-known in the Drupal community as both RWD and Sass experts, having trained and spoken at numerous events around the world, including Portland, Denver, New York, Austin, San Francisco, and Munich, Germany.

Neither are strangers to community contribution: Ian maintains the Aurora base theme and Magic module and contributes to many RWD-related Compass extensions such as Toolkit, Singularity, and Breakpoint. Chris maintains the Modernizr module and has contributed to Modernizr, the Drupal 8 HTML5/Mobile initiatives, and the Drupal.org D7 upgrade.

Attend this Drupal Training

This training will be held on Monday, 29 September from 09:00-17:00 at the Amsterdam RAI during DrupalCon Amsterdam. The cost of attending this training is €400 and includes training materials, meals and coffee breaks. A DrupalCon ticket is not required to register to attend this event.

Register today

Categories: Elsewhere

Rapha&#235;l Hertzog: Freexian’s first report about Debian Long Term Support

Planet Debian - Wed, 10/09/2014 - 13:30

When we set up Freexian’s offer to bring together funding from multiple companies in order to sponsor the work of multiple developers on Debian LTS, one of the rules I imposed was that all paid contributors must provide a public monthly report of their paid work.

While the LTS project officially started in June, the first month in which contributors were actually paid was July. Freexian sponsored Thorsten Alteholz and Holger Levsen for 10.5 hours each in July and for 16.5 hours each in August. Here are their reports:

It’s worth noting that Freexian sponsored Holger’s work to fix the security tracker to support squeeze-lts. It’s my belief that using the money of our sponsors to make it easier for everybody to contribute to Debian LTS is money well spent.

As evidenced by the progress bar on Freexian’s offer page, we have not yet reached our minimal goal of funding the equivalent of a half-time position. And it shows in the results: the dla-needed.txt file still lists around 30 open issues. This is slightly better than the state two months ago, but we can improve a lot on the average time to push out a security update…

To get an idea of the relative importance of the paid developers' contributions, I counted the number of uploads made by Thorsten and Holger since July: of 40 updates, they took care of 19, so about half.

I also looked at the other contributors: Raphaël Geissert stands out with 9 updates (I believe he is contracted by Électricité de France to do this), and most of the other contributors look like regular Debian maintainers taking care of their own packages (Paul Gevers with cacti, Christoph Berg with postgresql, Peter Palfrader with tor, Didier Raboud with cups, Kurt Roeckx with openssl, Balint Reczey with wireshark), except Matt Palmer and Luciano Bello, who are (likely) volunteer members of the LTS team.

There are multiple things to learn here:

  1. Paid contributors already handle almost 70% of the updates. Counting only on volunteers would not have worked.
  2. Quite a few companies that promised help (and got mentioned in the press release) have not delivered the promised help yet (neither through Freexian nor directly).

Last but not least, this project wouldn’t exist without the support of multiple companies and organizations. Many thanks to them:

Categories: Elsewhere

Pages

Subscribe to jfhovinne aggregator - Elsewhere