S. M. Bjørklund: How to migrate content from drupal 6 to 7 by using Migrate_d2d - Part 4 - field mappings

Planet Drupal - Sun, 07/06/2015 - 18:11

This is probably the last post in this series. In this article I will try to bring it all together. This will also be the most code-heavy article. If you are new to Migrate and Drupal-to-Drupal data migration, make sure you read and understand the first articles before proceeding.

Mapping fields (field mappings)

Migrate has no way of knowing your plans for your Drupal 6 CCK fields, or in which Drupal 7 fields you plan to store the data. Perhaps you do not want or need to migrate all your old data. Often the source and target fields have the same field name and type, but sometimes you might want to fix a bad decision made in the past and reorganize your architecture. Whatever reason you might have, you will need to share these plans with Migrate. Migrate calls this field mappings. The basic format is like this:

$this->addFieldMapping('drupal7-field_name', 'drupal6-field_name');

More details are found in the official documentation at drupal.org.

An example of this is found in article.inc:

$this->addFieldMapping('field_bar', 'field_foo');

This maps field_foo (Drupal 6) to the cleverly named field field_bar (Drupal 7). This is all it takes to get a text field like this migrated when you re-run the node migration with drush mi Article.
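To show how several mappings might sit together, here is a minimal sketch of a migration class constructor. This is an illustration, not the article's actual article.inc code: the class name and the field names other than field_foo/field_bar are hypothetical, and it assumes the DrupalNode6Migration base class from migrate_d2d.

```php
<?php

/**
 * Hypothetical Migrate_d2d node migration (a sketch, not the article's
 * actual class). Field names below are made up for illustration.
 */
class ExampleArticleMigration extends DrupalNode6Migration {

  public function __construct(array $arguments) {
    parent::__construct($arguments);

    // Simple rename: D6 field_foo becomes D7 field_bar.
    $this->addFieldMapping('field_bar', 'field_foo');

    // A mapping can also carry a default value, used when the
    // source field is empty.
    $this->addFieldMapping('field_status', 'field_old_status')
         ->defaultValue('published');

    // Source fields you deliberately leave behind can be declared,
    // so the migration UI does not flag them as unmapped.
    $this->addUnmigratedSources(array('field_obsolete'));
  }
}
```

As in the example above, re-running the migration (drush mi Article) picks up whatever mappings the constructor declares.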

Categories: Elsewhere

DrupalOnWindows: Making namespaced callbacks work in Drupal 7 (without hacking core and with bound parameters)

Planet Drupal - Sun, 07/06/2015 - 07:00
Language English

What is the best way to prepare for Drupal 8 and make your projects easy (and cheap) to migrate to D8? Start using Drupal 8 programming patterns now as much as D7 allows you to....

I guess that most of you are already doing that - and have done so for a few years now - with custom-crafted frameworks that, as much as possible, use modern design patterns instead of being stuck in 20-year-old spaghetti code. D7 is spaghetti; your custom modules and code need not be.

More articles...
Categories: Elsewhere

Dirk Eddelbuettel: RcppArmadillo

Planet Debian - Sat, 06/06/2015 - 02:18

Conrad put out a new minor release 5.200.1 of Armadillo yesterday. Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to that of Matlab.

Our corresponding RcppArmadillo release is now on CRAN and on its way into Debian. See below for the brief list of changes.

Changes in RcppArmadillo version (2015-06-04)
  • Upgraded to Armadillo release 5.200.1 ("Boston Tea Smuggler")

    • added orth() for finding the orthonormal basis of the range space of a matrix

    • expanded element initialisation to handle nested initialiser lists (C++11)

    • workarounds for bugs in GCC, Intel and MSVC C++ compilers

  • Added another example to inst/examples/fastLm.r

Courtesy of CRANberries, there is also a diffstat report for the most recent CRAN release. As always, more detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: Elsewhere

Angie Byron: An analysis of Drupal major version adoption

Planet Drupal - Fri, 05/06/2015 - 22:44

TL;DR We need to ship D8. ;)

I was sent this question today from a co-worker:

"We always talk anecdotally about how Drupal adoption slows before a new release and then picks back up. Do we have any data to support that for Drupal 7 or Drupal 6? I’d love to know the impact of Drupal 8 as well – but not sure that’s possible. Any thoughts?"

This is a great question, but since email is where information goes to die ;), I figured I would copy my response into a blog post as well.

Show me the data!

Since D8 has been in development for so long, we no longer have enough data on https://www.drupal.org/project/usage/drupal, since it prunes data after 3 years. :(

Here's a graph I made from trawling through historical data on https://archive.org/web/ though (source):

This only goes back to June 2008 which is after D6 came out, so it's not ideal, but we can still glean some useful data out of it.

Drupal 6

Here is a screenshot of the data from just prior to Drupal 7's release in January 2011:

  • In December 2008 there were 77K installs of D6 (compared to 0 in January since it wasn't out yet :)) (77K% increase). This is when D7 was in active development.
  • At the end of 2009 there were 203K installs of D6 (163% increase). This was when D7 was in feature freeze.
  • At the end of 2010 there were 323K installs of D6 (59% increase). This was when D7 was just about to ship.
  • At the end of 2011 there were 292K installs of D6 (9% decrease). This is when D7 had been out for about a year and several key contributed modules were ported.
  • D6 usage has been declining ever since, and is currently at about 135K installs.
Drupal 7

Here is the data from 2011 to today:

  • At the end of 2010 there were 6.5K installs of D7. This is when D7 was just about to be released.
  • At the end of 2011 there were 230K installs of D7 (3438% increase). This is when D7 had been out for about a year and several key contributed modules were ported, and D8 was just beginning development (was mostly D7 bug fixes at this point). Of note, D7 usage eclipsed D6 usage just a few months later (Feb 2012).
  • At the end of 2012 there were 522K D7 installs (127% increase). This is when D8 was nearly done with feature development.
  • At the end of 2013 there were 728K D7 installs (39% increase). This is after D8 was in code freeze.
  • At the end of 2014 there were 869K (19% increase). This is when D8 was in beta.
  • As of last week (mid-2015) there were 984K installs (13% increase). D8 is currently still in beta, with ~25 critical issues remaining before release candidates.

There are a few patterns we can discern from this data:

  • There is an enormous uptick in Drupal usage every new major release (though it's delayed until it reaches a "stable" state, i.e. after enough contributed modules are ported).
  • After that initial year or two of exponential growth, it slows down a lot.
  • The closer the next version is to being released, the slower the growth is of the current version. Generally, this is because people will postpone projects and/or use non-Drupal solutions to avoid incurring a major version upgrade.
  • Usage of the older stable version starts to decline after the newer major version reaches the "stable" state.
Why Drupal 8 will make this more betterer

There are a few enormous shifts coming with D8 that should change these patterns significantly:

  • Drupal 8 is much more fully-featured out of the box than any of its predecessors, so for many sites there is no need to wait on any contributed modules to begin building. Therefore we reach "stable" state (for sites that can do what they need to with just core) at Day 0, not 6-12 months later.
  • A number of key contributed modules that delayed porting of other key contributed modules in D6/D7 (Views, Entity Reference, Date, etc.) were moved into core in D8. So they're available right now—even before release—to build on. And indeed we're seeing other big ecosystem modules (Commerce, Rules, etc.) porting now, while D8 is still in development.
  • D8 will end the 3-4 year "big bang" release cycle. Instead, we'll be doing "small bang" releases every 6 months with feature/API improvements that don't break backwards compatibility. That means we should hopefully stave off adoption decline much longer, and possibly even sustain the "hyper adoption" rate for much longer.
  • We will still eventually have a D9 "big bang" release (3-4 years from now) with backwards compatibility breaks, but only after it has amassed enough awesome functionality that couldn't otherwise be backported to D8. This will provide us with another "epochal" marketing event, like the one D8 is giving us today (well, soon), to drive adoption even further.

Sorry, that was probably Way Too Much Information™ but hey, the more you know. ;)

Tags: drupal, drupal 8, acquia
Categories: Elsewhere

Laura Arjona: Games

Planet Debian - Fri, 05/06/2015 - 21:04

Note: this article is available in Spanish here.

I’m not a gamer, although I’ve probably played with machines/computers more than most of the girls of my age. My path has been: a handheld machine, Pong and tennis on my uncle’s console, MSX (with that one, in addition to playing, I learnt what an algorithm and a program were, I started to write small programs in Basic, and I copied and ran the source code of small games and graphics programs that I found in MSX magazines). My parents considered that arcade machines in bars were like slot machines, so they were banned for us (even pinball; only table soccer was saved, and only if my father was playing with us).

In the MSX I played Magical Tree, Galaxians, Arkanoid, Konami’s sports games, Spanish games from Dinamic, and some arcades like Golden Axe, Xenon, and maybe some more. The next computers (a PC AT, later a 286) were not really for gaming; let’s say that we played more with the printer (Harvard Graphics, Bannermania…). Later I was interested in other things more than in computer games, then there were high school homework and dBase III, and later the University and programming again, and that was the end of gaming; the computer was for office and university work.
Later came the internet, and since then, reading, writing and communicating have been more interesting to me than playing.

I was not good at playing, and if you are not good, you play less, and you don’t get better, so you begin to find other ways to lose your time, or to win it :)

The new generation

My son is 6 years old now, and with him I’m living a second adventure with games. Games have changed a lot, and the family computing tries to stay on the libre software side whenever I am the one who can decide, so sometimes challenges arise.

Android (phone and tablet)

The kid has played games on the phone and tablet with Android since he was a baby. We tried some of the popular games of the last few years. I am not so keen on banning things, but I don’t feel comfortable with the popular games for Android (advertisements, nonfree software, addictive elements, massive data collection and possible surveillance…), so I try to “control without being Cruella de Vil”. Some techniques I use:

  • We agree on the amount of time for using the tablet, “setting a tomato” to control the time (thanks, Pomodoro in F-Droid)
  • I set airplane mode and shut down the internet every time I can. I never register an account or log in to play (if it’s mandatory to log in to Google or create an account, sorry, but we cannot play, or we create a new empty profile).
  • I put barriers to installing games (for example, I say that he should uninstall 2 or 3 games first if he wants a new one, or he should explain well why he wants that game and why the ones that were installed suddenly became boring).
  • I’ve never bought games for Android, and I’m not thinking about buying them in the future. I prefer to donate for some libre game project.
  • I try to divert attention to other games (not computer games, usually).
  • If an equivalent non-computer game exists, we play that one (tic-tac-toe, hangman, battleship…)
  • I seldom play in the phone/tablet, unless we play together.

On the other hand, my phone has no Google Play, so we have been able to discover the “Games” section of F-Droid.

We have tried (all of them available in F-Droid, emphasis in the ones that he liked best): 2048, AndroFish, Bomber, Coloring for Kids, Core, Dodge, Falling Blocks, Free Fall, Frozen Bubble, HeriSwap (this one in the tablet), Hex, HyperRogue, Meerkat Challenge, Memory, Pixel Dungeon, Robotfindskitten, Slow it!, Tux Memory, Tux Rider, Vector Pinball.

Playing on my phone with CyanogenMod, with games downloaded from F-Droid, provides a relief similar to playing a non-computer game. At least with the games listed above. Maybe it is because they are simpler games, or because they remind me of the ones I played long ago. But it’s also the peace of mind of knowing that they are libre software, that they have been audited by the F-Droid community, and that they don’t abuse the user.

The same happens with Debian, which brings me to the next part of this blogpost.

Computer games: Debian

The kid learnt to play with the tablet and the phone before the computer, because our computers have no joysticks or touchscreens. He learnt to use the touchpad before the mouse, because it’s easier and we have no mouse at home. He learnt to use the mouse at school, where they “work” with educational games via CD or via web, using Flash :(

So Flash Player appears and disappears from my Debian setup depending on his willingness to play with the school “games”.

Until recently, on the computer we played with GCompris, ChildsPlay, and TuxPaint.
When he learnt to use the arrow keys, I installed Hannah and he liked it a lot, especially when we learnt to do “hannah -l 900” :)

Later, ClanTV ran advertisements for some online computer games based on their favorite series; it turned out they need Flash or a framework called Unity3D (no, it’s not Ubuntu’s Unity), and after digging a bit I decided that I was not going to install that #@%! on my Debian. So when he insisted on playing those games, I booted the Windows 7 partition on his father’s laptop and installed it there.

Windows is slow and sad on that computer, and those web games with that framework are not very light, so luckily they have not become very interesting.

We have not played on the computer much more, apart from some incursions into Minetest, which brings me to the next section.

(Not without stating my eternal thanks to the Games Team in Debian. They do very important work, and I think next year I’ll try to get involved in some way, because I know that the future of our family computer games is tied to libre games in Debian.)

PlayStation 3 and Minecraft

Some time ago my husband bought a PlayStation 3 for home. The shop had discounted prices and so on, and he would play together with the kid.
The machine came home with some games “for free” (included in the price), but most of them were rated 13+ or so, so the only two left were Pro Evolution Soccer and Minecraft.
I decided not to connect the machine to the network. Maybe we are losing cool things, but I feel safer that way. So, no ethernet cable plugged in, no registration in the Sony shop (or whatever its name is).

The controllers are quite complex for the three of us. They are DualShock something-or-other, and I think there is something (software) that makes the game adapt to the person playing, because my husband is a worse player after the son plays, if they play in turns using the same controller.

The kid liked Minecraft. I didn’t know anything about that game (well, I knew that there was a libre clone called Minetest), so, for learning the basics I had a look at the wiki and searched videos about “how to …” and we began learning. Now, the kid can read a bit so he needs less help, and he has watched a lot of videos about Minecraft, so he is interested in exploring and building.

I had a look at Minetest and installed it on Debian. Having to use the keyboard is a disadvantage, and we didn’t know how to dig, so it was not very attractive at first sight. I have looked a bit into how to use the PS3 controller on the computer, via USB, and it seems to work, but I suppose I need to write something to map each controller button to the corresponding key and action in Minetest. This is work, and I am lazy, and the boy seems not very interested in playing on the computer.

Watching the videos, we inferred that it’s possible to download saved games and worlds and load them onto the console, and we have done some tests. I wanted to load a saved game of an amusement park, but the file was in a folder named NPEB01899*, and even though the PS3 saw it when copying from USB to the console, it didn’t appear later in the list of saved games (our saved games were in folders named BLES01976). And renaming the folder didn’t work, of course.

I understood that we had met Sony’s restrictions, so I searched for more info. The games are saved using an encryption key, and you cannot use saved games from consoles in another world zone, or from a media type different than yours (the game can be played from a disc or purchased in the digital shop, it seems). All of this is very ugly! I read somewhere that there is certain software (libre software, BTW) that can break the encryption and re-encrypt the saved game with the zone and media type of your console, but it seems the program only works on Windows, and it needs a console ID that we don’t have, because we didn’t register the console in the PlayStation Network. All of this looks like shaky ground to me, unpleasant stuff, and I don’t want to spend time on it; maybe I should learn a bit more about Minetest, make it work and make it interesting, and tell Sony to go fly a kite. Finally, I found a saved game in the same format as ours (BLES01976); it’s not an amusement park, but it is a world with interesting places to explore and many things already built, so I imported it and it worked, and my son will be happy for some time, I suppose.

We have tried Minetest on the tablet too, but the touchscreen is not comfortable for this kind of game.

I feel quite frustrated and angry about Sony’s restrictions on saved games. So I suppose that in the next months I’ll try to learn more about Minetest in Debian, game controllers in Debian, and games in Debian in general. I hope to be able to offer cool stuff to my son, so that he becomes more interested in playing in a safe environment that does not abuse the user.

And with this, I finish…

Libre games in GNU/Linux, Debian, and info about games in internet

When we have searched for info about games on the internet, I have found that many times you need to leave the secure environment: webpages with download links that may or may not contain what they claim, advertisements, videoblogs with language not adequate for kids (or for anyone who loves their mother language)… That’s why I believe the path is to explore in depth the libre games provided by the distro you use (Debian in my case). Here I bookmark a list of websites with info that will surely be useful for me to read in depth:

We’ll see how it goes.


You can comment in this Pump.io thread.

Filed under: My experiences and opinion Tagged: Debian, English, F-Droid, Free culture, Free Software, Games, libre software, Moving into free software
Categories: Elsewhere

cs_shadow: Drupal Media in GSoC 2015

Planet Drupal - Fri, 05/06/2015 - 19:59
Drupal Media in GSoC 2015

Drupal Media has benefited from Google Summer of Code in the past. Last year one project under Drupal was directly a part of the Media Initiative: the Entity Embed module for Drupal 8. I was the lucky student who got this wonderful opportunity to work under the mentorship of Media Initiative leads Dave Reid (davereid) and Janez Urevc (slashrsm), but that’s a story from the past and you can read more about it in this blog post. In this post I’ll talk about the participation of Drupal Media in GSoC 2015.

3 Projects from Drupal Media, woot!

This year is turning out to be a great one for the Media Initiative in Google Summer of Code. We proposed three projects this year - all for Drupal 8 - and all of them were accepted, and we managed to get three outstanding students. We take pride in telling everyone that not only are all three students core contributors, they have also already contributed to one or more Media modules.

Now I would like to introduce the three students:

  • Jayesh Solanki (jayeshsolanki). Jayesh is a GSoC 2014 alumnus; last year he successfully ported the Disqus module to Drupal 8. This year, he will be assisting with the development of Entity Browser, a module for Drupal 8 that aims to provide a powerful and flexible framework for searching and selecting entities. If you want to follow the development of this project, it is all happening in this Github repository. Janez Urevc (slashrsm) will be mentoring this project.
  • Prateek Mehta (prateekMehta). Prateek will be working on developing the URL Embed module for Drupal 8. This project aims to build a framework for CKEditor that allows an end-user to display an embedded representation of a URL; the content of the URL can be a video, images, rich text or a link. This framework will handle URLs from various third-party sites and essentially replace the oEmbed module from Drupal 7. For more details, refer to this architecture discussion. Currently the development is happening in this Github repository, but this will change soon when we get a new namespace on Drupal.org (don’t worry, we’ll keep you posted). Dave Reid (davereid) and I will be mentoring this project.
  • Yuvraj Singh (root_brute). Yuvraj will be working on developing the Embed module for Drupal 8, which will be an API-level module. The idea of this project is to abstract the buttons, embed form, and display plugins from the Entity Embed module into a generic Embed module that can be used by Entity Embed, URL Embed, and other embeddables in Drupal 8. For more details, refer to this architecture discussion. If you want to follow the development of this project, it is all happening in this Github repository. Dave Reid (davereid) and I will be mentoring this project.

We have high hopes for all three projects and hope that all of them will finish successfully on schedule. I’m very happy to tell you that all the signs are pretty good so far.

Looking forward to an exciting summer of code.

Tags: Drupal Planet, Google Summer of Code, gsoc2015, gsoc
Categories: Elsewhere

Drupal Association News: What’s new on Drupal.org? - May 2015

Planet Drupal - Fri, 05/06/2015 - 19:35

Look for links to our Strategic Roadmap highlighting how this work falls into our priorities set by the Drupal Association Board and Drupal.org Working Groups.

Organization and User Profile Improvements

Explicit Attribution Option for ‘I am a Volunteer’

As a part of our effort to recognize individual contributions to the Drupal ecosystem we’ve slightly adjusted the options available to a user when making an attribution in the issue queues. Instead of simply assuming that a comment made without an attribution to an organization or customer is done by a volunteer - we now allow volunteers to explicitly mark their work as such. Requiring a positive affirmation of the volunteer attribution should improve the accuracy of the data we are gathering about the Drupal ecosystem.

This now means a user can make issue comment attributions in the following ways:

  1. Without attribution
  2. As a volunteer
  3. On behalf of an organization and/or customer
  4. Both as a volunteer and on behalf of an organization and/or customer.

We are seeing a rate of around 30% of issue comments attributed to an organization, customer or as volunteer work. We hope to see that rate increase steadily.

To date, there have also been over 7,000 issue credits that have been awarded to over 2,300 users and 175 organizations. We are looking forward to displaying these credits on user and organization profiles in the month of June and beginning to find new ways to reward our top contributors.

Content Strategy and Visual Design System for Drupal.org

Our collaboration with Forum One on developing a content strategy for Drupal.org finished a few weeks ago. While the recommendations were published in the issue queues earlier, we decided to use DrupalCon Los Angeles as an opportunity to present the work done and future plans in more detail, and get direct feedback from community members. Check out the session slides or video if you want to know more about the proposed changes to the Drupal.org IA and content strategy.

Right now we are working on a few preparatory steps before we can start implementing the changes. The first of those steps will be a card-sort exercise to validate our proposed IA and navigation with Drupal.org users. More blog posts and issues will follow as we move forward.

Issue Workflow and Git Improvements

The Drupal Association has been preparing a plan for a new issue workflow on Drupal.org - with some very exciting improvements planned to create a workflow that is both familiar to other repository hosts and yet unique to the needs of the Drupal community.

Perhaps the greatest value of the new Git workflow will be the presence of per-issue repositories and pull requests on Drupal.org issues without forking the issue conversations. Drupal.org will use git namespaces to provide every developer working on an issue with their own branch. Developers will be able to pull in the latest changes from HEAD, or changes from other users’ branches. Drupal.org will be able to summarize the commits, take the changeset and run tests, and help maintainers manage the merge process to push changes upstream.

This architecture will make additional features possible as well:

  • The patch based workflow will continue to work - behind the scenes Drupal.org will create commits on namespaced branches from these patches so that these code contributions will be first-class citizens with the new git workflow.
  • We will be able to provide an inline editor for code in issues - simplifying the workflow for contributions such as code style fixes, documentation, quick typo corrections, etc.
  • We can provide the option to compare any two changes in an issue, giving us automated interdiff functionality.
  • We can identify merge conflicts across issues - to hopefully prevent conflicts across issues before they become too deeply entangled.

This planning work culminated in a presentation at DrupalCon Los Angeles, where the community provided some great feedback and dove in to help us with some architectural components during the extended sprints.

Implementation of the new Issue Workspaces architecture will certainly take some time - but we’re excited to have a plan to work from as we move forward.

Community Initiatives

Two-Factor Authentication

May saw the initial roll out of Two-Factor Authentication on Drupal.org. Users with elevated privileges on Drupal.org now have the option of enabling TFA, and this may become required for all elevated roles in future.

Next we want to make two-factor authentication available to all authenticated users on Drupal.org. However, before we can allow every user to enable it, it is important that we create a support policy for resetting accounts with TFA enabled, which is still under discussion.


DrupalCI

DrupalCon Los Angeles was a great opportunity to meet with the community and talk about the current state of DrupalCI and its upcoming release.

As of the end of May, DrupalCI is very close to being ready for integration on Drupal.org. All of the environments requested for the MVP deployment are functional, and the Drupal Association staff is getting ready to demo the integration with Drupal.org on a development site. At the same time, work continues on the results site component and the test-runner’s results-publishing capabilities.

DrupalCI will be rolled out in parallel with the existing PIFT/PIFR infrastructure for at least a few months following initial deployment as a sanity check.


Localize.drupal.org

Click-testing has identified several additional issues going into the end of May, and the Association team continues to knock the issues down as they appear. When the current set of identified issues is resolved, we intend to notify the most active translation groups and ask them to perform a final round of testing on the staging environment.

When any issues from that final round of testing are resolved, we will deploy the D7 version of Localize.drupal.org.

Revenue-related projects (funding our work)

DrupalCons

DrupalCon Los Angeles was a productive and fun event for the community and the Association staff - in every way a success. At the conference we made several announcements about the upcoming DrupalCons, including 2016 locations.

First, we announced the opening of the call for papers for DrupalCon Barcelona, September 21st-25th. The call for papers for Barcelona closes on June 8th.

We then announced our next two conferences, and launched their websites.
DrupalCon Asia will be held in Mumbai in February of 2016.

And the next DrupalCon North America will be held on May 9th-13th, 2016 in New Orleans!

Sustaining Support and Maintenance

The Git servers replacing our existing Git infrastructure are nearly ready for thorough testing and deployment. These servers give us a highly available cluster for git.drupal.org, in addition to increased storage capacity, a newer operating system, and dedicated hardware for Git services on Drupal.org.

Our Fastly CDN deployment for downloads (ftp.drupal.org) was a success, and soon to follow is the same new architecture for updates traffic (updates.drupal.org). This new architecture uses dynamic purging to reduce the number of update requests served by our origin servers. It also decreases the latency between packaging a release and serving the update data from a number of minutes to a few seconds.

As always, we’d like to say thanks to all volunteers who are working with us and to the Drupal Association Supporters, who made it possible for us to work on these projects.

Follow us on Twitter for regular updates: @drupal_org, @drupal_infra.

Categories: Elsewhere

Simon McVittie: Why polkit (or, how to mount a disk on modern Linux)

Planet Debian - Fri, 05/06/2015 - 18:32

I've recently found myself explaining polkit (formerly PolicyKit) to one of Collabora's clients, and thought that blogging about the same topic might be useful for other people who are confused by it; so, here is why udisks2 and polkit are the way they are.

As always, opinions in this blog are my own, not Collabora's.

Privileged actions

Broadly, there are two ways a process can do something: it can do it directly (i.e. ask the kernel directly), or it can use inter-process communication to ask a service to do that operation on its behalf. If it does it directly, the components that say whether it can succeed are the Linux kernel's normal permissions checks (DAC), and if configured, AppArmor, SELinux or a similar MAC layer. All very simple so far.

Unfortunately, the kernel's relatively coarse-grained checks are not sufficient to express the sorts of policies that exist on a desktop/laptop/mobile system. My favourite example for this sort of thing is mounting filesystems. If I plug in a USB stick with a FAT filesystem, it's reasonable to expect my chosen user interface to either mount it automatically, or let me press a button to mount it. Similarly, to avoid data loss, I should be able to unmount it when I'm finished with it. However, mounting and unmounting a USB stick is fundamentally the same system call as mounting and unmounting any other filesystem - and if ordinary users can do arbitrary mount system calls, they can cause all sorts of chaos, for instance by mounting a filesystem that contains setuid executables (privilege escalation), or umounting a critical OS filesystem like /usr (denial of service). Something needs to arbitrate: “you can mount filesystems, but only under certain conditions”.

The kernel developer motto for this sort of thing is “mechanism, not policy”: they are very keen to avoid encoding particular environments' policies (the sort of thing you could think of as “business rules”) in the kernel, because that makes it non-generic and hard to maintain. As a result, direct mount/unmount actions are only allowed by privileged processes, and it's up to user-space processes to arrange for a privileged process to make the desired mount syscall.

Here are some other privileged actions which laptop/desktop users can reasonably expect to “just work”, with or without requiring a sysadmin-like (root-equivalent) user:

  • reconfiguring networking (privileged because, in the general case, it's an availability and potentially integrity issue)
  • installing, upgrading or removing packages (privileged because, in the general case, it can result in arbitrary root code execution)
  • suspending or shutting down the system (privileged because you wouldn't want random people doing this on your server, but should normally be allowed on e.g. laptops for people with physical access, because they could just disconnect the power anyway)

In environments that use a MAC framework like AppArmor, actions that would normally be allowed can become privileged: for instance, in a framework for sandboxed applications, most apps shouldn't be allowed to record audio. The MAC policy prevents the app from carrying out these actions directly, so once again the only way to achieve them is to ask a service to carry out the action on the app's behalf.

Ask a system service to do it

On to the next design, then: I can submit a request to a privileged process, which does some checks to make sure I'm not trying to break the system (or alternatively, that I have enough sysadmin rights that I'm allowed to break the system if I want to), and then does the privileged action for me.

You might think I'm about to start discussing D-Bus and daemons, but actually, a prominent earlier implementation of this was mount(8), which is normally setuid root:

% ls -l /bin/mount
-rwsr-xr-x 1 root root 40000 May 22 11:37 /bin/mount

If you look at it from an odd angle, this is inter-process communication across a privilege boundary: I run the setuid executable, creating a process. Because the executable has the setuid bit set, the kernel makes the process highly privileged: its effective uid is root, and it has all the necessary capabilities to mount filesystems. I submit the request by passing it in the command-line arguments. mount does some checks - specifically, it looks in /etc/fstab to see whether the filesystem I'm trying to mount has the “user” or “users” flag - then carries out the mount system call.
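For illustration, the fstab check described above can be modelled in a few lines of Python. This is a simplified sketch, not mount(8)'s actual parser; the sample fstab contents and the helper's name are invented for this example:

```python
# Simplified model of mount(8)'s /etc/fstab check: a non-root user may
# mount an entry only if its options contain the "user" or "users" flag.

SAMPLE_FSTAB = """\
# <device>        <mountpoint>  <type>  <options>
/dev/sda1         /             ext4    errors=remount-ro
/dev/sdb1         /media/usb    vfat    noauto,user
"""

def may_user_mount(fstab_text, mountpoint):
    """Return True if fstab marks this mountpoint as user-mountable."""
    for line in fstab_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        fields = line.split()
        if len(fields) >= 4 and fields[1] == mountpoint:
            options = fields[3].split(",")
            return "user" in options or "users" in options
    return False  # not listed in fstab at all: only root may mount it

print(may_user_mount(SAMPLE_FSTAB, "/media/usb"))  # True
print(may_user_mount(SAMPLE_FSTAB, "/"))           # False
```

The point of the sketch is the shape of the policy: a static table of devices, consulted by a privileged process before it makes the mount syscall.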

There are a few obvious problems with this:

  • When machines had a static set of hardware devices (and a sysadmin who knew how to configure them), it might have made sense to list them all in /etc/fstab; but this is not a useful solution if you can plug in any number of USB drives, or if you are a non-expert user with Linux on your laptop. The decision ought to be based on general attributes of devices, such as “is removable?”, and on the role of the machine.
  • Setuid executables are alarmingly easy to get wrong, so it is not necessarily wise to assume that mount(8) is safe to be setuid.
  • One fact that a reasonable security policy might include is “users who are logged in remotely should have less control over physically present devices than those who are physically present” - but that sort of thing can't be checked by mount(8) without specifically teaching the mount binary about it.
Ask a system service to do it, via D-Bus or other IPC

To avoid the issues of setuid, we could use inter-process communication in the traditional sense: run a privileged daemon (on boot or on-demand), make it listen for requests, and use the IPC channel as our privilege boundary.

udisks2 is one such privileged daemon, which uses D-Bus as its IPC channel. D-Bus is a commonly-used inter-process communication system; one of its intended/designed uses is to let user processes and system services communicate, especially this sort of communication between a privileged daemon and its less-privileged clients.

People sometimes criticize D-Bus as not doing anything you couldn't do yourself with some AF_UNIX sockets. Well, no, of course it doesn't - the important bit of the reference implementation and the various interoperable reimplementations consists of a daemon and some AF_UNIX sockets, and the rest is a simple matter of programming. However, it's sufficient for most uses in its problem space, and is usually better than inventing your own.

The advantage of D-Bus over doing your own thing is precisely that you are not doing your own thing: good IPC design is hard, and D-Bus makes some structural decisions so that fewer application authors have to think about them. For instance, it has a central “hub” daemon (the dbus-daemon, or “message bus”) so that n communicating applications don't need O(n²) sockets; it uses the dbus-daemon to provide a total message ordering so you don't have to think about message reordering; it has a distributed naming model (which can also be used as a distributed mutex) so you don't have to design that; it has a serialization format and a type system so you don't have to design one of those; it has a framework for “activating” run-on-demand daemons so they don't have to use resources initially, implemented using a setuid helper and/or systemd; and so on.

If you have religious objections to D-Bus, you can mentally replace “D-Bus” with “AF_UNIX or something” and most of this article will still be true.

Is this OK?

In either case - exec'ing a privileged helper, or submitting a request to a privileged daemon via IPC - the privileged process has two questions that it needs to answer before it does its work:

  • what am I being asked to do?
  • should I do it?

It needs to make some sort of decision on the latter based on the information available to it. However, before we even get there, there is another layer:

  • did the request get there at all?

In the setuid model, there is a simple security check that you can apply: you can make /bin/mount only executable by a particular group, or only executable by certain AppArmor profiles, or similar. That works up to a point, but cannot distinguish between physically-present and not-physically-present users, or other facts that might be interesting to your local security policy. Similarly, in the IPC model, you can make certain communication channels impossible, for instance by using dbus-daemon's ability to decide which messages to deliver, or AF_UNIX sockets' filesystem permissions, or a MAC framework like AppArmor.

Both of these are quite “coarse-grained” checks which don't really understand the finer details of what is going on. If the answer to “is this safe?” is something of the form “maybe, it depends on...”, then they can't do the right thing: they must either let it through and let the domain-specific privileged process do the check, or deny it and lose potentially useful functionality.

For instance, in an AppArmor environment, some applications have absolutely no legitimate reason to talk to udisks2, so the AppArmor policy can just block it altogether. However, once again, this is a coarse-grained check: the kernel has mechanism, not policy, and it doesn't know what the service does or why. If the application does need to be able to talk to the service at all, then finer-grained access control (obeying some, but not all, requests) has to be the service's job.

dbus-daemon does have the ability to match messages in a relatively fine-grained way, based on the object path, interface and member in the message, as well as the routing information that it uses itself (i.e. the source and destination). However, it is not clear that this makes a great deal of sense conceptually: these are facts about the mechanics of the IPC, not facts about the domain-specific request (because the mechanics of the IPC are all that dbus-daemon understands). For instance, taking the udisks2 example again, dbus-daemon can't distinguish between an attempt to adjust mount options for a USB stick (probably fine) and an attempt to adjust mount options for /usr (not good).

To have a domain-specific security policy, we need a domain-specific component, for instance udisks2, to get involved. Unlike dbus-daemon, udisks2 knows that not all disks are equal, knows which categories make sense to distinguish, and can identify which categories a particular disk is in. So udisks2 can make a more informed decision.

So, a naive approach might be to write a function in udisks2 that looks something like this pseudocode:

may_i_mount_this_disk (user, disk, mount options) → boolean
{
    if (user is root || user is root-equivalent)
        return true;
    if (disk is not removable)
        return false;
    if (mount options are scary)
        return false;
    if (user is in “manipulate non-local disks” group)
        return true;
    if (user is not logged-in locally)
        return false;
    # https://en.wikipedia.org/wiki/Multiseat_configuration
    if (user is not logged-in on the same seat where the disk is plugged in)
        return false;
    return true;
}

Delegating the security policy to something central

The pseudocode security policy outlined above is reasonably complicated already, and doesn't necessarily cover everything that you might want to consider.

Meanwhile, not every system is the same. A general-purpose Linux distribution like Debian might run on server/mainframe systems with only remote users, personal laptops/desktops with one root-equivalent user, locked-down corporate laptops/desktops, mobile devices and so on; these systems should not necessarily all have the same security policy.

Another interesting factor is that for some privileged operations, you might want to carry out interactive authorization: ask the requesting user to confirm that the action (which might have come from a background process) should take place (like Windows' UAC), or to prove that the person currently at the keyboard is the same as the person who logged in by giving their password (like sudo).

We could in principle write code for all of this in udisks2, and in NetworkManager, and in systemd, ... - but that clearly doesn't scale, particularly if you want the security policy to be configurable. Enter polkit (formerly PolicyKit), a system service for applying security policies to actions.

The way polkit works is that the application does its domain-specific analysis of the request - in the case of udisks2, whether the device to be mounted is removable, whether the mount options are reasonable, etc. - and converts it into an action. The action gives polkit a way to distinguish between things that are conceptually different, without needing to know the specifics. For instance, udisks2 currently divides up filesystem-mounting into org.freedesktop.udisks2.filesystem-mount, org.freedesktop.udisks2.filesystem-mount-fstab, org.freedesktop.udisks2.filesystem-mount-system and org.freedesktop.udisks2.filesystem-mount-other-seat.
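To make the idea concrete, here is a toy sketch of how a service might pick one of those action names for a mount request. The four action ids are the real udisks2 names quoted above, but the classification logic is an invented approximation for illustration, not udisks2's actual code:

```python
# Toy model of turning a domain-specific request into a polkit action name.
# The action ids are real udisks2 action names; the decision logic below
# is an illustrative guess at the kind of classification involved.

def classify_mount_request(in_fstab, system_device, requester_seat, device_seat):
    """Pick the polkit action id that describes this mount request."""
    if in_fstab:
        # the sysadmin already expressed an opinion about this device
        return "org.freedesktop.udisks2.filesystem-mount-fstab"
    if system_device:
        # non-removable/system devices are the dangerous case
        return "org.freedesktop.udisks2.filesystem-mount-system"
    if requester_seat != device_seat:
        # physically present, but at a different seat than the device
        return "org.freedesktop.udisks2.filesystem-mount-other-seat"
    return "org.freedesktop.udisks2.filesystem-mount"

# A removable disk plugged into the requester's own seat: the ordinary case.
print(classify_mount_request(False, False, "seat0", "seat0"))
# → org.freedesktop.udisks2.filesystem-mount
```

The domain-specific knowledge (which category a disk belongs to) stays in the service; only the resulting action name is handed to polkit.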

The application also finds the identity of the user making the request. Next, the application sends the action, the identity of the requesting user, and any other interesting facts to polkit. As currently implemented, polkit is a D-Bus service, so this is an IPC request via D-Bus. polkit consults its database of policies in order to choose one of several results:

  • yes, allow it
  • no, do not allow it
  • ask the user to either authenticate as themselves or as a privileged (sysadmin) user to allow it, or cancel authentication to not allow it
  • ask the user to authenticate the first time, but if they do, remember that for a while and don't ask again

So how does polkit decide this? The first thing is that it reads the machine-readable description of the actions, in /usr/share/polkit-1/actions, which specifies a default policy. Next, it evaluates a local security policy to see what that says. In the current version of polkit, the local security policy is configured by writing JavaScript in /etc/polkit-1/rules.d (local policy) and /usr/share/polkit-1/rules.d (OS-vendor defaults). In older versions such as the one currently shipped in Debian unstable, there was a plugin architecture; but in practice nobody wrote plugins for it, and instead everyone used the example local authority shipped with polkit, which was configured via files in /etc/polkit-1/localauthority and /etc/polkit-1/localauthority.d.
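The resulting decision flow (rules consulted in order, falling back to the action's declared default) can be sketched like this. This is a simplified model for illustration only: the defaults and the example rule are invented, and real polkit evaluates JavaScript rules, not Python functions:

```python
# Simplified model of polkit's decision flow: rules are consulted in
# priority order, and the first one that returns a verdict wins; if
# none does, the action's declared default applies.

ACTION_DEFAULTS = {
    # stand-ins for the defaults in /usr/share/polkit-1/actions
    "org.freedesktop.udisks2.filesystem-mount": "yes",
    "org.freedesktop.udisks2.filesystem-mount-system": "auth_admin",
}

def rule_sudo_group(action_id, subject):
    """Example local rule: active local sudo members may mount system devices."""
    if (action_id == "org.freedesktop.udisks2.filesystem-mount-system"
            and subject["local"] and subject["active"]
            and "sudo" in subject["groups"]):
        return "yes"
    return None  # no opinion; fall through to the next rule

def check_authorization(action_id, subject, rules):
    for rule in rules:  # /etc/polkit-1/rules.d before /usr/share/...
        verdict = rule(action_id, subject)
        if verdict is not None:
            return verdict
    return ACTION_DEFAULTS.get(action_id, "no")

alice = {"local": True, "active": True, "groups": ["sudo"]}
print(check_authorization(
    "org.freedesktop.udisks2.filesystem-mount-system", alice,
    [rule_sudo_group]))  # → yes (the rule overrides the auth_admin default)
```

For a subject that no rule matches, the default from the action description ("auth_admin" here) would be returned instead, producing the interactive authentication prompt.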

These policies can take into account useful facts like:

  • what is the action we're talking about?
  • is the user logged-in locally? are they active, i.e. they are not just on a non-current virtual console?
  • is the user in particular groups?

For instance, gnome-control-center on Debian installs this snippet:

polkit.addRule(function(action, subject) {
    if ((action.id == "org.freedesktop.locale1.set-locale" ||
         action.id == "org.freedesktop.locale1.set-keyboard" ||
         action.id == "org.freedesktop.hostname1.set-static-hostname" ||
         action.id == "org.freedesktop.hostname1.set-hostname" ||
         action.id == "org.gnome.controlcenter.datetime.configure") &&
        subject.local && subject.active &&
        subject.isInGroup("sudo")) {
        return polkit.Result.YES;
    }
});

which is reasonably close to being pseudocode for “active local users in the sudo group may set the system locale, keyboard layout, hostname and time, without needing to authenticate”. A system administrator could of course override that by dropping a higher-priority policy for some or all of these actions into /etc/polkit-1/rules.d.

  • Kernel-based permission checks are not sufficiently fine-grained to be able to express some quite reasonable security policies
  • Fine-grained access control needs domain-specific understanding
  • The kernel doesn't have that information (and neither does dbus-daemon)
  • The privileged service that does the domain-specific thing can provide the domain-specific understanding to turn the request into an action
  • polkit evaluates a configurable policy to determine whether privileged services should carry out requested actions
Categories: Elsewhere

Acquia: Drupal 8 - 1st product of the PHP-FIG Era

Planet Drupal - Fri, 05/06/2015 - 17:45

I was happy to talk with two major contributors to Drupal 8 at the same time at Drupal South 2015 in Melbourne, Australia. At the time we recorded our conversation in March 2015, Hussain Abbas from Bangalore, India and Jibran Ijaz from Lahore, Pakistan had both contributed well over 100 patches to D8. In this podcast we talk about their history in Drupal, open source software as a force for good in society, the benefits of contribution, Drupal as the 1st project of the PHP-FIG era, Drupal 8 for developers, the incredible energy and size of the Australasian Drupal community, and more.

Categories: Elsewhere

Cocomore: MYSQL - Backup & Recovery

Planet Drupal - Fri, 05/06/2015 - 16:47

Backups are very important for every application, especially if a lot of data is stored in your database. For a website with few updates, regular backups are less critical: you can restore the site from last week's backup, and if there were just one or two updates since then, you can re-apply them manually afterwards. But if you run a community site with user-generated content and a lot of input, backup & recovery becomes a lot more important, and also more complex: if the last backup is from last night, you have to account for all the updates that were made in the meantime.


Categories: Elsewhere

Acquia: Drupal in China

Planet Drupal - Fri, 05/06/2015 - 16:11

That's me: front row, second from the left.

On 14th March 2015, as everyone was coming down from the month-long Valentine's Day high, I was in the midst of an exciting Open Source event in Shanghai, China.

With several hundred attendees, the camp attracted people from all over China and the world. Experts and beginners -- in both Drupal and a number of Open Source technologies -- engaged in conversations about CMS, design, language and even solar panels.

Partnering with the Shanghai Barcamp, DrupalCamp China brought together Drupal users, developers, architects, and entrepreneurs from all over China, and the world, to talk Drupal and Open Source.

I was lucky enough to be asked to speak about Drupal 8 at the event.

As both a Drupal 8 advocate, and someone highly interested in the development of the Drupal community in the Asia Pacific & Japan region (you can read about my Acquia background here), I jumped at the chance to give my take on how the latest version of Drupal will revolutionize open source in China.

Covering the key changes and new features that make Drupal 8 the most exciting version of the framework yet, I took the audience on a whistle-stop tour of the features Drupal most needs out of the box:

  • multilingual
  • mobile first
  • inline editing
  • site preview
  • configuration management
  • REST
  • incorporation of other PHP projects

Since this was a Barcamp event, it wasn’t only Drupal community members benefiting from this knowledge, but the entire Shanghai and Chinese technology community.

My friend Sheng Wang has written up a more in-depth report on the day's events, and a broader analysis of Drupal in China. I highly recommend that you check it out.

Both Sheng and I agree: knowing the level of adoption of Drupal in China currently, and looking at the benefits companies are able to take advantage of when laid out against existing solutions, it’s only a matter of time and understanding that will propel Drupal into the forefront -- as it has done in so many other countries.

Tags:  acquia drupal planet
Categories: Elsewhere

CiviCRM Blog: 16-19 July 2015: NYC Drupal Camp, Aegir Summit and CiviCRM turn-key hosting

Planet Drupal - Fri, 05/06/2015 - 15:44

For those of you in the New York City area, 16-19 July 2015 is NYC Drupal Camp (pronounced "nice camp"), an annual grassroots non-profit conference run by volunteers. The event covers a broad range of topics related to Drupal. As part of the camp, the developers of the Aegir hosting system have organised the first Aegir Summit, 16-18 July.

For those not familiar with Aegir: it is a control panel based on Drupal and Drush that helps automate the installation of Drupal, typically in a multi-site architecture (one code base, many independent sites). With the provision_civicrm module, Aegir can also automate the installation of CiviCRM. This means that with a few clicks, you can create a new database, install Drupal and CiviCRM, configure the web server and optionally manage the SSL certificate. It also helps to automate other tasks, such as backups, upgrades and cloning. Need to create a new testing site for a client? Two clicks and it's ready. If you often create the same types of sites: create a model site, then clone it every time you need a new instance.

There are also plans to support WordPress in Aegir. I have been working on a prototype that uses the command line tool wp-cli instead of Drush. If you would like to try it, please keep in mind that it is highly experimental and requires patching Aegir 3 (which is still in beta, although there are Debian/Ubuntu packages). The code is available here: hosting_wordpress (please read the 'Readme' file for installation notes).

If you are into Docker and other types of farming, you might find the Aegir Summit interesting as well. There has been a lot of talk about moving Aegir to a more Docker/container-friendly architecture in the next phase, for example.

Long story short: if you are in the NYC area, it would be great to see you there. If you cannot make it and you are interested in any of the above, feel free to leave a comment on this blog post or contact me by e-mail: mathieu at symbiotic.coop.

If you are a CiviCRM hosting provider and you would like to provide a self-serve online form so that your future users can test your services and create a new CiviCRM instance, that's possible too. Hopefully we will have a demo in time for the Aegir Summit, but in the meantime, I will leave the following teaser below (and yes, the sign-up form is a CiviCRM form!) :-)

Categories: Elsewhere

J-P Stacey: Client/agency relationships at last week's Oxford Drupal User Group

Planet Drupal - Fri, 05/06/2015 - 15:40

Two days ago was June's Oxford Drupal User Group. As with our March special event, we were very lucky to get two local speakers. This time round, both were the main points of contact at their respective organizations: leading Drupal projects themselves, but as part of that having to work with external suppliers, Drupal agencies brought in for their relevant expertise.


Categories: Elsewhere

Lullabot: Let's Chat About Web Accessibility

Planet Drupal - Fri, 05/06/2015 - 14:57

Join Amber Matz as she chats with web accessibility aficionados Mike Gifford, Chris Albrecht, and Helena Zubkow about what web developers and Drupalistas can do to build more accessible web sites. How has web accessibility changed over the years? Who is being left behind? What are some common gotchas? What are some easy ways to get started testing for accessibility? All these questions and more are discussed in today's podcast. Don't forget to check out the links and resources in the show notes for all sorts of useful things mentioned in our discussion.

Categories: Elsewhere

Patrick Schoenfeld: Testing puppet modules: an overview

Planet Debian - Fri, 05/06/2015 - 14:32

When it comes to testing puppet modules, there are a lot of options, but for someone entering the world of puppet module testing, the sheer variety may seem overwhelming. This is an attempt to provide some overview.

So you’ve written a puppet module and would like to add some tests. Now what? As of today, puppet tests can basically be done in two ways, which complement each other:

  • Catalog tests (e.g. testing the compiled puppet catalog)
  • Functional/Acceptance tests in a real environment

Catalog tests
In most cases you should at least write some catalog tests.
As of writing this (June 2015), the tool of choice is rspec-puppet. There used to be at least one other, and you might have heard about it, but it’s deprecated. For an introduction to this tool, you are best served by reading its brief but sufficient docs.

Function acceptance tests
If catalog testing is not enough for you (e.g. you want to test that your website profile actually installs your site and serves it on ports 80 and 443), the next logical step is to write beaker tests, which run against a real system (as real as a virtual machine can be). This is also what you need if you are writing custom types.

Today’s tool of choice for this job is beaker with beaker-rspec. After you’ve written some rspec tests, this might feel familiar. Since the documentation might not seem very newbie-friendly at first glance, the page

Howto Beaker

lists the relevant pages of the documentation to get started, in a sensible order. Basically it’s: update your module's build dependencies (Gemfile), decide on a hypervisor, create (or describe) your test environment, write spec tests and execute them.

Skeleton of a module

If testing puppet modules falls into your lap and you’ve already written your puppet code, it’s too late to start with a module anatomy as generated by

puppet module generate

But: it’s certainly a good bet to know which technologies are today's common best practice.

Further reading

A very good guide to setting things up and writing tests of both types is the three-part blog post by Mickaël Canévet, written for Camptocamp. It is a basic guide to test-driven development (writing tests before writing actual code) using a practical example.


Categories: Elsewhere

Drupal core announcements: Recording from June 5th 2015 Drupal 8 critical issues discussion

Planet Drupal - Fri, 05/06/2015 - 14:07

It came up multiple times at recent events that it would be very helpful for people significantly working on Drupal 8 critical issues to get together more often to talk about the issues and unblock each other where discussion is needed. While these meetings do not by any means replace the issue queue discussions (any more than in-person meetings at events do), they do help to unblock things much more quickly. We also don't believe that the number of people working on critical issues, or which people those are, should be limited, so we did not want to keep the discussions closed. After our first meeting last week, here is the recording of the second meeting from today, in the hope that it helps more than just those who attended:

Unfortunately not all people invited made it this time. If you also have significant time to work on critical issues in Drupal 8 and we did not include you, let me know as soon as possible.

The issues mentioned were as follows:

Alex Pott
Performance issue: https://www.drupal.org/node/2470679
Entity title: https://www.drupal.org/node/2498849
Render cache for views: https://www.drupal.org/node/2381277

daniel wehner, Gábor Hojtsy
Make Views bulk operations entity translation aware: https://www.drupal.org/node/2484037

Lee Rowlands
Ensure #markup is XSS escaped in Renderer::doRender(): https://www.drupal.org/node/2273925
Create a php script that can dump a database for testing update hooks: https://www.drupal.org/node/2497323
Views::getApplicableViews() initializes displays during route rebuilding etc.: https://www.drupal.org/node/2497017

Jibran Ijaz
FieldItemInterface methods are only invoked for SQL storage and are inconsistent with hooks: https://www.drupal.org/node/2478459#comment-9983133

Alex Pott
FieldItemInterface: https://www.drupal.org/node/2478459
Ajax form patch: https://www.drupal.org/node/2263569

daniel wehner
HTML IDs: https://www.drupal.org/node/1305882

Jibran Ijaz
PHP Script for dumping the database: https://www.drupal.org/node/2497323

Categories: Elsewhere

Guido Günther: Debian work in May

Planet Debian - Fri, 05/06/2015 - 14:05

May was the first month I started to contribute to Debian LTS under the Freexian umbrella. In total I spent six hours working on:

My current workflow looks like

Now I have an already patched source tree to add the backported patches to. Especially in cases where the Jessie version is already fixed this makes it rather quick to get an idea what the affected versions are and to see how the code evolved over time.

In order for this to work properly I made (on non LTS time) some improvements to gbp:

  • git-pbuilder now knows about LTS so it can create chroots like:

    DIST=squeeze-lts git-pbuilder create
  • gbp buildpackage is now clever enough to figure out the distribution to build for from the current branch name if you adhere to DEP14. So in case you're building from a git branch named debian/squeeze-lts it will automatically pass DIST=squeeze-lts to git-pbuilder. This needs

    [buildpackage] dist=DEP14

    in gbp.conf.

  • gbp pq now tries harder to preserve patch names. Having patch names adhere to what git am writes out is nice, but renaming patches just leads to too much noise when importing and exporting from existing packages (#761161). gbp pq still needs to improve in preserving DEP-3 header information though (#785274).

Categories: Elsewhere

Mike Gabriel: My FLOSS activities in May 2015

Planet Debian - Fri, 05/06/2015 - 12:41

May 2015 has been mainly dedicated to these three fields of endeavour:

  • development of nx-libs (3.6.x branch), license clarification of nxcomp (a library in nx-libs)
  • contribution to Debian LTS, Debian packaging
  • test deployment of Ganeti and Ganeti Manager (a web frontend for Ganeti)
Received Sponsorship

I am happy to report that in May 2015 I received personal sponsorship of 3,000 EUR from a sponsor who prefers not to be named. The sponsorship has been dedicated to supporting my work on The Arctica Project.

Last month's contributions of mine (8h) to the Debian LTS project had been contracted by Freexian [1] again. Thanks to Raphael Hertzog for having me on the team. Thanks to all the people and companies sponsoring the Debian LTS Team's work.

Development and License of nx-libs 3.6.x

What has been achieved in May 2015 concerning the nx-libs development?

  • nx-libs(-lite) continues to stay DFSG-compliant (see [2] for details)
  • Ulrich Sibiller has started working on enabling the RandR based Xinerama extension protocol in nxagent (the prototypes already work quite well) [3]. His work will make it possible to:
    • drop libNX_Xinerama from nx-X11,
    • drop the need for the xinerama.conf file that has to be updated from the client side of the session whenever a RandR change occurs on the client side
    • drop some nasty LD_LIBRARY_PATH hack from x2goruncommand (in X2Go)


Categories: Elsewhere

Sooper Drupal Themes: Case Study: Glazed Drag and Drop Drupal Theme

Planet Drupal - Fri, 05/06/2015 - 12:32

On June 1st 2015, SooperThemes.com released the first Drupal theme that integrates a visual front-end Drag and Drop page builder. I have worked on this project for almost a year, with some breaks in between for client projects that pay the bills. With the release of this theme I have retired all other SooperThemes Drupal themes; all new designs will use the Glazed theme as a platform. It was a great adventure, and I feel proud to show you what I think is the best work I could do.

Project Goals

1. Empowering novice users to build high-end responsive websites

I think the ecosystem for paid plugins has worked rather well for the WordPress community. WordPress has various plugins that do Drag and Drop better than any other web application, and various open source solutions are now also emerging for WordPress. This is an example of how the WordPress community profits from a thriving paid-plugin ecosystem.

Drupal is really lagging behind in the user experience for building responsive websites. I hope that my Drag and Drop theme will help reverse the trend and attract more young people who are interested in Drupal's flexibility and power as a CMS. In Glazed theme, building grids and setting breakpoints is all done without writing a single line of code:


2. Helping experienced site builders work faster through development automation

“Amazee Labs employs three back end developers, but nine site builders.” (Michael Schmidt, Amazee Labs)

Like many Drupal shops, Amazee Labs has discovered you can provide the most value to your customers by hiring several site builders for every programmer on your team. Automation is good for everyone: you can build more websites in less time and with less training.

Drag and drop web building is not just a gimmick anymore. The tools have evolved to be more powerful, produce better code, and leverage MVC frameworks to create a fluent site building experience that runs mostly in the browser. In Glazed theme this experience is integrated with Views and the block system: you can create highly dynamic pages and even dashboards with Drag and Drop, without losing the reusability of the components you build.


Visual Design and Front-end Architecture

Glazed at the core is architected to be a platform for design. Still, it's necessary that Glazed has a visual identity through which it can demonstrate how good Drupal can look right out of the installer:

Glazed Drupal Theme Main Demo

There are also several demo packs that demonstrate different niche-designs that are built with Glazed.

Bootstrap to the bone

One of my goals with Glazed theme was for everything to be naturally mobile-friendly. The decision to integrate with Bootstrap 3 seems like an easy one, but at first it was not. I was always wary of CSS frameworks because they somewhat limit design freedom, and in the past I had always used more sophisticated tools like Susy and Zen Grids. However, the overwhelming availability of Drupal integrations with Bootstrap 3, and the excellent documentation and support that comes from the Twitter Bootstrap team, sealed the deal for me.

Glazed theme integrates with Bootstrap on many levels:

  • Bootstrap based Drag and Drop page builder
  • Bootstrap views integration
  • Bootstrap fields API integration
  • Bootstrap block class integration 
  • Bootstrap shortcode library
  • Bootstrap basetheme
  • Bootstrap image style and media integration
  • Bootstrap based Drupal Distribution ( CMS Powerstart )

SASS, Bootstrap and a library of CSS Elements

I just love SASS; it makes writing CSS a joyful experience without nasty browser prefixes and futile code repetition. Having an awesome drag and drop page builder is boring if you don't have beautiful design elements to drop into your website. Glazed comes with an army of naturally mobile-friendly, beautiful elements. To get an idea of what this means, take a look at the Motion Box, Time Line and Pricing Table elements in the main demo. These elements can be dropped anywhere in your webpage and you can edit them without writing a single line of code! This is a one-step process and you get fully customizable elements.
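As a small sketch of what "no nasty browser prefixes and no code repetition" looks like in practice (the mixin name here is hypothetical, not taken from the Glazed codebase), a SASS mixin can emit the vendor prefixes once and be reused everywhere:

```scss
// Hypothetical helper mixin: write the vendor prefixes once, reuse everywhere.
@mixin prefixed-transition($args...) {
  -webkit-transition: $args;
  -moz-transition: $args;
  -o-transition: $args;
  transition: $args;
}

.stpe-imagebox__image {
  // One line instead of four repeated declarations per selector.
  @include prefixed-transition(opacity 0.3s ease);
}
```

Tools like Compass (or, today, Autoprefixer) ship ready-made mixins of this kind, so hand-written stylesheets stay prefix-free.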

Icon Box Element

In order to keep track of the design elements and element variations I was coding, I felt the need for more order and logic in the HTML code that marks up the elements. After researching existing CSS namespacing methods, I decided on a BEM (block__element--modifier) namespace.

.stpe-imagebox__figure {
  &.stpe-imagebox__figure--akan {
    .stpe-imagebox__image {
      opacity: 0.7;
    }
    .stpe-imagebox__fig-caption {
      top: auto;
      bottom: 0;
      height: 50%;
      text-align: left;
    }
    .stpe-imagebox__title,
    .stpe-imagebox__fig-content {
      transform: translate3d(0, 40px, 0);
    }
  }
}

Improving Drupal's HTML Output
  • Fences
  • html5_tools
  • Metatag

Drupal does not naturally produce HTML code that front-end developers will fall in love with. Fortunately, this is easily fixed with a few add-ons. The first is the Bootstrap basetheme, which overrides most of Drupal's templates. This will not only get you nicer formatting but also cool Bootstrap forms and form buttons. Looks great on administrative pages!

To gain even more control over Drupal's HTML, I integrated the Fences and html5_tools modules, which add field-level control of HTML output. I also integrated the Metatag module for better SEO and general future-friendliness. Metatags give search engines and other non-human readers a deeper level of information about the pages and structure of the website.

Drupal integrated Drag and Drop page builder
  • Front-end visual page building
  • HTML5 Based.
  • Pages work fine in regular backend WYSIWYG
  • Blocks, Views integration
  • Refined user experience
  • Naturally mobile-friendly
  • MVC with Backbone and Underscore JS

You may have seen drag and drop builders like Visual Composer in WordPress. Visual Composer is the best-selling WordPress plugin ever. I think the page builder experience in Glazed is even better: more visual, more editable, fewer shortcodes. In fact, no shortcodes are used at all. The Glazed drag and drop builder leverages Backbone and Underscore JS. This means that the document (webpage) is the data: all of the controls and metadata are valid HTML. No shortcodes, and no processing is needed to render a page after you save it.
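A minimal sketch of the "document is the data" idea in plain JavaScript (this is not the actual Glazed builder code, and the element and attribute names are made up): the saved markup itself carries the element settings as data attributes, so a Backbone-style model can be recovered straight from the page with no shortcode post-processing:

```javascript
// Hypothetical saved markup: settings live in data attributes, not shortcodes.
const savedMarkup =
  '<div class="stpe-element" data-type="imagebox" data-cols="4" data-animation="fade"></div>';

// Recover a model object from the markup itself. No server-side rendering
// step is needed: the valid HTML that was saved is also the configuration.
function attributesToModel(markup) {
  const model = {};
  const re = /data-([\w-]+)="([^"]*)"/g;
  let match;
  while ((match = re.exec(markup)) !== null) {
    model[match[1]] = match[2];
  }
  return model;
}

const model = attributesToModel(savedMarkup);
console.log(model); // { type: 'imagebox', cols: '4', animation: 'fade' }
```

In the real builder, Backbone views would wrap such elements and Underscore templates would re-render them as the model changes; the point is that saving the page is just saving the DOM.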

The UI is also very user-friendly. There is a sidebar that contains visual shortcuts to pre-made, beautiful design elements. You can drop them anywhere on your page and edit them, make them bigger or smaller. No problem. The grid system is based on Bootstrap. With the greatest ease you can make 3 columns, 4 columns, or anything you can imagine within a 12-column grid system. You can easily control, for each row individually, at which breakpoint (screen size) the layout will collapse to a vertical, mobile-friendly stack.
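The per-row collapse behavior described above maps directly onto Bootstrap 3's grid classes; a hand-written equivalent of the kind of markup such a builder produces might look like this (illustrative markup, not the builder's exact output):

```html
<!-- Three equal columns that collapse to a vertical stack
     below Bootstrap 3's "sm" breakpoint (768px). -->
<div class="row">
  <div class="col-sm-4">First column</div>
  <div class="col-sm-4">Second column</div>
  <div class="col-sm-4">Third column</div>
</div>

<!-- The same 12-column grid, but using "xs" classes so the row
     keeps its three columns even on the smallest screens. -->
<div class="row">
  <div class="col-xs-4">First</div>
  <div class="col-xs-4">Second</div>
  <div class="col-xs-4">Third</div>
</div>
```

Choosing the breakpoint per row is just a matter of which column class family (xs, sm, md, lg) the row's children use.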

Views, Media and Blocks integration

Our drag and drop builder is not just a vanity tool. It's integrated with Drupal and makes available all Drupal blocks as well as all Views. This allows you to create dynamic, complex pages. For example, on sooperthemes.com I could easily re-create and improve the customer dashboard with our drag and drop tool. The dashboard contains several views that show download links and documentation for the products a user has bought. It was no problem to drop the views into the page and then surround them with drag and drop, mobile-friendly peripheral content and imagery.

Managing images is done with the Media module. Images can easily be added, used, re-used and resized using Bootstrap-grid image styles. Moreover, our page builder features an awesome animation engine and a number of preset image effects that help you build an immersive experience. My favorite element is the CSS3 Motion Image Box:

Motion Image Box Element


Installation Profile Builder
  • Open source CMS Drupal Distro
  • Auto-download custom selection of features and dependencies
  • I plan to add hosting integration in the future

All these modules, libraries and settings need to be carefully set up to make all this code work. For this purpose I have built an open source Drupal distribution called Drupal CMS Powerstart. I have blogged about CMS Powerstart before, so I will skip that and talk about custom installation profiles. Drupal distributions are a great way to ship different kinds of Drupal. Unfortunately, the way distributions are processed and displayed at drupal.org is very rigid and not enticing to prospective users. What I built at sooperthemes.com is a custom installation profile builder interface: http://www.sooperthemes.com/minisites/drupal-cms-powerstart-custom

Thanks to CMS Powerstart's autonomous CMS components, you can configure a Drupal installation profile that contains only the features you need. Once you make a selection, my webserver will download all modules, libraries and patches that are needed for your selected features. This service is totally free! The installation profile even contains block/region layout configurations for dozens of free themes, so that all blocks will be in the right region for each installed feature. My server has already built over a hundred custom installation profiles for CMS Powerstart and I'm actually surprised it's holding up. I hope people will also be interested in buying a subscription for Glazed on sooperthemes.com so that I can buy a new server; right now it's running on a trusty but laggy 7-year-old AMD dual-core machine.
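In Drupal 7, the usual way to describe such a generated download step is a Drush make file; a simplified sketch of what a generated profile's make file might contain follows (the project selection, versions and patch URL here are illustrative placeholders, not CMS Powerstart's actual output):

```ini
core = 7.x
api = 2

; Contrib modules pulled in for the selected CMS components.
projects[views][version] = "3.x"
projects[media][version] = "2.x"
projects[metatag][version] = "1.x"

; Patches can be applied per project (hypothetical URL shown).
projects[fences][patch][] = "https://www.drupal.org/files/example.patch"

; Third-party libraries, e.g. the Bootstrap framework assets.
libraries[bootstrap][download][type] = "get"
libraries[bootstrap][download][url] = "https://github.com/twbs/bootstrap/archive/v3.3.4.zip"
```

Running `drush make` against such a file assembles the full codebase, which is essentially what a server-side profile builder automates for each user's selection.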

Community Contributions

SooperThemes loves Drupal and is committed to improving Drupal, not only in the premium themes landscape but also as an open source platform. During a year of development on the Glazed theme and its backend tools, I contributed a number of modules as well as a bunch of patches and the Drupal CMS Powerstart distribution.

Drupal CMS Powerstart

CMS Powerstart is a collection of CMS-related components that are enhanced and glued together by the cms_core module. You don't actually need to use the installation profile or the cms_core module to use the components; they are all autonomous, and you can easily add cms_blog or cms_events to any existing Drupal 7 website.

Thank you for reading my case study. If you are still interested in learning more, check out my Drupal themes website. I'm also working on a video tutorial that demonstrates the drag and drop interface; keep an eye on my blog, Twitter or LinkedIn for updates.



Tags: planet, glazed, drag and drop, drupal case study, Drupal 7.x

