Planet Drupal


Drupal 8 and iOS: How to DELETE a node from Drupal 8 with an iOS device?

Sat, 10/05/2014 - 15:12
How to DELETE a node from Drupal 8 with an iOS device?

In this tutorial I will teach you how to use an NSURLSessionDataTask to delete a node from Drupal 8 via REST services.

You need sufficient permissions to delete content via REST. To check, navigate to /admin/people/permissions on your Drupal 8 site. I am assuming here that you already have REST enabled on your site. You should grant the DELETE permission to the Admin and Authenticated user roles only, as shown in the following figure.

One other thing I want to mention here: you have to add basic_auth as a supported_auth for the DELETE method on entity:node in your rest.settings.yml file. But since settings files are not getting generated in alpha 11, I would suggest using the REST UI contributed module for this purpose.

rest.settings.yml for DELETE
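For reference, the relevant part of rest.settings.yml looked roughly like this in the Drupal 8 alpha era. Treat it as an illustrative sketch rather than copy-paste config, since the exact keys varied across pre-release versions:

```yaml
# Illustrative rest.settings.yml fragment (Drupal 8 alpha era).
# Enables DELETE on nodes with HTTP basic authentication.
resources:
  'entity:node':
    DELETE:
      supported_auth:
        - basic_auth
```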

Now add a new basic page on your Drupal site and remember its node ID. That's it for the Drupal side. Now let's move on to our iOS application.

Storyboard for this app

Our app has just one button, one label, and a text field, as shown in the figure. You type the node ID into the text field and tap the "Delete Node" button.


Because we hooked the "deleteNode" method up to the "Delete Node" button, tapping "Delete Node" executes the following method.

This method creates a new NSMutableURLRequest and sets its HTTP method to "DELETE". It then creates an NSURLSessionConfiguration object and sets the appropriate headers, such as "Authorization" (we are using HTTP basic auth here). Finally, we create an NSURLSession and an NSURLSessionDataTask and resume the task.


After a successful deletion, the app notifies you.

Here is the code for the method that is called when the "Delete Node" button is pressed.


- (IBAction)deleteNode:(id)sender {
    [self.nodeDetailsTextField resignFirstResponder]; // just to make the keyboard go away

    NSString *nodeID = [self.nodeDetailsTextField text]; // node ID as a string

    NSURL *baseURL = [NSURL URLWithString:@""]; // site base URL (left blank here; point it at something like "http://yoursite.com/node")
    NSURL *nodeURL = [baseURL URLByAppendingPathComponent:nodeID]; // build the node URL, e.g. "baseURL/node/1"

    NSMutableURLRequest *request = [[NSMutableURLRequest alloc] initWithURL:nodeURL];
    [request setHTTPMethod:@"DELETE"]; // make this a "DELETE" request

    NSURLSessionConfiguration *config = [NSURLSessionConfiguration defaultSessionConfiguration]; // get the default configuration object

    NSString *userNamePassWordString = @"username:password";
    NSData *passWordData = [userNamePassWordString dataUsingEncoding:NSUTF8StringEncoding]; // get NSData from the NSString
    NSString *passWordString = [passWordData base64EncodedStringWithOptions:0]; // HTTP basic auth expects Base64-encoded credentials in the "Authorization" header
    NSString *basicAuthString = [NSString stringWithFormat:@"Basic %@", passWordString]; // "Authorization" header format, e.g. "Authorization: Basic 123jkh43qe2erf="

    [config setHTTPAdditionalHeaders:@{@"Authorization": basicAuthString, @"Accept": @"application/json"}]; // set the headers on the configuration

    NSURLSession *session = [NSURLSession sessionWithConfiguration:config]; // create the session

    // create the data task for our "DELETE" request
    NSURLSessionDataTask *deleteTask = [session dataTaskWithRequest:request completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        NSHTTPURLResponse *httpResponse = (NSHTTPURLResponse *)response;
        if (!error && httpResponse.statusCode == 204) {
            dispatch_async(dispatch_get_main_queue(), ^{ // UI work must happen on the main queue
                UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"204" message:[NSString stringWithFormat:@"node %@ deleted", nodeID] delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil];
                [alert show];
            });
        }
    }];

    [deleteTask resume];
}



Now try to reload the page on your site; it will say "page not found". You can also navigate to "admin" -> "reports" -> "dblog" to check the log for the deletion we just did.

Log for deletion

Now obviously we don't want an application where we must type in a node ID to delete it. We could build a fairly sophisticated app from this one, with a login screen to validate the user first, then a UITableView listing all the nodes on our site so we can delete them from the app. But I just wanted to focus on how to use a "DELETE" request and pass credentials for basic authentication. Lastly, I would like to mention that if you are on older alpha versions, you may get a "Dependency Injection Error".

I hope this helps you.

Categories: Elsewhere

netsperience 2.x: Site Launch: FINclusion Lab Beta

Fri, 09/05/2014 - 21:30
We have launched FINclusion Lab Beta

a new Drupal 7 site for MIX (Microfinance Information Exchange)

It provides analytic tools to interpret the "proliferation of data in the field of financial inclusion" by aggregating data in Tableau dashboards and Mapbox maps.

The platform "provides financial service providers, policy makers, regulators, and other development professionals the opportunity to identify problems and devise solutions for increasing financial inclusion in their countries through interactive data tools and visualizations."

The FINclusion Lab team has worked over the past two years to gather data on supply and demand for financial services at the sub-national level for a growing number of countries in Africa, Asia, and Latin America.

We are using the Drupal Tableau module, sponsored by MIX, to display Tableau data visualization dashboards via Drupal Views.

Other MIX sites with free and premium data about microfinance: MIX Market and MIX Market Reports

Coming soon: translations in French, Spanish and Russian using the Drupal i18n module for localization.

Categories: Elsewhere

more onion - devblog: New distro for eCampaigning, fundraising and digital marketing

Fri, 09/05/2014 - 17:34

A few weeks ago we released our first public open-source version of Campaignion. It's a tool that helps non-profits and NGOs with online campaigning, fundraising and digital marketing. In this blog post we'll explain some of the technical background and how it works.

Categories: Elsewhere

Acquia: Upgrade your modules to Drupal 8 now! - Bram Goffings

Fri, 09/05/2014 - 16:53
Upgrade your modules to Drupal 8 now!

During the 2014 Drupal Dev Days in Szeged, Hungary, Bram Goffings (aka aspilicious) and I had a chat about his experience upgrading Display Suite early on and maintaining it during the ongoing Drupal 8 development cycle. He says it was a great learning experience and that it is time for everyone else to start their upgrades in preparation for the Drupal 8 beta release.

Categories: Elsewhere

Drupal governance announcements: New ideation tool proposal

Fri, 09/05/2014 - 15:58

Greetings from your friendly neighbourhood Software Working Group! We have a proposal + mockups for a new way to handle feature requests, in order to help both the Software Working Group and other working groups prioritize the feature roadmap. Since this tool will become the method through which the Drupal community (in the widest sense of the word) makes known their needs/wants/desires, we'd love to hear your comments and feedback on the proposal.

Community feedback period is open until May 18th, as we're hoping to have an initial version of this tool ready by Austin.

Categories: Elsewhere

Code Karate: Drupal 7 Commerce Coupon

Fri, 09/05/2014 - 14:06

In this Daily Dose of Drupal video we feature the Drupal 7 Commerce Coupon, Commerce Coupon Fixed Amount, and…

Categories: Elsewhere

KnackForge: Joyride tour in older versions of jQuery

Fri, 09/05/2014 - 13:50

Joyride, the nice guide/tour library based on jQuery, does not work with older versions of jQuery (like 1.3 or 1.4). We had a Drupal 6 project where we wanted it, and our Drupal 6 site used jQuery 1.3. We tweaked the Joyride code to make it work with jQuery 1.3. Basically,

1. We included a function that is not part of jQuery 1.3 (isEmptyObject)
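A backport along these lines covers that missing helper. This is an illustrative sketch, not KnackForge's actual patch; jQuery.isEmptyObject simply checks whether an object has any enumerable properties:

```javascript
// Stand-in for the page's jQuery 1.3 object when run outside a browser.
var jQuery = (typeof window !== 'undefined' && window.jQuery) || {};

// Backport of jQuery.isEmptyObject (introduced in jQuery 1.4).
if (!jQuery.isEmptyObject) {
  jQuery.isEmptyObject = function (obj) {
    for (var name in obj) {
      return false; // found at least one enumerable property
    }
    return true; // no enumerable properties
  };
}
```

With the shim loaded before Joyride, calls like jQuery.isEmptyObject({}) behave as they do in jQuery 1.4+.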

Categories: Elsewhere

DrupalCon Amsterdam: Submit your sessions for DrupalCon Amsterdam

Fri, 09/05/2014 - 08:00

DrupalCon Amsterdam is less than six months away, and we’re swinging into full planning mode. The tracks have been set, and now it’s time for the community to step up: we are now accepting session submissions.

Here are the tracks for DrupalCon Amsterdam, as well as what we are seeking for each:

Coding and Development
  • Drupal 8 for Drupal 7 developers
  • Tools for upgrading your modules
  • OOP methodology relevant for Drupal 8 projects
  • Introductory sessions to the Drupal 8 changes, new entity system, plugin system, etc
  • Software quality as a tool, not a goal
  • BDD and TDD team workflows used in real projects
  • Software design oriented to security
Core Conversations: Achieving sustainability
  • Improving Drupal.org (website, testbot, process)
  • Funding core
  • Performance and tracking data over time
  • Welcoming designers, UX professionals, and architects
  • Behat/Frontend testing
DevOps: Breaking down the silos
  • Establishing DevOps Culture
  • Continuous Delivery
  • Logging / Log Data Handling
  • Dashboard Design / Data Visualization
  • Monitoring
  • Automated Deployments / Strategies
  • Continuous Integration
  • Testing
  • Performance
  • Security
  • Infrastructure Management (Chef, Puppet, Ansible, Salt)
  • Business value of DevOps
Drupal Business: Drupal's Golden Age
  • Topics will be listed pending the outcome of a survey of Drupal agencies and clients.
Frontend: Futuristic tools and techniques
  • Preprocessors
  • CSS architecture
  • Templating systems (e.g. Twig)
  • Performance
  • New HTML5 specifications & browser features
  • Automation tools
  • JavaScript Libraries
  • Grid and UI frameworks
  • New age design tools
  • Solving new design problems
Site Building: Building sites fast without needing to code!
  • Drupal 8: which modules exist, which are coming, and what is changing
  • Drupal 7 to Drupal 8
  • Comparison of the different layout techniques that currently exist
  • Studies and real life examples of Site Builds (Special ways, new ways)

In addition to the above, there will also be two mini tracks this year on case studies and PHP, which we will post more details on soon.

Still not sure what to talk about? Check out some of the proposed sessions. Do any of the above ideas light your fire, or do you think you have something even cooler to share with the community? Send us an idea for a session you would like to host by Friday, 13 June.

Categories: Elsewhere

VM(doh): How We Test Drupal 7 Modules on Travis CI

Fri, 09/05/2014 - 03:24

If you've been working much in open source software recently - especially PHP - there's little doubt that you have heard of Travis CI. For those who don't know, Travis CI is a free hosted continuous integration solution for public open source projects (they do have a paid version for private projects, too). Travis CI is a great tool for running your automated tests on each commit and pull request to your project's GitHub repository.

At VM(doh), we believe that if it's not tested, it's broken. We use Travis CI for testing the libraries and Drupal modules that we build. In the past, we used our own Jenkins server for that task, but we decided that moving to Travis CI meant one less thing to maintain.

Why don't we just use drupal.org's testing infrastructure? Simply put, the infrastructure that we need is not in place there. We tend to deal with third-party applications, and testing things like integration with Elasticsearch or cloud queues just isn't possible on drupal.org.

Running tests for Drupal 7 modules is easy on Travis CI. You just have to be willing to do a little work in the .travis.yml file so that Drupal's Simpletest tests will run. Yes, Drupal 7 uses Simpletest instead of PHPUnit; thankfully, Drupal 8 will be getting more sane and using PHPUnit.

Below is the current (as of today, at least) .travis.yml file for the Search API Elasticsearch module.

language: php
php:
  - 5.3
  - 5.4
  - 5.5
matrix:
  allow-failures:
    - php: 5.5
env:
  global:
    - ES_VER=1.0.1
    - ES_MAPPER_ATTACHMENTS_VER=2.0.0.RC1
    - ES_TRANSPORT_THRIFT_VER=2.0.0.RC1
    - ES_GEOCLUSTER_FACET_VER=0.0.10
    - ES_WAIT_ON_MAPPING_CHANGE=true
    - DATABASE='drupal'
    - DB_USERNAME='root'
    - DB_ENCODE='utf8'
    - MODULE_PATH='build/sites/all/modules'
    - ES_REQUIRE='no-dev'
  matrix:
    - DRUPAL_3RD_PARTY='composer_manager'
    - DRUPAL_3RD_PARTY='libraries'
mysql:
  database: $DATABASE
  username: $DB_USERNAME
  encoding: $DB_ENCODE
before_install:
  - composer self-update
  - pear channel-discover
install:
  - pear install drush/drush
  - phpenv rehash
  - ./tests/bin/
before_script:
  - echo 'sendmail_path = /bin/true' >> ~/.phpenv/versions/$(phpenv version-name)/etc/conf.d/travis.ini
  - drush --yes make tests/includes/search_api_elasticsearch.make build
  - mkdir -p $MODULE_PATH
  - git archive $(git rev-parse --abbrev-ref HEAD) | tar -x -C $MODULE_PATH
  - cd build
  - drush --yes site-install minimal --db-url="mysql://$DB_USERNAME@$DATABASE"
  - drush --yes dis search
  - drush --yes en $DRUPAL_3RD_PARTY
  - drush --yes en search_api_elasticsearch
  - drush --yes en simpletest
script:
  - drush test-run 'Search API Elasticsearch' --debug
notifications:
  irc:

That might look a little overwhelming at first, but it's really rather simple for what it does. For this module, we're actually running six test builds: we support both the Libraries API and Composer Manager for handling the Elastica dependency, and we test on PHP 5.3 (the Drupal 7 minimum), 5.4 (widely deployed), and 5.5 (failures allowed, but we do want to see whether we need to make changes for compatibility).

Before installing Drupal, we install Drush to make our lives easier. These days, installing Drush from its PEAR repository installs a 6.x version by default, and that's great because "drush test-run" now exits with a status of "0" only on success. In the past, you had to grep the test results to check for errors and then manually return an exit status of "1" to mark a failure. (Drupal's script also returns an exit status of "0" whether or not it succeeds.) By the way, we also use a shell script to configure and install multiple instances of Elasticsearch for testing.

There is a very important PHP configuration change that you'll want to make before actually installing Drupal. When you install Drupal, it tries to send an email to the admin address. To trick Drupal, we set sendmail_path to /bin/true in the before_script section.

The rest of the script essentially goes through the steps you'd use for a Drupal installation and uses Drush to run the tests. We run the tests in debug mode so that we can see what Drush is doing every step of the way.

The last section you'll notice is the notifications. Travis CI allows you to send notifications of build failures and fixes to many destinations. We typically send public notifications to our public IRC channel.

Now what about external APIs where you need credentials? Travis CI supports this, too. Let's look at the .travis.yml file for Marconi:

language: php
php:
  - 5.3
  - 5.4
  - 5.5
env:
  global:
    - secure: "YwRjLD4i/nDLsNSThxJMu4l0cjMQYN/MfVAkIL6qKlxieBdlWcD9XwcBkNgf5aHsdu/9O/oEmsjCw4LJwxMCKKTqr7kkRA5u+PZ4v82DkPcixFAox/Mmxs1EUhP0YBD3uWXH3qCVpcWL6OVxUTfcfNRO8xgT4o9lXT7ZkDs7j1k="
    - secure: "gmU2uAQXNk/GOe/HcVOlf4HdtzqCa4OxLU3HVynZBPWHcHXsoIcpjzD/p8XmWbuujnHcwZWC0w3+VwRtRc1RIyMKpMWhElsuG5JxHY9Djd6QMoJ/Zz2A0cI7V8JXb6p8269y+HCDvfX4X8A27vhsl72udvQTgv6cwVXPadvFEVA="
    - secure: "XUG2/FgdfKHlcPwQOPVW7zQ9VqHauXKsjQXDvo+Q3HL7ko6DzOX5BdS4BL4B6x6485Mx/Sen8bKQULQkPAZeGelpIgBCjTYeclY4N/92N9au2hQ6ibIa7me5TsYhPR0unpGZhtIYrNvWqzMUMtZTigKCvmmCdt889DIzErCX3xA="
    - secure: "NMo4U85i0edUtRIrI77lyRs4eyWcsnEZbRqi/qvrPtvtPHBZxU0ovu7rXjijI6DHyzBIwH6nkmJrfE8kWX3G/xxqN2EQpXEdJZSdhgL09mA5PewPe9gW27WVG4Z1PzEFZJQp8kJM3mYmcX3gy3xuL89DZ7ZDYlZMNoSG/R1hlEY="
    - DATABASE='drupal'
    - DB_USERNAME='root'
    - DB_ENCODE='utf8'
    - MODULE_PATH='build/sites/all/modules/marconi'
mysql:
  database: $DATABASE
  username: $DB_USERNAME
  encoding: $DB_ENCODE
before_install:
  - composer self-update
  - pear channel-discover
install:
  - pear install drush/drush
  - phpenv rehash
before_script:
  - echo 'variables_order = "EGPCS"' >> ~/.phpenv/versions/$(phpenv version-name)/etc/conf.d/travis.ini
  - echo 'sendmail_path = /bin/true' >> ~/.phpenv/versions/$(phpenv version-name)/etc/conf.d/travis.ini
  - drush --yes make tests/includes/marconi.make build
  - mkdir -p $MODULE_PATH
  - git archive $(git rev-parse --abbrev-ref HEAD) | tar -x -C $MODULE_PATH
  - cd build
  - drush --yes site-install minimal --db-url="mysql://$DB_USERNAME@$DATABASE"
  - drush --yes en marconi
  - drush --yes en simpletest
script:
  - drush test-run Marconi --debug

For the most part, the .travis.yml file is the same as the one for Search API Elasticsearch. However, there are a few things you need to do to get tests to run when using a third-party API.

In the Marconi module, we read credentials from environment variables during testing. Travis CI presents two easy-to-overcome problems for this situation. The first is how to include API credentials without exposing them. Fortunately, Travis CI has a way to encrypt your environment variables, and we use it for the secure entries in the env section to set up the API credentials we need for testing.

The second problem is telling PHP to use environment variables. The default variables_order on Travis CI is "GPCS", which means PHP won't see environment variables. That's why we change the variables_order setting to "EGPCS" in the before_script section.

Obviously, if you don't need to install or use a third-party application, you can skip the parts about running an extra build script or setting secure environment variables, but the rest remains the same.

Categories: Elsewhere

Phase2: Better, Stronger, Faster! A New Open Atrium Installer.

Fri, 09/05/2014 - 01:10

Installing Drupal from scratch is relatively painless for most people.  However, installing a large Drupal distribution such as Open Atrium 2 has the potential to be painful.  Core Drupal only contains a handful of modules that need to be installed, but Open Atrium 2 is a very feature-rich distribution with nearly 200 modules to install.  An installation of Open Atrium typically takes 15-20 minutes and consumes significant server resources during that period.  In this article, I will show how we reduced that installation time to only TWO minutes!

Overview of Drupal Installation

Have you ever looked at the Drupal core installer code, or traced through it?  It’s actually a fairly robust and flexible installer framework.  The installation consists of several different “steps”.  Each step can be a form asking for information, such as the database information, site information, theme selection, etc.  Or a step can simply be a function that performs processing, such as the step to check if the system requirements of Drupal are met.  Each step can optionally be processed via the Batch API, such as the step that installs each required module.

The Drupal installer can be run interactively, or non-interactively, such as when using the “drush site-install” command.  During interactive installs, separate HTTP requests are used for each step of the process, and for each “batch” of processing done by the batch steps.  Batch processing helps avoid timeout errors, but a single step that takes too long can still cause a timeout.  The installer tries to break batch steps into one-second requests, but that only allows fast steps to be combined into single requests.  A batch step that takes more than one second only ensures that a new request is used for the next batch, and can still cause a timeout.

Caching during the installation process is also very complex.  In general, the installer minimizes the amount of cache clearing to speed up the install process.  However, during module installation some modules might depend on the existence of other modules, and without some cache clearing, modules installed during the same batch request might not see each other.  This becomes even more difficult if a module is actually a Features export: Features has its own caches and sometimes needs to rebuild a feature to put configuration into the database that might be needed by other modules.

Installing Open Atrium 2

What appears to be a straightforward installation process for Drupal core becomes exponentially more complicated when installing a distribution that uses hundreds of modules and Features, such as Open Atrium.  An interactive installation of Atrium begins fine and quickly enables the first 30 modules.  Soon it becomes slower and slower: enabling the last module takes nearly ten times as long as the first.  Each Feature module that is installed causes the info files of all previously enabled modules to be reloaded into cache, so the more modules installed, the slower Features becomes.  There are many issues in the Features queue, such as this one, trying to address some of these performance problems, but it is a very complex process.

Because modules take longer and longer to enable, some Open Atrium installs run into PHP timeout problems and need the limit increased from the default 30 seconds up to 60 seconds.  This is especially frustrating because these timeouts often occur near the end of the install, after 15 minutes have already passed.  In general, a successful Open Atrium 2 installation takes around 15 to 20 minutes, which is far too long for most people trying to make a quick evaluation of Open Atrium functionality.

There Must be a Better Way!

During the recent Drupal NYC Camp, we held an Open Atrium 2 training session.  During this training, we had 20 people all installing Open Atrium at the same time.  We all collectively waited 20 minutes just to get 20 identical copies of Open Atrium 2 installed on our computers.  Wouldn’t it have been nice to simply install Open Atrium once and then clone that onto the other computers?  That’s when I realized there was a better way to install Open Atrium…by cloning a previous installation!

The new 2.18 version of Open Atrium contains this new installer option.  After entering your database information, you are prompted for the Installation Method.  The Standard Drupal installation method is still available (if you really want to wait 20 minutes!).  The new Quick installation option is the default.  Selecting Quick installation will direct the installer to import from an existing database dump that is saved within the Open Atrium code repository.

Instead of a batch process step to install and enable each module, the Quick install step uses a batch process to import database tables from the sql dump.  This will only work if you are using MySQL.  If you are using a different database engine you won’t get prompted for the new Quick install…it will just use the Drupal default installer.

After only TWO minutes, the installer should finish importing the database tables.  You will then be prompted to fill in your Site information as normal.

Technical Details

Importing an existing database from the Drupal installer turned out to be trickier than originally anticipated.  The Drupal installer isn’t happy when certain database tables or variables stored in the database are ripped out from under it.  Another complexity was the fact that the new database might have a different database “prefix” for table names.  The new installer code actually parses the MySQL database dump line by line to split the SQL statements into batches for each table and to replace table names with the new database prefix.
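The batching-and-prefixing idea described above can be sketched roughly like this. This is an illustrative JavaScript sketch, not the actual Open Atrium installer code (which is PHP, in the profile's install_from_db directory); the function name and the CREATE TABLE heuristic are assumptions for the example:

```javascript
// Sketch: split a mysqldump into one batch of statements per table,
// rewriting the backtick-quoted table-name prefix as we go.
function splitDump(dumpLines, oldPrefix, newPrefix) {
  var batches = [];
  var current = [];
  dumpLines.forEach(function (line) {
    // Rewrite backtick-quoted table names to use the new prefix.
    line = line.split('`' + oldPrefix).join('`' + newPrefix);
    // Each CREATE TABLE statement starts a new batch.
    if (line.indexOf('CREATE TABLE') === 0 && current.length > 0) {
      batches.push(current);
      current = [];
    }
    current.push(line);
  });
  if (current.length > 0) {
    batches.push(current);
  }
  return batches;
}
```

Each returned batch can then be executed in its own request, which is how the installer keeps per-request work small enough to avoid timeouts.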

To visualize the difference between the standard Drupal installer and the new Quick install, I ran both and sent the statistics into Graphite:

The first 20 minutes of this graph shows the normal Drupal installation of Open Atrium 2.  The CPU is almost fully utilized during the entire install process.  The resident memory used by Apache climbs as the installer adds more and more modules to the site.  At around 08:27 the first Feature export module is enabled, causing a sharp increase in the amount of memory being used.  The decrease in CPU at 08:30.5 (when the green CPU line goes to zero) is caused by the installer asking for Site Information.  After that step, the Drupal installer clears all caches, runs cron, and does other cleanup.  This increases the resident Apache memory even further.

The second set of data starting at 08:36 is for the new Quick install option.  You’ll notice the actual install time is around 2-3 minutes.  Once again, the decrease in CPU at 08:38.5 is when the installer prompted for Site information.  Thus, the data after that represents the same cache clearing and install cleanup as in the full installer.  Ultimately the same amount of memory is needed to fully clear the cache and clean up the site as the same modules have been installed and enabled at that point.

In fact, the Quick installation process itself consists of importing over 300 database tables, followed by its own cache clear.  The memory increase at around 08:37.5 is caused by the cache clear that the Quick installer executes after all the database tables have been imported.  During the actual database import, memory usage in Apache is flat.


Any other Drupal distribution that requires a large number of modules should be able to leverage the code used in Open Atrium.  The code is all within the "install_from_db" subdirectory of the profile, which can be found in the Open Atrium project on drupal.org.  The huge reduction in installation time should help retain new clients that are evaluating Open Atrium and Drupal for their organizations and generally improve first impressions of Drupal.

Categories: Elsewhere

Julian Granger-Bevan: How to diagnose a bloated Drupal database

Thu, 08/05/2014 - 21:47

Drupal websites use a database to store content and configuration relating to the running of the website.

For most installations (i.e., not large scale deployments), the database will be relatively compact. As an example, the websites that I maintain have databases between 5MB and 20MB. The specific size is affected by the amount of content on the websites and which modules are enabled.

Therefore, I was surprised to receive a notification from my host warning me that I was using over 2GB in database storage! Naturally I thought something must be wrong.

In this post, I'll share how I solved the issue for myself with some tips in case you suffer a similar issue yourself.

(Quick hint: use drush sqlc to run SQL commands against your website's database.)


Database level

Find the affected database. Unless you are using a multi-site setup, each of your websites will live in a separate database.

SELECT table_schema "DB Name",
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) "DB Size in MB"
FROM information_schema.tables
GROUP BY table_schema;

+--------------------+---------------+
| DB Name            | DB Size in MB |
+--------------------+---------------+
| information_schema |           0.2 |
| drupal_database    |        1110.2 |
+--------------------+---------------+
2 rows in set (0.01 sec)

As you can see, my database user can only see two databases - information_schema containing SQL internals, and the main database (re)named drupal_database.


Table level

The next step is to find out what is causing that database to become so large. Drill-down to the tables within the database.

I found this "order by size" query helped a lot. It shows only the 5 largest tables, but that is where any problem will be. If there are no unusually large tables, it could just be that your website has grown large organically.

SELECT table_name, table_rows, data_length, index_length,
       ROUND(((data_length + index_length) / 1024 / 1024), 2) 'Table Size in MB'
FROM information_schema.tables
WHERE table_type = 'BASE TABLE'
ORDER BY data_length DESC
LIMIT 5;

+---------------------+------------+-------------+--------------+------------------+
| table_name          | table_rows | data_length | index_length | Table Size in MB |
+---------------------+------------+-------------+--------------+------------------+
| queue               |     137362 |  1137704424 |      4017152 |          1088.83 |
| field_revision_body |       1731 |     5259276 |       154624 |             5.16 |
| field_data_body     |       1731 |     5259276 |       145408 |             5.15 |
| feeds_log           |      27455 |     2712868 |      1131520 |             3.67 |
| menu_router         |        401 |      406128 |        76800 |             0.46 |
+---------------------+------------+-------------+--------------+------------------+
5 rows in set (0.01 sec)

In my example, the queue table occupies 98% of the space used by this database. This is where the problem lies.

For context, Drupal uses the queue table to store tasks that need to be carried out on cron runs; if cron is running successfully, it should contain no records, or only a few that have arisen since the last cron run.


Task level

To find out what was going on, I looked at the types of tasks contained in this table:

SELECT name, COUNT(1) FROM queue GROUP BY name;

+---------------------+----------+
| name                | COUNT(1) |
+---------------------+----------+
| feeds_source_import |   137498 |
+---------------------+----------+
1 row in set (0.07 sec)

There was only one type of task remaining, relating to the regular import of nodes from RSS feeds using the Feeds module. Checking the settings for this importer, I realised that the import was scheduled to run every 15 minutes, yet cron runs on this website only once an hour. This meant that each time cron ran, it could not get through all of the required tasks, and the backlog just kept growing.

The long-term solution was easy: increase the frequency of the cron runs, and decrease the frequency of the node imports, so that cron always stays on top of the tasks it needs to complete.

There is also the issue of the remaining bloated table. Whilst it will naturally diminish back to its normal size as cron catches up, if you're worried about how long that will take, you could truncate the table once to ensure that the problem is solved. There is a risk with that, though: tasks in the table may turn out to actually be required.
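If you do decide to empty it, it is a single statement against the site's database (back up first, since queued tasks still in the table may be needed):

```sql
-- Drops every pending task from Drupal's core queue table.
TRUNCATE TABLE queue;
```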



For context, here are the results of the same queries against a (healthier) website.

As you can see there are no tables that massively exceed the others in size, and the overall database size is under 10MB.

+--------------------+---------------+
| DB Name            | DB Size in MB |
+--------------------+---------------+
| information_schema |           0.2 |
| drupal_database    |           6.1 |
+--------------------+---------------+
2 rows in set (0.12 sec)

+---------------------+------------+-------------+--------------+------------+
| TABLE_NAME          | table_rows | data_length | index_length | Size in MB |
+---------------------+------------+-------------+--------------+------------+
| field_revision_body |        423 |     1556716 |        47104 |       1.53 |
| menu_router         |        450 |      425076 |        80896 |       0.48 |
| system              |        390 |      313544 |        73728 |       0.37 |
| field_data_body     |         37 |      135968 |        18432 |       0.15 |
| registry            |        965 |       94016 |        53248 |       0.14 |
+---------------------+------------+-------------+--------------+------------+
5 rows in set (0.02 sec)

+----------+
| count(1) |
+----------+
|        0 |
+----------+
1 row in set (0.00 sec)
Categories: Elsewhere

agoradesign: Fatal errors may cause infinite Feeds batch runs and exasperate you!

Thu, 08/05/2014 - 20:50

Every developer knows that kind of situation, where you spend hours chasing a problem that suddenly appeared, and the more you debug and dive into fixing it…

Categories: Elsewhere

Aten Design Group: Someone Dropped a New Website in Your Lap, Now What?

Thu, 08/05/2014 - 18:38

At Aten, I tend to work on already-live websites. Sometimes this means small bug fixes. Sometimes it encompasses information architecture, design work, a weeklong development sprint or working on the front end. In most cases my work is on a site I didn't build originally, and often on a site Aten didn't build.

I started putting some notes together on some of the gotchas I run across when working on new-to-me sites. This will become a series of blog posts, but one always has to start at the beginning: getting the site working on your local environment.

We're assuming you already have some sort of a LAMP/MAMP/WAMP stack working.

Version Control

Next, you'll need access to the code so you can get it into your preferred version control system. If the code is already in a repository, this step should be as easy as git clone [PATH]. If not, you might need something like tar czvf ~/everything.tar.gz public_html to take a copy of the whole file structure. Make sure you don't commit any settings.php files, the files directory, or .htaccess if it contains server-specific settings. If using Git, remember that .gitignore only keeps untracked files out; anything already tracked will still be staged by git add . until you untrack it with git rm --cached. I've also seen cases where JavaScript libraries are maintained as separate Git repositories within the main one. If those are in .gitignore, a regular git clone of the main repo will skip them. I prefer to keep everything together by removing those entries from .gitignore: run git clone on each library, delete each library's .git folder, and finally use git add to add them to the main repository.
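To see why .gitignore alone doesn't protect files Git already tracks, here's a throwaway demo — all file names and the scratch directory are made up for illustration:

```shell
# Minimal demo (in a throwaway directory): a tracked settings.php is still
# staged by `git add .` even after being added to .gitignore.
set -e
demo=$(mktemp -d) && cd "$demo"
git init -q site && cd site
git config user.email demo@example.com
git config user.name demo
echo "db password here" > settings.php
git add settings.php
git commit -qm "oops: committed settings.php"
echo "settings.php" > .gitignore          # too late: the file is already tracked
echo "changed" >> settings.php
git add .                                 # stages the change despite .gitignore
git diff --cached --name-only             # settings.php shows up anyway
git reset -q settings.php                 # unstage the change...
git rm -q --cached settings.php           # ...then actually untrack the file
```

After the git rm --cached, the file stays on disk but leaves the repository, and from then on .gitignore does its job.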


After you have the code, you'll need to make a database export and import it. Drush sql-sync can do this, assuming you have SSH keys and aliases configured and an already-running site that Drush can bootstrap. Failing that, the MySQL command line always works: create database [DATABASE_NAME]; use [DATABASE_NAME]; source ./[FILENAME.sql]. Don't forget to create a database user with something like: GRANT ALL ON database.* TO user@localhost IDENTIFIED BY 'someLongAndCompletelyRandomPassword'; FLUSH PRIVILEGES;.
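Spelled out as a single MySQL session, the manual route looks like this — every name and the password are placeholders to replace with your own:

```sql
-- Run as a MySQL user with sufficient privileges; all names are placeholders.
CREATE DATABASE drupal_local;
GRANT ALL ON drupal_local.* TO 'drupal_user'@'localhost'
  IDENTIFIED BY 'someLongAndCompletelyRandomPassword';
FLUSH PRIVILEGES;
USE drupal_local;
SOURCE ./drupal_export.sql;   -- the dump you took from production
```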


Along with the database, you'll need to deal with the files directory. If you had to tar the whole thing, you've already got many megabytes of files on your hard drive. If you want to avoid that, Stage File Proxy can help by downloading images from the production website as needed, on a page-by-page basis. While this helps, there are some images it has trouble dealing with. Another possibility is using Apache rewrites to load image resources from the production site.
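The Apache-rewrite approach can be sketched in the local vhost or .htaccess like this — a hypothetical example that sends requests for any missing file under sites/default/files to production (swap in your real production domain):

```apache
# If a requested file under sites/default/files doesn't exist locally,
# redirect the browser to the production copy. Domain is a placeholder.
RewriteEngine On
RewriteCond %{REQUEST_URI} ^/sites/default/files/
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ http://www.example-production-site.com/$1 [L,R=302]
```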


My final steps are to add an entry to /etc/hosts for the new local domain, and an entry or a new vhost .conf file to direct that domain to the correct web root directory. Finally, flush the DNS cache and restart Apache. The specifics will differ slightly depending on your LAMP implementation.
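For example, with /etc/hosts pointing a made-up local domain at 127.0.0.1 (127.0.0.1 mysite.local), a matching vhost might look like the following — the paths, domain, and file location are assumptions to adapt to your stack:

```apache
# e.g. /etc/apache2/sites-available/mysite.local.conf (path varies by platform)
<VirtualHost *:80>
    ServerName mysite.local
    DocumentRoot /var/www/mysite/public_html
    <Directory /var/www/mysite/public_html>
        AllowOverride All      # lets Drupal's .htaccess rewrites work
        Require all granted    # Apache 2.4 syntax
    </Directory>
</VirtualHost>
```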

Once we have the site up and running, it's time to make some improvements. Next time I'll discuss how to track down some specific situations when you know very little about how a site was built.

Categories: Elsewhere

Blink Reaction: How to Set up Symfony

Thu, 08/05/2014 - 17:23

Long-time Drupal developer Wes Roepken walks you through the steps involved in getting Symfony set up and ready to work.

Categories: Elsewhere

ThinkShout: What Nonprofits Can Learn About Content Structure… from Pearl Jam

Thu, 08/05/2014 - 17:00

Photo by Phil King, licensed under CC 2.0.

Pearl Jam have been poster boys for a lot of things, but probably not structured web content. Content strategists like to point to NPR’s Create Once, Publish Everywhere (COPE) framework, to large media outlets, sometimes to the U.S. government – but given the breadth of coverage (and budgets) available to those entities, making the move to fully structured content may seem daunting in the nonprofit context.

If Pearl Jam can do it, so can you.

The basic concept is this: by separating out the most important components of your content into “fields”, instead of dumping everything from images to embedded videos to pull quotes into a WYSIWYG editor, you’ll be able to:

  • Display your content responsively across devices;
  • Share it more easily with your affiliates and supporters; and
  • Create dynamic ways to surface relevant content and encourage engagement.

In a striking example of why tech folks shouldn’t be allowed to name concepts, creating fields to structure your content is affectionately known as the difference between making blobs and chunks.

If you use a modern CMS, you’ve already used structured content to a degree. The title of your page or post is almost always separate from the body. This allows you, at the most basic level, to build a dynamic page of blog posts that displays only the title and maybe a snippet of the body, which then links off to a detail page containing the full post.

The New York Times uses this concept, breaking out fields for author, publication date, and more for its news stories. Amazon has taken it to an entirely different level by assigning scores of categories to its products; when you narrow down your mattress search to a queen size goose down featherbed from Pacific Coast, you’re taking advantage of structured data (in the form of faceted search).

What Pearl Jam has done – and what every nonprofit should think about doing – is to match the motivations their audience has in visiting their website to PJ’s own (organizational) goals, and to structure their site content so the two complement each other.

Pearl Jam’s core offering is music. People visit their website to find that music, either in the form of upcoming (or past) shows, lyrics, or songs they can buy. So, much of Pearl Jam’s website is structured around the concept of the song.

[Note that I don’t have any insider’s knowledge about the exact structure or software they’re using. This is just how we would do it if we built their site in Drupal.]

Practically every song Pearl Jam has ever recorded or performed live has a place on the website, and they’re all structured the same:

  • Title
  • Release Date
  • Composer
  • Artist
  • Image
  • Lyrics

That’s it. Everything else on that page, and much of the site, is built through the application of structured data.

If you look at an individual album, you’re actually looking at a different content type, which has its own structure:

  • Title
  • Release Date
  • Cover Image
  • Purchase Links
  • Body
  • Song [REFERENCE]

It’s that REFERENCE field that’s key. Every album is a collection of references to the individual songs, rather than a list built by hand. (On Drupal, we’d probably use something like Entity Reference.) Clicking on an individual song takes you to its detail page.

It gets more interesting when you look at a Setlist, another structured content type:

  • Venue
  • Location
  • Date
  • Concert Poster Image
  • Product Links
  • Bootleg Image
  • Song [REFERENCE]
  • Live Image [REFERENCE]

A setlist is built up using the same song REFERENCE field as an album; each song exists as a single entity, but it can be referenced from hundreds of other pages (in the case of a classic like “Jeremy”).

All the way back in 2000, Pearl Jam started recording every show they did off the mixing board so they could sell high-quality recordings. While you can’t quite get every one of the 672 versions of “Alive” they’ve performed over the years, you can come pretty close.

Setlists include the all-important link to purchase a copy of an entire live performance.

This relational system has created endless connections between the Songs they’ve performed – their core content offering – and where and when they’ve performed them. By then layering on the ability to purchase copies of those concerts at any time, Pearl Jam has taken one of the primary motivations of their audience – to engage with PJ’s music – and tied it directly to their organizational goal of making money, without shoving that in your face.

It’s worth noting that structured data has also allowed Pearl Jam to flesh out the detail pages for each of its content types with just a few lines of code.

On a song page, the “First Played”, “Last Played”, and “Times Played” lines are created dynamically, as is the list of every place and date it’s been performed. Tours are created by referencing each of the setlists. I imagine that the slider showing all of the album covers is created by pulling the cover image associated with each album (instead of being inserted by hand).

Once your content is structured, the ways you can reformat and display it are limited only by your imagination, your communications plan, your organizational goals, and your CMS. Oh, and it helps if you understand the motivations of your various audiences.

What’s your core content offering? Can you create a similar structure? Have you already?

And if anybody with PhotoShop skills wants to create that new poster for Pearl Jam, highlighting their mastery of structured content...

Categories: Elsewhere

Poplarware: The case for a small Drupal shop contributing to Drupal

Thu, 08/05/2014 - 15:27

Dries Buytaert, co-founder of Acquia and head of the open-source Drupal project, recently wrote a blog post about the business case for hiring a Drupal core contributor. Dries wrote about the measurable effect that a larger Drupal shop can realize from hiring a contributor full-time.

read more

Categories: Elsewhere

James Oakley: Updating Drupal core with bash and drush

Thu, 08/05/2014 - 14:55

Yesterday, Drupal 7.28 was released.

People rush to upgrade, knowing that there will be a tranche of bug-fixes that may resolve longstanding issues.

People hesitate to upgrade, because updating Drupal core is not as simple as we'd like.

Other times, the core update is a security release, and you can't afford to wait.

This does not need to be painful!

Upgrading core in Drupal 7

You have probably read the official documentation on doing this. … Read more about Updating Drupal core with bash and drush

Blog Category: Drupal Planet
Categories: Elsewhere

Drupal.org frontpage posts for the Drupal planet: Drupal 7.28 released

Thu, 08/05/2014 - 06:19

Drupal 7.28, a maintenance release with numerous bug fixes (no security fixes) is now available for download. See the Drupal 7.28 release notes for a full listing.

Download Drupal 7.28

Upgrading your existing Drupal 7 sites is recommended. There are no major new features in this release. For more information about the Drupal 7.x release series, consult the Drupal 7.0 release announcement.

Security information

We have a security announcement mailing list and a history of all security advisories, as well as an RSS feed with the most recent security advisories. We strongly advise Drupal administrators to sign up for the list.

Drupal 7 includes the built-in Update Manager module, which informs you about important updates to your modules and themes.

There are no security fixes in this release of Drupal core.

Bug reports

Drupal 7.x is being maintained, so given enough bug fixes (not just bug reports), more maintenance releases will be made available, according to our monthly release cycle.


Drupal 7.28 contains bug fixes and small API/feature improvements only. The full list of changes between the 7.27 and 7.28 releases can be found by reading the 7.28 release notes. A complete list of all bug fixes in the stable 7.x branch can be found in the git commit log.

Update notes

See the 7.28 release notes for details on important changes in this release.

Known issues

Changes made to the Update Manager module in this release may lead to performance slowdowns in certain cases (including on rare page loads for site visitors, if the site is using the automated cron feature). See the release notes for more information.

Front page news: Planet Drupal
Drupal version: Drupal 7.x
Categories: Elsewhere

Dries Buytaert: The investment case for employing a Drupal core contributor

Wed, 07/05/2014 - 23:40
Topic: Drupal, Acquia, Business

I've long been convinced that every well-run Drupal agency of 30 people or more can afford to hire a Drupal core contributor and let him/her work on Drupal core pretty much full-time. A healthy Drupal agency with 30 people should be able to do $5MM in revenue at a 15% net profit margin #1. This means they have $750k in profits that can be invested in growth, saved as reserves, or distributed among the owners.

There are many ways you can invest in growth. I'm here to argue that hiring a Drupal core contributor can be a great investment, that many Drupal agencies can afford it, and that employing a Drupal core contributor shouldn't just be looked at as a cost.

In fact, Chapter Three just announced that they hired Alex Pott, a Drupal 8 core maintainer, to work full-time on Drupal core. I couldn't be more thrilled. Great for Alex, great for Drupal, and great for Chapter Three! And a good reason to actually write down some of my thoughts.

The value of having a Drupal core contributor on staff

When Drupal 8 launches it will bring with it many big changes. Having someone within your company with first-hand knowledge of these changes is invaluable on a number of fronts. He or she can help train or support your technical staff on the changes coming down the pipe, can help your sales team answer customer questions, and can help your marketing team with blog posts and presentations to establish you as a thought-leader on Drupal. I believe these things take less than 20% of a Drupal core contributor's time, which leaves more than 80% of time to contribute to Drupal.

But perhaps most importantly, it is a crucial contribution that helps ensure the future of the Drupal project itself and helps us all avoid falling into the tragedy of the commons. While some core contributors have some amount of funding — ranging from 10% time from their employers to full-time employment (for example, most of Acquia's Office of the CTO are full-time core contributors) — most core contribution happens thanks to great personal sacrifice by the individuals involved. As the complexity and adoption of Drupal grow, there is a growing need for full-time Drupal contributors. Additionally, distributing employment of core contributors across multiple Drupal organizations can be healthy for Drupal; it ensures institutional independence, diversified innovation and resilience.

Measuring the impact of a Drupal core contributor on your business

While that sounds nice, the proof is in the numbers. So when I heard about Chapter Three hiring Alex Pott, I immediately called Chapter Three to congratulate them, but I also asked them to track Alex's impact on Chapter Three in terms of sales. If we can actually prove that hiring a Drupal core contributor is a great business investment, it could provide a really important breakthrough in making Drupal core development scalable.

I asked my team at Acquia to start tracking the impact of the Drupal core contributors on sales. Below, I'll share some data of how Acquia tracked this and why I'm bullish on there being a business case.

For Acquia, high quality content is the number one way to generate new sales leads. Marketers know that the key to doing online business is to become publishers. It is something that Acquia's Drupal developers all help with; developers putting out great content can turn your website into a magnet. And with the help of a well-oiled sales and marketing organization, you can turn visitors into customers.

Back in December, Angie "webchick" Byron did a Drupal 8 preview webinar for Acquia. The webinar attracted over 1,000 attendees. We were able to track that this single piece of content generated $4.5MM in influenced pipeline #2, of which we've managed to close $1.5MM in business so far.

Even more impressive, Kevin O'Leary has done four webinars on Drupal's newest authoring experience improvements. In total, Kevin's webinars helped generate $9MM in influenced pipeline of which almost $4MM closed. And importantly, Kevin had not worked on Drupal prior to joining Acquia! It goes to show that you don't necessarily have to hire from the community; existing employees can be made core contributors and add value to the company.

Gábor Hojtsy regularly spends some of his time on sales calls and has helped close several $500k+ deals. Moshe Weitzman occasionally travels to customers and has helped renew several large deals. Moshe also wrote a blog post about Drupal 8's improved upgrade process using the Migrate module. We aren't able to track all the details yet (we're working on it), but I'm sure some of the more than 3,200 unique viewers translated into sales for us.

Conclusion: investment returned, and then some

Obviously, your results may vary. Acquia has an amazing sales and marketing engine behind these core contributors, driving the results. I hope Chapter Three tracks the impact of hiring Alex Pott and shares the results publicly, so we can continue to build the business case for employing full-time Drupal contributors. If we can show that it's not just good for Drupal but also good for business, we can scale Drupal development to new heights. I hope more Drupal companies will start to think this way.


#1 I assumed that of the 30 people, 25 are billable and 5 are non-billable. I also assumed an average fully-loaded cost per employee of $125k per head and gross revenue per head of around $180k. The basic math works out as follows: (25 employees x $180k) - (30 employees x $125k) = $750k in profit.

There are 365 days per year and about 104 weekend days. This means there are 260 business days. If you subtract 10 legal bank holidays you have 250 days remaining. If you subtract another 15 business days for vacations, conferences, medical leave and others, you have 230 business days left. With a blended hourly rate of $130 per hour and 75% utilization, you arrive at ~$180k gross revenue per billable head.
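The arithmetic in these footnotes can be sanity-checked in a couple of lines of shell (integer arithmetic, so rounding aside):

```shell
# Revenue per billable head: 230 working days x 8 hours x 75% utilization x $130/hr
echo $((230 * 8 * 75 / 100 * 130))     # 179400 — i.e. ~$180k gross revenue
# Profit for the 30-person shop: 25 billable heads at $180k, minus 30 salaries at $125k
echo $((25 * 180000 - 30 * 125000))    # 750000 — i.e. $750k in profit
```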

I confirmed these numbers with several Drupal companies in the US. Best-in-class digital agencies actually do better; they assume there are 2,000 billable hours in a year per head and maintain at least an 85% chargeability rate (i.e. 1,700 billable hours per head). Many companies do less because of the maturity of their business, the market they are in, their geographic location, their ambitions, etc. It's not about what is "good" or "bad", but about what is possible.

#2 "Influenced pipeline" means that the content in question was one factor or touch point in what ultimately led potential customers to become qualified sales leads and be contacted by Acquia. On average, Acquia has 6 touch points for every qualified sales lead.

Categories: Elsewhere

DrupalCon Austin News: Drupal in Education Summit, June 2

Wed, 07/05/2014 - 22:02

Do you work at a college, university, or K-12 school? If so, then we invite you to the Drupal in Education Summit, to be held at the University of Texas at Austin on June 2 from 9am-4pm (leaving you plenty of time to make the DrupalCon opening reception afterwards)!

Categories: Elsewhere