Planet Drupal


Drupal Watchdog: Drupal In Context

Tue, 17/06/2014 - 20:38

For non-programmers like me, Drupal's biggest flaws show up when a site is almost right, but needs “just a few lines of code” to really make it work. Some people learn programming easily; others can't, or don't want to invest the time. Fortunately, there are Drupal tricks to get many of the benefits of programming without having to actually deal with PHP, CSS, or JavaScript. Here are some that'll get you most of the way there.

Refactor Your Problem

Whenever you run into a brick wall, stop, step back, and ask yourself: Does my site really need to have this feature? Could it be done in a different way?

Many times when I do this, I realize that I've had a specific idea in my head as to how something should work, rather than the end goal I'm trying to achieve. Stepping back to look at the site from a visitor's point of view often shows me a simpler way to produce what's needed — even if it doesn't work anything like my original idea.

Use the Megamodules

Most modules perform a specific task; "megamodules" give you ways to manipulate Drupal's inner workings through a simplified visual interface. The best-known is Views, which helps site builders deliver the results of a database query: in essence, it translates your point-and-click directives into SQL. (Views will be part of Drupal 8’s core package.)

But Views isn't the only megamodule out there.

  • Rules lets you create logic that reacts to actions by users;
  • Panels replaces tricky PHP theme templating in some cases;
  • Features packages site elements into a custom module;
  • Sweaver writes CSS for your theme, reflecting changes you make in a visual interface.

Learn What Programmers Do

Even if you don't plan to actually write code, getting to know the tools that programmers use will help you in two ways. First, you'll be able to implement code written by others; second, you'll be able to frame the problem better when you interact with programmers.

Categories: Elsewhere

ThinkShout: Creating an Infinite Scroll Masonry Block Without Views

Tue, 17/06/2014 - 19:00

The Views Infinite Scroll module provides a way to apply infinite scroll to the output of a view, but if you want to apply infinite scroll to custom block content, you're out of luck. I found myself in this position while developing a recently-launched site for The Salmon Project, which I'll use as a loose reference point as I walk you through my solution to applying both Masonry and Infinite Scroll to custom block content. The approach breaks down into five steps:

  1. Creating a block to hold our paged content that can be placed on a page
  2. Generating a paged content array to return as block content
  3. Applying Masonry to the paged content block
  4. Applying Infinite Scroll to the paged content block
  5. Creating an Infinite Scroll trigger

Creating a block to hold paged content

The first step is creating a block to hold our custom paged view that can be placed on a page. To create the block, we'll implement hook_block_info() followed by hook_block_view().

<?php
function my_module_block_info() {
  $block['masonry_content'] = array(
    'info' => 'Masonry content',
  );
  return $block;
}

function my_module_block_view($delta = '') {
  $block = array();
  if ($delta == 'masonry_content') {
    $block['subject'] = '';
    $block['content'] = my_module_masonry_content();
  }
  return $block;
}

Here, we've defined which function we'll be generating our paged view from; namely my_module_masonry_content.

Generating a paged content array

Within our my_module_masonry_content function, we'll create a paged view of nodes. To do so, we'll use an EntityFieldQuery with its pager() method, which causes the results of the query to be returned a page at a time.

<?php $query->pager(3);

The argument passed to the pager function determines how many results at a time will be returned; this is analogous to setting the number of items to be shown per page in a view – keep this bit in mind, as we'll return to it later when implementing the infinite scroll.
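For context, here is a sketch of the query that pager call hangs off; the entity type and bundle conditions here are assumptions for illustration, not part of the original snippet:

```php
<?php
// Build a query for nodes of an assumed 'article' bundle,
// returned three at a time.
$query = new EntityFieldQuery();
$query->entityCondition('entity_type', 'node')
  ->entityCondition('bundle', 'article')
  ->pager(3);
```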

Now we'll add some query conditions and, finally, execute the query. These conditions are, of course, site-specific, but I'm including them as an example for thoroughness. For help constructing your EntityFieldQuery, see the EntityFieldQuery API documentation page.

<?php
$query->fieldCondition('field_example', 'value', array('value1', 'value2'), 'IN');
$query->propertyCondition('status', '1');
$results = $query->execute();

In this example, I've requested all nodes that have a field_example value of value1 or value2 and are published (i.e. node status property is equal to 1).

Once we have our results set, we will need to create a renderable array to return as the block content.

<?php
$output = array();
foreach ($results['node'] as $row) {
  $node = entity_load_single('node', $row->nid);
  $output[] = node_view($node, 'category_term_page');
}

Then we'll apply pager theming to the output by explicitly adding it to the returned renderable array.

<?php $output['pager'] = array('#theme' => 'pager');

We then return our output array and get to the JavaScript…

Applying Masonry to the paged content block

To apply Masonry to the paged block content, we'll need some JavaScript, so let's create a new JavaScript file within our module's js directory (js/my_module.js). This file depends on the Masonry JavaScript library, so we'll need to load that library in addition to our new, custom JavaScript file.

Loading the required libraries

For optimal performance, we only want to load our JavaScript when the masonry_content block is present. To conditionally load the JavaScript we'll use a hook_block_view_alter().

<?php
function my_module_block_view_alter(&$data, $block) {
  // Only load the libraries if the masonry_content block is present.
  if ($block->module == 'my_module' && $block->delta == 'masonry_content') {
    $module_path = drupal_get_path('module', 'my_module');
    $data['content']['#attached']['js'][] = $module_path . '/js/my_module.js';
    $masonry_path = libraries_get_path('masonry');
    $data['content']['#attached']['js'][] = $masonry_path . '/dist/masonry.pkgd.min.js';
  }
}

Now that we've got our required JavaScripts, let's take a look at how to apply Masonry to our paged content block.

// within my_module.js
var container = $('#my-module-masonry-content');
container.masonry({
  // Masonry options.
  itemSelector: '#my-module-masonry-content article.node'
});

In this case, we're selecting the <div> that has our block id and the node items within that <div>.

Doing this will apply Masonry once to the items present after the initial page load, but since we are going to be loading more items via the pager, we'll need to re-apply Masonry after those new items are loaded. We haven't defined the "change" action yet, but will later when implementing the infinite scroll JavaScript.

// Necessary to apply Masonry to new items pulled in via infinite scroll.
container.bind('change', function() {
  container.masonry('reloadItems');
  container.masonry();
});

Now that we've got Masonry applied to our block content, let's move on to getting the infinite scroll behavior in place.

Applying Infinite Scroll to the paged content block

To pull new items into our content block (to which Masonry is being applied) we'll leverage the Autopager library. This means we'll need to add Autopager to the list of JavaScript to be loaded conditionally when our block is present.

Loading more required libraries

Again, we'll use drupal_get_path() and libraries_get_path() to retrieve more required JavaScript from within our hook_block_view_alter().

<?php
$autopager_path = libraries_get_path('autopager');
$data['content']['#attached']['js'][] = $autopager_path . '/jquery.autopager-1.0.0.js';

Now that we've got Autopager loaded alongside our custom JavaScript, let's apply the infinite scroll…

The first thing we need to do is define the parameters that will be passed to Autopager.

// Make sure the autopager plugin is loaded.
if ($.autopager) {
  // Define autopager parameters.
  var content_selector = '#my-module-masonry-content';
  var items_selector = content_selector + ' article.node';
  var next_selector = '.pager-next a';
  var pager_selector = '.pager';

Notice that content_selector matches the selector we used to apply Masonry to the block content. This is because Autopager will, behind the scenes, retrieve more content similar to what's already on the page by following the "next" link.

The next_selector and pager_selector selectors target the "next" and "1, 2, 3…" links our pager exposes and, not incidentally, are what Autopager uses to retrieve the next set of content. Recall from above that our query returns 3 nodes at a time, so each "next" request will cause 3 more nodes to be shown.

Though necessary to retrieve more content, we don't want these pager links to be visible, so let's hide them.

$(pager_selector).hide();
…and now create our Autopager handler.

var handle = $.autopager({
  autoLoad: false,
  appendTo: content_selector,
  content: items_selector,
  link: next_selector,
  load: function() {
    $(content_selector).trigger('change');
  }
});

The $(content_selector).trigger('change') bit is a key component of this snippet because the "change" action is what we are using to apply Masonry to new items added to our block (see the container.bind('change'… bit above).

Triggering the infinite scroll function

The Autopager handler we just defined acts as our gas pedal for the infinite scroll action, but we also need a brake. The following snippet, taken from views_infinite_scroll.js, uses some fancy math to determine when the user has hit page bottom and only calls handle.autopager('load') when that is the case, effectively acting as the brake.

// Trigger autoload if content height is less than doc height already.
var prev_content_height = $(content_selector).height();
do {
  var last = $(items_selector).filter(':last');
  if (last.offset().top + last.height() < $(document).scrollTop() + $(window).height()) {
    handle.autopager('load');
  }
  else {
    break;
  }
} while ($(content_selector).height() > prev_content_height);
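The comparison at the heart of that snippet can be pulled out as a pure function (the function name here is mine, not the module's): the bottom edge of the last item versus the bottom edge of the viewport.

```javascript
// True when the bottom edge of the last item is above the bottom edge of
// the viewport, i.e. the user can already see past the end of the content
// and another page should be loaded.
function pastLastItem(lastOffsetTop, lastHeight, scrollTop, windowHeight) {
  return lastOffsetTop + lastHeight < scrollTop + windowHeight;
}
```

If this returns false, the content already fills the viewport and the do/while loop above bails out via its else branch.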

You'll notice on The Salmon Project site I am not using infinite scroll; instead I opted for a "View more" button to trigger the autopager('load') action and some logic in the function bound to the "change" action to hide said button.

Regardless of the method you choose as a trigger, all the method needs to do is call Autopager's load function (analogous to hitting the hidden "next" pager link) to load more content.
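As a sketch of the "View more" variant, any click handler that calls the loader works; the factory below is my own illustration (not The Salmon Project's actual code), with the loader and hide behaviour passed in so they can be anything.

```javascript
// Returns a click handler that calls loadMore() until it reports no more
// pages, then calls hideButton() once, mirroring the "View more" approach.
function makeViewMoreHandler(loadMore, hideButton) {
  var exhausted = false;
  return function () {
    if (exhausted) { return; }
    // In the real page, loadMore would be: handle.autopager('load').
    if (!loadMore()) {
      exhausted = true;
      hideButton();
    }
  };
}
```

On the page itself you would wire it up with something like `$('.view-more').click(makeViewMoreHandler(...))`, with the hide callback bound to the button element.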

And there you have it, an infinite scroll masonry block that loads 3 more nodes each time the user hits page bottom without the use of the Views module.

Categories: Elsewhere

Acquia: Why Drupal? Because It Will Help You Win

Tue, 17/06/2014 - 18:34

How will Drupal help you win? How will it help you do better? As business itself becomes digital, the benefits that a widely adopted, open source technology solution like Drupal can bring to organizations today are manifold, but they can be boiled down to a few salient points.

The key ways Drupal will help you win:

Categories: Elsewhere

Drupal core announcements: Drupal core security release window on Wednesday, June 18

Tue, 17/06/2014 - 16:57
Start: 2014-06-18 (All day) America/New_York
Organizers: David_Rothstein

The monthly security release window for Drupal 6 and Drupal 7 core will take place on Wednesday, June 18.

This does not mean that a Drupal core security release will necessarily take place on that date for either the Drupal 6 or Drupal 7 branches, only that you should prepare to look out for one (and be ready to update your Drupal sites in the event that the Drupal security team decides to make a release).

There will be no bug fix release on this date; the next window for a Drupal core bug fix release is Wednesday, July 2.

For more information on Drupal core release windows, see the documentation on release timing and security releases, and the discussion that led to this policy being implemented.

Categories: Elsewhere

Mediacurrent: The Real Value of Drupalcon

Tue, 17/06/2014 - 16:50

I bet most people who have ever attended a DrupalCon would agree that it takes a full week to process (and recoup from) all the community synergy and new information consumed during this epic annual event. 2014's DrupalCon in Austin, TX was jam-packed with awesome, boasting the largest DrupalCon yet and also one of the most diverse: there were nearly 3,500 people from 60 countries in attendance! The Austin Convention Center was the perfect venue, and downtown Austin became the Drupal community's stomping grounds all week as we filled restaurants, bars and hotels with Drupal chic.

Categories: Elsewhere

Featured Case Studies: Campagna Center Responsive, Ecommerce Website

Tue, 17/06/2014 - 16:50
Completed Drupal site or project URL:

The Campagna Center is a non-profit organization located in Alexandria, Virginia centered on delivering superior academic and social development programs for individuals of all ages to inspire a commitment to learning and achievement. As with many non-profits, their website is an integral platform for keeping donors, volunteer members, and program attendees engaged and informed.

The company behind the development and design is New Target, Inc. based out of Alexandria, Virginia. New Target is a full service web company frequently partnering with associations, non-profits, and mission driven organizations to inspire and engage audiences on the web.

Key modules/theme/distribution used: Omega, Respond.js, Views Nivo Slider, Superfish, Redirect, Mollom, Google Analytics, Block reference, Commerce, Shortcode
Organizations involved: New Target, Inc.
Team members: Brian Newsome, hayliej, castedo, hak55, pgrujic, paige.elena
Categories: Elsewhere

Chromatic: Converting Drupal Text Formats with Pandoc

Tue, 17/06/2014 - 16:36

Switching the default text format of a field is easy. Manually converting existing content to a different input format is not. What about migrating thousands of nodes to use a different input format? That isn't anyone's idea of fun!

For example, let's say that all of a site's content is currently written in markdown. However, the new editor wants to not only write all future content in the textile format, but also wants all previous content converted to textile for a consistent editing experience. Or perhaps you are migrating content from a site that was written in the MediaWiki format, but standard HTML is the desired input format for the new site and all of the content needs to be converted. Either way, a lot of tedious work is ahead if an automated solution is not found.

Thankfully there is an amazing command line utility called Pandoc that will help us do just that. Pandoc converts text from one input syntax to another, freeing your time for less mind-numbing activities. Let's take a look at how Pandoc can integrate with Drupal to let you migrate your content from one input format to another with ease.
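Before wiring it into Drupal, it's worth seeing Pandoc by itself: it reads one syntax on stdin (or from a file) and writes another to stdout. The file names below are purely illustrative, and pandoc must of course be installed.

```shell
# A one-off conversion at the command line:
# read markdown on stdin, write textile on stdout.
echo 'Some **existing** content.' | pandoc -f markdown -t textile

# Whole files work the same way, e.g. (hypothetical file names):
# pandoc -f mediawiki -t html old_page.wiki -o new_page.html
```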

Once Pandoc is installed on your environment(s), the basic function below provides its functionality to Drupal. It accepts a string of text to convert, a from format, and a to format, and returns the re-formatted text. It's that simple.

<?php
/**
 * Convert text from one format to another.
 *
 * @param $text
 *   The string of text to convert.
 * @param $from
 *   The current format of the text.
 * @param $to
 *   The format to convert the text to.
 *
 * @return
 *   The re-formatted text, or FALSE on failure.
 */
function text_format_converter_convert_text($text, $from, $to) {
  // Create the command.
  $command = sprintf('pandoc -f %s -t %s --normalize', $from, $to);
  // Build the settings.
  $descriptorspec = array(
    // Create the stdin as a pipe.
    0 => array("pipe", "r"),
    // Create the stdout as a pipe.
    1 => array("pipe", "w"),
  );
  // Set some command settings.
  $cwd = getcwd();
  $env = array();
  // Create the process.
  $process = proc_open($command, $descriptorspec, $pipes, $cwd, $env);
  // Verify that the process was created successfully.
  if (is_resource($process)) {
    // Write the text to stdin and close it so pandoc knows input is done.
    fwrite($pipes[0], $text);
    fclose($pipes[0]);
    // Get stdout stream content.
    $text_converted = stream_get_contents($pipes[1]);
    fclose($pipes[1]);
    // Close the process.
    $return_value = proc_close($process);
    // A valid response was returned.
    if ($text_converted) {
      return $text_converted;
    }
  }
  // Invalid response returned.
  return FALSE;
}

We've written a barebones module around that function that makes our conversions much easier. It has a basic administration page that accepts from and to formats, as well as the node type to act upon. It will then run that conversion on the Body field of every node of that type. It should be noted that this module makes no attempt to adjust the input format settings, or to ensure that the modules required for parsing the new/old formats are even installed on the site. So treat this module as a migration tool, not a seamless, production-ready solution!
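The node-type conversion loop that module performs can be sketched roughly like this; the function name and field handling are illustrative guesses, not the module's actual code:

```php
<?php
// Convert the Body field of every node of $type from $from to $to.
function text_format_converter_convert_nodes($type, $from, $to) {
  $query = new EntityFieldQuery();
  $query->entityCondition('entity_type', 'node')
    ->entityCondition('bundle', $type);
  $results = $query->execute();
  if (empty($results['node'])) {
    return;
  }
  foreach (array_keys($results['node']) as $nid) {
    $node = node_load($nid);
    $body = &$node->body[LANGUAGE_NONE][0];
    $converted = text_format_converter_convert_text($body['value'], $from, $to);
    // Only save if Pandoc returned a valid conversion.
    if ($converted !== FALSE) {
      $body['value'] = $converted;
      node_save($node);
    }
  }
}
```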

Pandoc has quite a few features and options, so check out the documentation to see how it will best help you. You can also see the powers of Pandoc in action with its online demo. Let us know if you use our module and, as always, test any text conversions in a development environment before doing so on a live site! Please note: neither CHROMATIC nor I bear any liability for this module's usage.

Categories: Elsewhere

CTI Digital: Creating and using a public/private SSH key-pair in Mac OS X 10.9 Mavericks

Tue, 17/06/2014 - 16:34
In the following article, we're going to run through the process of creating a public/private SSH key-pair in OS X 10.9. Once this is done, we'll configure our GitHub account to use the public key, create a new repository, and finally pull this repository down onto our machine via SSH.

Before setting up an SSH key on our system we first need to install Git. If you've already installed Git, please proceed to the next section; otherwise, let's get started.

Installation and configuration of Git

To install Git on Mac OS X 10.9, navigate to the Git download page and click the "Download for Mac" button.

Fig 1: Download options on the Git download page.

Once the *.dmg file has downloaded, double-click the file to mount it and, in the new Finder window that pops up, double-click on the file "git-1.9.2-intel-universal-snow-leopard.pkg" (the file name will likely have changed somewhat by the time you read this article, but aside from the version number, it should still be quite similar).

If you get the error highlighted in "Fig 2" when trying to open the file, simply right-click on the *.pkg file and click "Open". You should then see a new dialogue window similar to the one displayed in "Fig 3", which will allow you to continue on to the installation process.

Fig 2: The error an end-user will see when trying to open an unidentified file if the "Allow apps downloaded from" section of "Security & Privacy" is set to "Mac App Store and identified developers" within "System Preferences".

Fig 3: When right-clicking the *.pkg file and clicking "Open", the end-user is given a soft warning but now, unlike "Fig 2", we're able to bypass this dialogue by clicking "Open".
The installation process for Git is fairly self-explanatory, so I won't go into too much detail. In a nutshell, you will be asked whether to install Git for all users of the computer (I suggest leaving this at its default value) and whether you want to change the location of the installer (unless you have good reason to change the Git install location, this should also be left at the default value).

Finally, as part of the installation process you'll be prompted to enter your system password to allow the installer to continue. Type your password and click "Install Software". If all goes well, at the end of the installation process you should see the message "The installation was successful." At this stage you can click "Close" to close the installer.

Fig 4: Prior to installation, the Git installer will require you to enter your system password to allow it to write files to the specified locations.

After the Git installation process we need to open a new instance of the Terminal application. This can be accomplished by opening the Finder, clicking the "Applications" shortcut in the sidebar, scrolling to the bottom of the applications listing in the main window, double-clicking "Utilities" and finally double-clicking "Terminal".

Pro tip: A much quicker way of accessing the Terminal is by pressing “Cmd+Space” to bring up Spotlight, typing “Terminal” and hitting the enter key. Once you become familiar with Spotlight it becomes indispensable!

Once the Terminal window is open, type "git --version" and hit enter. If you're running a fresh install of Mac OS X 10.9, at this stage you will likely be shown a message telling you that the Developer Tools were not found, and a popup will appear requesting that you install the tools. Click "Install" on the first dialogue window and, when the next popup is displayed, click "Agree".

Fig 5: The message most users will receive with a fresh install of OS X 10.9 when typing "git --version" into the terminal.

After the installation of the Developer Tools, restart the Terminal application and type "git --version" again, followed by hitting enter. This time you should see the version number of the Git application installed.

Fig 6: Terminal displaying the version number of the installed Git application.

Finally, for the installation and configuration of Git, we're going to configure some user-level settings (specifically your name and email address). These configuration settings will be stored in your home directory in a file named ".gitconfig".

To configure these settings, type the following into the terminal (replacing my name and email address with your own, obviously!):

git config --global user.name "Craig Perks"
git config --global user.email "you@example.com"

Once done, type "git config --list" and you should see a list of user configuration settings analogous to those shown in "Fig 7".

Fig 7: A Terminal instance showing the configuration settings for the logged-in user.

Now that we have Git successfully installed, let's create our public/private key-pair in the next section and add it to our GitHub account.

Creating an SSH public/private key-pair

In the Terminal, let's ensure we're in our home directory. We can navigate to it by typing the following command:

cd ~/

From here we want to create a folder to store our SSH keys in. My preference is to store them in a hidden folder called ".ssh".
Pro tip: By prefixing a folder or file name with a dot, you're essentially telling the system "hide this" by default.

To create our SSH directory, type the following command into the Terminal window:

mkdir .ssh

Next, type "cd .ssh" and hit enter, followed by the command "pwd". At this point you should see that you've successfully navigated into the ".ssh" folder.

Fig 8: By typing "pwd" into the Terminal we're shown a literal path to our present working directory, which as displayed is /Users/<username>/.ssh.

Now, let's create our public/private key-pair. Type "ssh-keygen" into the Terminal and hit enter. At this point you'll be asked to enter a name for your key-pair. This name can be anything, but for this tutorial I'll use my first name with a suffix of _rsa.

Fig 9: Creation of a public/private key-pair with the name "craig_rsa".

The creation of a passphrase is an optional step, but a recommended one. Enter a passphrase (a short password of your choosing), hit enter, and enter the same passphrase again. Once your public/private key-pair has been generated, you'll see a message similar to the one highlighted in "Fig 10".

Fig 10: The message shown to an end-user upon successful creation of a public/private key-pair.

Now that we have a public/private key-pair, we want to add our newly created key to the ssh-agent. This can be achieved by typing the following command (remembering to amend the private key file name to your own):

ssh-add -K ~/.ssh/craig_rsa

If you created a passphrase in the previous step, you'll be prompted to enter it now. If you successfully add your key to the agent, you'll see a message similar to: "Identity added: /Users/craigperks/.ssh/craig_rsa (/Users/craigperks/.ssh/craig_rsa)".

Once your key is added to the ssh-agent, type "ssh-add -l" into the Terminal and you'll see it displayed in the list of known keys.
Fig 11: Our newly created key listed in the ssh-agent.

Now that we have our public/private key-pair successfully created, let's add our public key to our GitHub account, create a repository, and clone that repository onto our machine.

Creating a repository on GitHub and cloning it onto our machine

I'm not going to go through GitHub registration in this guide. If you haven't already done so, register a GitHub account and log in.

Before we do anything on the GitHub website, we want to copy our public key. To do so, type the following command in the Terminal window (again substituting "craig_rsa" with whatever name you gave your key-pair):

pbcopy < ~/.ssh/craig_rsa.pub

Once done, navigate over to GitHub and click the "Account Settings" icon in the toolbar as pictured.

Fig 12: The "Account Settings" icon as shown to logged-in GitHub users.

On the "Account Settings" page, "SSH keys" should be listed in the left-hand sidebar. Click it, and on the page that loads click "Add SSH key".

Fig 13: The "Add SSH key" button, which allows you to add public keys to your GitHub account.

On the next page, give your key a name and paste the contents of your public key (which we previously copied with the pbcopy command) into the "Key" field.

Note: Although I'm showing the contents of a public key here, it's a dummy key and will be deleted upon completion of this guide. You should only share your public key with trusted sources.

Fig 14: Form displayed to GitHub account holders when adding a new key to the site.

Now that we have our public key loaded into GitHub, let's create a new repository by clicking the "+" icon displayed next to our username (located in the top-right of the toolbar when logged in). From the menu that pops up, click "New repository".

From here, give the repository a name of "test" and ensure "Initialize this repository with a README" is checked.

Fig 15: Page displayed to GitHub account holders when creating a new repository.
Finally, click the "Create repository" button.

In the right-hand sidebar displayed on your newly created repository, "SSH clone URL" should be visible.

Fig 16: SSH clone URL link, which allows users to clone the Git repository.

Click the "copy to clipboard" icon under "SSH clone URL" and return to the Terminal application.

Type "cd ~/Desktop" into the Terminal window and hit enter. Now that we're in the Desktop folder, type "mkdir git" and hit enter. If you look at your Mac OS X desktop at this point, you'll see that a folder called "git" has been created.

Back in the Terminal window, type "cd git" to move into this directory. Finally, type "git clone " followed by pasting the URL copied from the GitHub repository's "SSH clone URL" field. Hit enter when you're ready and the repository will begin to clone.

If you've never cloned a repository from GitHub before, you may receive a message that the authenticity of the host can't be established. To continue, type "yes" and hit enter, and GitHub will be added to the list of known hosts.

Finally, once the cloning is complete, type "cd test" to navigate into the newly created repository directory and type "ls -la" to display a listing of the folder (including hidden files).

If you see the README listed, you've just successfully cloned your Git repository!

Fig 17: Our successfully cloned Git repository displaying its contents.

If you spot an error in this tutorial, or have any questions, please feel free to get in touch with me on Twitter at @craigperks.
Categories: Elsewhere

Advomatic: Fajitas, Front End Meets Design, and Remembering to Shower: A DrupalCon Recap

Tue, 17/06/2014 - 16:27

Like many of you, the AdvoTeam hit Austin a couple of weeks ago for DrupalCon 2014. Now that we've had some time to digest all the knowledge dropped (pun intended), we're sharing our favorite DrupalCon takeaways. We've even included links to the DrupalCon sessions, so you can share in the joy.


Amanda, Front-End Developer:

Design and front end are figuring out how to fit together. Do designers need to know how to code? Should they really uninstall Photoshop? And while front-enders are benefiting from all the dev work coming down the pipe, we’re a bit overloaded waiting to see what emerges when the dust settles.

Despite our gripes, it is pretty satisfying that our teams now recognize that the front-end is infinitely more complex than it was just a few years ago.

Lastly, I was blown away that the conference attendance was 20% women! For me, that's an indication that the community is doing something right in terms of attracting women in ways that other software communities don't. Kudos!

Monica-Lisa, Director of Web Development

Loved the session on running a shop of remote workers. The big takeaways: Be in constant, positive, fun communication with one another. Find the best tools to keep in touch. If you can’t work all the same hours, choose a few hours a day, every day to overlap. Have a lot of trust in your people. And of course, don’t forget to take a shower every once in a while.

Dave, Web Development Technical Manager

This was one of the best DrupalCons I've been to. Top of the list of sessions: Adam Edgerton's talk on scaling a dev shop. This year also had one of the best DriesNotes (the keynote by Dries Buytaert, founder and lead developer of Drupal).

And in past years, I've spent a lot of time hanging out with anyone I bumped into. But this year, I spent almost all my time with the AdvoTeam, going biking and swimming; we even went to Monica's mom's house for fajitas one night.

Jim, Senior Web Developer

The Core Sprint was inspiring, because everyone was getting the help they needed while also giving help to others. Everyone knew different things, so as a group, we were all able to share our collective knowledge to get people set up on Drupal 8, review patches, and to commit new ones.

Jack, Front End Developer

Once again, DrupalCon has shown me that it's not safe (or fun) to get comfortable. The tools we use to make our work go faster and smoother are constantly changing. What's all the rage this year will probably be obsolete next year. Don't fall in love with any one way of doing things.

Front-end development and theming has never felt more "sink or swim", and that's probably a good thing.  However, as things get more and more complicated, the single front-end developer that knows everything becomes more of a mythological creature.  As new worlds of specialization open up, it becomes more important to have new specialists available.

Lastly, it was awesome to get some face time with the Advoteam.  It's good for the remote team's morale, and also nice to be reminded you work with other human beings that have other things to talk about besides technical Drupal talk.

Did you make it to DrupalCon? What sessions did we miss?


Categories: Elsewhere

Drupalize.Me: Drupal 8 Survey Insights

Tue, 17/06/2014 - 15:30

Last month we asked the Drupal community a few questions. We received 243 responses to our survey, and we'd like to share some of the information. While we're not making scientific generalizations from this data, it is an interesting picture of our community nonetheless. A big thank you to everyone who participated in the survey.

Here are 4 things we learned:

Categories: Elsewhere

Open Source Training: Building a Business Directory With Drupal

Tue, 17/06/2014 - 14:43

Over the last couple of weeks, several different OSTraining members have asked me about creating a directory in Drupal.

I'm going to recommend a 4-step process for creating a basic directory.

Using default Drupal, plus the Display Suite and the Search API modules, we can create almost any type of directory.

Categories: Elsewhere

Liran Tal's Enginx: Drupal Performance Tuning for Better Database Utilization – Introduction

Tue, 17/06/2014 - 08:13
This entry is part 1 of 1 in the series Drupal Performance Tuning for Better Database Utilization

Drupal is a great CMS, or CMF, whichever your take on it, but it can definitely grow up to be a resource hog, with all of those contributed modules implementing hooks to no avail. It is even worse when developers aren’t performance oriented (or security oriented, god save us all), and this can unknowingly take its toll on your web application's performance.

Drupal performance tuning has seen its share of presentation decks, tutorials, and even dedicated books such as PacktPub’s Drupal 6 Performance Tips, but getting great performance seems to be a never-ending task, so here are some thoughts on where you should start looking.


Here is a checklist for peering further down Drupal’s rabbit hole and gaining insight into tuning your web application for better performance:

  1. Enable the MySQL slow query log to trace all the queries that take a long time (usually a threshold of more than 1 second is enough, and with later versions of MySQL, or compliant databases like Percona and MariaDB, you can also specify the threshold in fractions of a second)
  2. Configure the MySQL slow query log to also record queries that don't use indexes
  3. Make sure to review all of those logged queries with EXPLAIN to figure out which queries could be constructed to make better use of indexes. Where indexes are missing, review whether the database would benefit from modifying existing indexes (without breaking older queries)
  4. Use percona-toolkit to review outstanding queries
  5. Use New Relic’s PHP server-side engine, which can hook into your web application and provide great analysis of function call time, wall time, and the overall execution pipeline. While it’s not a must, I’ve personally used it, and it’s a great SaaS offering for an immediate solution that saves you installing alternatives like XHProf or Webgrind
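Items 1 and 2 above boil down to a few server settings. Here is a minimal my.cnf sketch; the file path and the 1-second threshold are illustrative, so adjust them for your own server and MySQL version:

```ini
[mysqld]
# Log queries slower than 1 second (fractional values such as 0.5
# are accepted on MySQL 5.1+ and on Percona Server/MariaDB).
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow-query.log
long_query_time     = 1

# Also log queries that don't use an index at all.
log_queries_not_using_indexes = 1
```

If you can't restart the server, the same variables can generally be flipped at runtime with SET GLOBAL statements from a privileged MySQL session.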



The post Drupal Performance Tuning for Better Database Utilization – Introduction appeared first on Liran Tal's Enginx.

Categories: Elsewhere

AGLOBALWAY: Drupal to Excel with Views

Mon, 16/06/2014 - 23:06

Every now and then we need to export a bunch of content from our Drupal site into a spreadsheet. This can easily be accomplished through Views with modules like Views Excel Export, which allows you to add a new display for your view. You can also add it as an attachment to your existing view, creating a button: click, and you download a spreadsheet (or CSV; there are options!) just like that. Modules like this one work really well if your data structure is straightforward and there is no need to format your spreadsheet.

But what happens if you want it to look a certain way, or add borders to columns or rows? What if your view references Entities? Suddenly Views Excel Export doesn't quite cut it anymore.

Enter PHP Excel. While this module has no UI to speak of, it does add the PHPExcel library to the Libraries folder of your Drupal installation. The PHPExcel library gives you a ton of functions for formatting an Excel spreadsheet, as well as writing to it directly. Let's say we wish to output our "Events" view to Excel. By creating our own custom module, we can call our view programmatically after referencing the PHPExcel library:

function my_custom_spreadsheet_view() {
  $view = views_get_view('events');

Rather than rendering the view as usual, we can execute it and read its result set directly to extract exactly the information we want. Assign your data to variables so that you can loop through your Events and write them to Excel in the format you like.

  $view->execute();
  $events = $view->result;

  // Start writing to row 5 of your spreadsheet.
  $rowID = 5;
  foreach ($events as $event) {
    // Do magic with your data; here $sheet is assumed to hold the
    // PHPExcel active worksheet.
    $sheet->setCellValue('A' . $rowID, $data)
      ->setCellValue('B' . $rowID, $data1)
      ->setCellValue('C' . $rowID, $data2);
    // Move to the next row for the next Event.
    $rowID++;
  }
}
PHP Excel comes with a number of examples, simple and complex. Use these to learn how to format and generate your spreadsheet when you call your custom function.
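For illustration, the row loop described above can be exercised with plain PHP: the sketch below uses a small hardcoded array in place of the view result and collects cell coordinates into an array instead of a real PHPExcel worksheet (all names and data here are hypothetical).

```php
<?php

// Hypothetical stand-in for $view->result.
$events = array(
  array('title' => 'Spring Gala', 'date' => '2014-05-01', 'venue' => 'Hall A'),
  array('title' => 'Summer Fair', 'date' => '2014-07-12', 'venue' => 'Hall B'),
);

// Collect coordinate => value pairs instead of calling setCellValue().
$cells = array();
$rowID = 5; // Start writing at row 5, as in the article.
foreach ($events as $event) {
  $cells['A' . $rowID] = $event['title'];
  $cells['B' . $rowID] = $event['date'];
  $cells['C' . $rowID] = $event['venue'];
  // Move to the next row for the next Event.
  $rowID++;
}

print_r($cells);
```

Swapping the `$cells[...] = ...` assignments for chained `setCellValue()` calls on the active worksheet gives you the real export.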

Tags: drupaldrupal planet
Categories: Elsewhere

Acquia: Search API Drupal 8 Sprint June 2014

Mon, 16/06/2014 - 22:50

From the 13th until the 15th of June we had a very successful Search API D8 sprint, as announced earlier and on Intracto's site. The sprint was organized with the money that was raised for it, but even more valuable than money is time.

Categories: Elsewhere

Acquia: Migrate in 8, DrupalCon post-mortem

Mon, 16/06/2014 - 22:16

At DrupalCon Austin, I led a Core Conversation panel, Status of Migrate in 8, along with chx, ultimike, Ryan Weal, and bdone. The conversation was well-received, and we fielded some good questions. To summarize what we went over, updated for continued developments:

Categories: Elsewhere

Acquia: First patch! Matt Moen and a village of contributors

Mon, 16/06/2014 - 19:23

Every DrupalCon, there's a moment I especially look forward to: the First-Patch "ritual". This time around, a patch written by Matt Moen, Technical Director at Kilpatrick Design, was selected for fast-track testing, approval, and was committed to Drupal 8 core in front of a few hundred of us at the Austin convention center. In this podcast, I talk with Matt about becoming a core contributor; we hear from Angie "webchick" Byron about how it takes a village to commit a patch; and I've included a quick refresher on how version control works with the Gitty Pokey from the DrupalCon Austin pre-keynote. You can see the full patch approval and commit process in the 2nd video embedded on this page.

Categories: Elsewhere

SitePoint PHP Drupal: Building a Drupal 8 Module: Blocks and Forms

Mon, 16/06/2014 - 18:00

In the first installment of this article series on Drupal 8 module development we started with the basics. We’ve seen what files were needed to let Drupal know about our module, how the routing process works and how to create menu links programmatically as configuration.

In this tutorial we are going to go a bit further with our sandbox module found in this repository and look at two new important pieces of functionality: blocks and forms. To this end, we will create a custom block that returns some configurable text. After that, we will create a simple form used to print out user submitted values to the screen.

Drupal 8 blocks

A cool new change to the block API in D8 is that blocks have become more prominent by being turned into plugins (a brand new concept). What this means is that, under the hood, they are reusable pieces of functionality: you can now create a block in the UI and reuse it across the site - you are no longer limited to using a block only once.

Let’s go ahead and create a simple block type that prints to the screen Hello World! by default. All we need to work with is one class file located in the src/Plugin/Block folder of our module’s root directory. Let’s call our new block type DemoBlock, and naturally it needs to reside in a file called DemoBlock.php. Inside this file, we can start with the following:

<?php

namespace Drupal\demo\Plugin\Block;

use Drupal\block\BlockBase;
use Drupal\Core\Session\AccountInterface;

/**
 * Provides a 'Demo' block.
 *
 * @Block(
 *   id = "demo_block",
 *   admin_label = @Translation("Demo block"),
 * )
 */
class DemoBlock extends BlockBase {

  /**
   * {@inheritdoc}
   */
  public function build() {
    return array(
      '#markup' => $this->t('Hello World!'),
    );
  }

  /**
   * {@inheritdoc}
   */
  public function access(AccountInterface $account) {
    return $account->hasPermission('access content');
  }

}

Like with all other class files we start by namespacing our class. Then we use the BlockBase class so that we can extend it, as well as the AccountInterface class so that we can get access to the currently logged in user. Then follows something you definitely have not seen in Drupal 7: annotations.

Continue reading Building a Drupal 8 Module: Blocks and Forms.

Categories: Elsewhere

Friendly Machine: Quick Tip: Syncing Databases Between Drupal Websites with Backup and Migrate

Mon, 16/06/2014 - 16:48

A common scenario that Drupal developers and site builders run into is the challenge of keeping the database in sync between the dev, testing and production versions of a site. Web hosts like Pantheon (highly recommended) make this a snap, but what if you're using a VPS or some other hosting that doesn't have that functionality? One popular option is to use Drush, but that isn't a good fit for everyone.

Backup and Migrate (BaM) can be a great tool for this sort of problem. In this post we'll talk about using BaM for this task and introduce a very handy companion service that makes things even easier. What I often see site builders do with Backup and Migrate is manually download backup files and then manually restore from the downloaded file.

A great alternative to that process is setting up an Amazon S3 bucket (cloud storage) where you can directly place your backups from Backup and Migrate. Once each version of the site has the S3 bucket set up, keeping the database in sync becomes a snap.

Setting up an account with Amazon Web Services is free - you get 5 GB of Amazon S3 storage, 20,000 GET requests, 2,000 PUT requests, and 15 GB of data transfer each month for one year with the free account. If you start to use the service more heavily, you pay for what you use, but for most dev scenarios you probably won't incur fees.

Amazon has a nice tutorial to walk you through the process of getting started with the service.

Once you have an account with AWS, you head back to Backup and Migrate on your Drupal site - /admin/config/system/backup_migrate. Go to the Destinations tab and you'll see an 'Add Destination' link as in the image below. Click that link and in the list on the next page, select 'Amazon S3 Bucket' as the destination.

You'll most likely be prompted to add a PHP library - simply follow the link provided in the prompt and download the library to the 'libraries' folder of your site. It's really easy, so don't be put off.

The next step is to fill in the form you see below with the information from your S3 bucket.

That's pretty much all there is to it. Make sure everything is working by running a manual backup to the new S3 bucket destination. A good next step is to configure scheduled backups on the target machine so you can periodically restore a fresh copy on your other environment(s).

To restore your database from the S3 bucket in the UI, you just go to the 'Destination' you have configured in Backup and Migrate for the bucket and click the 'restore' link next to the copy you want to use. A nice twist is using Drush to help with some of this.

If you're brand new to Drush but would like to learn, here's a good place to start. If you'd just like to know how to use Drush and Backup and Migrate together, here's a good tutorial on that topic.

If you have any comments on this post, you may politely leave them below.

Categories: Elsewhere

LightSky: CKEditor in Core and Why that is a Big Deal

Mon, 16/06/2014 - 16:37

I can remember back to the first time I installed Drupal.  I ran the install through cPanel (I know, right? Don’t hold it against me) and bingo, there was my brand new website… or something like that.  Drupal 6 just didn’t offer me much out of the box to be excited about, and while Drupal 7 was certainly an improvement, it still didn’t offer much more than a platform to get started with what I had just found out was going to be a long process.  I was building a site for my newly formed curling club, and I had just learned that it was going to be a bit of work to get the site where it needed to be. Even then, I started to fear that I might not be able to teach anyone else how to update the site, which would mean even more work.

I fought the urge, the strong urge, to use a platform that was a little more, how do you say, out-of-the-box ready.  But here we are several years later, and I am glad I stuck with Drupal; the curling club is as well.  The functionality I have been able to add over the years, and the payment and registration integration, just wouldn’t be possible under some of the other available systems.

Drupal has long taken the approach that core should be lightweight and compact, and should give site builders the flexibility to decide what to use on their site.  I applaud this, as a bloated core is just as bad as one that doesn’t accomplish the intended task, but usability has to be kept in mind.  As Drupal’s user base has grown, though, so has the need for more functionality out of the box.

Drupal’s somewhat philosophical shift toward adding key features to core, like CKEditor, is a huge step in the right direction.  Drupal is now a step closer to being ready to run after install, and it positions itself to be used by more site builders and developers by decreasing the amount of work needed to get a simple site up and running.  It shows that you can concede a bit to the smaller players without loading down core to the point where it interferes with enterprise-level projects.  Bravo, Drupal!

What features do you seem to install on every Drupal site?

For more tips like these, follow us on Twitter, LinkedIn, or Google+. You can also contact us directly or request a consultation.

Categories: Elsewhere