Planet Drupal


Code Drop: Writing Custom Mail Backends in Drupal 8

Wed, 18/06/2014 - 21:03

Writing mail backends is probably something you won't do very often but this change notice caught my eye because I've been maintaining my own mail backend for Drupal 7 that handles mail in Simpletest when you're using Mime Mail. Drupal 8 has adopted annotations as a way to describe entities. Blocks, fields, widgets, formatters and more all have their own annotations and now so do mail backends. 

Emails sent using Mime Mail come through in a different format and break the Drupal 7 TestingMailSystem. Mime Mail should probably provide its own testing class, but previously I wrote my own. Let's take a look at what a port of that class might look like in Drupal 8.

Categories: Elsewhere

Get Pantheon Blog: The Great Drupal Multisite Debate

Wed, 18/06/2014 - 19:15

Well, that was a little tense.

Thursday afternoon at DrupalCon Austin, we gathered the most over-attended BoF session I've seen in quite a while for "the Great Multisite Debate", in part sparked by blog posts I've written about how I don't believe this architecture is the future for Drupal. Clearly, I anchored on one end of the spectrum.

On the opposite end from me were Jakub and Kris, team members from Acquia, and Christopher Gervais, a leading member of the Aegir project. Occupying the mushy middle ground was Bryan Ollendyke — leader of the ELMS project — and Frank Cary, now VP of Product at Zivtech, but formerly of Sony Music, where he worked with one of the earliest large multisite builds. Jon Pugh did his best to moderate.

My Position In A Nutshell

I've laid it out in detail before, but to put it bluntly I don't think multisite is the future architecture for Drupal or any other open source website framework. We need something better. There's too much complexity pushed onto developers, and even with a perfect set of deployment tools the risk is still too high.

Multisite paints you into the same architectural corners as shared hosting. People say all the time that "that's not really a multisite thing", and it's true: you can be dragged down by noisy neighbors, application complexity, and security problems for a whole host of reasons. However, multisite more or less forces you into these compromises, the same way choosing low-end shared hosting does.

To wit: you have a lot of different end-users (distinct sites) co-located on a shared set of resources with no partitioning. There's no way around this if you want to run lots of sites on one codebase. If anyone does anything interesting (good or bad) there will be trouble.

That may be the only viable answer if you are on a very tight budget, or it could be an acceptable risk if your sites aren't very important. But it's not the future infrastructure for an Open Source Web. We don't 10x our number of installs by accepting those kinds of tradeoffs.

A Few Caveats

I expected fireworks, but the energy in the room was still more emotional than I was prepared for. After pondering that quite a bit, I think that might be because for many it feels like their work is being belittled when I critique multisite as an architecture. That seems obvious in hindsight, but I don't think I really appreciated it fully.

So: I am not trying to call anyone's work poor or sub-par. I've contributed code to Aegir. I've set up multisites successfully. I'm not saying you're wrong if you do or have done the same.

I am not saying multisite can't work; I'm just saying I've seen it fail. Often. Painfully. Mostly because it has a lot of flaws and risks. But if you're winning with it, more power to you. Please don't be shy about sharing your secrets for success with the wider world.

Likewise, I'm particularly sensitive to the notion that my blog posts could be used by an aggressive sales rep at Adobe or SiteCore or some other proprietary racket to try and warn people off Drupal as a solution, which was Jakub's main concern. That would suck, and I'll have to start salting all my posts with caveats explaining that Adobe and SiteCore represent a far greater risk for an organization looking for a website platform than multisite does.

That Said...

There's a reason that Frank ends up saying things like, "well, if you're going to do something dangerous I want to give you prophylactic advice." There's a reason why many old hands in the Drupal community cast a skeptical eye towards multisite — or as Robert Douglass said in an earlier session, it "should die and burn in hell". There's a reason why Sony Music, which is Frank's main experience with big Multisite, actually moved away from it on their next refit.

I think it's often difficult in Drupal to say that anything is better than anything else, and I think this holds us back. Without the ability to be critical, we won't advance. If we don't advance, we'll ultimately end up being a footnote or less in history.

Just like Drupal core decided to move away from our familiar procedural hook system to object-oriented programming and Symfony2 with the 8.0 release, I think we need to start looking beyond the status quo for solving the problem of running a large number of sites. We have to do better if we want to see wider adoption.

I felt like as a “debate” we ducked this big question. A lot of the discussion felt like a brainstorming session on how to mitigate the pain that comes with multisite, rather than an honest discussion about the architecture itself, and whether or not we deserve something better.


We also glossed over budget and business in this debate a bit. I think it's important to acknowledge that economics play a huge role:

The last point is the one that matters. The first two just add heat to the debate. Vendors are bound to clash over solutions architecture, but we can all agree that any big-picture answer to the problem of operating sites at volume will need to be able to deliver price-points that can at least compete with piling a bunch of sites on a server somewhere.

That’s a challenge. I think it’s something we should take on as a community to think about and work on. I don't think Drupal should double-down on multisite just because it's what we know historically.

This debate will continue, and I’m game. There's a lot more to discuss. My thanks to everyone who represented on the panel, and attended!

Related posts: Much Ado About Drupal Multisite; Why Drupal Multisite Is Not Enterprise Grade; Nobody Ever Goes Back to Multisite
Categories: Elsewhere

Dries Buytaert: Drupal 6 support

Wed, 18/06/2014 - 19:08
Topic: Drupal

The policy of the Drupal community is to support only the current and previous stable versions of Drupal. If we maintain that policy, Drupal 6 support would be dropped the day that Drupal 8 is released. We'd only support Drupal 8 and Drupal 7.

We're changing that policy slightly as there are still around 200K known Drupal 6 sites in the wild, and we want to ensure that sites that wish to move from Drupal 6 to Drupal 8 have a supported window within which to upgrade.

The short version is that Drupal 6 core and modules will transition to unsupported status three months after Drupal 8 is released. A three month extension is not a lot, but continuing to support Drupal 6 is difficult for many reasons, including lack of automated test coverage. This gives Drupal 6 users a few options:

  1. Upgrade to Drupal 7 any time between now and 3 months after Drupal 8.0.0 is released.
  2. Upgrade to Drupal 8 after it is released, but before Drupal 6 support ends. There's already a Drupal 6 to Drupal 8 migration path in core which can be used for testing. Keep in mind, though, that if your site relies on contributed and custom modules, you may need to port the code and write the migration paths yourself if they're not done by the community in time for Drupal 8's release.
  3. Find an organization that will provide extended support for Drupal 6. We hope that organizations that rely on Drupal 6 will step up to help maintain it after community support winds down. The Drupal Security Team will provide a method for companies or individuals to work together in the private security issue queue to continue developing updates, and will provide a reasonable amount of time for companies to provide patches to Drupal 6 security issues that also affect Drupal 7 or Drupal 8.

You can read the details on

Categories: Elsewhere

LightSky: Why Project Managers Need to Know Drush

Wed, 18/06/2014 - 19:03

There is always a big (well, maybe not so big) debate as to how technical a project manager should be. Some agencies hire project managers who are former developers and could complete much of a project themselves if they needed to; others hire project managers with strong leadership and business skills who offer very little in the way of actual development. Both practices work, but they require very different support structures underneath the project manager. The more technical your project manager is, the more flexible an agency can be with the developers underneath them, because those developers can be directly supervised by someone with solid development skills. A project manager with limited technical skills requires a bigger support structure for each team: developers have to be able to answer questions quickly and work with the project manager so that the client gets a fast turnaround. If the project manager has to pass every technical question down the line, the customer experience can suffer, and that is something to be cautious of.

Beyond a working knowledge of Drupal, which I hope is standard, there are two things that every project manager needs to understand, at least a little bit, to be effective and to best serve the customer.

Command Line

You don't have to be a command line wiz here, but you need to understand the basics. This includes changing directories, listing directory contents, and understanding file permissions. You need to be able to navigate through a client's server, check permissions, and do a little basic troubleshooting. From there you can see whether servers are even responding, check credentials given to you by a client, and chase down domain and DNS issues that might arise. The more you understand here, the more likely you are to gather all the information from the client before sending an issue to a developer.


Once you have that basic command line knowledge squared away, you can use Drush, and let me tell you, if you aren't using Drush you don't know what you're missing. At LightSky our project managers are typically the ones who manage our update agreements with clients. They are the ones notified when updates are available, and they are the ones who typically run the updates. Occasionally they will run into a snag, but that is what developers are here for. This not only means that our clients get these updates a bit quicker, but also that our dev time can be freed up for more complex tasks. Our project managers can solve a lot of issues by being able to clear a cache in Drupal, or install and update modules and core. And all of these tasks are far easier to do with Drush than with the standard Drupal interface.

Basic command line skills coupled with a good working knowledge of Drush can take your project management to the next level.  For me the change has been pretty drastic.  If everyone leaves for lunch, and I get a phone call, often I can have a fix started before the developers return, and sometimes a head start makes a big difference to your clients.  

What technical tools do your project managers use on a daily basis? Are they all comfortable at the command line?

For more tips like these, follow us on Twitter, LinkedIn, or Google+. You can also contact us directly or request a consultation.

Categories: Elsewhere

SitePoint PHP Drupal: Building a Drupal 8 Module – Configuration Management and the Service Container

Wed, 18/06/2014 - 18:00

In the previous article on Drupal 8 module development, we’ve looked at creating block types and forms. We’ve seen that blocks are now reusable and how everything we need to do for defining block types happens in one single class. Similarly, form generation functions are also grouped under one class with specific methods performing tasks similar to what we are used to in Drupal 7.

In this tutorial, I will continue where we left off. I will illustrate how we can turn our DemoForm into a form used to store a value through the Drupal 8 configuration system. Following that, we will talk a bit about the service container and dependency injection by way of illustration.

Don’t forget that you can check out this repository if you want to get all the code we write in this tutorial series.

Configuration forms

When we first defined our DemoForm, we extended the FormBase class which is the simplest implementation of the FormInterface. However, Drupal 8 also comes with a ConfigFormBase that provides some additional functionality which makes it very easy to interact with the configuration system.

What we will do now is transform DemoForm into one which will be used to store the email address the user enters. The first thing we should do is replace the extended class with ConfigFormBase (and of course use it):

[code language="php"] use Drupal\Core\Form\ConfigFormBase; class DemoForm extends ConfigFormBase { [/code]

Before we move on to changing other things in the form, let’s understand a bit how simple configuration works in Drupal 8. I say simple because there are also configuration entities that are more complex and that we will not cover today. As it stands now, configuration provided by modules (core or contrib) is stored in YAML files. On enabling a module, this data gets imported into the database (for better performance while working with it). Through the UI we can change this configuration which is then easily exportable to YAML files for deployment across different sites.

A module can provide default configuration in a YAML file located in the config/install folder in the module root directory. The convention for naming this file is to prefix it with the name of the module. So let’s create one called demo.settings.yml. Inside this file, let’s paste the following:

[code] demo: email_address: [/code]

This is a nested structure (like an associative array in PHP). Under the key demo, we have another key|value pair. To access these nested values, we usually use a dot (.): in our case, demo.email_address.
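The dot notation is just a path into the nested structure. Here's a quick illustration in plain JavaScript (the helper name is mine, not a Drupal API):

```javascript
// Resolve a dotted key like 'demo.email_address' against a nested object,
// analogous to how the config system addresses values in the YAML above.
function getNested(config, dottedKey) {
  return dottedKey.split('.').reduce(
    (branch, key) => (branch == null ? undefined : branch[key]),
    config
  );
}

const config = { demo: { email_address: 'admin@example.com' } };
console.log(getNested(config, 'demo.email_address')); // admin@example.com
```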

Once we have this file in place, an important thing you need to remember is that this file gets imported only when the module is installed. So go ahead and reinstall it. And now we can turn back to our form and go through the methods that need adapting one by one.

Continue reading %Building a Drupal 8 Module – Configuration Management and the Service Container%

Categories: Elsewhere

DrupalCon Amsterdam: Amsterdam Session Submissions Overview

Wed, 18/06/2014 - 08:00

On Friday, everyone sent in their ideas for sessions and training at DrupalCon Amsterdam. We had tons of great ideas from lots of smart Drupalers, and got a great number of scholarship requests, too. Since everyone loves data, we decided to whip up some fun charts and graphs to tide you over until the session announcements come out.

That said, we're proud to present you with an overview of the submissions for DrupalCon Amsterdam!

Session Submission Results

Thanks to lots of hard work from our awesome volunteers, we enjoyed record-breaking numbers of session submissions.

2 weeks before submission close: 80
2 days before submission close: 190
Day submission close: 510

DrupalCon Prague submissions: 359
DrupalCon Amsterdam submissions: 510
That's a 42% increase from 2013 to 2014!

Submissions by Track and Experience Level*

Coding and Development: 128
Case Studies: 51
Core Conversations: 21
DevOps: 71
Drupal Business: 89
Frontend: 52
PHP: 15
Site Building: 83

Beginner: 155
Intermediate: 300
Advanced: 59

Training Submissions & Scholarship Submissions

Number of fantastic training proposals that came in: 20
Requested financial aid through grants and scholarships: 83

What comes next?

Announcements will be made in the coming weeks. If you applied for financial aid, submitted a session, or put forth an idea for training, we will notify you by email if your application is accepted. In the meantime, don't forget to book your ticket while the earlybird rate still applies and book your hotel while there are still rooms available.

See you in Amsterdam!

Categories: Elsewhere

Drupal Watchdog: Drupal In Context

Tue, 17/06/2014 - 20:38

For non-programmers like me, Drupal's biggest flaws show up when a site is almost right, but needs “just a few lines of code” to really make it work. Some people learn programming easily; others can't, or don't want to invest the time. Fortunately, there are Drupal tricks to get many of the benefits of programming without having to actually deal with PHP, CSS, or JavaScript. Here are some that'll get you most of the way there.

Refactor Your Problem

Whenever you run into a brick wall, stop, step back, and ask yourself: Does my site really need to have this feature? Could it be done in a different way?

Many times when I do this, I realize that I've had a specific idea in my head as to how something should work, rather than the end goal I'm trying to achieve. Stepping back to look at the site from a visitor's point of view often shows me a simpler way to produce what's needed — even if it doesn't work anything like my original idea.

Use the Megamodules

Most modules perform a specific task; "megamodules" give you ways to manipulate Drupal's inner workings through a simplified visual interface. The best-known is Views, which helps site builders deliver the results of a database query: in essence, it translates your point-and-click directives into SQL. (Views will be part of Drupal 8’s core package.)

But Views isn't the only megamodule out there.

  • Rules lets you create logic that reacts to actions by users;
  • Panels replaces tricky PHP theme templating in some cases;
  • Features packages site elements into a custom module;
  • Sweaver writes CSS for your theme, reflecting changes you make in a visual interface.

Learn What Programmers Do

Even if you don't plan to actually write code, getting to know the tools that programmers use will help you in two ways. First, you'll be able to implement code written by others; second, you'll be able to frame the problem better when you interact with programmers.

Categories: Elsewhere

ThinkShout: Creating an Infinite Scroll Masonry Block Without Views

Tue, 17/06/2014 - 19:00

The Views Infinite Scroll module provides a way to apply infinite scroll to the output of a view, but if you want to apply infinite scroll to custom block content, you're out of luck. I found myself in this position while developing a recently-launched site for The Salmon Project, which I'll use as a loose reference point as I walk you through my solution to applying both Masonry and Infinite Scroll to custom block content.

  1. Creating a block to hold our paged content that can be placed on a page
  2. Generating a paged content array to return as block content
  3. Applying Masonry to the paged content block
  4. Applying Infinite Scroll to the paged content block
  5. Creating an Infinite Scroll trigger

Creating a block to hold paged content

The first step is creating a block to hold our custom paged view that can be placed on a page. To create the block, we'll implement hook_block_info() followed by hook_block_view().

<?php
function my_module_block_info() {
  $block['masonry_content'] = array(
    'info' => 'Masonry content',
  );
  return $block;
}

function my_module_block_view($delta = '') {
  $block = array();
  $block['subject'] = '';
  $block['content'] = my_module_masonry_content();
  return $block;
}

Here, we've defined which function we'll be generating our paged view from; namely my_module_masonry_content.

Generating a paged content array

Within our my_module_masonry_content function, we'll create a paged view of nodes. To do so, we'll use an EntityFieldQuery with the "pager" property, which causes the results of the query to be returned as a pager.

<?php $query->pager(3);

The argument passed to the pager function determines how many results at a time will be returned; this is analogous to setting the number of items to be shown per page in a view – keep this bit in mind, as we'll return to it later when implementing the infinite scroll.
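The pager arithmetic itself is simple. As a back-of-the-envelope illustration in plain JavaScript (the helper name is mine, not part of any Drupal API): with a page size of 3, page n exposes result indices 3n through 3n+2.

```javascript
// Compute which result indices a given pager page exposes.
// pageSize mirrors the argument passed to the pager above.
function pagerSlice(totalItems, pageSize, page) {
  const start = page * pageSize;
  const end = Math.min(start + pageSize, totalItems);
  const indices = [];
  for (let i = start; i < end; i++) {
    indices.push(i);
  }
  return indices;
}

// With 8 matching nodes and a page size of 3:
// page 0 -> [0, 1, 2], page 1 -> [3, 4, 5], page 2 -> [6, 7]
```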

Now we'll add some query conditions and, finally, execute the query. These conditions are, of course, site specific, but I'm including them as an example for thoroughness. For help constructing your EntityFieldQuery query, see the EntityFieldQuery api documentation page.

<?php
$query->fieldCondition('field_example', 'value', array('value1', 'value2'), 'IN');
$query->propertyCondition('status', '1');
$results = $query->execute();

In this example, I've requested all nodes that have a field_example value of value1 or value2 and are published (i.e. node status property is equal to 1).

Once we have our results set, we will need to create a renderable array to return as the block content.

<?php
foreach ($results['node'] as $row) {
  $node = entity_load_single('node', $row->nid);
  $output[] = node_view($node, 'category_term_page');
}

Then we'll apply pager theming to the output by explicitly adding it to the returned renderable array.

<?php $output['pager'] = array('#theme' => 'pager');

We then return our output array and get to the JavaScript…

Applying Masonry to the paged content block

To apply Masonry to the paged block content, we'll need some JavaScript so let's create a new JavaScript file within our module's js directory (js/my_module.js). This file will depend on the Masonry JavaScript library so we'll need to load it in addition to our new, custom JavaScript file.

Loading the required libraries

For optimal performance, we only want to load our JavaScript when the masonry_content block is present. To conditionally load the JavaScript we'll use a hook_block_view_alter().

<?php
function my_module_block_view_alter(&$data, $block) {
  // Only load the libraries if the masonry_content block is present.
  if ($block->module == 'my_module' && $block->delta == 'masonry_content') {
    $module_path = drupal_get_path('module', 'my_module');
    $data['content']['#attached']['js'][] = $module_path . '/js/my_module.js';
    $masonry_path = libraries_get_path('masonry');
    $data['content']['#attached']['js'][] = $masonry_path . '/dist/masonry.pkgd.min.js';
  }
}

Now that we've got our required JavaScripts, let's take a look at how to apply Masonry to our paged content block.

// within my_module.js
var container = $('#my-module-masonry-content');
container.masonry({
  // Masonry options.
  itemSelector: '#my-module-masonry-content article.node'
});

In this case, we're selecting the <div> that has our block id and the node items within that <div>.

Doing this will apply Masonry once to the items present after the initial page load, but since we are going to be loading more items via the pager, we'll need to re-apply Masonry after those new items are loaded. We haven't defined the "change" action yet, but will later when implementing the infinite scroll JavaScript.

// Necessary to apply Masonry to new items pulled in by the infinite scroll.
container.bind('change', function() {
  container.masonry('reloadItems');
  container.masonry();
});

Now that we've got Masonry applied to our block content, let's move on to getting the infinite scroll behavior in place.

Applying Infinite Scroll to the paged content block

To pull new items into our content block (to which Masonry is being applied) we'll leverage the Autopager library. This means we'll need to add Autopager to the list of JavaScript to be loaded conditionally when our block is present.

Loading more required libraries

Again, we'll use drupal_get_path() and libraries_get_path() to retrieve more required JavaScript from within our hook_block_view_alter().

<?php
$autopager_path = libraries_get_path('autopager');
$data['content']['#attached']['js'][] = $autopager_path . '/jquery.autopager-1.0.0.js';

Now that we've got Autopager loaded alongside our module's JavaScript, let's apply the infinite scroll…

The first thing we need to do is define the parameters that will be passed to Autopager.

// Make sure that the Autopager plugin is loaded.
if ($.autopager) {
  // Define the Autopager parameters.
  var content_selector = '#my-module-masonry-content';
  var items_selector = content_selector + ' article.node';
  var next_selector = '.pager-next a';
  var pager_selector = '.pager';

Notice that content_selector matches the selector we used to apply Masonry to our block content. This is because Autopager will, behind the scenes, retrieve more content similar to what's already on the page when the "next" link is followed.

The next_selector and pager_selector selectors target the "next" and "1, 2, 3…" links our pager exposes and, not incidentally, are what Autopager uses to retrieve the next set of content. Recall from above that our query returns 3 nodes at a time, so the "next" link will cause 3 more nodes to be shown.

Though necessary to retrieve more content, we don't want to see these pager links so let's hide them.


…and now create our Autopager handler.

var handle = $.autopager({
  autoLoad: false,
  appendTo: content_selector,
  content: items_selector,
  link: next_selector,
  load: function() {
    $(content_selector).trigger('change');
  }
});

The $(content_selector).trigger('change') bit is a key component of this snippet because the "change" action is what we are using to apply Masonry to new items added to our block (see the container.bind('change'… bit above).

Triggering the infinite scroll function

The Autopager handler we just defined acts as our gas with respect to the infinite scroll action, but we also need a brake. The following snippet, taken from views_infinite_scroll.js uses some fancy math to determine when the user has hit page bottom and only calls handle.autopager('load') when this is the case, effectively acting as the brake.

// Trigger autoload if content height is less than doc height already.
var prev_content_height = $(content_selector).height();
do {
  var last = $(items_selector).filter(':last');
  if (last.offset().top + last.height() < $(document).scrollTop() + $(window).height()) {
    handle.autopager('load');
  }
  else {
    break;
  }
} while ($(content_selector).height() > prev_content_height);
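Stripped of jQuery, the comparison at the heart of that loop reduces to a single inequality. A pure-function version (the helper name is mine, not from views_infinite_scroll):

```javascript
// True when the bottom edge of the last item sits above the bottom of the
// viewport, i.e. there is empty space below the content and more should load.
function shouldLoadMore(lastItemTop, lastItemHeight, scrollTop, windowHeight) {
  return lastItemTop + lastItemHeight < scrollTop + windowHeight;
}

// Last item ends at 900px while the viewport bottom sits at 1000px: load more.
console.log(shouldLoadMore(800, 100, 200, 800)); // true
```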

You'll notice on The Salmon Project site I am not using infinite scroll; instead I opted for a "View more" button to trigger the autopager('load') action and some logic in the function bound to the "change" action to hide said button.

Regardless of the method you choose as a trigger, all the method needs to do is call Autopager's load function (analogous to hitting the hidden "next" pager link) to load more content.

And there you have it, an infinite scroll masonry block that loads 3 more nodes each time the user hits page bottom without the use of the Views module.

Categories: Elsewhere

Acquia: Why Drupal? Because It Will Help You Win

Tue, 17/06/2014 - 18:34

How will Drupal help you win? How will it help you do better? As business itself becomes digital, the benefits that a widely adopted, open source technology solution like Drupal can bring to organizations today are manifold, but they can be boiled down to a few salient points.

The key ways Drupal will help you win:

Categories: Elsewhere

Drupal core announcements: Drupal core security release window on Wednesday, June 18

Tue, 17/06/2014 - 16:57
Start: 2014-06-18 (All day) America/New_York
Sprint Organizers: David_Rothstein

The monthly security release window for Drupal 6 and Drupal 7 core will take place on Wednesday, June 18.

This does not mean that a Drupal core security release will necessarily take place on that date for either the Drupal 6 or Drupal 7 branches, only that you should prepare to look out for one (and be ready to update your Drupal sites in the event that the Drupal security team decides to make a release).

There will be no bug fix release on this date; the next window for a Drupal core bug fix release is Wednesday, July 2.

For more information on Drupal core release windows, see the documentation on release timing and security releases, and the discussion that led to this policy being implemented.

Categories: Elsewhere

Mediacurrent: The Real Value of Drupalcon

Tue, 17/06/2014 - 16:50

I bet most people who have ever attended a DrupalCon would agree that it takes a full week to process (and recoup from) all the community synergy and new information consumed during this epic annual event. 2014's DrupalCon in Austin, TX was jam-packed with awesome, boasting the largest DrupalCon yet and also one of the most diverse. There were nearly 3,500 people from 60 countries in attendance! The Austin Convention Center was the perfect venue, and downtown Austin became the Drupal community's stomping grounds all week as we filled restaurants, bars and hotels with Drupal chic.

Categories: Elsewhere

Featured Case Studies: Campagna Center Responsive, Ecommerce Website

Tue, 17/06/2014 - 16:50
Completed Drupal site or project URL:

The Campagna Center is a non-profit organization located in Alexandria, Virginia centered on delivering superior academic and social development programs for individuals of all ages to inspire a commitment to learning and achievement. As with many non-profits, their website is an integral platform for keeping donors, volunteer members, and program attendees engaged and informed.

The company behind the development and design is New Target, Inc. based out of Alexandria, Virginia. New Target is a full service web company frequently partnering with associations, non-profits, and mission driven organizations to inspire and engage audiences on the web.

Key modules/theme/distribution used: Omega, Respond.js, Views Nivo Slider, Superfish, Redirect, Mollom, Google Analytics, Block reference, Commerce, Shortcode
Organizations involved: New Target, Inc.
Team members: Brian Newsome, hayliej, castedo, hak55, pgrujic, paige.elena
Categories: Elsewhere

Chromatic: Converting Drupal Text Formats with Pandoc

Tue, 17/06/2014 - 16:36

Switching the default text format of a field is easy. Manually converting existing content to a different input format is not. What about migrating thousands of nodes to use a different input format? That isn't anyone's idea of fun!

For example, let's say that all of a site's content is currently written in markdown. However, the new editor wants not only to write all future content in the textile format, but also wants all previous content converted to textile for a consistent editing experience. Or perhaps you are migrating content from a site that was written in the MediaWiki format, but standard HTML is the desired input format for the new site and all of the content needs to be converted. Either way, a lot of tedious work is ahead if an automated solution is not found.

Thankfully there is an amazing command line utility called Pandoc that will help us do just that. Pandoc converts text from one input syntax to another, freeing you to do less mind numbing activities with your time. Let's take a look at how Pandoc can integrate with Drupal to allow you to migrate your content from one input format to another with ease.
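Before wiring it into Drupal, it helps to see the bare command at work. A minimal sketch, assuming pandoc is installed and on your PATH; the sample text and the markdown-to-textile pairing are just illustrations:

```shell
# Convert a snippet of markdown to textile on stdin/stdout,
# the same pipeline the Drupal wrapper shells out to.
echo 'Some *emphasised* text.' | pandoc -f markdown -t textile
```

Swap the -f and -t values for any pair of formats your pandoc build supports (markdown, textile, mediawiki, html and more).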

After installing it on your environment(s), below is the basic function that provides Pandoc functionality to Drupal. It accepts a string of text to convert, a from format, and a to format. It then returns the re-formatted text. It's that simple.

/**
 * Convert text from one format to another.
 *
 * @param $text
 *   The string of text to convert.
 * @param $from
 *   The current format of the text.
 * @param $to
 *   The format to convert the text to.
 *
 * @return
 *   The re-formatted text, or FALSE on failure.
 */
function text_format_converter_convert_text($text, $from, $to) {
  // Create the command. The format names should be trusted values;
  // escape them if they ever come from user input.
  $command = sprintf('pandoc -f %s -t %s --normalize', $from, $to);
  // Build the settings.
  $descriptorspec = array(
    // Create the stdin as a pipe.
    0 => array("pipe", "r"),
    // Create the stdout as a pipe.
    1 => array("pipe", "w"),
  );
  // Set some command settings.
  $cwd = getcwd();
  $env = array();
  // Create the process.
  $process = proc_open($command, $descriptorspec, $pipes, $cwd, $env);
  // Verify that the process was created successfully.
  if (is_resource($process)) {
    // Write the text to stdin, then close it so pandoc sees EOF.
    fwrite($pipes[0], $text);
    fclose($pipes[0]);
    // Get stdout stream content.
    $text_converted = stream_get_contents($pipes[1]);
    fclose($pipes[1]);
    // Close the process.
    $return_value = proc_close($process);
    // A valid response was returned.
    if ($text_converted) {
      return $text_converted;
    }
  }
  // Invalid response returned.
  return FALSE;
}
We've written a barebones module around that function that makes our conversions much easier. It has a basic administration page that accepts from and to formats, as well as the node type to act upon. It will then run that conversion on the Body field of every node of that type. It should be noted that this module makes no attempt to adjust the input format settings or to ensure that the modules required for parsing the new/old format are even installed on the site. So treat this module as a migration tool, not a seamless, production-ready solution!

Pandoc has quite a few features and options, so check out the documentation to see how it will best help you. You can also see the powers of Pandoc in action with this online demo. Let us know if you use our module, and as always, test any text conversions in a development environment before doing so on a live site! Please note: neither CHROMATIC nor I bear any liability for this module's usage.

Catégories: Elsewhere

CTI Digital: Creating and using a public/private SSH key-pair in Mac OS X 10.9 Mavericks

mar, 17/06/2014 - 16:34
In the following article, we’re going to run through the process of creating a public/private SSH key-pair in OS X 10.9. Once this is done, we’ll configure our GitHub account to use the public key, create a new repository and finally pull this repository down onto our machine via SSH.

Before setting up an SSH key on our system we first need to install Git. If you’ve already installed Git, please proceed to the next section; otherwise let’s get started.

Installation and configuration of Git

To install Git on Mac OS X 10.9, navigate to the following URL ( and click the “Download for Mac” button.

Fig 1: Download options available at

Once the *.dmg file has downloaded, double-click the file to mount it and, in the new Finder window that pops up, double-click on the file “git-1.9.2-intel-universal-snow-leopard.pkg” (the file will likely have changed name somewhat by the time you read this article, but aside from the version number, it should still be quite similar).

If you get the error highlighted in “Fig 2” when trying to open the file, simply right-click on the *.pkg file and click “Open”. You should then see a new dialogue window similar to the one displayed in “Fig 3”, which will allow you to continue on to the installation process.

Fig 2: The error an end-user will see when trying to open a non-identified file if the “Allow apps downloaded from” section of “Security & Privacy” is set to “Mac App Store and identified developers” within “System Preferences”.

Fig 3: When right-clicking the *.pkg file and clicking “Open” the end-user is given a soft warning but now, unlike “Fig 2”, we're able to bypass this dialogue by clicking “Open”.
The installation process for Git is fairly self-explanatory, so I won’t go into too much detail. In a nutshell, you will be asked to install Git for all users of the computer (I suggest leaving this at its default value) and you’ll be asked if you want to change the location of the installer (unless you have good reason to change the Git install location, this should be left at the default value).

Finally, as part of the installation process you’ll be prompted to enter your system password to allow the installer to continue, as shown in “Fig 4”. Type your password and click “Install Software”. If all goes well, at the end of the installation process you should see the message “The installation was successful.”. At this stage you can click “Close” to close the installer.

Fig 4: Prior to installation, the Git installer will require you to enter your system password to allow it to write files to the specified locations.

After the Git installation process we need to open a new instance of the Terminal application. This can be accomplished by opening the Finder, clicking the “Applications” shortcut in the sidebar, scrolling to the bottom of the applications listing in the main window, double-clicking “Utilities” and finally double-clicking on “Terminal”.

Pro tip: A much quicker way of accessing the Terminal is by pressing “Cmd+Space” to bring up Spotlight, typing “Terminal” and hitting the enter key. Once you become familiar with Spotlight it becomes indispensable!

Once the Terminal window is open, type “git --version” and hit enter. If you’re running a fresh install of Mac OS X 10.9, at this stage you will likely be shown a message telling you that Developer Tools was not found, and a popup will appear requesting that you install the tools. Click “Install” on the first dialogue window and, when the next popup is displayed, click “Agree”.

Fig 5: The message most users will receive with a fresh install of OS X 10.9 when typing “git --version” into the terminal.

After the installation of Developer Tools, restart the Terminal application and type the command “git --version” followed by hitting enter. This time you should see the version number of the Git application installed.

Fig 6: Terminal displaying the version number of the installed Git application.

Finally, for the installation and configuration of Git we’re going to configure some user-level settings (specifically your name and email address). These configuration settings will be stored in your home directory in a file named “.gitconfig”.

To configure these settings, type the following into the terminal (replacing my name and email address with your own, obviously!):

git config --global user.name “Craig Perks”
git config --global user.email <your email address>

Once done, type “git config --list” and you should see a list of user configuration settings analogous to those shown in “Fig 7”.

Fig 7: A Terminal instance showing the configuration settings for the logged-in user.

Now that we have Git successfully installed, in the next section let’s create our public/private key-pair and add them to our GitHub account.

Creating an SSH public/private key-pair!

In the Terminal, let’s ensure we’re in our home directory. We can navigate to it by typing the following command in the Terminal:

cd ~/

From here we want to create a folder to store our SSH keys in. My preference here is to store them in a hidden folder called ‘ssh’.
Pro tip: By prefixing a folder or a file name with a dot, you’re essentially saying to the system “Hide this” by default.

To create our SSH directory, type the following command into the Terminal window:

mkdir .ssh

Next, type the command “cd .ssh” and hit enter, followed by the command “pwd”. At this point you should see that you’ve now successfully navigated into the “.ssh” folder.

Fig 8: By typing “pwd” into the Terminal we’re shown a literal path to our present working directory, which as displayed is /Users/<username>/.ssh.

Now, let’s create our public/private key-pair. Type “ssh-keygen” into the Terminal and hit enter. At this point you’ll be asked to enter a name for your public/private key-pair. This name can be anything, but for this tutorial I’ll use my first name with a suffix of _rsa.

Fig 9: Creation of a public/private key-pair with the name “craig_rsa”.

The creation of a passphrase is an optional step, but a recommended one. Enter a passphrase (a short password of your choosing), hit enter and enter the same passphrase again. Once your public/private key-pair has been generated, you’ll see a message similar to the one highlighted in “Fig 10”.

Fig 10: The message shown to an end-user upon successful creation of a public/private key-pair.

Now that we have a public/private key-pair, we want to add our newly created key to the ssh-agent. This can be achieved by typing the following command (remembering to amend the private key file name with your own file):

ssh-add -K ~/.ssh/craig_rsa

If you created a passphrase in the previous step, you’ll be prompted to enter your passphrase now. If you successfully add your key to the agent you’ll see a message similar to the following: “Identity added: /Users/craigperks/.ssh/craig_rsa (/Users/craigperks/.ssh/craig_rsa)”.

Once your key is added to the ssh-agent, type the command “ssh-add -l” into the Terminal and you’ll see it displayed in the list of known keys.
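The same key generation can also be done non-interactively, which is handy for scripting. A sketch using throwaway file names; the empty passphrase (-N '') is for demonstration only, and the interactive prompt above rightly encourages a real one:

```shell
# Remove any leftover demo keys so ssh-keygen doesn't prompt to overwrite.
rm -f /tmp/demo_rsa /tmp/demo_rsa.pub

# -t rsa: key type; -f: output file name; -N '': empty passphrase (demo only).
ssh-keygen -t rsa -b 2048 -f /tmp/demo_rsa -N '' -q

# The private key and its .pub counterpart now sit side by side.
ls -l /tmp/demo_rsa /tmp/demo_rsa.pub
```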
Fig 11: Our newly created key listed in the ssh-agent.

Now that we have our public/private key-pair successfully created, let’s add our public key to our GitHub account, create a repository and clone the repository.

Creating a repository on GitHub and cloning this onto our machine.

I’m not going to go through the GitHub registration in this guide. If you haven’t already done so, register an account on and log in.

Before we do anything on the GitHub website, we want to copy our public key. To do so, type the following command in the Terminal window (again substituting “craig_rsa” for whatever name you decided to give your key-pair):

pbcopy < ~/.ssh/craig_rsa.pub

Once done, navigate over to GitHub and click the “Account Settings” icon in the toolbar as pictured.

Fig 12: The “Account Settings” icon as shown to logged-in GitHub users.

On the “Account Settings” page, “SSH keys” should be listed in the left-hand sidebar. Click it, and on the next page that loads click “Add SSH key”.

Fig 13: The “Add SSH key” button, which allows you to add public keys to your GitHub account.

On the next page, give your key a name and paste the contents of your key (that we previously copied with the pbcopy command) into the “Key” field.

Note: Although I’m showing the contents of a public key here, it’s a dummy key and will be deleted upon completion of this guide. You should only share your public key with trusted sources.

Fig 14: Form displayed to GitHub account holders when adding a new key to the site.

Now that we have our public key loaded into GitHub, let’s create a new repository by clicking the “+” icon displayed next to our username (located in the top-right of the toolbar when logged in). From the menu that pops up, click “New repository” and you’ll be directed to

From here, give the repository a name of “test” and ensure “Initialize this repository with a README” is checked.

Fig 15: Page displayed to GitHub account holders when creating a new repository.
Finally, click the “Create repository” button. In the right-hand sidebar that is displayed on your newly created repository, “SSH clone URL” should be visible.

Fig 16: SSH clone URL link, which allows users to clone the Git repository.

Click the “copy to clipboard” icon under “SSH clone URL” and return to the Terminal application.

Type the command “cd ~/Desktop” into the Terminal window and hit enter. Now that we’re in the Desktop folder in the Terminal, type the command “mkdir git” and hit enter. If you go to your Mac OS X desktop at this point you’ll see that a folder called “git” has been created.

Back in the Terminal window, type “cd git” to move into this directory. Finally, type “git clone” followed by pasting the URL copied from the GitHub repository’s “SSH clone URL” into the Terminal window (for me this would be: git clone Hit enter when you’re ready and the repository will begin to clone.

If you’ve never cloned a repository from GitHub before, you may receive the message “The authenticity of the host ‘ (’ can’t be established”. To continue, type “yes” and hit enter, and will be added to the list of known hosts.

Finally, once the cloning is complete, type “cd test” to navigate into the newly created repository directory and type “ls -la” to display a listing of the folder (including hidden files). If you see README.md listed, you’ve just successfully cloned your Git repository!

Fig 17: Our successfully cloned Git repository displaying its contents.

If you spot an error in this tutorial, or have any questions, please feel free to get in touch with me on Twitter at @craigperks.
Catégories: Elsewhere

Advomatic: Fajitas, Front End Meets Design, and Remembering to Shower: A DrupalCon Recap

mar, 17/06/2014 - 16:27

Like many of you, the AdvoTeam hit Austin a couple of weeks ago for DrupalCon 2014. Now that we’ve had some time to digest all the knowledge dropped (pun intended), we’re sharing our favorite DrupalCon takeaways. We’ve even included links to the DrupalCon sessions, so you can share in the joy.


Amanda, Front-End Developer:

Design and front end are figuring out how to fit together. Do designers need to know how to code? Should they really uninstall Photoshop? And while front-enders are benefiting from all the dev work coming down the pipe, we’re a bit overloaded waiting to see what emerges when the dust settles.

Despite our gripes, it is pretty satisfying that our teams now recognize that the front-end is infinitely more complex than it was just a few years ago.

Lastly, I was blown away that the conference attendance was 20% women! For me, that’s an indication that the community is doing something right in terms of attracting women in ways that other software communities don’t. Kudos!

Monica-Lisa, Director of Web Development

Loved the session on running a shop of remote workers. The big takeaways: Be in constant, positive, fun communication with one another. Find the best tools to keep in touch. If you can’t work all the same hours, choose a few hours a day, every day to overlap. Have a lot of trust in your people. And of course, don’t forget to take a shower every once in a while.

Dave, Web Development Technical Manager

This was one of the best DrupalCons I’ve been to. Top of the list of sessions: Adam Edgerton’s talk on scaling a dev shop. This year was also one of the best DriesNotes (keynote by Dries Buytaert, founder and lead developer of Drupal.)

And in past years, I’ve spent a lot of time hanging out with anyone I bumped into. But this year, I spent almost all my time with the AdvoTeam: going biking and swimming. We even went to Monica’s mom’s house for fajitas one night.

Jim, Senior Web Developer

The Core Sprint was inspiring, because everyone was getting the help they needed while also giving help to others. Everyone knew different things, so as a group, we were all able to share our collective knowledge to get people set up on Drupal 8, review patches, and to commit new ones.

Jack, Front End Developer

Once again, Drupalcon has shown me that it's not safe (or fun) to get comfortable.  The tools we use to make our work go faster and smoother are constantly changing.  What’s all the rage this year will probably be obsolete next year.  Don't fall in love with any one way of doing things.

Front-end development and theming has never felt more "sink or swim", and that's probably a good thing.  However, as things get more and more complicated, the single front-end developer that knows everything becomes more of a mythological creature.  As new worlds of specialization open up, it becomes more important to have new specialists available.

Lastly, it was awesome to get some face time with the AdvoTeam. It's good for the remote team's morale, and also nice to be reminded you work with other human beings who have things to talk about besides technical Drupal talk.

Did you make it to DrupalCon? What sessions did we miss?


Catégories: Elsewhere

Drupalize.Me: Drupal 8 Survey Insights

mar, 17/06/2014 - 15:30

Last month we asked the Drupal community a few questions. We received 243 responses to our survey, and we'd like to share some of the information. While we're not making scientific generalizations from this data, it is an interesting picture of our community nonetheless. A big thank you to everyone who participated in the survey.

Here are 4 things we learned:

Catégories: Elsewhere

Open Source Training: Building a Business Directory With Drupal

mar, 17/06/2014 - 14:43

Over the last couple of weeks, several different OSTraining members have asked me about creating a directory in Drupal.

I'm going to recommend a 4-step process for creating a basic directory.

Using default Drupal, plus the Display Suite and the Search API modules, we can create almost any type of directory.

Catégories: Elsewhere

Liran Tal's Enginx: Drupal Performance Tuning for Better Database Utilization – Introduction

mar, 17/06/2014 - 08:13
This entry is part 1 of 1 in the series Drupal Performance Tuning for Better Database Utilization

Drupal is a great CMS or CMF, whatever your take on it, but it can definitely grow up to be a resource hog, with all of those contributed modules implementing hooks to no avail. It is even worse when developers aren’t always performance-oriented (or security-oriented, god save us all) and this can (unknowingly) take its toll on your web application's performance.

Drupal performance tuning has seen its share of presentation decks, tutorials, and even dedicated books such as PacktPub’s Drupal 6 Performance Tips, but it seems to be a never-ending task to get great performance, so here are some thoughts on where you should start looking.


Checklist for glancing further into Drupal’s rabbit hole and getting insights on tuning your web application for better performance:

  1. Enable the MySQL slow query log to trace all the queries which take a long time (usually >1 second is enough; with later versions of MySQL, or compliant databases like Percona Server or MariaDB, you can also specify milliseconds for the slow query log)
  2. Enable the MySQL slow query log to also log any queries without indexes
  3. Make sure to review all of those logged queries with EXPLAIN to figure out which queries can be better constructed to employ good use of indexes. Where indexes are missing, it’s worth reviewing whether the database would benefit from modifying existing indexes (without breaking older queries)
  4. Use percona-toolkit to review outstanding queries
  5. Use New Relic’s PHP server-side engine, which can tune into your web application and provide great analysis of function call time, wall time, and overall execution pipelines. While it’s not a must, I’ve personally experienced it and it’s a great SaaS offering for an immediate solution, without needing to install alternatives like XHProf or Webgrind.
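Items 1 and 2 of the checklist above can be captured directly in the MySQL configuration. A sketch of the relevant my.cnf settings; the log file path is an example, and option spellings should be verified against your server version's documentation:

```ini
[mysqld]
# Item 1: log statements slower than one second (fractional values are
# accepted on MySQL 5.1.21+, Percona Server and MariaDB).
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time     = 1

# Item 2: also log queries that use no index at all.
log_queries_not_using_indexes = 1
```

Restart the server (or use the matching SET GLOBAL variables at runtime) for the settings to take effect.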



The post Drupal Performance Tuning for Better Database Utilization – Introduction appeared first on Liran Tal's Enginx.

Catégories: Elsewhere

AGLOBALWAY: Drupal to Excel with Views

lun, 16/06/2014 - 23:06

Every now and then we need to export a bunch of content from our Drupal site into a spreadsheet. This can easily be accomplished through Views with modules like Views Excel Export, which allows you to add a new display for your view. You can also add it as an attachment to your existing view, creating a button: click, and you download a spreadsheet (or CSV; there are options!) just like that.

Modules like this one work really well if your data structure is straightforward and there is no need to format your spreadsheet. What happens if you want it to look a certain way, or add borders to columns or rows? What if your view is referencing entities? Suddenly Views Excel Export doesn't quite cut it anymore.

Enter PHP Excel. While this module has no UI to speak of, it does add the PHPExcel library to the libraries folder of your Drupal installation. The PHPExcel library gives you a ton of functions that allow you to format an Excel spreadsheet, as well as write to it directly.

Let's say we wish to output our "Events" view to Excel. By creating our own custom module, we can call our view programmatically after referencing the PHPExcel library:

function my_custom_spreadsheet_view() {
  // Load our "Events" view by machine name and run it so its
  // results are populated.
  $view = views_get_view('events');
  $view->execute();


Rather than calling , we can use to find and extract the exact information we want. Assign your data to variables so that you can loop through your events and write them to Excel in the format you like.

// The view's result rows, available once the view has been executed.
$events = $view->result;
// Start writing to row 5 of your spreadsheet.
$rowID = 5;
foreach ($events as $event) {
  // Do magic with your data. $sheet is your PHPExcel worksheet, e.g.
  // $objPHPExcel->getActiveSheet(), and $data, $data1, $data2 are
  // values you pull out of $event.
  $sheet->setCellValue('A' . $rowID, $data)
    ->setCellValue('B' . $rowID, $data1)
    ->setCellValue('C' . $rowID, $data2);
  // Move to the next row for the next event.
  $rowID++;
}
PHP Excel comes with a number of examples, simple and complex. Use these to learn how to format and generate your spreadsheet when you call your custom function.

Tags: drupal, drupal planet
Catégories: Elsewhere