Planet Drupal


Phase2: Your Frontend Methodology is All of Them

Fri, 17/01/2014 - 16:34

Atomic Design, StyleTiles, OOCSS, BEM, SMACSS, Compass – Combining the latest documented methodologies into a unified approach to component-based design/development

This is the first of a blog post series about teasing sense from the morass of information churning around modern frontend development and design. As a modern development team, you need a strategy. You need guidelines and best practices and defined design patterns. You need a unified language that your team all shares. This blog series starts with an anecdote about the humble <li> element.


A few years ago, my team of five frontend developers was tasked with quickly breaking apart comps for the landing page of a straightforward site design. We had to move fast, and we had to work in parallel without stepping on each other’s toes. Each person was given a different component of the front page – the main menu, the news post listings (complete with action links), the sidebar blocks, the footer menu, the random share widgets slowly asphyxiating the last vestiges of white space and page render performance …

This was a blog-format Drupal site, which meant a lot of one design pattern:

You can’t have a modern, information-focused design without lists, either vertical or horizontal. Drupal and inline lists are the peanut butter and jelly of frontend development: you find them together everywhere.

Each of us had to style one or more inline lists in our assigned components. When we regrouped the next day to merge our work together, something odd happened. None of our lists played well together: we’d each styled something as simple as an inline list completely differently. I used display: inline-block; with the IE7 zoom hack. Someone else floated everything left with a clear on the parent element. Someone else chose display: inline. Someone else used inline-block, but without the IE7 inclusion (which we had to suicidally support at the time). The fifth frontend dev panicked and just YOLO’d out a handful of position: absolute declarations, spaced equidistantly.

We had no unified approach to such a common design pattern as the lowly inline list. It was a mess, and the different implementations were “leaking” into other aspects of the site build. There were no expectations of behavior: How does this technique behave on IE8? How does this technique handle overflowing content? Why does this technique have a higher z-index than our overlays?! We needed to approach these patterns sanely and consistently as a team.


Sass and Compass changed everything. We weren’t using preprocessors until 2010, but when they hit, we truly started to work as a team. We finally had that thing that managers want their developers to have but don’t quite know how to instill, because it’s really hard: we had a PROCESS.

It goes like this: Compass is essentially a big collection of really useful Sass mixins (more on Sass later). That is, Compass makes writing a lot of tedious, common CSS structures fast and, more importantly, standardized. In our case, Compass provides the sublime little inline-block-list mixin.

// Apply to the parent ul
ul.nav {
  @include inline-block-list;
}

That little “@include” is applying the following behind the scenes:

@mixin inline-block-list($padding: false) {
  @include inline-block-list-container; // Wrapper behavior is handled here for us
  li {
    @include inline-block-list-item($padding); // See below for all the considerations handled for us
  }
}

Which is actually applying the more pertinent and robust:

@mixin inline-block-list-item($padding: false) {
  @include no-bullet; // Because we always forget this
  @include inline-block; // Takes into consideration things like what to include for IE
  white-space: nowrap; // We also forget about this
  // Optional padding we can pass in
  @if $padding {
    padding: {
      left: $padding;
      right: $padding;
    };
  }
}

There’s a lot going on here, but to summarize: we get an inline list that is self-clearing and laid out horizontally, with its bullets removed, optional padding, and sane default styles.
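For reference, the mixin chain compiles to roughly the following CSS. This is a sketch; the exact output depends on your Compass version and legacy-IE support settings:

```css
/* Approximate output of @include inline-block-list on ul.nav */
ul.nav {
  margin: 0;
  padding: 0;
  border: 0;
  overflow: hidden;      /* self-clearing container */
  *zoom: 1;              /* legacy-IE hasLayout trigger */
}
ul.nav li {
  list-style-image: none; /* no-bullet */
  list-style-type: none;
  margin-left: 0;
  display: inline-block;  /* items flow horizontally */
  *display: inline;       /* IE7 fallback */
  *vertical-align: auto;
  *zoom: 1;
  white-space: nowrap;
}
```

Every team member gets this same output from the same one-line @include, which is the whole point.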

The second best part of all this is that we wrote NONE of it. The best part is that my whole team knows exactly what to expect in this scenario. No more cowboy-code to solve the same problem five different ways.

Compass allowed us to define a few important rules within our process. One of those rules has survived 50+ site-launches, two parent companies, and three MacBooks:

If Compass has a mixin for a component, USE IT.

Given that there are dozens of Compass mixins for every facet of CSS, our modern code is vastly more standardized. And vastly more filled with @includes!

Compass is just a tool in the toolchain

The above illustrates implementation of a framework to give us quick answers when staring at a new project for the first time while panicking about where to start. But something like Compass comes way later in the process. There are other, well-documented frameworks for each step in the design/development process. Each helps make sense of the terrifying possibilities at the start of a large project.

There exist hundreds of acronym-laden approaches to frontend development. The specific frameworks you choose don’t matter so much as agreeing on, understanding, and communicating about them. That is to say:

Every step in your design & development process should be accompanied by a well documented framework to solve problems and get work done.

There just happen to be many of these frameworks already available all over the internet, which we’ll link together into a coherent toolchain in parts 2-3 of this blog series. I will cover why frameworks like Atomic Design, StyleTiles, OOCSS, BEM, and SMACSS rock. We’ll combine a few mental models that can be applied to any frontend process to gain tangible benefits almost immediately. In the meantime, check out Dave Ruse’s blog post about responsive wireframes using Foundation.

Categories: Elsewhere

Lullabot: Global Sprint Days

Fri, 17/01/2014 - 15:01

In this week's episode Addi is joined by Cathy Theys (yesct) and Florian Weber (webflo) to talk about the upcoming Global Sprint Days. This world-wide sprint is happening next weekend, January 25–26, 2014. We talk about the history behind it, what all is actually going on, and of course, how you can get involved. It's a great opportunity to meet other local Drupalers in your community and learn so much about Drupal.

Categories: Elsewhere

undpaul: Managing obsolete modules

Fri, 17/01/2014 - 10:35

In every Drupal project, it is crucial for your application to be fully defined in code at all times and in every state. With configuration management built on Git, Features, Drush-based shell scripts, and Master, it is possible to represent your whole Drupal 7 application's configuration state in a traceable and reproducible way.

Watch your Drupal modules

This way your team has a good tool to manage the development of the application and even manage the state of the Drupal modules within it. Especially in large Drupal projects, it is important to know which modules you are dealing with, which to enable, and which you can disable. Even already-disabled Drupal modules influence your system and the overall development experience of your team.

For example, if you provide a large set of modules in your project, there will be a lot of noise for the developers when working on the sprint issues (e.g. a developer might find search results in the IDE for a code snippet in modules that are not meant to be enabled anymore). Therefore it is important that all modules that should not be enabled are really disabled and uninstalled from the Drupal project. And, if possible, the modules should then be removed from disk. As we are (hopefully) always dealing with a version control system like Git, we are safe to remove those modules without losing any information about the project's evolution.
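Assuming a standard Drupal 7 site managed with Drush and Git, the cleanup for a single obsolete module looks roughly like this (the module name and path are placeholders):

```shell
# Disable the module so Drupal stops loading its code (-y skips confirmation).
drush pm-disable example_module -y

# Uninstall it so its schema and variables are removed from the database.
drush pm-uninstall example_module -y

# Finally, delete the code from disk; Git preserves the full history.
git rm -r sites/all/modules/contrib/example_module
git commit -m "Remove obsolete example_module"
```

Disabling alone is not enough: only pm-uninstall runs the module's uninstall hooks and clears its database footprint.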

Remove modules with Master

In projects that have grown for several months while no-one had a look at the modules directory, it will be very painful and time-intensive to remove those obsolete modules. To ease such tasks in the development workflow, we created the Master module some time ago. With the latest release of Master, we have also introduced a new command to assist you in finding the modules you really don't need anymore in your directory.

Tags: Drupal 7, configuration management, master, drush, drupal planet english

Categories: Elsewhere

Digett: Pairing Static Websites with CMS

Thu, 16/01/2014 - 14:00

I’m going out on a limb by saying we may be on the cusp of a revolution as it relates to how we build, deploy and manage websites. That’s a strong claim, but witness the new Reese’s candy of web technologies: Static files combined with the content management system. Separately each has a certain charming flavor along with any number of nutritional deficiencies. Together, though, they create a compelling recipe for the future of the web.


Categories: Elsewhere

Why I use Display Suite, Entity Reference and Bootstrap

Thu, 16/01/2014 - 12:00

Recently I was building a Drupal site with Display Suite, Entity Reference and Bootstrap.

In this blog I want to share why that combination and some tricks I used to get a great frontend and editor-UX.

Categories: Elsewhere

Get Pantheon Blog: Drupal at 13: All Grown Up?

Wed, 15/01/2014 - 23:49

I don't know if there's a standard rate to convert software age into human terms, à la "dog years". Probably not, given that some projects seem born with beards of grey, while others appear determined never to grow up. Still, it's pleasantly ironic to me that Drupal's 13th birthday — the beginning of the "teen" years — may mark the onset of true maturity.

Drupal 7 to Drupal 8 is clearly a technical sea change: taking responsibility for "must have" features in core, architecturally separating configuration from content, supporting a real scaffolding toolkit, leveraging external projects and libraries for low-level core functionality. Those are hallmarks of a grown-up software project.

There's also awkwardness associated with such a transition. The Drupal 8 development cycle has been kind of like puberty for the project. A lot has changed. The project is bigger, smarter, more powerful. But we've also had our share of blemishes, missteps, confusion — growing pains.

Mistakes are how learning happens. They're a necessary component of progress. Even though I'm not looking forward to the upgrade cycle, and there are things I'll miss about the early days of Drupal (the wildness and freedom in particular), I feel confident that hindsight will shine kindly on these years.

The community crossed the psychological million-user barrier this year. More importantly, what's emerging isn't just larger; it's also more professional and global than ever before. 

Professionalism helps create stability. While that sometimes trades against some of that grassroots flavor I personally enjoy, I think it's good for the future of the project. Not only does "Drupaling for a living" keep people engaged, it underwrites work that can't be done effectively by volunteers or hobbyists. It also attracts people with years of wisdom and experience elsewhere, providing valuable perspective and increasing the value of our hive mind.

Likewise, expansion in emerging markets is a huge net positive. In the world of web development there's tension around outsourcing, but you can't help but smile as Drupal spreads throughout the globe. There are billions of people around the world just coming online, and they have a lot to gain from open source, and to contribute.

I don't mean to sound triumphalist. Drupal's destiny is far from certain, and the proof is always in the pudding. We're negotiating one of the most difficult transitions an open-source project will ever undergo — but so far we're doing well, and everyone deserves credit for that. Drupal isn't all grown up quite yet, but we're getting there. Steady on.

PS: This is something I’ve been thinking about quite a bit as I prepare for my Saturday Keynote presentation at SANDCamp. If you're going to be there, come check it out!

Blog Categories: Education
Categories: Elsewhere

CMS Quick Start: Create a tiled Views layout in Drupal 7

Wed, 15/01/2014 - 23:33
Content management systems are not always known for their ease of layout. Drupal has such a vast array of modules to extend your site, but oftentimes we forget that you can do some really creative things with layouts. Today we're going to look at a JavaScript library called Masonry and how we can use it to build amazing fluid tile layouts. Let's look at what we'll be building:


Categories: Elsewhere

Acquia: Drupal 8 Wins: Avoiding the Dead Hook Blues, Part 2

Wed, 15/01/2014 - 22:38

In today's conversation, Larry Garfield, Kris Vanderwater, and I go over the Go PHP 5 project as the first seeds of cooperation between different PHP projects, how the Symfony2 framework became part of Drupal 8, why we aren't building Drupal 8 on full-stack Symfony, CMI, abstraction in Drupal, and the future of Features in Drupal 8.

Categories: Elsewhere

Appnovation Technologies: Successful Multilingual Sites with Drupal 7 – Part 2

Wed, 15/01/2014 - 21:14
In a previous post I discussed creating a trilingual site with Drupal 7. This post is a continuation of that process, highlighting an assemblage of modules to complete the process.
Categories: Elsewhere

Bert Boerland: Drupal Predictions for 2014

Wed, 15/01/2014 - 21:11

x-posted from d.o:

"4877. That is where the tradition within the Drupal community of making predictions for the year ahead with regards to our software, our community and broader, the web, started. Node 4877, written at the end of the year 2003. We have come a long way since then.

This year we would like to know what you think the year ahead will bring for Drupal and, as a bonus, we would like to know the best prediction you found in the past. Where did we shine when it comes to vision or humor?

See older entries from 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012 and 2013. Read them.

And now predict for 2014 and reflect the last decade in this thread."

Oh, and happy Bday Drupal :-)

curl -vs 2>&1 | grep "1.0.0" Drupal 1.0.0, 2001-01-15 #happybdaydrupal #Drupal

— bert boerland (@bertboerland) January 15, 2014

Categories: Elsewhere

frontpage posts for the Drupal planet: Drupal 7.26 and 6.30 released

Wed, 15/01/2014 - 20:59

Drupal 7.26 and Drupal 6.30, maintenance releases which contain fixes for security vulnerabilities, are now available for download. See the Drupal 7.26 and Drupal 6.30 release notes for further information.

Download Drupal 7.26
Download Drupal 6.30

Upgrading your existing Drupal 7 and 6 sites is strongly recommended. There are no new features or non-security-related bug fixes in these releases. For more information about the Drupal 7.x release series, consult the Drupal 7.0 release announcement. More information on the Drupal 6.x release series can be found in the Drupal 6.0 release announcement.

Security information

We have a security announcement mailing list and a history of all security advisories, as well as an RSS feed with the most recent security advisories. We strongly advise Drupal administrators to sign up for the list.

Drupal 7 and 6 include the built-in Update Status module (renamed to Update Manager in Drupal 7), which informs you about important updates to your modules and themes.

Bug reports

Both Drupal 7.x and 6.x are being maintained, so given enough bug fixes (not just bug reports) more maintenance releases will be made available, according to our monthly release cycle.


Drupal 7.26 is a security release only. For more details, see the 7.26 release notes. A complete list of all bug fixes in the stable 7.x branch can be found in the git commit log.

Drupal 6.30 is a security release only. For more details, see the 6.30 release notes. A complete list of all bug fixes in the stable 6.x branch can be found in the git commit log.

Security vulnerabilities

Drupal 7.26 and 6.30 were released in response to the discovery of security vulnerabilities. Details can be found in the official security advisory:

To fix the security problem, please upgrade to either Drupal 7.26 or Drupal 6.30.

Known issues


Front page news: Planet Drupal. Drupal version: Drupal 6.x, Drupal 7.x
Categories: Elsewhere

Drupal Association News: Tech Team plan for the first quarter of 2014

Wed, 15/01/2014 - 19:59

There is never a shortage of things to do, and that's never been more true than right now. We have D7 upgrade cleanup to finish, as well as a variety of improvements that were on hold during the upgrade. We want you to know what our tech team is focused on and where we're spending our time, so here are some of the pieces in our first-quarter roadmap.

Categories: Elsewhere

Mediacurrent: Using Cloud Hooks on Acquia Cloud Hosting

Wed, 15/01/2014 - 19:38

As Drupal experts our team at Mediacurrent is very familiar with Acquia’s cloud based hosting solution "Acquia Cloud". In fact, Mediacurrent is an official Acquia Partner. Because of our partnership and because Acquia Cloud is an excellent solution, we end up working with a lot of clients who are utilizing Cloud already or helping clients move on to Acquia’s platform.

Categories: Elsewhere

Blink Reaction: My Birthday Gift to Drupal

Wed, 15/01/2014 - 17:18

What do you give an open source software project that's got (almost) everything?

Categories: Elsewhere

Get Pantheon Blog: Launch Check: Find out in seconds if your site is ready to launch

Wed, 15/01/2014 - 17:14

I'm excited to announce a new platform feature: Launch Check, a collection of automated Drupal 7 checks for best practice site configuration - built right into your Pantheon Dashboard.

Example checks include:

  • Caching. What are the optimal settings for caching content on your site? How have you configured both the result and the query caching for each view display? Proper cache configuration is crucial for high performance.
  • Modules. Are there duplicate or missing modules in your codebase? Are development modules in your live environments? This cruft can dramatically slow your site down.
  • Error Logs. See the overall error count, 404 entries, and PHP errors. These errors can compound problems.
  • Drupal Status. Get Drupal’s overview of the site health without having to log in.
  • Much More. Launch Check runs over fifty different tests on your site.

The net result is improved performance, compatibility, and reliability, and fewer headaches for site owners - with no installation required. Check out our documentation on Launch Check in the Pantheon Helpdesk.

Video: Pantheon Launch Check Introduction

Enterprise service evolves to free platform feature

As part of Pantheon's Enterprise onboarding process, we review sites prior to load testing to ensure that they're set up optimally. To save time, we started writing automated checks that can see how a site is configured without anything needing to be installed. By avoiding the observer effect, recommendations can be made quickly without getting in the way of the site itself.

A number of Pantheon Partners and Enterprise Clients provided valuable, extensive feedback, until we felt the checks would be helpful for any site on Pantheon. The checks were curated to be as actionable as possible, then added to the Dashboard alongside the existing site status checks.

Keeping it open for transparency and trust

We are champions of open source - and Launch Check is no exception. The collection of Drush commands is fully available as Site Audit, which includes a number of additional checks that are useful for deeper introspection. The Drush Site Audit commands are already available on Pantheon for use on any Drupal 7 site.

These best practice recommendations reflect collected experience and community wisdom, but at the same time they are delivered by algorithms. No size fits all, but one size fits most - and there will always be exceptions or edge cases. If you have any technical feedback, submit a support request, or feel free to contribute to the Drupal issue queue - in the form of feature requests, bug reports, or even patches.

Together, we can make a better web, one site configuration at a time.

Want to learn more?

Join us on Tuesday, January 21st for a free webinar as we discuss this exciting platform feature.

You’ll learn:

  1. Top 4 reasons why your site could be slow
  2. How to spot the most commonly overlooked culprits and take action to improve your site’s performance
  3. A walk-through of what an underperforming site looks like on the Dashboard, and how to fix it through the Dashboard
  4. Which third-party performance tools to use for popular use-case scenarios

Register for webinar

Blog Categories: Product
Categories: Elsewhere

Lullabot: Debugging Drush commands with Xdebug and PHPStorm

Wed, 15/01/2014 - 17:00

Oftentimes I run into issues with Drush commands that need more debugging power than dpm() provides. In search of a way to debug PHP scripts from the CLI, or Drush commands more specifically, I stumbled upon PHPStorm's Zero-configuration Debugging, which turned out to be perfect for the job.
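As a sketch of the zero-configuration approach: with Xdebug loaded in your CLI php.ini, two environment variables are all it takes to attach PhpStorm to a Drush run (the IDE key, server name, and command name below are assumptions you'd adapt to your setup):

```shell
# Tell Xdebug to connect back to the listening IDE for this CLI session.
export XDEBUG_CONFIG="idekey=PHPSTORM"

# Tell PhpStorm which configured server to use for path mappings.
export PHP_IDE_CONFIG="serverName=mysite.local"

# With "Listen for PHP Debug Connections" enabled in PhpStorm and a
# breakpoint set, run the Drush command as usual:
drush my-custom-command
```

Execution then pauses at your breakpoint exactly as it would for a web request.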

Categories: Elsewhere

Phase2: Exposing External Content to Drupal’s Search

Wed, 15/01/2014 - 16:44

Search is a vital part of content-rich websites. Like many CMS or blogging platforms, Drupal provides basic search functionality out-of-the-box. However, for advanced features like intelligent full-text keyword matching and faceted searching based on content attributes, we typically implement the Apache Solr search engine, which has straightforward integration with Drupal.

What if your Drupal site search should include content that’s not in Drupal?

Often when planning a site build, we identify content that will not live in the new CMS but needs to be displayed alongside native CMS content in search or other lists. For example, an organization’s blog may be powered by a standalone WordPress site, and it may be required to present this blog’s content in the new site’s search. Another use case is a site that integrates with a specialized database, the content of which should be searchable.

There are three general approaches for integrating external content in Drupal’s search:

  1. Import the content into Drupal as basic “stubs”

  2. Crawl the content and populate the search index directly

  3. Use a third-party, turnkey solution

I’ll discuss each of these approaches and the potential pros and cons to consider.

Import external content into Drupal as “stub” nodes

This approach is well suited for importing content from blogs or other CMSes that provide an RSS feed or content API. It stays true to the “Drupal way,” using nodes to represent content and providing extended functionality with common “contrib” modules. The most significant limitation with this approach is the dependency on having a usable source feed.

When taking this approach, we often use the Feeds contrib module to provide the core mechanism for retrieving feeds, parsing the feed format, and producing nodes from the feed's content. Since the full version of the content will not be viewed on the Drupal site, the import does not need to fully replicate the source content; rather, it can be tailored to copy only the content needed for search functionality. This includes not only the content that is displayed in search results, but also any content that should be indexed for keyword or facet matches.

In some cases, it is necessary to create a custom Feeds parser plugin, which can perform custom translations or additional processing on the content as it's imported. This approach allows external content to be exposed for full-text search on the content's text and for faceted search on the content's metadata or attributes. For example, you may want to provide facets by post author that cover both native Drupal content and posts imported from a blog, and need to map the author identifier used on the blog to the Drupal user. For more information on creating a Feeds plugin, see the Feeds developer documentation and the API code examples bundled with the module itself.
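A minimal sketch of such a plugin against the Feeds 7.x API might look like the following. The module name, class name, and author-mapping logic are hypothetical; the plugin simply extends the stock RSS/Atom parser and post-processes each item:

```php
<?php

/**
 * Implements hook_feeds_plugins().
 *
 * Declares the custom parser to Feeds (goes in mymodule.module).
 */
function mymodule_feeds_plugins() {
  return array(
    'MyBlogParser' => array(
      'name' => 'My blog parser',
      'description' => 'Parses the external blog feed into stub items.',
      'handler' => array(
        'parent' => 'FeedsSyndicationParser',
        'class' => 'MyBlogParser',
        'file' => 'MyBlogParser.inc',
        'path' => drupal_get_path('module', 'mymodule'),
      ),
    ),
  );
}

/**
 * Custom parser (goes in MyBlogParser.inc).
 */
class MyBlogParser extends FeedsSyndicationParser {

  public function parse(FeedsSource $source, FeedsFetcherResult $fetcher_result) {
    // Let the standard syndication parser handle the RSS/Atom format.
    $result = parent::parse($source, $fetcher_result);
    foreach ($result->items as &$item) {
      // Hypothetical translation step: map the blog's author name to a
      // Drupal user so author facets cover both content sources.
      $account = user_load_by_name($item['author_name']);
      $item['drupal_uid'] = $account ? $account->uid : 0;
    }
    return $result;
  }

}
```

The extra `drupal_uid` value would then be exposed as a mapping source so it can be stored on the stub node and indexed for faceting.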

One challenge to using feeds for content ingestion is that feeds are usually intended primarily for the syndication of new content, and a given feed may not allow retrieving all historical content. In some cases, the feed can be “paged through” by adding arguments to the feed URL. Custom code can take advantage of this to iterate through the feed pages until all content is fetched. In other cases, it may be necessary to import historical content from an exported database and then use the feed to import only new content.
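One way to page through such a feed with plain Drupal 7 APIs is a loop like this. It is a hypothetical sketch: the `paged` URL parameter, the end-of-feed condition, and the two helper functions are assumptions about the source feed and your own import code:

```php
<?php

// Fetch successive pages of the external feed until it runs dry.
$page = 0;
do {
  $response = drupal_http_request('http://blog.example.com/feed?paged=' . $page);
  if ($response->code != 200) {
    // The source returns an error once we page past the last entry.
    break;
  }
  // Hypothetical helpers: parse the raw feed XML and save stub nodes.
  $items = mymodule_parse_feed_items($response->data);
  foreach ($items as $item) {
    mymodule_save_stub_node($item);
  }
  $page++;
} while (!empty($items));
```

After this one-time backfill, the regular Feeds importer only needs to watch the first page for new content.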

In planning for this approach, consider these pros and cons:

  • “Stub” content is native to Drupal so can be used outside of search

  • Uses well-known modules within the Drupal ecosystem

  • External content must be available by feed or API

  • Difficult to manage many sources with different feed formats

  • Extra care is needed to handle both an import of archival content and an ongoing import of new content


Crawl content and populate the search index

In some cases, the external content is not accessible by feed or API. This external content could be the pages of a static or proprietary website, the contents of a specialized external database, or an archive of documents, spreadsheets, and other files.

For these cases, one option is to implement a crawler, or a routine that finds and parses external content, and use it to populate the search index directly. This approach allows content to be included in the Drupal site’s search results even if the content is not stored in Drupal at all. This allows greater flexibility to use languages other than PHP and to employ open source tools.

Imagine the case where a Drupal site’s search should include content from a network of sister sites that do not share a common CMS platform. Rather than working from a backend data source, like an RSS feed or file export, you can “scrape” content directly from the rendered, frontend web pages.

This process involves a number of steps: finding content on the sites, determining if it’s new or updated, parsing the page into structured fields, and submitting the parsed data to the search engine. To facilitate this process, we’ve used the open source web crawler Apache Nutch, which accepts a list of URLs from which to start crawling and a set of configuration files to govern the retrieval, parsing, and processing of the results.
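The last step, submitting the parsed data to the search engine, can be as simple as an HTTP POST to Solr's update handler. A sketch against a Solr 4-style JSON endpoint; the core name, document fields, and ID scheme are assumptions that must match your Drupal site's Solr schema:

```shell
# Post one crawled page as a JSON document and commit it to the index.
curl 'http://localhost:8983/solr/collection1/update?commit=true' \
  -H 'Content-Type: application/json' \
  -d '[{"id": "sister-site-123",
        "title": "Example external page",
        "content": "Body text extracted by the crawler",
        "url": "http://sister.example.com/page"}]'
```

Nutch can be configured to emit documents to Solr this way in bulk, so the crawler and the Drupal site share one index.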

One challenge to this approach is that the external content is not exposed directly to Drupal, only through Solr as search results. Therefore, the external content cannot be as easily intermingled with other lists of native Drupal content, nor can it be linked or embedded in content through reference fields. Also, this means that some code for the custom processing or formatting of search results may need to be duplicated between Drupal and the crawler code.

In planning for this approach, consider these pros and cons:

  • External content without a feed or API can be handled

  • Complex crawling code is offloaded from Drupal to another application

  • External content is not duplicated in Drupal’s database

  • External content is not fully exposed to Drupal, so integration is limited

  • DevOps expertise and special infrastructure may be needed to implement crawler

Use a third-party, turnkey solution

The previous approaches allow external content to be included in a Drupal site search powered by Apache Solr, which allows site managers to customize the search experience by weighting fields for keyword searches, by exposing sort options, and by adding search facets for important content attributes. This search experience can extend beyond the search page to regions throughout the site that highlight related content.

If this degree of customization and integration is not needed, a simple alternative is to use a third-party, turnkey solution for a site's search. For example, Google has enterprise search products that allow your site to host a Google-powered search. The basic version provides Google Search-like results limited to your websites, presented within an on-site, branded experience.

This is a good option where familiar and reliable search functionality is of higher importance than custom search functionality with specific weightings and facet categories. This approach also constrains the cost and effort involved in maintaining the search application or extending the scope of the crawler.

In planning for this approach, consider these pros and cons:

  • Familiar and reliable user experience for search functionality

  • Low cost for management and maintenance

  • Less flexibility for styling the search results, defining facets, and applying custom weightings

  • Limited by vendor support, product functionality, and licensing fees


The ability to easily find content through an on-site search feature is expected on high-traffic, content-rich websites. In certain cases, user experience is improved when a site's search results include matches for content that is part of another site or data source. In this article, I've discussed approaches for accomplishing this by importing the content as basic "stubs", by crawling the content and adding it to the search index, or by using a third-party solution. The resources and requirements of your project can help weigh the pros and cons of each approach and determine which provides the best fit.

For more about our work implementing search, see CJ’s article on Techniques To Improve Your Solr Search Results and Brad’s article on Using Search API Attachments With Remote Solr Extraction.

Categories: Elsewhere

Makak Media: Our First App Is Now Live On Google Play!

Wed, 15/01/2014 - 15:51

It's an exciting start to the year for us as today saw us launch our very first mobile app! The My Caribbean Offers app is now available on Google Play.

The website uses Drupal 7 and the app is built using the Open Source framework PhoneGap.

We're using a number of modules including Services, GeoField, GeoCoder, Static Map as well as our own custom modules.

We'll go into more detail in the next few follow-up posts, but for now please download the app and provide any comments below.


Categories: Elsewhere

Deeson Online: Managing load balanced production environments

Wed, 15/01/2014 - 13:47
In December we launched a site onto a multi-server, load-balanced production environment. It's the first time I've had to deal with a site that has two web servers, and as such, the multi-server production environment presented a couple of challenges which I cover in this blog.

Challenge #1: ip_address() was returning the internal address of the server, not the actual user's IP address

This caused us real problems with the user login process, since all failed login attempts were being stored with the same IP address which in turn quickly locked everyone out.

Solution #1:

The fix was simple enough on our environment: add a code block to your settings.php which manually injects the correct address from the HTTP headers:

// Code provided by Acquia.
// Non balanced servers (dev and stage) don't have this problem.
if (!empty($conf['reverse_proxy_addresses'])) {
  $ips = explode(',', $_SERVER['HTTP_X_FORWARDED_FOR']);
  $ips = array_map('trim', $ips);
  // Add REMOTE_ADDR to the X-Forwarded-For list (the ip_address function will
  // also do this) in case it's a 10. internal AWS address; if it is we should
  // add it to the list of reverse proxy addresses so that ip_address will
  // ignore it.
  $ips[] = $_SERVER['REMOTE_ADDR'];

  // Work backwards through the list of IPs, adding 10. addresses to the proxy
  // list but stop at the first non-10. address we find.
  $ips = array_reverse($ips);
  foreach ($ips as $ip) {
    if (strpos($ip, '10.') === 0) {
      if (!in_array($ip, $conf['reverse_proxy_addresses'])) {
        $conf['reverse_proxy_addresses'][] = $ip;
      }
    }
    else {
      // We hit the first non-10. address, so stop.
      break;
    }
  }
}

Challenge #2: The user login form was considered cacheable by Drupal (and therefore Varnish)

Coupled with the above issue, we were getting Varnish cache hits when filling out the login form. This meant that all users were sharing a form_build_id (and therefore the same form cache was being shared for everyone). The upshot was that as soon as anybody entered a valid user name it would be stored for everybody else attempting to login. That in turn meant that flood attempts were all registered against a single account and it would quickly get locked out.

Solution #2:

We don't actually have an answer as to the cause of this yet. It could be something specific to our site or it could be related to other problems with the environment, but the login page is obviously important, so we've added a manual call to drupal_page_is_cacheable(FALSE) which fixes it.
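One minimal place to make that call in Drupal 7 is hook_init() in a small custom module. This is a sketch; the module name is a placeholder, and the path check is just one reasonable way to target the login pages:

```php
<?php

/**
 * Implements hook_init().
 *
 * Force the login pages to be uncacheable so each visitor gets a fresh
 * form_build_id instead of sharing a Varnish-cached one.
 */
function mymodule_init() {
  if (current_path() == 'user' || current_path() == 'user/login') {
    drupal_page_is_cacheable(FALSE);
  }
}
```

Because the check runs on every request, keep it cheap; a simple path comparison like this costs nothing.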

Challenge #3: Views data export, batch processes and temp files

The batch API splits a large job up into smaller jobs, each of which is processed in a separate HTTP request, held together by an AJAX-driven page. We are using the views_data_export module to build a relatively large CSV and so enabled its batch mode. What we hadn't considered is that each server has its own temporary directory, so because the requests are load balanced between the two servers, the CSV was being split roughly into two halves.

Solution #3:

If you're lucky enough to be using the Acquia platform, there is a module for this:


Each of these challenges only presented themselves to us once we had made it to the production environment, and once we had a realistic amount of user traffic. This made them all the harder to detect and solve. We're lucky enough to have the support of Acquia in solving the issues quickly but if we were building out this kind of environment ourselves things could have been very different.

By Dan James | 15th January 2014
Categories: Elsewhere