Planet Drupal

Subscribe to the Planet Drupal feed - aggregated feeds in the Planet Drupal category
Updated: 16 min 56 sec ago

Wellnet Blog: How Drupal 8 builds your pages [infographic]

Thu, 10/09/2015 - 16:23

Want to know more about it? Come to DrupalCon Barcelona! I'll be there giving my talk!

Categories: Elsewhere

KatteKrab: Sympathy - Empathy - Compassion

Thu, 10/09/2015 - 05:10
Thursday, September 10, 2015 - 13:10

Communication can be surprisingly hard. Human beings talk a lot. Well, many humans do. Some less so, some more so, but I reckon, in general, there's a lot of daily jibber jabber. Some of that talking is light-hearted "small talk". Some of it is world-changing speeches. Some of it is the daily to and fro we need to get stuff done; at work, at play, at home. Some talking leads to conflict.

Whilst thinking and talking about how to resolve conflict in more constructive ways, Gina Likins and I puzzled over how we could talk about compassion, and the important part it plays in conflict resolution. We talked about empathy and sympathy, and examined various definitions of all these words. In the end, this is what we came up with. We were thinking about Free and Open Source Software communities, but it seems to have resonated with others, so I thought I'd post it here and expand a little on each point.

Sympathy, Empathy and Compassion are "big feels", so this is just my take on how we might more easily distinguish them from each other.

Sympathy: This is when we feel sorry for someone. We acknowledge that something has happened which isn't good for them, and is causing them some level of distress. In the software world, it might be like noticing that someone has reported a bug.

Empathy: This goes a step further. This is when we really acknowledge something is wrong. Perhaps we've experienced it too; we understand the issue is real. In software terms, perhaps we can replicate the bug, and acknowledge it's an issue that needs addressing.

Compassion: The next step comes when we are motivated to help. We've not just acknowledged there's a problem, but are willing to do something to help fix it. In software terms, it means stepping up to help fix the bug that's been reported.

Bug trackers and issue queues can be sources of minor conflict. We lack the non-verbal cues provided by tone of voice and body language that help when raising issues. It's natural to feel defensive. But perhaps taking a moment to reflect on how we communicate, what we hear, what we mean, and how we respond could really help us get more done, together.
Categories: Elsewhere

Drupal Watchdog: VIDEO: DrupalCon Los Angeles Interview: Dani Nordin

Wed, 09/09/2015 - 18:24

Dani Nordin is at our “Meet the Authors” booth (courtesy of Drupal Watchdog, so subscribe now!) to autograph her new book, Drupal for Designers, and to promote Design for Drupal, a summer get-away in Cambridge, MA.
Geeks, send your kids to camp: hiking (virtual), campfires (Yule log), and coding.
Lovely views.

Tags: DrupalCon LA, DrupalCon, Video
Categories: Elsewhere

ThinkShout: Ready or Not: Dreamforce for the Nonprofit Drupaller

Wed, 09/09/2015 - 18:00

Next week, Lev and I will be attending our third Dreamforce, the annual Salesforce conference in San Francisco. I’m not gonna lie, each year the two of us feel about as much dread regarding Dreamforce as we do anticipation. If you’ve never been to Dreamforce, all I can say is that it’s BIG. Like overwhelmingly big. It’s the largest software conference in the world. Dreamforce 2014 had over 135,000 attendees. (That’s an influx of 16% of the total population of San Francisco.) Last year’s conference generated over $100 million in revenue for the city. There are conference badges everywhere, parties everywhere… Open hotel rooms and cabs are nowhere to be found….

That said, from an innovation and collaboration perspective, Dreamforce is one of the more important convenings we attend each year. Once you strip away all the pomp and circumstance and assuming you know how to navigate through the fray, there is a lot to be gained by the nonprofit technologist and the Drupal developer interested in integrating our beloved CMS with this robust CRM platform.

For those of you who are either braving your first Dreamforce or interested in following along online (at a safe distance from the Bay Area), below is a list of a few things we are most excited to explore at this year’s conference:

The Foo Fighters Concert

Dave Grohl fans, rejoice: The Foo Fighters will be playing Dreamforce! Every year, Dreamforce brings in a globally-recognized entertainment act. This year, it’s the Foo Fighters. But why the bands? Why the celebrities? Well, you have to think about just how integral Salesforce is to a ton of people. Some of the largest names in the country utilize Salesforce, and that means there’s a heck of a lot of clout backing the product. That also means people like Alec Baldwin, Jessica Alba, and Goldie Hawn (all of whom use Salesforce to support their organizations) might just show up to tell you exactly why Salesforce is their CRM of choice. So it’s no surprise that a band as big as the Foo Fighters would make an appearance at Dreamforce.

The Unveiling of the "Lightning Experience"

Not a Justin Bieber reference (though you never know with Dreamforce…), the Lightning Experience is one of the slickest and most useful product announcements we’ve seen from Salesforce in a long time.

As part of Salesforce’s "Winter 2016 release", Salesforce is rolling out huge user interface enhancements under the marketing banner of The Lightning Experience. From the few screenshots that have been released of the new UI, it looks like Salesforce is borrowing from Google’s embrace of “material design.” Even more significant, the Lightning Experience includes drag-and-drop templating tools that will allow non-developers to construct robust Salesforce applications, dashboards, and mobile interfaces from within their web browser.

The New Salesforce Wave App - Free for Nonprofits

Also included in the Winter 2016 release of Salesforce is a new data visualization and analytics tool called the Wave App. (Wave is Salesforce’s new underlying analytics engine.) Coupled with the Lightning Experience, Salesforce administrators will be able to build powerful visual reports of their constituent data.

What’s also neat about the Wave App is that the Salesforce Foundation is expected to announce that it will be free for nonprofits.

The Salesforce Foundation’s Increased Commitment to Open Source

Just this summer, the Salesforce Foundation hired its first Open Source Manager, our good friend and long-time nonprofit tech advocate, Judi Sohn. One of Judi’s first tasks in this new role has been to put together an open source advisory board for the Foundation, focused on promoting open source contributions and collaboration around its Nonprofit Starter Pack.

As a result, the nonprofit track at Dreamforce is where you will see the majority of conference sessions around open source integrations and development workflows. We are particularly excited about the following sessions:

See You There!

If you’re headed to Dreamforce, we will be primarily hanging out in the Salesforce Foundation Zone, talking with nonprofits and other vendors who share our interest in how Salesforce can be leveraged for fundraising and constituent engagement. Connect with us on Twitter to schedule a meetup in person if you’ll be there.

Categories: Elsewhere

Palantir: Tokens, Performance, and Caching

Wed, 09/09/2015 - 16:23

Earlier this year, I was brought in to help our client Foreign Affairs whose site was experiencing some pre-launch performance issues. Authenticated pages were taking far too long to render and they needed it fixed. We identified and addressed a couple of issues, but one in particular I want to unpack because it's an issue that many developers may not think about.

Well there's yer problem

Foreign Affairs had New Relic installed on their production server, which was able to turn up several leads. In particular, the function _entity_token_get_token() was being called over 13 million times in the course of a day. The next most common function was only a few hundred thousand. A clue, Sherlock!

Some spelunking through the site's codebase and some profiling eventually turned up the issue: The site was using both the Doubleclick for Publishers (DFP) module and the SiteCatalyst module, both of which are advertisement tracking services that need to place numerous markers on each rendered page. So far so good. Those markers need to vary by page, of course. So far so good. And both modules were using the token replacement system in Drupal core to generate those markers. Not so good.

Drupal's token replacement system is a powerful string mangling engine. Modules can expose []-denoted tokens — such as [node:title], [user:name], or [timestamp:Y-m-d] — that a user can enter into a textfield, and then at runtime those values get replaced with an appropriate, contextually-relevant value — such as the title of a node, login name of the current user, or the timestamp when the code is run, respectively.
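To make the usage concrete, here is a sketch of how a Drupal 7 module calls into the token system; the input string and the $node/$user objects are illustrative, supplied by whatever context the calling module has on hand:

```php
<?php
// Sketch: a module hands token_replace() a string containing tokens plus
// the context objects the tokens should be resolved against.
$text = 'Ad slot for [node:title], viewed by [user:name]';
$markup = token_replace($text, array(
  'node' => $node,
  'user' => $user,
));
```

Every such call re-runs token generation from scratch, which is the cost discussed below.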

The problem is that the token system is not architected to be very performant. The root issue is token generation, which is done for every token replace call. That is, every time token_replace() is called, all token hooks are called for every token in the string, and those in turn usually generate far more token replacement values than are actually going to be used. Entity API's token integration is even less efficient, and each token replacement call generates hundreds of internal function calls. Most of the time that's OK; if it takes a few dozen extra milliseconds to generate a path alias with pathauto when a node is saved no one is really going to notice or care. When the string needs to be re-processed on every page request, people notice and care. When it's not one but over a dozen strings to be processed on every page (because of how the modules were configured), people really notice and really care.

Take a memo

Since rewriting the token system in core to be smarter was clearly off the table, that left invoking the token system less often. Between two different page loads of the same node, the token replacement values wouldn't change. You know what that means? Caching!

The solution was to modify both DFP and SiteCatalyst in the same way, adding a caching wrapper to all token_replace() calls. The fancy name for this concept is "memoization": That is, if you know that calling a function with the same parameters is always guaranteed to return the same result then you can cache ("memoize", as in, record a memo) that result. Then, the next time it's called, it can simply return the already-computed value. It's one of the easiest performance boosts you can get, as long as your functions have very clear, explicit inputs. (For more on this technique and others like it, see my DrupalCon Austin presentation on Functional PHP or catch it again this fall at Connect.JS in Atlanta.)
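As an illustration of the concept (not the actual DFP patch), a memoized wrapper in plain PHP looks like this; the token work is faked with a simple str_replace() so the example is self-contained outside Drupal:

```php
<?php
// Illustrative only: a memoized wrapper around an "expensive" replacement.
// The real patch wraps token_replace(); here str_replace() stands in so
// the example runs without Drupal.
function memoized_token_replace($text, $node_id) {
  static $memo = array();
  // Same inputs always produce the same output, so they form the key.
  $key = md5($text) . ':' . $node_id;
  if (!array_key_exists($key, $memo)) {
    // Only computed on a miss; later calls return the recorded "memo".
    $memo[$key] = str_replace('[node:nid]', $node_id, $text);
  }
  return $memo[$key];
}
```

The second call with identical arguments never touches the replacement logic at all.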

Our inputs were a bit more complicated in this case, as the token_replace() call can take any number of complex Drupal objects, such as nodes and users. Fortunately in our case the DFP module provided only a single node, user, and taxonomy term as possible sources for token data. That simplified the problem space. That meant we could create a cache key based on the input string (that contains the tokens to be replaced) and the IDs of the available objects. To allow for arbitrarily long input strings we can simply hash the string to get a unique key for lookup purposes. Those together form the total inputs, and so the output should always be the same.
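A key builder along those lines might look like the following sketch; the function name and the 'no-node'-style placeholders are ours for illustration, not taken from the DFP patch:

```php
<?php
// Sketch of a cache-key builder: hash the (arbitrarily long) input string
// and append the IDs of whichever context objects are available.
function example_token_cache_key($string, $node = NULL, $user = NULL, $term = NULL) {
  $parts = array(
    hash('sha256', $string),           // fixed-length key for any input string
    $node ? $node->nid : 'no-node',
    $user ? $user->uid : 'no-user',
    $term ? $term->tid : 'no-term',
  );
  return implode(':', $parts);
}
```

Identical string-plus-context inputs always yield the same key, and any change to the string or the entity IDs yields a different one.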

Great! Now we can cache calls to token_replace() based on the input values and a given page's ad markers only need to be recalculated after a cache clear, and everything is fine!


Except for when someone edits a node. The node ID clearly doesn't ever change when updating a node, so the cached value would still get used rather than regenerated. The simple solution is to just include the node's last updated timestamp in the key. That works great for nodes but users and taxonomy terms do not have a last-updated field.

Fortunately that problem has already been solved (yay community!) in the form of the Entity modified module. That module allows for tracking the last-modified time of any entity, creating its own tables to track that information if the entity doesn't offer it already.

Great! We introduce a dependency on the entity_modified module, then use the last-modified value that module provides as part of the cache key. Now an edit to a node, user, or term will result in a new last-updated timestamp and thus a different cache key, and we'll regenerate the token value as soon as it's edited, and everything is fine!
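In sketch form, the key simply grows timestamp components. Note that entity_modified_last() below is a hypothetical stand-in name for whatever lookup the Entity modified module provides, not a verified API; nodes already carry their own $node->changed field:

```php
// Append last-modified times so that editing an entity produces a new
// cache key (and thus a regenerated value). entity_modified_last() is a
// hypothetical stand-in for the Entity modified module's lookup.
$key .= ':' . $node->changed;
$key .= ':' . entity_modified_last('user', $user);
$key .= ':' . entity_modified_last('taxonomy_term', $term);
```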

Cache the stampede!

Except we're still then hitting the cache, and therefore the database, once for each string we need to process. In our case that was over a dozen cache lookups on every page, and thus a dozen extra needless hits against the database. Not good at all.

The solution here depends on the particulars of DFP's use case. It's calling token_replace() on every page, but what it does on each page is constant: It takes a fixed set of user-configured strings and processes them, then sticks the result into the page for ad tracking purposes. That means, realistically, we don't need a dozen cache items; we only need one cache item per page (which is reasonable), which holds all of the token replacements for that page. That cache item is just an array of the cache keys we defined before to the result we computed before. Once the cache is warm there's only a single cache lookup per page, and every string in that cache item gets used so there's no waste.
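In Drupal 7 terms the pattern looks roughly like this sketch, using core's cache API; the cache ID format and variable names are ours, not the module's:

```php
// One cache item per page, holding all token replacements for that page.
$cid = 'example_tokens:' . current_path();
if ($cache = cache_get($cid)) {
  $replacements = $cache->data;  // key => already-replaced string
}
else {
  $replacements = array();
  foreach ($strings as $key => $string) {
    // Only runs on a cold cache: one expensive pass per page, not per string.
    $replacements[$key] = token_replace($string, $data);
  }
  cache_set($cid, $replacements, 'cache', CACHE_TEMPORARY);
}
```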

Great! Now we have a very efficient caching strategy, no wasted data, we skip calling token_replace() in the majority case, and everything is fine!

Keep it clean

Except we would get new items added to that array over time as a node gets edited, because we're not cleaning out the old values from the array. Additionally, some global tokens are sensitive to more than just the passed-in objects. Although we didn't need them, there are tokens available for the current date and time, for instance. There's no way we can catch them all safely.

The solution here is deceptively simple: Have an expiry time on our cache items, so that they'll get invalidated eventually no matter what happens, with a default of getting cleaned up when Drupal periodically clears its caches using the CACHE_TEMPORARY flag. Now we still get valid, non-stale data for our token replacements and any left-overs in the cache (either pages that are removed or nodes that are edited) get cleaned up by Drupal in due course. That way the waste never gets too big, and everything is fine!

Mind what you have learned

The final patch for DFP has already been committed and is available online, so you can see exactly how it all works. The main work is in the two new functions: _dfp_token_replace_make_key(), which produces the unique key per replacement, and dfp_token_replace(), which is the memoizing wrapper that coordinates the action. Both are surprisingly simple given what they allow us to do.

In the end, for Foreign Affairs, the patch to DFP (combined with a nearly identical patch to SiteCatalyst) resulted in a savings of over 400 ms for a warm cache on every authenticated page load. That's huge! So what have we learned along the way?

  1. We've learned that monitoring tools that can provide deep profiling are invaluable. Online tools like New Relic and local developer profiling tools are both useful in their own ways.
  2. We've learned how to memoize our code for better performance. In particular, we need to know all of the inputs to our function in order to do so, which is not always entirely obvious. Once we actually know all of the true inputs to a function, though, memoizing is a very efficient and powerful way to improve the performance of our code. Writing code with explicit, fine-grained inputs also helps to make it easier to memoize our code (and easier to test, too).
  3. We've discovered the useful entity_modified module.
  4. We've learned that Drupal's token system, while powerful, is not very performant. The Entity token integration in particular is very bad. That doesn't mean you should not use it, but it means being very careful about how and when it is used.
  5. We've learned to be mindful of the performance implications of our code. Using tokens to generate ad markers is a completely logical decision, but do we know what the performance implications are going to be if it gets used more than we expect? Or if the site has more authenticated users than we were expecting, so page caching isn't useful? These are hard questions, but important ones to consider… and to adjust once we realize we got it wrong the first time.

Has anything changed in Drupal 8 to help with this sort of case? Quite a bit, actually. For one, the move to more cleanly injected service objects means that far more of the system is memoizable, in those cases where it's useful.

More importantly, though, the cache context and render caching systems now allow most output generating code (controllers, blocks, formatters, etc.) to do what we've described above automatically. A full discussion of how that works is worth a whole blog post on its own (or several), but suffice to say that enabling all of core to smartly cache output like we're doing here, in a much more robust and automated way, has been a major push in Drupal 8's development.

Categories: Elsewhere

Red Crackle: Interface

Wed, 09/09/2015 - 16:13
If you are new to object-oriented programming, you might be confused about what an Interface is and how to use it. Read this post to clear that confusion.
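As a quick taste of the concept the post covers: a PHP interface declares the methods a class promises to provide, without implementing them. (This tiny example is ours, not taken from the linked post.)

```php
<?php
// An interface is a contract: it declares method signatures only.
interface Logger {
  public function log($message);
}

// Any class implementing Logger must provide log().
class MemoryLogger implements Logger {
  private $messages = array();

  public function log($message) {
    $this->messages[] = $message;
  }

  public function count() {
    return count($this->messages);
  }
}
```

Code that type-hints on Logger can then accept any implementation without caring how messages are stored.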
Categories: Elsewhere

Nik LePage: Drupal shortcuts for Chrome browser

Wed, 09/09/2015 - 16:01
A collection of useful search engine shortcuts relevant to Drupal, for the Chrome browser (and possibly others).
Categories: Elsewhere

Modules Unraveled: 148 Getting Up to Speed with Drupal 8 with Michael Marzovilla and Nick Selvaggio - Modules Unraveled Podcast

Wed, 09/09/2015 - 15:05
Published: Wed, 09/09/15
Download this episode
Drupal 8 from a site-builder’s perspective
  • What makes Drupal 8 different from Drupal 7 and other previous versions?
  • What about Drupal 8 are you most excited about?
    • CMI
  • What contributed module(s) are you most excited to see for D8?
Drupal 8 from a developer’s perspective
  • What makes Drupal 8 different from Drupal 7 and other previous versions?
  • What about Drupal 8 are you most excited about?
  • What’s making developing contributed modules better from your perspective?
StackStarter and
  • What is StackStarter?
  • What is
Episode Links: Mike on Twitter, Mike on, Nick on Twitter, Nick on, Sego Solutions, Try Drupal 8, Stack Starter
Tags: Drupal 8, Module Development, Site-building, planet-drupal
Categories: Elsewhere

Mediacurrent: Contrib Committee Status Review for August, 2015

Wed, 09/09/2015 - 14:31

With many members of staff on vacation for part of the month, and with some projects in a crunch to finish, August was a little light for contributions. That said, I think we made some good progress on the work we did do.

Client sponsored work

Some limited but good work was sponsored this month by some of our clients:

Categories: Elsewhere

ERPAL: How to use Acquia Cloud, Pantheon and with Drop Guard

Wed, 09/09/2015 - 13:45

Many times during our beta phase I was asked whether Drop Guard is a hosting platform that will replace Acquia Cloud, Pantheon, (or any other Drupal hosting platform). The answer is clearly: NO!

On the contrary, those Drupal hosting platforms make your life even easier when you use Drop Guard to update your Drupal core and contrib modules automatically. In this blog post I want to outline the benefits of each platform and how using them leverages your work with Drop Guard. To understand the points of integration, let me first explain in a nutshell how Drop Guard works.

Drop Guard monitors your site's modules and core versions and detects new updates and their priority (security-related or not, highly critical or not, etc.). As Drop Guard is connected to your Git repository, the updates will be committed there with respect to your branching workflow. After the code base has been updated in your Git repository and patches have been re-applied, the code needs to be deployed to your server. Up to this point, Drop Guard does exactly the same work that you would have to do if you applied your updates manually (You do? Really?! ;-)). The deployment can happen via a custom integration using the "events" in Drop Guard, a small Event-Condition-Action driven feature similar to the Rules module you know from Drupal. You will find more detailed information on how to deploy Drupal updates continuously with Drop Guard in one of our blog posts. The available webhooks and SSH commands provide you with all the tools to deploy your code to your servers. And this is the point where the Drupal platforms mentioned above come into play.

Acquia Cloud

The Acquia Cloud platform is a full Drupal management platform that provides everything you need to create and host Drupal websites. The integration works either with the Acquia Cloud API or with the provided cloud hooks. You can also easily update your code base in the Acquia Cloud Site Factory manually. You will find further instructions in the deployment manual. When using Acquia Cloud, you don't need external CI tools such as Jenkins, but you do need to use one of the APIs mentioned above.

Pantheon IO

Pantheon is a management platform for Drupal and WordPress websites. Using Pantheon is pretty similar to using Acquia Cloud. It has dev, test and live environments out of the box, and the nice thing is that pushing your code branch is enough to trigger an update of your hosting environment. This architecture allows you to use Pantheon without any other external CI tools or custom integrations. If you connect the Pantheon Git repository to your Drop Guard site, Drop Guard will update your code base, re-apply your patches and update the environment immediately after pushing the updated code base to Pantheon. That's really easy. Pantheon also provides a CLI to execute commands remotely. I tried to get in contact with some of the Pantheon people for integration purposes but have no answer yet. I will keep you posted on updates here. You will also find a detailed description of the Pantheon workflow. is a new hosting platform built by the Commerce Guys. I had a closer look at the platform, as well as some personal presentations, and I find it very useful for testing updates in separate branches and instances. has developed an infrastructure to build feature-branch instances with a click or by using their API. This allows Drop Guard users to test their updates in separate feature branches and deploy those updates independently from the ongoing code changes in the dev or stage branches. As a result you will have a more reliable and less time-consuming quality assurance process when deploying updates continuously. The benefit is clearly being able to deploy updates in a shorter period of time, backed by a reliable infrastructure for QA on feature branches.

In conclusion, all of these platforms work with Drop Guard out of the box, using either external CI tools or the Drop Guard events-conditions-actions interface for integration with external services. They will save you time configuring your update deployment process. So Drop Guard will not replace any Drupal hosting platform; Drop Guard will do the update work 24/7 that usually none of your team members like to do, since they would rather develop cool new things. Drop Guard will work as your dedicated team member, using your tools and your platforms to take care of your updates around the clock.
In one of our next iterations we plan to integrate those platforms directly with Drop Guard, so that you just need to select your hosting platform and all integration features and the deployment workflow are pre-configured.

Categories: Elsewhere

Realityloop: Testing, Testing, 1, 0, 2

Wed, 09/09/2015 - 12:29
9 Sep Stuart Clark

If you haven't read part 1 of this series, I recommend doing so now, as the example tests below are going to be based on the scenario developed in that post:

In this post I will be covering how to write Simpletests, specifically targeted at our previous scenario using an installation profile based site.

Next time I'll cover Behat tests, which are much nicer for non-technical users to write, as well as wrapping the whole lot with Continuous Integration via Travis CI.


Preparing your tests

Simpletests live in your Drupal installation profile or module(s) as .test files, which contain PHP classes. As such, you not only need to create the files, but you also need to register them in your project's .info file.

My own personal preference is to keep all my simpletests in a tests directory, as I will do in the example below.

  1. Create a tests directory in your site's install profile directory.
  2. Create a project prefixed test file in the tests directory.

    e.g., profiles/fictitious/tests/fictitious.test
  3. Populate that file with a basic Simpletest stub:
    <?php

    /**
     * @file
     * Tests for the Fictitious Inc. installation profile.
     */

    /**
     * Class FictitiousTestCase
     */
    class FictitiousTestCase extends DrupalWebTestCase {

      /**
       * @inheritdoc
       */
      public static function getInfo() {
        return array(
          'name' => 'Fictitious Inc.',
          'description' => 'Test Fictitious Inc. functionality',
          'group' => 'Fictitious Inc.',
        );
      }

    }


  4. Register your test file by adding the following line to your project's .info file (e.g., profiles/fictitious/):

    files[] = tests/fictitious.test
  5. Flush Drupal cache.
  6. Enable the Testing module.
  7. Navigate to the Testing UI: admin/config/development/testing

Assuming all went as planned, you should see your tests in the Tests table.

However, your test won't actually do anything yet...


Write your tests

A simpletest consists of three main parts:


Definition - getInfo()

This required function simply returns a keyed array with information about the test: its name, description and group.

This has already been defined in the above test stub, and is relatively straightforward.


Setup - setUp()

An optional function that is run before each test function allowing you to enable modules, create users or any other common tasks required for each test.

It's unusual not to need this, as at minimum you will need to enable specific modules for the purpose of testing. For our scenario we need to do a little more than usual, as we need to ensure that our install profile is used:

  protected $profile = 'fictitious';

  public $auth_user = NULL;

  /**
   * @inheritdoc
   */
  public function setUp() {
    // Setup required modules.
    $info = system_get_info('module', 'fictitious_core');
    $modules = array_merge(array('fictitious_core'), $info['dependencies']);
    parent::setUp($modules);

    // Flush caches and revert features.
    _fictitious_core_flush_revert();

    // Create an authenticated user.
    $this->auth_user = $this->drupalCreateUser(array('access site reports'));
  }


Taking a closer look at the above, you'll see the first thing we do is set the installation profile, which is used in the parent class's setUp() function:

protected $profile = 'fictitious';


Next we stub out a variable for the authenticated user, which will be created during the actual setUp() function:

public $auth_user = NULL;


We then declare our setUp() function and get to the important part. The first thing we do is to build a list of modules and pass them through to the parent function, which will deal with all the heavy lifting.

As we are dealing with an installation profile that simply enables a Feature module, I found that not all the required modules were actually being enabled correctly. To compensate for that, the first two lines of code extract the module list from the Feature's info file before it is passed to the parent function:

  $info = system_get_info('module', 'fictitious_core');
  $modules = array_merge(array('fictitious_core'), $info['dependencies']);
  parent::setUp($modules);


Next, again to compensate for a Features-based installation profile, we need to flush all caches and revert all Feature components to ensure that we start with our configuration set properly. To do this I use a function that I use repeatedly during deployment on all the sites I work on:


Lastly, we need to create an authenticated user with the permission to do what will be required during the tests. In this case, all that user will need to do is to look at the watchdog logs to ensure that the emails would be sent to the correct recipients and contain the correct information.

To do this we use the drupalCreateUser() method, which takes an array of permissions as its only argument:

$this->auth_user = $this->drupalCreateUser(array('access site reports'));


Test(s) - test*()

Finally, the most important part: the actual tests. A single SimpleTest class can contain as many tests as you desire, each a member function prefixed with test.

e.g., public function testContactForm() {}

It is worth noting that each test function is wrapped with the setup and teardown of a full Drupal installation, so it may be more efficient to perform multiple checks in a single test function when possible.


A test consists of two main parts: Instructions and Assertions. Remember that you simply want to automate the manual tests you had been doing in the past:

Go to the front page of the website.



Check that the link to the 'Contact' page is available:


$this->assertLink(t('Contact'), t('Link to the "Contact" page is available.'));


In our case we want to submit some data to our Contact form and then check the contents of the sent email(s), which is a slightly more complex case than the above example, but still fits in with the basic concept:


1. Submit the form with dummy data.

Dummy data is generated using the randomName() method, which generates an 8-character random string containing letters and numbers.

The drupalPost() method is used to submit the form, which takes three required arguments:

  • $path - The path of the form we are submitting.
  • $edit - A keyed array of the field values we are submitting, the key being the field name attribute.
  • $submit - The label of the submit button being clicked.
  $edit = array(
    'field_contact_type[und]' => 'support',
    'field_first_name[und][0][value]' => $this->randomName(),
    'field_last_name[und][0][value]' => $this->randomName(),
    'field_email[und][0][email]' => "{$this->randomName()}",
    'field_contact_subject[und][0][value]' => $this->randomName(),
    'field_contact_message[und][0][value]' => $this->randomName(),
  );
  $this->drupalPost('contact', $edit, t('Submit'));

2. Log in as our authenticated user

Relatively straightforward, using the drupalLogin() method with our previously created user object as the argument.



3. Navigate to the email log

This is the trickiest part of this specific test case, as we need to determine the URL of the specific watchdog log entry for each of our emails.

To do this, the first thing we need to do is navigate to the watchdog logs page itself:


Thankfully, whenever a drupalGet() or drupalPost() method is invoked, a copy of the rendered markup of that page is saved as an HTML file and linked to from the Simpletest log, allowing us to see what Simpletest sees:

Using this, we are able to get the actual URLs of the two emails that are sent, being: admin/reports/event/91 and admin/reports/event/92.

We can then adjust the above drupalGet() call to go directly to one of our email logs:



4. Assert that the email content is all correct

Now that we've pulled up our email, we need to run a few assertions to make sure that the email being sent is exactly what we expect it to be. Before doing so, it's worth running the test and looking at the output of the email log so we can see what data is available for our assertions.

As we are dealing with raw text, the best choice of assertion is either: assertText() or assertRaw().

$this->assertText("to:", t('Support email sent to correct email.'));
$this->assertRaw("[from] => {$edit['field_email[und][0][email]']}", t('Support email sent from correct email.'));


All together now:

public function testContactForm() {
  $edit = array(
    'field_contact_type[und]' => 'support',
    'field_first_name[und][0][value]' => $this->randomName(),
    'field_last_name[und][0][value]' => $this->randomName(),
    'field_email[und][0][email]' => "{$this->randomName()}",
    'field_contact_subject[und][0][value]' => $this->randomName(),
    'field_contact_message[und][0][value]' => $this->randomName(),
  );
  $this->drupalPost('contact', $edit, t('Submit'));

  $this->drupalLogin($this->auth_user);

  $this->drupalGet('admin/reports/event/91');

  $this->assertText("to:", t('Support email sent to correct email.'));
  $this->assertRaw("[from] => {$edit['field_email[und][0][email]']}", t('Support email sent from correct email.'));
}



More information

There really is too much information to cover in a single post; for more information, I recommend the following resources:



Simpletests look scarier than they are, but when it comes down to it, the majority of a Simpletest can be copied from a previous one.

It really is as simple as using drupalGet() or drupalPost() to simulate your manual tests, and then using the correct assertions for the situation. And like any new language, it gets easier over time.

But next time we'll look at Behat, which is 10 times easier to write as it uses human language for its tests.


InternetDevels: Using BrowserStack for cross-browser and cross-platform compatibility testing

mer, 09/09/2015 - 09:21

Happy Tester’s Day to everyone! :) Check out the new blog post on
QA testing from InternetDevels Company. And please mind that you
can always hire a QA tester here and forget about bugs forever ;)

Read more

Drupal Association News: What’s new on Drupal.org - August 2015

Tue, 08/09/2015 - 19:47

Look for links to our Strategic Roadmap highlighting how this work falls into our priorities set by the Drupal Association Board and Working Groups.

Preparing for a Release Candidate

After a month of planning and organizational introspection in July, the Association spent August getting down to brass tacks. Since May, progress on Drupal 8 criticals has been rapid and we focused on doing our part to clear blockers for the upcoming Drupal 8 release candidate.

On August 13th, the Drupal Association engineering team met with core maintainers to discuss infrastructure blockers to a Drupal 8 release candidate. Since May, core developers have been rapidly clearing critical issues, and several Drupal.org services are blockers to the release. Fortunately, our meeting with core maintainers confirmed what we knew: DrupalCI and localize.drupal.org are the two pieces of infrastructure that need work to support Drupal 8 release candidates. We also briefly discussed infrastructure issues to support a full 8.0.0 release, as well as the path forward for supporting Composer.

We’re very excited for the upcoming release candidate and hope to celebrate with the community soon!

Drupal 8 blocker: localize.drupal.org

In August we completed the migration of localize.drupal.org to Drupal 7. This was the culmination of a tremendous amount of work by the community over the last year, and quite a bit of work by the Association over the last several months. As always with a major site migration, we had to work closely with the active users to ensure there were no functional regressions, and do quite a bit of permissions and security review, as some of the fundamental modules which power the site, such as Organic Groups, work quite differently in their Drupal 7 implementations.

Completing this upgrade also allowed us to focus on Drupal 8 release blockers in the localization system. First, we need a server-side version fallback system for translations. Second, we need to support contrib by supporting configuration translatables with external dependencies. We’ve made solid progress on the first issue, but there is still work remaining.

Our initial work on these issues was reviewed towards the end of August, and we hope to have the remaining work completed in the first couple weeks of September.

Many thanks to Gábor Hojtsy for helping us review this final work.

Drupal 8 blocker: Drupal CI

In our last update we laid out two milestones that the Association was pushing hard to reach for DrupalCI: it must meet the testing requirements for Drupal 8 core and contrib specified by core developers, and it must meet or exceed the existing functionality of the PIFT/PIFR testbots for testing Drupal 7 and Drupal 6, so that the old testbot system can be retired.

We’re very pleased to report that as of the end of August, DrupalCI meets the testing requirements set out by Drupal core developers. There are no blockers to a Drupal 8 release in the DrupalCI infrastructure!

We’ve also made major progress in several other areas:

  • We made significant strides towards testing for Drupal 7 and Drupal 6. Simpletest jobs for these versions have a very different structure so this required some careful work.
  • We now display test results directly on Drupal.org. This new display makes it much easier to see these results, and will also make it easier for us to provide email notifications for test failures.

However, there is still work to do. There is a flaw in test discovery for contrib modules that must be resolved for Drupal 8 contrib testing to be complete (in the first week of September a core patch for this was committed, thanks to jthorson).

Once contrib testing is stable and functional, our next goal is to begin phasing out the old testing infrastructure. We’ve identified several key issues that will allow us to disable the old testbots in a phased way. Once Drupal 8 core and contrib testing are burned in and core and contrib developers have had time to affirm that the PIFT/PIFR bots are no longer needed for D8 testing, we will phase out PIFT/PIFR’s Drupal 8 testbots. Because Drupal 8 testing represents the majority of testing volume on our infrastructure at this time, being able to disable the redundant bots will provide a significant cost savings for the Association.

We’ll then focus on making sure that Drupal 7 and Drupal 6 testing are equally functional before phasing those bots out as well. Finally we’ll need to transition to a static archive of the past test results.

In the meantime we are still asking project maintainers to enable DrupalCI for their projects and provide us with their feedback in this issue.

Search Improvements

The next item on our roadmap is improving the search experience. To make significant improvements in this area there was some pre-work we needed to do, both in planning and infrastructure. Earlier in the year the Association sat down with a consultant to provide recommendations on ways to improve our Solr configuration. At the same time, there were a number of new features of Solr itself in version 5 that we would not be able to take advantage of with our existing Solr 3 installation.

So we began our pre-work by creating a pre-production environment that would allow us to test our changes to search - evaluating what it would take to reindex with Solr 5 - and ultimately upgrading our production search servers to support Solr 5 and performing that reindex.

In parallel, we expanded the criteria that we would use to evaluate the success of our search improvements - drafting user stories that concretely define what a better search means for the variety of types of content that a user might be searching for.

Going into September we’ll then be implementing small iterative changes to our Solr configuration to tune our search results to meet these user stories.

Incremental Improvements to Issue Queues

We’ve also made several incremental improvements to the issue queues during the month of August. We started by making an automatic first comment on issues when the reporter first creates an issue. This will allow the reporter of an issue to be credited by the maintainer when the issue is closed, even if that reporter does not make additional comments on the issue.

As we were making this change we took the opportunity to change a subtle detail about the issue summary. Previously the issue summary was attributed to the initial reporter. However, because issue summaries can be edited by anyone, this attribution was misleading. We have removed this attribution from the summary, and instead added the original reporter attribution to the issue meta data in the sidebar.

There is some follow up work to do to allow the initial reporter to adjust their organization/customer attribution in that automatically generated first comment. We will likely also look into allowing credit attributions for users who did not comment in the issue.

Many thanks to the community members who provided their feedback on these changes as we were making them. Making your voices heard allowed us to improve on these changes even as we were making them.

Performance Profiling for the new Content Model

Another key deployment that is very close for Drupal.org is the first iteration of the new content model. Content Strategy for Drupal.org has been a major initiative for most of the past year, and we have built out the first iteration of that feature set. It should enable a new content organization model on Drupal.org, with sections that have individual governance and maintainership, and lay the groundwork for a new navigational paradigm for the site. These changes won’t be immediately apparent in terms of visual changes to Drupal.org, but instead provide structural tools to make it easier to govern and maintain content on the site. This work will be the basis for our improvements to Documentation on Drupal.org.

Before making this deployment we wanted to ensure that the new features would be performant. Drupal.org has a tremendous amount of content, is exceptionally highly trafficked, and provides services that are essential for the community to develop Drupal itself. We need to be sure that the new modules and features we deploy will be performant before rolling these new features out. We set up a new integration environment on which we could run some performance profiling tests, and in light of that testing we feel confident we’ll be able to deploy this first iteration very soon.

Revenue-related projects (funding our work) DrupalCon Asia

August also saw the full site launch of DrupalCon Asia. It’s a beautiful site and we hope you’ll check it out and join us in highlighting the strength of the community in India. This is our third site launch on the unified subsite, and the new multi-event system is paying dividends, allowing us to launch these Con sites more quickly, more consistently, and more reliably on schedule than ever before.

The call for papers is open now - so if you’re going to join us for the first DrupalCon in Asia, please submit your session proposals now! We’re also accepting applications for grants and scholarships, and looking for volunteers to mentor the sprints.

Improvements to Drupal Jobs

We’d like to give a special thanks to community member Matt Holford, who volunteered his time in August to help us make improvements to the Drupal Jobs board. He helped to improve the way job postings are listed, helped us adjust how renewed postings would be sorted, and helped improve the data we gather so we can provide a better home for Drupal careers.

Thanks, Matt!

Sustaining Support and Maintenance

Every month there is infrastructure to be maintained and improved, and August was no exception. We performed a number of tasks, including updating the SSL cert, rebuilding one of our Solr servers to support the upgrade to Solr version 5, and improving the stability and redundancy of our load balancers.

We also made adjustments to how we maintain our dynamically scaling infrastructure for DrupalCI test bots on Amazon. In August we upsized and rebuilt the dispatcher instance to take advantage of on the fly compression and provide us more capacity for the high volume of testing we’ve already been doing on the new system. Providing testing to the community represents a significant infrastructural cost for the Drupal Association, so we have also been focusing on ways to improve the efficiency of testing and reduce our expenses.


As always, we’d like to say thanks to all volunteers who are working with us and to the Drupal Association Supporters, who made it possible for us to work on these projects.

Follow us on Twitter for regular updates: @drupal_org, @drupal_infra


DrupalCon News: Session Spotlight: Let's talk Drupal in your Language

Tue, 08/09/2015 - 19:18

DrupalCon has been offered in more than 20 cities around the world since it started in 2005 as a place to talk about Drupal. Between a decade of Cons and countries, the global reach of Drupal has really taken shape. With 112 translation groups on localize.drupal.org, it becomes very apparent that translating Drupal is a huge community effort - almost 6,000 strong.


Pronovix: Help Drupal win the Context.IO App Challenge

Tue, 08/09/2015 - 18:11

Voting just opened for the Popular choice prize of the Context.IO App Challenge on DevPost. We’ve submitted a Drupal module for the competition and it would be really great if we could win the Popular choice prize. We are, however, not asking you to vote to support us; instead we would like you to vote for the Drupal community: if we win the prize we will use it to prove to developer tool companies that Drupal is a market worth investing in. Should we win, we’ll split the $5k prize money over two sponsorships for upcoming DrupalCamps.


Mike Crittenden: How Drupal 7 Works, Part 2: The Bootstrap

Tue, 08/09/2015 - 17:28

This is a chapter out of my in-progress book, Drupal Deconstructed. You can read it online for free, download it as a PDF/ePUB/MOBI, or contribute to it on GitHub.

So George's request for /about-us has been handed to Drupal, and index.php is ready to bootstrap Drupal. What does that mean?

A quick summary

At a code level, we're talking about the drupal_bootstrap function, which lets you pass in a parameter to tell it which level of bootstrap you need. In almost all cases, we want a "full" bootstrap, which usually means "this is a regular page request, nothing weird, so just give me everything."

What is "everything"? I'm glad you asked. All of the possible values for the parameter for drupal_bootstrap() are listed below. Note that they are run sequentially, meaning if you call it with DRUPAL_BOOTSTRAP_CONFIGURATION then it will only do that one (#1), but if you call it with DRUPAL_BOOTSTRAP_SESSION then it will do that one (#5) and all of the ones before it (#1-4). And since DRUPAL_BOOTSTRAP_FULL is last, calling it gives you everything in this list.

  1. DRUPAL_BOOTSTRAP_CONFIGURATION: Set up some configuration
  2. DRUPAL_BOOTSTRAP_PAGE_CACHE: Try to serve the page from the cache (in which case the rest of these steps don't run)
  3. DRUPAL_BOOTSTRAP_DATABASE: Initialize the database connection
  4. DRUPAL_BOOTSTRAP_VARIABLES: Load variables from the variables table
  5. DRUPAL_BOOTSTRAP_SESSION: Initialize the user's session
  6. DRUPAL_BOOTSTRAP_PAGE_HEADER: Set HTTP headers to prepare for a page response
  7. DRUPAL_BOOTSTRAP_LANGUAGE: Initialize language types for multilingual sites
  8. DRUPAL_BOOTSTRAP_FULL: Includes a bunch of other files and does some other miscellaneous setup.

Each of these is described in more detail below.
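The sequential behavior can be sketched with a simplified runner. This is a toy model, not Drupal's actual implementation (which uses integer constants and a static variable to track the current phase), but it captures the "requesting phase N runs everything up to N, each at most once" idea:

```php
<?php
// Toy model of drupal_bootstrap(): phases are ordered, and requesting a
// phase runs every phase up to and including it, each at most once.
function bootstrap_sketch($target, array &$completed) {
  $phases = array('configuration', 'page_cache', 'database', 'variables',
                  'session', 'page_header', 'language', 'full');
  foreach ($phases as $phase) {
    if (!in_array($phase, $completed)) {
      $completed[] = $phase; // a real phase would do its setup work here
    }
    if ($phase === $target) {
      break;
    }
  }
  return $completed;
}
```

Asking for 'session' runs phases one through five; asking for 'page_header' afterwards only adds the one phase not yet completed.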


DRUPAL_BOOTSTRAP_CONFIGURATION

This guy just calls _drupal_bootstrap_configuration(), which in turn does the following:

Sets error and exception handlers

set_error_handler('_drupal_error_handler');
set_exception_handler('_drupal_exception_handler');

These lines set a custom error handler (_drupal_error_handler()) and a custom exception handler (_drupal_exception_handler) respectively. That means that those functions are called when Drupal encounters a PHP error or exception.

These functions each go a few levels deep, but all they're really doing is attempting to log any errors or exceptions that may occur, and then throw a 500 Service unavailable response.

Initializes the PHP environment

drupal_environment_initialize();

The drupal_environment_initialize() function called here does a lot, most of which isn't very interesting. For example:

  • It tinkers with the global $_SERVER array a little bit.
  • It sets the configuration for error reporting
  • It sets some session configuration using ini_set()


That said, it does have this nugget:

$_GET['q'] = request_path();

It might not look like much, but this is what makes Clean URLs work. We always need $_GET['q'] to be set to the current path because $_GET['q'] is used all over the place. If you have Clean URLs disabled, then that happens by default, because your URLs look like, so PHP parses the q parameter for you. In that case, the line of code above will call request_path(), which sees that $_GET['q'] already exists, and returns it directly.

But if you have Clean URLs enabled (you do, right?), and your URLs look like, then $_GET['q'] is empty by default, and that just won't do. To fix that, it gets populated with the value of request_path(), which basically just cleans up the result of $_SERVER['REQUEST_URI'] (i.e., removes query strings as well as script filenames such as index.php or cron.php) and returns that.
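That clean-up can be approximated with a standalone sketch. The function name request_path_sketch and the explicit $get/$request_uri parameters are ours; Drupal's real request_path() reads the superglobals directly and also handles subdirectory installs:

```php
<?php
// Simplified sketch of recovering the Drupal path from a request: prefer
// an existing ?q= value (Clean URLs disabled), otherwise strip the query
// string and any front-controller script name from REQUEST_URI.
function request_path_sketch(array $get, $request_uri) {
  if (isset($get['q'])) {
    return $get['q']; // PHP already parsed ?q= for us
  }
  $path = parse_url($request_uri, PHP_URL_PATH);             // drop ?query
  $path = preg_replace('@^/(index|cron)\.php@', '', $path);  // drop script name
  return trim($path, '/');
}
```

Both URL styles end up producing the same internal path, which is exactly why the rest of Drupal can rely on $_GET['q'] unconditionally.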

Starts a timer

timer_start('page');

This is actually pretty nifty. Drupal has a global $timers variable that many people don't know about.

Here, a timer is started so that the time it takes to render the page can be measured.

Initializes some critical settings

drupal_settings_initialize();

The drupal_settings_initialize() function is super important, for exactly 2 reasons:

  1. It includes the all-important settings.php file which contains our database connection info (which isn't used yet), among other things.
  2. It creates many of our favorite global variables, such as $cookie_domain, $conf, $is_https, and more!

And that's the end of the CONFIGURATION bootstrap. 1 down, 7 to go!


DRUPAL_BOOTSTRAP_PAGE_CACHE

When bootstrapping the page cache, everything happens inside _drupal_bootstrap_page_cache(), which does a lot of work.

Includes and any custom cache backends

require_once DRUPAL_ROOT . '/includes/';
foreach (variable_get('cache_backends', array()) as $include) {
  require_once DRUPAL_ROOT . '/' . $include;
}

This bit of fanciness allows us to specify our own cache backend(s) instead of using Drupal's database cache.

This is most commonly used to support memcache, but someone could really go to town with this if they wanted, just by specifying (in the $conf array in settings.php) an include file to use (such as for whatever cache backend they're wanting to use.

Checks to see if cache is enabled

// Check for a cache mode force from settings.php.
if (variable_get('page_cache_without_database')) {
  $cache_enabled = TRUE;
}
else {
  drupal_bootstrap(DRUPAL_BOOTSTRAP_VARIABLES, FALSE);
  $cache_enabled = variable_get('cache');
}

You'll note that the first line there gives you a way to enable cache from settings.php directly. This speeds things up because that way it doesn't need to bootstrap DRUPAL_BOOTSTRAP_VARIABLES (i.e., load all of the variables from the DB table) which would also force it to bootstrap DRUPAL_BOOTSTRAP_DATABASE, which is a requirement for fetching the variables from the database, all just to see if the cache is enabled.

So assuming you don't have $conf['page_cache_without_database'] = TRUE in your settings.php file, then we will be bootstrapping the variables here, which in turn bootstraps the database. Both of those will be talked about in more detail in a minute.
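The decision logic can be sketched as a tiny standalone function. Everything here is a toy: the $conf array stands in for settings.php, and the $load_variables callback stands in for the VARIABLES (and therefore DATABASE) bootstrap phases that the real code triggers:

```php
<?php
// Sketch of the page-cache decision: if page_cache_without_database is
// forced in settings.php, skip loading variables (and the database)
// entirely; otherwise load the variables and read the 'cache' variable.
function cache_enabled_sketch(array $conf, callable $load_variables) {
  if (!empty($conf['page_cache_without_database'])) {
    return TRUE; // no database round-trip needed
  }
  $variables = $load_variables(); // would bootstrap DATABASE + VARIABLES
  return !empty($variables['cache']);
}
```

The payoff of the forced mode is visible in the sketch: the expensive loader callback is never invoked at all.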

Blocks any IP-blacklisted users

drupal_block_denied(ip_address());

This does exactly what you'd expect - checks to see if the user's IP address is in the list of blacklisted addresses, and if so, returns a 403 Forbidden response. This doesn't strictly have anything to do with caching, except for the fact that it needs to block cached responses from blacklisted users and this is its last chance to do that.

An interesting thing to note here is that the ip_address() function is super useful. On a normal site it just returns regular old $_SERVER['REMOTE_ADDR'], but if you're using some sort of reverse proxy in front of Drupal (meaning $_SERVER['REMOTE_ADDR'] will always be the same), then it fetches the user's IP from the (configurable) HTTP header. Pretty awesome.

But beware that if you have $conf['page_cache_without_database'] = TRUE in settings.php then it won't fetch blocked IPs from the database, because it wouldn't have bootstrapped DRUPAL_BOOTSTRAP_VARIABLES yet (re-read the previous section to see what I mean). Tricky, tricky!
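The reverse-proxy behavior of ip_address() can be sketched roughly like so. This is a simplification under stated assumptions: the real function also checks the proxy's address against a configured list of trusted proxies, and the header name comes from the reverse_proxy_header variable rather than a parameter:

```php
<?php
// Rough sketch of ip_address()-style resolution: behind a reverse proxy,
// REMOTE_ADDR is the proxy itself, so take the last hop from the
// forwarding header (X-Forwarded-For by default) instead.
function client_ip_sketch(array $server, $behind_proxy, $header = 'HTTP_X_FORWARDED_FOR') {
  $ip = $server['REMOTE_ADDR'];
  if ($behind_proxy && !empty($server[$header])) {
    $forwarded = array_map('trim', explode(',', $server[$header]));
    $ip = array_pop($forwarded); // last entry: the hop closest to the proxy
  }
  return $ip;
}
```

Without the proxy flag the function degrades gracefully to plain REMOTE_ADDR, which matches the "on a normal site" behavior described above.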

Checks to see if there's a session cookie

if (!isset($_COOKIE[session_name()]) && $cache_enabled) {
  // ...fetch and return cached response if there is one...
}

It only returns a cached response (assuming one exists to return) if the user doesn't have a valid session cookie. This is a way of ensuring that only anonymous users see cached pages, and authenticated users don't. (What's that? You want authenticated users to see cached pages too?)

What's inside that "fetch and return cached response" block? Lots of stuff!

Populates the global $user object

$user = drupal_anonymous_user();

The drupal_anonymous_user() function just creates an empty user object with a uid of 0. We're creating it here just because it may need to be used later on down the line, such as in some hook_boot() implementation, and also because its timestamp will be checked and possibly logged.

Checks to see if the page is already cached

$cache = drupal_page_get_cache();

The drupal_page_get_cache() function is actually simpler than you'd think. It just checks to see if the page is cacheable (i.e., if the request method is either GET or HEAD, as determined by drupal_page_is_cacheable()), and if so, it runs cache_get() with the current URL against the cache_page database table to fetch the cache, if there is one.
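That lookup can be modeled in a few lines. Here the $cache_bin array stands in for the cache_page table, and page_get_cache_sketch is a made-up name for illustration:

```php
<?php
// Sketch of the page-cache lookup: only GET and HEAD requests are
// cacheable, and the cache key is the full request URL.
function page_get_cache_sketch($method, $url, array $cache_bin) {
  if (!in_array($method, array('GET', 'HEAD'))) {
    return NULL; // POST, PUT, etc. are never served from the page cache
  }
  return isset($cache_bin[$url]) ? $cache_bin[$url] : NULL;
}
```

A NULL result here is what leads to the X-Drupal-Cache: MISS path described below, while a hit short-circuits the rest of the bootstrap.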

Serves the response from that cache

If $cache from the previous section isn't empty, then we have officially found ourselves a valid page cache for the current page, and we can return it and shut down. This block of code does a few things:

  • Sets the page title using drupal_set_title()
  • Sets an HTTP header: X-Drupal-Cache: HIT
  • Sets PHP's default timezone to the site's default timezone (from variable_get('date_default_timezone'))
  • Runs all implementations of hook_boot(), if the page_cache_invoke_hooks variable isn't set to FALSE.
  • Serves the page from cache, using drupal_serve_page_from_cache($cache), which is scary looking but basically just adds some headers and prints the cache data (i.e., the page body).
  • Runs all implementations of hook_exit(), if the page_cache_invoke_hooks variable isn't set to FALSE.

And FINALLY, once all of that is complete, it runs exit; and we're done, assuming we got this far.

Otherwise, it doesn't do any of the above, and just sets the X-Drupal-Cache: MISS header.

Whew. That's a lot of stuff. Luckily, the next section is easier.


DRUPAL_BOOTSTRAP_DATABASE

We're not going to get super in the weeds with everything Drupal does with the database here, since that deserves its own chapter, but here's an overview of the parts that happen while bootstrapping, within the _drupal_bootstrap_database() function.

Checks to see if we have a database configured

if (empty($GLOBALS['databases']) && !drupal_installation_attempted()) {
  include_once DRUPAL_ROOT . '/includes/';
  install_goto('install.php');
}

Nothing fancy. If we don't have anything in $GLOBALS['databases'] and we haven't already started the installation process, then we get booted to /install.php, since Drupal assumes we need to install the site.

Includes the file

This beautiful beautiful file includes all of the database abstraction functions that we know and love, such as db_query() and db_select() and db_update().

It also holds the base Database and DatabaseConnection and DatabaseTransaction classes (among a bunch of others).

It's a 3000+ line file, so it's out of scope for a discussion on bootstrapping, and we'll get back to "How Drupal Does Databases" in a later chapter.

Registers autoload functions for classes and interfaces

spl_autoload_register('drupal_autoload_class');
spl_autoload_register('drupal_autoload_interface');

This is just a tricky way of ensuring that a class or interface actually exists when we try to autoload one. Both drupal_autoload_class() and drupal_autoload_interface() just call registry_check_code(), which looks for the given class or interface first in the cache_bootstrap table, then the registry table if no cache is found.

If it finds the class or interface, it will require_once the file that contains that class or interface and return TRUE. Otherwise, it just returns FALSE so Drupal knows that somebody screwed the pooch and we're looking for a class or interface that doesn't exist.

So, in English, it's saying "Ok, it looks like you're trying to autoload a class or an interface, so I'll figure out which file it's in by checking the cache or the registry database table, and then include that file, if I can find it."
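A map-based autoloader in that spirit might look like this. It's a toy: the registry here is a plain array pointing at a temp file, whereas Drupal's real registry lives in the database and is consulted through registry_check_code():

```php
<?php
// Toy version of registry-based autoloading: a lookup table maps class
// names to the files that define them, and the autoloader includes the
// right file on demand.
$registry_file = tempnam(sys_get_temp_dir(), 'reg');
file_put_contents($registry_file, '<?php class RegistryDemo { public $loaded = TRUE; }');

$registry = array('RegistryDemo' => $registry_file);

spl_autoload_register(function ($name) use ($registry) {
  if (isset($registry[$name])) {
    require_once $registry[$name]; // found in the "registry": load it
  }
  // Otherwise do nothing, so PHP reports the class as missing.
});

$object = new RegistryDemo(); // triggers the autoloader
```

The `new RegistryDemo()` line works even though no file was explicitly included, which is exactly the behavior Drupal gets from its registry.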


DRUPAL_BOOTSTRAP_VARIABLES

This one just calls _drupal_bootstrap_variables(), which actually does a good bit more than just loading the variables from the variables table.

Here's what it does:

Initializes the locking system

require_once DRUPAL_ROOT . '/' . variable_get('lock_inc', 'includes/');
lock_initialize();

Drupal's locking system allows us to create arbitrary locks on certain operations, to prevent race conditions and other bad things. If you're interested to read more about this, there is a very good API page about it.

The two lines of code here don't actually acquire any locks, they just initialize the locking system so that later code can use it. In fact, it's actually used in the very next section, which is why it's initialized in this seemingly unrelated phase of the bootstrap process.

Load variables from the database

global $conf;
$conf = variable_initialize(isset($conf) ? $conf : array());

The variable_initialize() function basically just returns everything from the variables database table, which in this case adds it all to the global $conf array, so that we can variable_get() things from it later.

But there are a few important details:

  1. It tries to load from the cache first, by looking for the variables cache ID in the cache_bootstrap table.
  2. Assuming the cache failed, it tries to acquire a lock to avoid a stampede if a ton of requests are all trying to grab the variables table at the same time.
  3. Once it has the lock acquired, it grabs everything from the variables table.
  4. Then it caches all of that, so that step 1 won't fail next time.
  5. Finally, it releases the lock.
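Those five steps amount to a classic cache-with-lock pattern, which can be sketched as follows. The arrays here stand in for the cache bin, the lock service, and the variables table; the real variable_initialize() also re-checks the cache after acquiring the lock and can wait on a lock held by another request:

```php
<?php
// Sketch of the variable_initialize() pattern: try the cache, fall back
// to the database under a lock, then repopulate the cache.
function load_variables_sketch(array &$cache, array &$locks, array $db_table) {
  if (isset($cache['variables'])) {
    return $cache['variables'];          // 1. cache hit: done
  }
  $locks['variable_init'] = TRUE;        // 2. acquire the lock
  $variables = $db_table;                // 3. read the variables table
  $cache['variables'] = $variables;      // 4. warm the cache for next time
  unset($locks['variable_init']);        // 5. release the lock
  return $variables;
}
```

On the second call the cache is warm, so neither the lock nor the database is touched, which is the whole point of step 4.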
Load all "bootstrap mode" modules

require_once DRUPAL_ROOT . '/includes/';
module_load_all(TRUE);

Note that this may seem scary (OH MY GOD we're loading every single module just to bootstrap the variables!) but it's not. That TRUE is a big deal, because that tells Drupal to only load the "bootstrap" modules. A "bootstrap" module is a module that has the bootstrap column in the system table set to 1 for it.

On the typical Drupal site, this will only be a handful of modules that are specifically required this early in the bootstrap, like the Syslog module or the System module, or some contrib modules like Redirect or Variable.

Sanitize the destination URL parameter

Here's another one that you wouldn't expect to happen as part of bootstrapping variables.

The $_GET['destination'] parameter needs to be protected against open redirect attacks leading to other domains. So what we do here is to check to see if it's set to an external URL, and if so, we unset it.

The reason we have to wait for the variables bootstrap for this is that we need to call the url_is_external() function to check the destination URL, and that function calls drupal_strip_dangerous_protocols() which has a variable to store the list of allowed protocols.
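The guard itself can be approximated in a few lines. This is a simplification: the check here only looks for an absolute URL with a scheme, while Drupal's real url_is_external() also strips dangerous protocols and handles protocol-relative URLs:

```php
<?php
// Sketch of the open-redirect guard: if ?destination= parses as an
// absolute external URL (scheme://...), drop it so a login or form
// submission can't redirect the user off-site.
function sanitize_destination_sketch(array $get) {
  if (isset($get['destination'])
      && preg_match('@^[a-z][a-z0-9+.-]*://@i', $get['destination'])) {
    unset($get['destination']);
  }
  return $get;
}
```

Internal paths like node/1 survive untouched; only fully qualified external URLs are discarded.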


DRUPAL_BOOTSTRAP_SESSION

Bootstrapping the session means requiring the includes/ file and then running drupal_session_initialize(), which is a pretty fun function.

Register custom session handlers

The first thing that happens here is that Drupal registers custom session handlers with PHP:

session_set_save_handler('_drupal_session_open',
                         '_drupal_session_close',
                         '_drupal_session_read',
                         '_drupal_session_write',
                         '_drupal_session_destroy',
                         '_drupal_session_garbage_collection');

If you've never seen the session_set_save_handler() PHP function before, it just allows you to set your own custom session storage functions, so that you can have full control over what happens when sessions are opened, closed, read, written, destroyed, or garbage collected. As you can see above, Drupal implements its own handlers for all 6 of those.

What does Drupal actually do in those 6 handler functions?

  • _drupal_session_open() and _drupal_session_close() both literally just return TRUE;.
  • _drupal_session_read() fetches the session from the sessions table, and does a join on the users table to include the user data along with it.
  • _drupal_session_write() checks to see if the session has been updated in the current page request or more than 180 seconds have passed since the last update, and if so, it gathers up session data and drops it into the sessions table with a db_merge().
  • _drupal_session_destroy() just deletes the appropriate row from the sessions DB table, sets the global $user object to be the anonymous user, and deletes cookies.
  • _drupal_session_garbage_collection() deletes all sessions from the sessions table that are older than whatever the max lifetime is set to in PHP (i.e., whatever session.gc_maxlifetime is set to).
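The write-throttling check inside _drupal_session_write() can be sketched as a tiny predicate. The 180-second constant mirrors the description above; everything else (the function name, the explicit timestamps) is simplified for illustration:

```php
<?php
// Sketch: only write the session back to storage if its contents changed
// during this request, or if more than 180 seconds have passed since the
// last write, so idle authenticated traffic doesn't hammer the sessions
// table on every page view.
function session_needs_write_sketch($changed, $last_write, $now) {
  return $changed || ($now - $last_write) > 180;
}
```

An unchanged session seen 100 seconds after its last write is skipped; the same session at 200 seconds gets its timestamp refreshed.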
If we already have a session cookie, then start the session

We then check to see if there's a valid session cookie in $_COOKIE[session_name()], and if so, we run drupalSessionStart — sorry, drupal_session_start(). If you're a PHP developer and you just want to know where session_start() happens, then you've found it.

That's basically all that drupal_session_start() does, besides making sure that we're not a command line client and we haven't already started the session.

Disable page cache for this request

Remember back in the DRUPAL_BOOTSTRAP_PAGE_CACHE section where I said that authenticated users don't get cached pages (unless you use something outside of Drupal core)? This is the part that makes that happen.

if (!empty($user->uid) || !empty($_SESSION)) {
  drupal_page_is_cacheable(FALSE);
}

So if we have a session or a nonzero user ID, then we mark this page as uncacheable, because we may be seeing user-specific data on it which we don't want anyone else to see.

If we don't already have a session cookie, lazily start one

This part's tricky. Drupal lazily starts sessions at the end of the request, so all the bootstrap process has to do is create a session ID and tell $_COOKIE about it, so that it can get picked up at the end.


I won't go in detail here since we're talking about the bootstrap, but at the end of the request, drupal_page_footer() or drupal_exit() (depending on which one is responsible for closing this particular request) will call drupal_session_commit(), which checks to see if there's anything in $_SESSION that we need to save to the database, and will run drupal_session_start() if so.

Sets PHP's default timezone from the user's timezone

date_default_timezone_set(drupal_get_user_timezone());

You may remember that the cache bootstrap above was responsible for setting the timezone for cached pages. This is where the timezone gets set for uncached pages.

The drupal_get_user_timezone() is very simple. It just checks to see if user-configurable timezones are enabled and the user has one set, and uses that if so, otherwise it falls back to the site's default timezone setting.
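The whole function is only a few lines (lightly paraphrased from core):

```php
// Lightly paraphrased from Drupal 7's drupal_get_user_timezone().
function drupal_get_user_timezone() {
  global $user;
  // Use the user's own timezone if the feature is on and one is set.
  if (variable_get('configurable_timezones', 1) && $user->uid && $user->timezone) {
    return $user->timezone;
  }
  // Fall back to the site default, or PHP's default as a last resort.
  return variable_get('date_default_timezone', @date_default_timezone_get());
}
```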


This is probably the simplest of the bootstrap levels. It does 2 very simple things in the _drupal_bootstrap_page_header() function.

Invokes hook_boot() bootstrap_invoke_all('boot');

If you've ever wondered how much of the bootstrap process has to complete before you can be guaranteed that hook_boot will run, there's your answer. Remember that for cached pages, it will have already run back in the cache bootstrap, but for uncached pages, this is where it happens.
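For reference, a hook_boot() implementation takes no arguments and should stay cheap, since it runs on every request, cached or not. A minimal sketch (the module name is hypothetical):

```php
/**
 * Implements hook_boot().
 *
 * Runs on every request, even cached ones, so keep it lightweight:
 * the full module system isn't loaded yet at this point.
 */
function mymodule_boot() {
  // Hypothetical example: tag every response with a custom header.
  // drupal_add_http_header() lives in bootstrap.inc, so it's available here.
  drupal_add_http_header('X-Got-Booted', '1');
}
```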

Sends initial HTTP headers

There's a little bit of a call stack here. drupal_page_header() calls drupal_send_headers() which calls drupal_get_http_header() to finally fetch the headers that it needs to send.

Note that in this run, it just sends a couple default headers (Expires and Cache-Control), but the interesting part is that static caches are used throughout, and anything can call drupal_add_http_header() later on down the line, which will also call drupal_send_headers(). This allows you to append or replace existing headers before they actually get sent anywhere.
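For example, later code can append or replace headers like this (the header values here are hypothetical):

```php
// Anywhere later in the request, before output is flushed:
drupal_add_http_header('X-Frame-Options', 'SAMEORIGIN');

// Or replace a default header that was sent during this phase:
drupal_add_http_header('Cache-Control', 'public, max-age=300');
```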


In this level, the drupal_language_initialize() function is called. This function only really does anything if we're talking about a multilingual site. It checks drupal_multilingual(), which just returns TRUE if the number of languages is greater than 1, and FALSE otherwise.

If it's not a multilingual site, it bails out right there.

If it is a multilingual site, then it initializes the system using language_initialize() for each of the language types that have been configured, and then runs all hook_language_init() implementations.

This is a good time to note that the language system is complicated and confusing, with a web of "language types" (such as LANGUAGE_TYPE_INTERFACE and LANGUAGE_TYPE_CONTENT) and "language providers", and of course actual languages. It deserves a chapter of its own, so I'm not going to go into any more detail here.


And we have landed. Now that we already have the building blocks like a database and a session and configuration, we can add All Of The Other Things. And the _drupal_bootstrap_full() function does just that.

Requires a ton of files

require_once DRUPAL_ROOT . '/' . variable_get('path_inc', 'includes/');
require_once DRUPAL_ROOT . '/includes/';
require_once DRUPAL_ROOT . '/includes/';
require_once DRUPAL_ROOT . '/' . variable_get('menu_inc', 'includes/');
require_once DRUPAL_ROOT . '/includes/';
require_once DRUPAL_ROOT . '/includes/';
require_once DRUPAL_ROOT . '/includes/';
require_once DRUPAL_ROOT . '/includes/';
require_once DRUPAL_ROOT . '/includes/';
require_once DRUPAL_ROOT . '/includes/';
require_once DRUPAL_ROOT . '/includes/';
require_once DRUPAL_ROOT . '/includes/';
require_once DRUPAL_ROOT . '/includes/';
require_once DRUPAL_ROOT . '/includes/';

All that stuff that we haven't needed yet but may need after this, we require here, just in case. That way, we're not having to load files on the fly if we happen to be using AJAX later, or if we happen to be sending an email.

Load all enabled modules

The module_load_all() function does exactly what you'd expect: it grabs the name of every enabled module using module_list() and then runs drupal_load() on each one to load it. There's also a static cache in this function so that it only runs once per request.
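The whole function is short (lightly paraphrased from core):

```php
// Lightly paraphrased from Drupal 7's module_load_all().
function module_load_all($bootstrap = FALSE) {
  static $has_run = FALSE;

  if (isset($bootstrap)) {
    foreach (module_list(TRUE, $bootstrap) as $module) {
      drupal_load('module', $module);
    }
    // $has_run only flips to TRUE for the full (non-bootstrap) load.
    $has_run = !$bootstrap;
  }
  return $has_run;
}
```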

Registers stream wrappers

The file_get_stream_wrappers() function has a lot of meat to it, but it's all details around a fairly simple task.

At a high level, it's grabbing all stream wrappers using hook_stream_wrappers(), allowing modules the chance to alter them using hook_stream_wrappers_alter(), and then registering (or overriding) each of them using stream_wrapper_register(), which is a plain old PHP function. It then sticks the result in a static cache so that it only runs all of this once per request.
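A module registers its wrapper by returning metadata from hook_stream_wrappers(), along these lines (the scheme, class, and module names here are hypothetical):

```php
/**
 * Implements hook_stream_wrappers().
 */
function mymodule_stream_wrappers() {
  return array(
    // Registers a hypothetical myscheme:// URI scheme.
    'myscheme' => array(
      'name' => t('Example files'),
      // A class implementing DrupalStreamWrapperInterface.
      'class' => 'MyModuleStreamWrapper',
      'description' => t('Files served through mymodule.'),
      'type' => STREAM_WRAPPERS_NORMAL,
    ),
  );
}
```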

Initializes the path

The drupal_path_initialize() function is called, which just makes sure that $_GET['q'] is set up (if it's not, then it sets it to the front page path), and then runs it through drupal_get_normal_path() to see if it's a path alias, and if so, replaces it with the internal path.

This also gives modules a chance to alter the inbound URL. Before drupal_get_normal_path() returns the path, it calls all implementations of hook_url_inbound_alter() to do just that.
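An implementation receives the path by reference, so rewriting legacy URLs is as simple as this (a hypothetical example):

```php
/**
 * Implements hook_url_inbound_alter().
 */
function mymodule_url_inbound_alter(&$path, $original_path, $path_language) {
  // Hypothetical: serve legacy "blog/123" URLs from "node/123" internally.
  if (preg_match('|^blog/([0-9]+)$|', $path, $matches)) {
    $path = 'node/' . $matches[1];
  }
}
```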

Sets and initializes the site theme

menu_set_custom_theme();
drupal_theme_initialize();

These two fairly innocent looking functions are NOT messing around.

The purpose of menu_set_custom_theme() is to allow modules or theme callbacks to dynamically set the theme that should be used to render the current page. To do this, it calls menu_get_custom_theme(TRUE), which is a bit scary looking, but doesn't do much besides asking modules for a theme and saving the result to a static cache.
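Modules get their say through hook_custom_theme(), which just returns a theme's machine name. A minimal sketch (the module name and condition are hypothetical):

```php
/**
 * Implements hook_custom_theme().
 */
function mymodule_custom_theme() {
  // Hypothetical: render everything under embed/* with a bare-bones theme.
  if (arg(0) == 'embed') {
    return 'stark';
  }
  // Returning nothing leaves the default theme selection alone.
}
```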

After that, the drupal_theme_initialize() comes along and goes to town.

First, it just loads all themes using list_themes(), which is where the .info file for each theme gets parsed and the lists of CSS files, JS files, regions, etc., get populated.

Secondly, it tries to find the theme to use by checking to see if the user has a custom theme set, and if not, falling back to the theme_default variable.

$theme = !empty($user->theme) && drupal_theme_access($user->theme) ? $user->theme : variable_get('theme_default', 'bartik');

Then it checks to see if a different custom theme was chosen on the fly in the previous step (the menu_set_custom_theme() function), by running menu_get_custom_theme() (remember that static cache). If there was a custom theme returned, then it uses that, otherwise it keeps the default theme.

$custom_theme = menu_get_custom_theme();
$theme = !empty($custom_theme) ? $custom_theme : $theme;

Once it has firmly decided on what dang theme is going to render the dang page, it can move on to building a list of base themes or ancestor themes.

$base_theme = array();
$ancestor = $theme;
while ($ancestor && isset($themes[$ancestor]->base_theme)) {
  $ancestor = $themes[$ancestor]->base_theme;
  $base_theme[] = $themes[$ancestor];
}

It needs that list because it needs to initialize any ancestor themes along with the main theme, so that theme inheritance can work. So it then runs _drupal_theme_initialize() on each of them, which adds the necessary CSS and JS, and then initializes the correct theme engine, if needed.

After that, it resets the drupal_alter() cache, because themes can have alter hooks, and we wouldn't want to ignore them because we had already built the cache by now.


And finally, it adds some info to JS about the theme that's being used, so that if an AJAX request comes along later, it will know to use the same theme.

$setting['ajaxPageState'] = array(
  'theme' => $theme_key,
  'theme_token' => drupal_get_token($theme_key),
);
drupal_add_js($setting, 'setting');

A couple other miscellaneous setup tasks
  • Detects string handling method using unicode_check().
  • Undoes magic quotes using fix_gpc_magic().
  • Ensures mt_rand is reseeded for security.
  • Runs all implementations of hook_init() at the very end.
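And for completeness, a hook_init() implementation: unlike hook_boot(), it can rely on the full bootstrap, so every module and include file is available by the time it runs (the module name here is hypothetical):

```php
/**
 * Implements hook_init().
 *
 * Runs at the very end of the full bootstrap, on every non-cached page.
 */
function mymodule_init() {
  // The full API is available here, e.g. adding CSS to every page.
  drupal_add_css(drupal_get_path('module', 'mymodule') . '/mymodule.css');
}
```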

That's it. That's the entire bootstrap process. There are a lot of places that deserve some more depth, and we'll get there, but you should be feeling like you have a fairly good understanding of where and when things get set up while bootstrapping.

Keep in mind this is only a small part of the page load process. Most of the really heavy lifting happens after this, so keep reading!

This is a chapter out of my in-progress book, Drupal Deconstructed. You can read it online for free, download it as a PDF/ePUB/MOBI, or contribute to it on GitHub.

Catégories: Elsewhere

Featured Case Studies: EUREKA network

mar, 08/09/2015 - 16:31

EUREKA is a network enabling businesses to convert their R&D findings into innovative projects. The network finances companies to cooperate with each other at an international level to develop innovative solutions. The focus is pragmatic and allows SMEs to have a shorter time-to-market and to develop their international strategy early in the design process. EUREKA has been active for 30 years supporting R&D projects in over 40 countries.

The purpose of the website was to power up EUREKA's network with a state-of-the-art online communication platform. Also part of the contract was a revamp of another website, Eurostars, part of the EUREKA network. Eurostars is a joint programme between EUREKA and the European Commission, co-funded from the national budgets of 34 Eurostars Participating States and Partner Countries, and by the European Union through Horizon 2020. In the 2014-2020 period, it has a total public budget of €1.14 billion.

Key modules/theme/distribution used: Domain Access, Bootstrap, Migrate, Views, Webform, Search API, Search API Solr Search
Catégories: Elsewhere

Drupal Watchdog: Using Drupal to Build Your Product

mar, 08/09/2015 - 16:15

Few developers in the startup community would recommend making a product with Drupal. If you are building a web or e-commerce site, sure, but if you want to build a SaaS product there are plenty of technologies that are easier to productise. Installation profiles and features were added onto Drupal 7 as an afterthought, not as a central design principle and, as a result, have plenty of shortcomings. It looks like this will get better with Drupal 8 but, for now, by design, Drupal is not architected for easy redeployability.

Sounds pretty damning? It may be, but a lot of work has gone into remediating this problem, and Drupal is still better at redeployability than most other CMSs. CMSs are designed to be products themselves, not to be used to build products. Drupal, however, is hyper-configurable. As a tool it is made for infinite extensibility, to keep as many options open as possible. Compared to other CMSs, it's easier to extend even if you don't yet know what you will want to do in the future – which makes it great for one-off tailor-made projects. And this is a strength for some types of products and for certain parts of the development cycle.

In our consultancy, we started building “products” with Drupal because it was the right thing to do: all our products had some technically challenging component that we could contribute back to the community. Our developers were able to grow their skills significantly through the challenges they overcame and get an opportunity to build their reputation in the community. But these initial products didn’t make much business sense, except maybe as cost-leaders to promote our services. Most were not really sustainable as standalone products.

It has taken a long time, but after many iterations, we've learned a lot from these experiments. The projects we launched in the past two years have been much more successful. We still need to keep on fighting to get through what Seth Godin calls "the dip," but we've gained a key insight: we now know the types of products we should be using Drupal for. In this article I want to share the most important of these insights with you.

Catégories: Elsewhere

Amazee Labs: DrupalCamp Cape Town 2015

mar, 08/09/2015 - 14:20
DrupalCamp Cape Town 2015 Gregory Gerhardt Tue, 09/08/2015 - 14:20

With more than 100 Drupalistas joining Cape Town’s latest DrupalCamp there was a buzz that probably silenced the loudest beehive.

Two tracks provided our delegates' brains with an à la carte menu for every taste, be it more back-end or front-end flavoured. Our souls got to enjoy the rich company of a beautiful community from all over the country, including 9 CodeX programmers joining us for the day. We were very excited for this chance to introduce them to Drupal.

Jeffrey A. "jam" McGuire started us off with a special keynote on the Drupal community and open source, before the two concurrent tracks with 12 talks took off: one for "Frontend, Design and UX" and another for "Backend, Sitebuilding and Devops". Find the schedule over here.

With a band for entertainment and fabulous food from Sababa, we got to fill our learning track & learning-sandwich with the awesomeness of the extra mayo-chutney. 

This left us with only one more task: More networking fun at the Neighbourhood Bar, with most of us enjoying that just "ok, one more beer".

Looking forward to the next one and, in the meantime, we open the betting office on the year of Africa's first DrupalCon in Cape Town.

Catégories: Elsewhere

Mpumelelo Msimanga: Drupal View on Steroids. Using a Cloud Database as View Data Source.

mar, 08/09/2015 - 12:30

Having already created Drupal views on external data, I felt the next step was to use a cloud database as the data source for a Drupal view. In this post, part of a series on using Drupal as a data platform, I describe the steps I followed to use an Amazon RDS instance as the data source for a Drupal view.

Catégories: Elsewhere