Planet Drupal

Drupal.org - aggregated feeds in category Planet Drupal

erdfisch: Webprofiler toolbar integration for Drupal 8

Mon, 03/02/2014 - 19:15

When you work on a Drupal site, you often want to understand what happens on that specific site. This blog post describes a tool that helps you understand a site faster and more easily.

One example is the list of executed database queries. In Drupal 7 we had the wonderful Devel module, which showed a list of executed database queries below the page, but there is much more information you might want to know:

  • PHP configuration
  • The time and memory used
  • The list of enabled themes/modules
  • Routing information (a.k.a. hook_menu in D7)
  • The requested cache/key-value data
  • The raw request data

Symfony has a nice toolbar at the bottom of the page which stores this information, shows it, and makes it available as a separate page for additional research.

The founder of Symfony (fabpot) gave me an initial version of a Drupal integration. Independently, Luca Lusso had started on a version of his own, so we merged the code together and continue the work at https://drupal.org/node/2165731.

So here is how it looks (click the images for a larger version):

So you can see quite a big number of integrations already. Let's list what we have at the moment:

  • PHP Config
  • Request
  • Forms
  • Database
  • Modules/Themes
  • Routing
  • Cache
  • State
  • (Config: There is a working branch relying on a core patch: https://drupal.org/node/2184231)
  • Your ideas!

You could certainly ask yourself whether this is a total replacement for the Devel module. There is an ongoing discussion at https://drupal.org/node/257770 about whether to use the Symfony toolbar or an alternative PHP one.

Please try out the module on Drupal 8, come up with more ideas, and help us.


Chapter Three: How to Write for the Web

Mon, 03/02/2014 - 18:24

Your Web writing probably sucks. How do I know this? Because most of the content on the Internet is just plastered up there with sparse consideration for the medium.

Why does this problem persist? Because people haven't learned the unique fundamentals of writing for the Web. Ready to change all of that? Keep reading.

Good Web writing makes a site usable. Training content administrators on how to curate content for the Web is a critical component for making sites that people love.

I had the pleasure of working with California College of the Arts (CCA), and their talented Web Content Manager Jim Norrena, to create content creation guidelines for their admissions section. The goal was to lay the ground rules to better equip their decentralized site administrators to manage their site. Upon completion, I found these guidelines to be universal in nature and worth sharing far and wide.

Chief guidelines:

Why write differently for the Web?

People read differently online than in print. In print, readers delve deep and are not easily distracted by things to click on.

People online:

  • Read about 20% slower than in print
  • Are task-focused rather than looking for an immersive experience
  • Scan the page rather than read every word

In short, do not expect your content to be effective if you simply copy and paste it from a print version.

Goals should guide your content creation process

Goals will help focus your efforts and inform what you should create, keep or delete. Answer the following questions to guide this process. It's best to answer all of these questions with the key stakeholders present.

  1. What are the communication goals of this page?
  2. Who is the audience?
  3. What are the key calls to action?
  4. What is the utilitarian function of this page?
  5. Can this content be shorter or more skimmable?
  6. Can redundant content be eliminated?
  7. Is all the content relevant?

Below are some sample goals that I created for CCA's undergraduate admissions page:

  1. Communicate differentiation
  2. Provide overview of admissions process
  3. Communicate application deadlines
  4. Show the university experience through rich media
  5. Outline next steps (apply, attend an event, etc.)

Define content governance

Content governance defines who is in charge of your site's content, who has input, and how content gets published. Answer the following questions and share them with your team.

  • Who writes the content?
  • Who edits the content?
  • Who are the stakeholders?
  • Who approves the content?
  • Where do the content requests come from?
  • When is this content due?
  • Who uploads the content?

Answering the questions above will help clarify roles and responsibilities.

Note - because of the way that Drupal handles content publishing (hitting publish about 50 times before you're done), a Drupal-specific way to define content governance is as follows:

  • Where do the requests come from?
  • Who creates and publishes the content?
  • Who edits and publishes the content?
  • Who approves and publishes the content?
  • Who are the stakeholders?

Usability

Researchers have proven that readers scan content in an F-shaped pattern.

In the three pages above, heat maps indicate where readers focus first; they focus mainly on the left side of the page, as well as on the first few sentences of each paragraph.

So what does this mean?

  • Put your most important content at the top of the page.
  • Use sub-headers throughout your text for improved scannability.
  • Make the first two words of your headers the most important.

Content structure

Use the inverted pyramid from journalism to structure your content. Start with the most important information, follow with supporting details, and finish with related information.

Write concisely

Get to the point. People don't read.

Instead of...            Use...
despite the fact that    although
in the event that        if
it is important that     must, should
has the opportunity to   can
it is possible that      may
due to the fact that     since

Avoid redundant phrases

Say more with less

Instead of...            Use...
advanced notice          notice
end result               result
final outcome            outcome
extra bonus              bonus
collaborate together     collaborate
unintended mistake       mistake

Think like a designer

Develop an eye for aesthetics. For example, if a three column layout displays two headers with one line each and a third with three lines, it won't look balanced. Writers should feel empowered to have opinions about the way their words look and act on those observations. If you notice your copy has excessive line breaks, fix it.

Format smarter

Bullets are a great way to improve the scannability of your text. You can enhance those bullets by adding sub-headers (when appropriate).

Proof everything

Test your links. Remove typos. Get extra sets of eyes to vet your grammar and sentence structure. Copy errors reflect on your business and your reputation. Take this step seriously.

Use this blog to create writing guidelines for your clients. Spread this knowledge far and wide to improve the Web!


Phase2: Incremental Imports of Archival Content with Feeds

Mon, 03/02/2014 - 17:33

Using the Feeds contrib module for Drupal is a popular route for importing content from RSS feeds or similar streams of data, including service APIs like Twitter or Flickr and even email accounts. The module provides robust methods to regularly check a feed for updates and to create or update nodes by mapping feed content to Drupal entities.

Since RSS feeds usually provide the most recent batch of content, the typical Feeds implementation will incrementally import feed items on an ongoing basis. However, some use cases call for also importing all historical content from the same data source. For example, a new website may need to feature both new and archival content from an external blog.

In such a case, if the historical content from the source system can be exposed using the same format or method as the feed for ongoing updates, then it may be beneficial to use the same process to accomplish both the archival and ongoing import.

A Migration By Any Other Name?

If the use case only involves the archival content import, then this would be a plain and simple migration, and I would likely recommend the Migrate contrib module. However, Migrate is less suited for ongoing, regularly scheduled imports from remote data sources. So in this case, I’d rather find a way for Feeds to work for the archival part of the job.

That said, we can take a few lessons from the Migrate module in terms of properties of a good migration system when implementing a system with Feeds:

  • It should be able to run incrementally. The script should not assume that all of the content can be handled in a single operation. There may be network interruptions or resource limitations that break the script on any given run.

  • It should not create duplicates if it is run multiple times. There should be no penalty to running the import multiple times either to confirm a successful import or to catch content added since the last import.

  • It should be able to be “rolled back” easily. Due to changes made to the importer configuration or the source content, it’s often necessary to remove all imported content and run the import process again.

Feeds provides good support for these requirements. It can run repeatedly on the same feed, and will only create content for new feed items — either ignoring or updating (based on configuration) content imported on previous runs. It also provides the ability to delete all items from the feed, which satisfies the rollback requirement.

However, there is a fundamental challenge to using Feeds for this use case. It assumes the feed source has a constant location; for example, the RSS feed URL for a blog is always the same. To support an archival migration, it is likely that you will need to retrieve the content from multiple locations or process it in batches.

Handling Multiple Importer Configurations

The general issue is that the feed importer needed for historical imports may need to be configured differently than one for ongoing imports. One basic difference may be the feed URL. The archival feed source may need to specify ranges of content by page or date or it may be an export file rather than a URL. Another difference may be the format, where a different feed parser is needed for the historical import.

To handle multiple configurations, you could either use one feed importer and change its configuration when switching from the historical import to the ongoing import, or define two feed importers, one for the historical import and one for the ongoing import.

Since it’s important to be able to easily run both imports at any time in order to catch potential new feed items, having to switch feed configuration could be problematic. Instead, I’d prefer to set up one feed importer, say for the ongoing import, and then clone it and tweak the configuration as needed for the historical import.

There’s one issue with this approach: Feeds avoids importing duplicate content within the context of a feed importer configuration. Therefore, duplicate content may be created as items from the ongoing feed end up in the archival feed.

A simple solution is to extend the standard node processor plugin and override one function to ensure that imported content items are unique in the context of the target content type. This plugin is available in a drupal.org sandbox project by Steven Jones, or it can be implemented by including the following plugin in a custom module:

 

<?php

/**
 * Class definition for FeedsUniqueNodeProcessor.
 *
 * Allows checking for uniqueness among all nodes in a given content type,
 * rather than only nodes imported with a given feeds importer.
 *
 * Based on: https://drupal.org/sandbox/darthsteven/1444686
 */
class FeedsUniqueNodeProcessor extends FeedsNodeProcessor {

  /**
   * Retrieve the target entity's existing id if available. Otherwise return 0.
   *
   * @ingroup mappingapi
   *
   * @param FeedsSource $source
   *   The source information about this import.
   * @param $result
   *   A FeedsParserResult object.
   *
   * @return
   *   The serial id of an entity if found, 0 otherwise.
   */
  protected function existingEntityId(FeedsSource $source, FeedsParserResult $result) {
    $query = db_select('feeds_item')
      ->fields('feeds_item', array('entity_id'))
      ->condition('entity_type', $this->entityType());

    // Iterate through all unique targets and test whether they already
    // exist in the database.
    foreach ($this->uniqueTargets($source, $result) as $target => $value) {
      switch ($target) {
        case 'url':
          $entity_id = $query->condition('url', $value)->execute()->fetchField();
          break;

        case 'guid':
          $entity_id = $query->condition('guid', $value)->execute()->fetchField();
          break;
      }
      if (isset($entity_id)) {
        // Return the entity id found.
        return $entity_id;
      }
    }
    if ($nid = parent::existingEntityId($source, $result)) {
      return $nid;
    }
    return 0;
  }

}

Once this plugin is available to Drupal through a module, both the ongoing and archival feed importers should be configured to use this feed processor.
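
For the custom-module route, Feeds also needs to be told that the plugin exists. Here is a minimal registration sketch using hook_feeds_plugins(), assuming the module is named example_feeds and the class lives in FeedsUniqueNodeProcessor.inc (both names are illustrative):

/**
 * Implements hook_feeds_plugins().
 *
 * Registers the FeedsUniqueNodeProcessor class with Feeds so it can be
 * selected as a processor when configuring an importer.
 */
function example_feeds_feeds_plugins() {
  $info = array();
  $info['FeedsUniqueNodeProcessor'] = array(
    'name' => 'Unique node processor',
    'description' => 'Creates nodes, checking uniqueness across the whole content type.',
    'handler' => array(
      // Extend the node processor that ships with Feeds.
      'parent' => 'FeedsNodeProcessor',
      'class' => 'FeedsUniqueNodeProcessor',
      'file' => 'FeedsUniqueNodeProcessor.inc',
      'path' => drupal_get_path('module', 'example_feeds'),
    ),
  );
  return $info;
}

After clearing caches, the new processor should show up alongside the standard node processor in each importer's settings.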

Importing Historical Feed Items

Most RSS feeds are limited to a certain number of items, so that as new content is published, the old content “falls off” the feed. Some RSS feeds allow providing a URL parameter for the page of results or date range to retrieve. For example, WordPress blogs allow using the “paged” URL parameter to specify the page of results to return.

If an RSS feed supports this, then all historical content could be imported by incrementally fetching and processing consecutive batches of content.

As mentioned earlier, Feeds assumes that the source for a feed is always the same, so this approach requires code to manage the feed source and iterate through the complete range of content. Unlike an ongoing feed importer, which appropriately runs at some interval, the historical feed importer should therefore not be set to run automatically.

The following is a sample of code for managing this incremental import. The approach is:

  • Define a cron queue worker that configures the archival feed importer with the URL for a given page of results and then runs it.

  • On cron, start a queue worker from page 1 if it isn’t already started.

  • Use hook_feeds_after_import() to detect when the archival feed importer finishes so that the next page can be added to the queue.

  • Define a helper function that does a test fetch of the next page of feed content so that we can gracefully end the import process if the last page was reached.

 

/**
 * Implements hook_cron().
 */
function example_feeds_cron() {
  // Enqueue the example_feeds_blog_archive job on page 1, if it has not been
  // started and the job queue is empty.
  if (variable_get('example_feeds_blog_archive_page', -1) < 0) {
    $queue = DrupalQueue::get('example_feeds_blog_archive');
    if (!$queue->numberOfItems()) {
      example_feeds_blog_archive_next(1);
    }
  }
}

/**
 * Implements hook_cron_queue_info().
 *
 * Define cron queue jobs for this module.
 */
function example_feeds_cron_queue_info() {
  $queues = array();
  $queues['example_feeds_blog_archive'] = array(
    'worker callback' => 'example_feeds_blog_archive_get',
    'time' => 60,
  );
  return $queues;
}

/**
 * Implements hook_feeds_after_import().
 *
 * Handles the post-import event for feeds importers defined by this module.
 */
function example_feeds_feeds_after_import($feed) {
  // When the example_archive feeds importer finishes, queue the
  // importer on the next page.
  if ($feed->importer()->id === 'example_archive') {
    example_feeds_blog_archive_next();
  }
}

/**
 * Enqueue the next page (or the specified page) for the blog archive feed
 * importer. Check that the feed page exists first.
 */
function example_feeds_blog_archive_next($page = -1) {
  if ($page < 0) {
    $page = variable_get('example_feeds_blog_archive_page', -1) + 1;
  }
  $source_url = 'http://example.com/feed/?paged=' . intval($page);
  feeds_include_library('http_request.inc', 'http_request');
  $result = http_request_get($source_url, NULL, NULL, NULL, 5);
  if (in_array($result->code, array(200, 201, 202, 203, 204, 205, 206))) {
    // If the request was successful, then queue the job to import the feed.
    $queue = DrupalQueue::get('example_feeds_blog_archive');
    $queue->createItem(array($page));
    variable_set('example_feeds_blog_archive_page', $page);
    return $page;
  }
  drupal_set_message(t('Cannot fetch page %page-num of the blog archive feed, so ending import.', array('%page-num' => $page)));
  return FALSE;
}

/**
 * Configure the feed importer to fetch the given page of the blog feed.
 */
function example_feeds_blog_archive_get($info) {
  if (!empty($info[0]) && ($page = intval($info[0]))) {
    $source_url = 'http://example.com/feed/?paged=' . $page;
    $source = feeds_source('example_archive');
    $source_config = array('FeedsHTTPFetcher' => array('source' => $source_url));
    $source->addConfig($source_config);
    $source->save();
    $source->startImport();
  }
}

With this code to support the archival import, both the archival and ongoing feed importers can be enabled and ready to run. The archival import will run once to pick up historical content, and the ongoing import will run regularly to pick up new items in the feed.

Providing Administrative Control

To round off this solution, it’s important to provide a few minimal administrative controls and status information.

By using the cron queue worker, the importer will process as many pages as it can during a cron run, and is able to pick up where it leaves off. This allows handling a large number of pages of archival content without worrying about memory or execution time limits. However, it means the number of cron runs it takes to complete the import may vary from run to run. Therefore, the person managing the import will likely want to see how many pages have been processed.

What if the importer process has a glitch and stops midway through a run? What if the feed’s source site has an outage causing the import to fail? For these reasons, it’s also crucial to have the ability to “nudge” the importer to try again from where it left off.

In some cases, you may want to re-run the entire import process. Maybe the ongoing importer did not run frequently enough and missed new content items before they “fell off” the first page of the feed. For these reasons, we need to provide a way to restart the importer as needed.

The following sample code alters the Feeds import page for the archival importer to add status information on the last page fetched and administrative operations for retrying the next page and restarting the whole import.

 

/**
 * Implements hook_form_FORM_ID_alter().
 *
 * Handles alterations to the feeds_import_form for importers defined by this
 * module.
 */
function example_feeds_form_feeds_import_form_alter(&$form, &$form_state, $form_id) {
  if ($form['#importer_id'] === 'example_archive') {
    $form['source_page_status'] = array(
      '#type' => 'fieldset',
      '#title' => t('Incremental Fetch Status'),
      '#weight' => -1,
    );
    $form['source_page_status']['page_num'] = array(
      '#type' => 'item',
      '#title' => t('Last page fetched'),
      '#markup' => variable_get('example_feeds_blog_archive_page', t('Not set')),
    );
    $form['source_page_status']['page_retry'] = array(
      '#type' => 'submit',
      '#name' => 'page_retry',
      '#value' => t('Retry next page of import'),
      '#submit' => array('example_feeds_form_feeds_import_form_blog_archive_submit'),
    );
    $form['source_page_status']['page_restart'] = array(
      '#type' => 'submit',
      '#name' => 'page_restart',
      '#value' => t('Restart import'),
      '#submit' => array('example_feeds_form_feeds_import_form_blog_archive_submit'),
    );
  }
}

/**
 * Form submit handler for custom operations for the blog archive import form.
 */
function example_feeds_form_feeds_import_form_blog_archive_submit($form, $form_state) {
  if (!empty($form_state['clicked_button']['#name'])) {
    if ($form_state['clicked_button']['#name'] === 'page_retry') {
      if ($page = example_feeds_blog_archive_next()) {
        drupal_set_message(t('The blog archive job will process page %page-num on the next cron run.', array('%page-num' => $page)));
      }
    }
    elseif ($form_state['clicked_button']['#name'] === 'page_restart') {
      variable_del('example_feeds_blog_archive_page');
      $queue = DrupalQueue::get('example_feeds_blog_archive');
      $queue->deleteQueue();
      drupal_set_message(t('The blog archive job will restart on the next cron run.'));
    }
  }
}

With these administrative controls, the combination of an archival and ongoing feed importer can successfully provide a robust and unified approach to migrating existing content as well as catching newly published content.

Putting It All Together

The Feeds contrib module provides a flexible base for a variety of tasks based on fetching and processing content from external sources. This article illustrates an approach for using Feeds to support an incremental import of archival content. This approach may be useful if you are planning a migration from or integration with another CMS.

For more on these topics, see Exposing External Content to Drupal’s Search, Planning Your Content Migration, and the sketches and talk linked from Sketching a Successful Drupal Migration.


Lullabot: Drupal Hosted This Year's 56th Annual Grammy Awards

Mon, 03/02/2014 - 15:44

Kyle is joined by Kevin Colligan, Sr. Director/Head of Digital Media for the GRAMMY Awards, and Nate Haug, Senior Drupal Architect at Lullabot, to discuss how GRAMMY.com is powered by Drupal. From site architecture to statistics on how Drupal handled the traffic such a big event brings, we cover the details in this podcast.


Pronovix: Brightcove's New Year's Gift for the Drupal Community: Caching and Exportables

Mon, 03/02/2014 - 12:45

Brightcove is a video hosting platform that integrates with your Drupal site through the Brightcove module. They have a really solid service, and if you need an enterprise-grade solution you should definitely check them out (full disclosure: Brightcove is a customer of Pronovix).


Web Omelette: How to batch assign taxonomy terms to nodes using Views Bulk Operations

Mon, 03/02/2014 - 09:04

In this article I am going to show you a quick trick that allows you to assign taxonomy terms to a lot of nodes all at once. For this we will use Views and Views Bulk Operations, and it will require no coding whatsoever. So let's begin.

First off, make sure you have the two modules installed. Drush will quickly take care of that for you if you don't have them yet:

drush dl ctools views vbo && drush en ctools views views_ui vbo

Now, the next thing we need to do is create a View page that displays the content (nodes, users, etc.) you'd like to operate on in bulk (in our case, to add taxonomy terms to). Make sure it uses fields; to start, the title field will be enough. For filtering, use whatever you need to restrict the View to only the content you want; sorting doesn't really matter.

Next up, add a new field of the type Bulk operations: Content. This will provide, next to each node title, a checkbox for selecting that node to be part of the bulk operation.

To configure this field, make sure you check the boxes Modify entity values and Show available tokens, then select the field whose values you'd like changed (in our case the taxonomy term reference field). In the screenshot below, you can see my selection for the Tags field of the Article node (I am going with a node view displaying articles).

Now you can save the View and navigate to its page. In the image below you can see my example: a table-formatted view with a checkbox and the Article node title to its right, and the VBO operations above it.

Now, let's say I want to apply a taxonomy term to all these nodes. I can select them all (either manually or using the top-most checkbox dedicated to selecting all rows at once), choose my operation (Modify entity values) and click Execute. This is then my next screen:

I have an autocomplete widget (like the default Article content type Tags field referencing taxonomy terms) where I can select one or more terms I'd like applied to all those nodes. I also have an option to ensure that existing values do not get overridden, by checking the respective checkbox. And lastly, I have some tokens available to use (because of the Tokens module).

Once I select my terms and press Next, I get an overview of how many nodes are going to be affected by this change (as you can see below). And that's basically it.

One thing to keep in mind here: if you are adding a taxonomy term that does not already exist, VBO will create that term as many times as there are nodes being processed. You'll end up with a bunch of terms with the same title, which is probably not something you expect or want. I therefore recommend creating the term beforehand and then applying it to the content (a programmatic sketch follows below).
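
If you prefer to create that term in code rather than through the UI, here is a minimal Drupal 7 sketch using the core taxonomy API; the vocabulary machine name 'tags' and the term name are assumptions:

// Create the term once, before running the bulk operation, so the VBO
// autocomplete can reference the existing term instead of creating
// duplicates. Assumes a vocabulary with the machine name 'tags'.
$vocabulary = taxonomy_vocabulary_machine_name_load('tags');
$term = (object) array(
  'name' => 'My new term',
  'vid' => $vocabulary->vid,
);
taxonomy_term_save($term);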

Hope this helps!


Victor Kane: Importing a pretty large D6 site into Pantheon (detailed instructions, with a little help from my friends)

Mon, 03/02/2014 - 02:12

More and more of my clients are using Pantheon to host their Drupal-based web applications. This is not an ad; it's just a fact. More and more of my development work involves cloning Pantheon-based workflow instances and coding and site building within that workflow, and I've seen how much it has improved over the years. Recently I had to import a quite large Drupal 6 site for a client hoping for a trouble-free, Drupal-oriented hosting experience while we got on with the site renovation project. While the process was straightforward and the necessary documentation is there (see References), I thought I'd share my experience as warm fuzzies for others having to do the same:



larsolesen.dk: Brooming the Panels issue queue

Sun, 02/02/2014 - 23:08
Tags: planet drupal, drupal

I've been involved quite a lot lately in helping out with Panopoly, which has just reached a 1.1 release thanks to great work by @dsnopek, @mrfelton and @populist. Panopoly builds heavily on Panels, and I saw that they are getting close to releasing a 3.4 version.

@japerry has started to maintain Panels, and he is working hard to move Panels forward so everybody else can benefit from this great module.

However, it is a daunting task. As of this writing there are 632 open issues in the Panels issue queue. You really do not have to be a coder to help sort out the issues. There are a lot of tasks you can easily do, so @japerry can focus on the important stuff.

Here is what you can do if you want to be a janitor in the issue queue.


KnackForge: Leveraging CKeditor template to theme Drupal contents

Sat, 01/02/2014 - 11:52

A WYSIWYG editor (a.k.a. HTML editor) has become the de facto standard for quickly formatting and publishing content from a dynamic site like Drupal. It is certainly a time saver and keeps you from getting your hands dirty with HTML.

In this space, CKEditor has been a pioneer, in use and under active development for a decade (the first release was in March 2003, under the name FCKeditor).


KnackForge: Replacing lengthy URLs in simplenews newsletter email with Bit.ly short URLs

Sat, 01/02/2014 - 11:47

Simplenews, a newsletter module, allows you to send customized confirmation emails on subscribe and unsubscribe actions. The default single email confirmation for subscribing looks as below:

Subject:

Confirmation for [simplenews-category:name] from [site:name]

Body:

We have received a request to subscribe [simplenews-subscriber:mail] to the [simplenews-category:name] newsletter. To confirm please use the link below.

[simplenews-subscriber:subscribe-url]


Palantir: D8FTW: Breadcrumbs That Work

Sat, 01/02/2014 - 00:39

The upcoming Drupal 8 includes many changes, large and small, that will improve the lives of site builders, site owners, and developers. In a series we're calling "D8FTW," we look at some of these improvements in more detail, including and especially the non-obvious ones.

Breadcrumbs have long been the bane of every Drupal developer's existence. In simple cases, they work fine out of the box. Once your site gets even a little complex, though, they get quite unwieldy.

That's primarily because Drupal 7 and earlier don't have a breadcrumb system. They just have an effectively-global value that modules can set from "anywhere," and some default logic that tries to make a best-guess based on the menu system if not otherwise specified. That best guess, however, is frequently not enough and letting multiple modules or themes specify a breadcrumb "anywhere" is a recipe for strange race conditions. Contrib birthed a number of assorted tools to try to make breadcrumbs better but none of them really took over, because the core system just wasn't up to the task.

Enter Drupal 8. In Drupal 8, breadcrumbs have been rewritten from the ground up to use the new system's architecture and style. In fact, breadcrumbs are now an exemplar of a number of "new ways" in Drupal 8. The result is the first version of Drupal where we can proudly say "Hooray, breadcrumbs rock!"

More power to the admin

There are two key changes to how breadcrumbs work in Drupal 8. The first is how they're placed. In Drupal 7 and earlier, there was a magic $breadcrumb variable in the page template. As a stray variable, it didn't really obey any rules about placement, visibility, caching, or anything else. That made sense when there were 100 modules and a slightly fancy blog was the typical Drupal use case. In a modern enterprise-ready CMS, though, having lots of special-case exceptions like that hurts the overall system.

In Drupal 8, breadcrumbs are an ordinary block. That’s it. Site administrators can place that block in any region they'd like, control visibility of it, even put it on the page multiple times right from the UI. (The new Blocks API makes that task easy; more on that another time.) And any new functionality added to blocks, either by core or contrib, will apply equally well to the breadcrumb block as to anything else. Breadcrumbs are no longer a unique and special snowflake.

More predictability to the developer

The second change is more directly focused on developers. Gone are the twin menu_set_breadcrumb() and menu_get_breadcrumb() functions that acted as a wrapper around a global variable. Instead, breadcrumbs are powered by a chained, negotiated service.

A chained negotiated whosawhatsis? Let's define a few new terms, each of which introduces a crucial change in Drupal 8. A service is simply an object that does something useful for client code and does so in an entirely stateless fashion. That is, calling it once or calling it a dozen times with the same input will always yield the same result. Services are hugely important in Drupal 8. Whenever possible, logic in a modern system like Drupal 8 should be encapsulated into services rather than simply inlined into application code somewhere else. If a service requires another service, then that dependency should be passed to it in its constructor and saved rather than manually created on the fly. Generally, only a single instance of a service will exist throughout the request but it's not hard-coded to that.

A negotiated service is a service where the code that is responsible for doing whatever needs to be done could vary. You call one service and ask it to do something, and that service will, in turn, figure out some other service to pass the request along to rather than handling it itself. That's an extremely powerful technique because the whole "figuring out" process is completely hidden from you, the developer. To someone writing a module, whether there's one object or 50 responsible for determining breadcrumbs is entirely irrelevant. They all look the same from the caller’s point of view.

The simplest and most common "figuring out" mechanism is a pattern called Chain of Responsibility. In short, the system has a series of objects that could handle something, and some master service just asks each one, in turn, "Hey, you got this?" until one says yes, then stops. It's up to each object to decide in what circumstances it cares.
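
To make the pattern concrete, here is a minimal, generic sketch of Chain of Responsibility in PHP; the interface and class names are illustrative, not Drupal's actual API:

interface HandlerInterface {

  // Decide whether this handler cares about the given request.
  public function applies($request);

  // Do the actual work for the request.
  public function handle($request);

}

class HandlerChain implements HandlerInterface {

  protected $handlers = array();

  public function addHandler(HandlerInterface $handler) {
    $this->handlers[] = $handler;
  }

  public function applies($request) {
    return TRUE;
  }

  public function handle($request) {
    // Ask each handler in turn, "Hey, you got this?", and stop at the
    // first one that says yes.
    foreach ($this->handlers as $handler) {
      if ($handler->applies($request)) {
        return $handler->handle($request);
      }
    }
    return NULL;
  }

}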

Breadcrumbs in Drupal 8 implement exactly this pattern. The breadcrumb block depends on the breadcrumb_manager service, which by default is an object of the BreadcrumbManager class. That object is simply a wrapper around many objects that implement BreadcrumbBuilderInterface, which it implements itself as well. When the breadcrumb block calls $breadcrumb_manager->build(), that object will simply forward the request on to one of the other breadcrumb builders it knows about, including those that you, as a module developer, provide.

Core ships with five such builders out of the box. One is a default that will build a breadcrumb off of the path and always runs last. Then there are four specialty builders for forum nodes, taxonomy term entity pages, stand-alone comment pages, and book pages. Core does not currently ship with one that uses the menu tree — as was the case in Drupal 7 — because the menu system is still in flux and calculating that was quite difficult. That could certainly be re-added in contrib or later in core, however.

Let's try it!

Let's add our own new builder that will make all "News" nodes appear as breadcrumb children of a View we've created at /news. Although all we need to do is implement the BreadcrumbBuilderInterface, it's often easier to start from the BreadcrumbBuilderBase utility class. (Side note: This may turn into one or more traits before 8.0 is released.) We'll add a class to our module like so:

<?php
// mymodule/lib/Drupal/mymodule/NewsBreadcrumbBuilder.php

namespace Drupal\mymodule;

use Drupal\Core\Breadcrumb\BreadcrumbBuilderBase;

class NewsBreadcrumbBuilder extends BreadcrumbBuilderBase {

  /**
   * {@inheritdoc}
   */
  public function applies(array $attributes) {
    if ($attributes['_route'] == 'node_page') {
      return $attributes['node']->bundle() == 'news';
    }
    return FALSE;
  }

  /**
   * {@inheritdoc}
   */
  public function build(array $attributes) {
    $breadcrumb[] = $this->l($this->t('Home'), NULL);
    $breadcrumb[] = $this->l($this->t('News'), 'news');
    return $breadcrumb;
  }

}

Two methods, that's it! In the applies() method, we are passed an array of values about the current request. In our case, we know that this builder only cares about showing the node page, and only when the node being shown is of type "news". So we return TRUE if that's the case, indicating that our build() method should be called, or FALSE to say "ignore me!"

The second method, then, just builds the breadcrumb array however we feel like. In this case we're just going to hard code a few links but we could use whatever logic we want, safe in the knowledge that our code, and only our code, will be in control of the breadcrumb on this request. A few important things to note:

  • The $this->l() and $this->t() methods are provided by the base class, and function essentially the same as their old procedural counterparts but are injectable; we'll discuss what that means in more detail in a later installment.
  • The breadcrumb does not include the name of the page we're currently viewing. The theme system is responsible for adding that (or not).

Now we need to tell the system about our class. To do that, we define a new service (remember those?) referencing our new class. We'll do that in our *.services.yml file, which exists for exactly this purpose:
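
A minimal sketch of such a registration, assuming the module is named mymodule; the service name and priority value are illustrative:

# mymodule/mymodule.services.yml
services:
  mymodule.breadcrumb:
    class: Drupal\mymodule\NewsBreadcrumbBuilder
    tags:
      # The breadcrumb_builder tag is what the breadcrumb_manager service
      # collects; higher-priority builders are asked first.
      - { name: breadcrumb_builder, priority: 100 }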


Drupal core announcements: No more 7.x to 8.x hook_update_N() -- file Migrate issues instead

Sat, 01/02/2014 - 00:25

At DrupalCon Prague, we decided not to provide a Drupal 7 to Drupal 8 upgrade path using the database update system, and to instead provide data migration using a Drupal 8 data migration API based on the Migrate module. As of today, Drupal 7 sites can no longer be upgraded to Drupal 8 with update.php, and all implementations of hook_update_N() have been removed from Drupal 8 core.

Going forward, hook_update_N() should only be included to provide 8.x to 8.x updates, once the 8.x to 8.x upgrade path is supported.

If your patch introduces a data model change that would previously have required a hook_update_N() implementation, consider instead whether a new / changed data migration is needed. Migration works by calling APIs, so most changes (for example entities) are covered already. But if your data change is not covered by an API (like changing a raw configuration object name or key) then you need to:

  1. Create a new issue in the Migrate component.
  2. Document the data model change in the issue and reference the core issue that introduces it under the "related issues" section.
  3. Update the summary of your main issue to indicate that the data model change has a corresponding Migrate issue.

If you are unsure whether new migration code is needed, file the issue anyway, and Migrate maintainers will review it and close it if it is not needed.


Metal Toad: Reliably Monitoring MySQL Replication

Fri, 31/01/2014 - 23:05

Replication is a wonderful thing for your clients. Having a 'hot spare' of their database(s) for redundancy, or being able to off-load read operations from the main database to increase performance, gives your clients peace of mind about their data and application. I won't go into setting up MySQL replication; there are more than a few guides on that already out there (here's the official documentation). Once you do have replication running, you need to make sure that it remains running, reliably, all the time. How best to accomplish this?

The Way Monitoring Had Been

The typical method is to use SHOW SLAVE STATUS to look at information about the replication setup.
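
As a rough illustration of that approach, here is a minimal PHP sketch that checks the key fields of SHOW SLAVE STATUS on a replica; the connection details and lag threshold are placeholders:

<?php
// Connect to the replica and inspect the fields that indicate
// replication health. Credentials are placeholders.
$db = new PDO('mysql:host=replica.example.com', 'monitor', 'secret');
$status = $db->query('SHOW SLAVE STATUS')->fetch(PDO::FETCH_ASSOC);

if (!$status) {
  echo "Replication is not configured on this server.\n";
}
elseif ($status['Slave_IO_Running'] !== 'Yes' || $status['Slave_SQL_Running'] !== 'Yes') {
  // One of the replication threads has stopped; surface the last error.
  echo "Replication stopped: {$status['Last_Error']}\n";
}
elseif ((int) $status['Seconds_Behind_Master'] > 300) {
  echo "Replica is lagging: {$status['Seconds_Behind_Master']} seconds behind.\n";
}
else {
  echo "Replication looks healthy.\n";
}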


Drupal Association News: Volunteer or Contractor? Help the Association Answer This Question

Fri, 31/01/2014 - 22:20

Overview

The single biggest reason that the Drupal Association is such a great place to work is the Drupal community. You have all dedicated your blood, sweat, and tears to making Drupal amazing, and that includes work on Drupal.org, our community's home. As the organization charged with maintaining Drupal.org, we've relied on a patchwork of volunteer support, contractors (at various rate scales), and staff.


Drupal core announcements: No Drupal core release on Wednesday, February 5

Fri, 31/01/2014 - 21:45

The monthly Drupal core bug fix release window is scheduled for this Wednesday. However, the last Drupal 7 bug fix release was only a month ago, and I also won't be available next week to coordinate a release. As a result, there won't be a Drupal 7.x bug fix release in February.

Upcoming release windows include:

  • Wednesday, February 19 (security release window)
  • Wednesday, March 5 (bug fix release window)

For more information on Drupal core release windows, see the documentation on release timing and security releases, and the discussion that led to this policy being implemented.


Drupal core announcements: Change records now needed before commit

Fri, 31/01/2014 - 17:35

Starting February 14, issues that require API change records must have these change records written before patches are committed. This is Drupal 8 core's valentine to contributed modules. :)

What issues are affected?

Any Drupal core issues that introduce backwards-compatibility-breaking API changes are required to have change records documenting the change. Up until now, these change records were created after the issues were committed. Going forward, the change records need to be written and reviewed before the issue is marked RTBC.*

* Note that in rare cases, core maintainers may allow certain critical patches to go in before the change record is written, for example, in the case of a critical bug, or a high-impact issue that is blocking other work, but please don't count on that. ;)

How does the new process work?
  1. Follow the normal development process while the patch is being worked on.
  2. Make sure the API change tag is added to issues that break backwards compatibility. (In general, API changes should be documented in the issue summary.)
  3. Once you get the API change approved by a core maintainer, the Needs change record tag can be added to the issue. (Note that the previous tag "Needs change notification" is no longer used.)
  4. Create a change record with the Published checkbox unchecked (the default option), and then remove the "Needs change record" tag from the issue. (All draft change records can be found on the draft change record list.)
  5. In order for the issue to be marked RTBC and committed by a core maintainer, a draft change record documenting the approved API changes must be attached.
  6. Once the patch for the issue is committed, the core maintainer will simply mark the issue fixed (like any regular issue). The "Published" checkbox can then be checked to make the change record appear in the main Change record list.

Why are we making this change?

As we complete Drupal 8 APIs and move toward the first Drupal 8 beta, it's increasingly important that our API documentation is accurate, including our API change records. With the previous process, change records have gone unwritten for months -- 24 change records are still outstanding. Furthermore, the previous process (wherein the issue title, status, priority, category, and tags were all changed) was also convoluted and error-prone, and interfered with accurate core metrics.

Sounds great! How can I help?

We need your help to get both outstanding and upcoming change records written so that core and contrib developers can use this critical documentation. Help us stabilize our APIs by:


Acquia: 2012 Greatest Hits - Drupal Camp Sofia, Bulgaria

Fri, 31/01/2014 - 15:01

2012 Greatest Hits - Drupal Camp Sofia, Bulgaria - A quick blast from the past this week, from the first Drupal community event at which I recorded material for a podcast. That gives this particular event an extra sparkle in my memory. True to form for the Drupal community around the world, many of the people I met at this camp have become friends with whom I stay in touch, or even get to see now and then at a DrupalCon or Drupal Camp somewhere. Community ftw!


Change(b)log: Drupal installation profile with multiple languages

Fri, 31/01/2014 - 14:29

By default, Drupal gets installed with only one language (English), and all additional languages need to be added afterwards, followed by downloading and importing all relevant interface translations. But why not make our multilingual lives just a little bit easier?

Jackson River: Drupal Panelizer: Architecting Landing Pages

Thu, 30/01/2014 - 22:09

When designing and implementing landing pages on a web site, there's often a need to have a unique design that differs drastically from that of the main site. The content strategy may not fit, in Drupal terms, a typical node / page model.



Drupal @ Penn State: Creating accessible, responsive rubrics in Drupal

Thu, 30/01/2014 - 21:54

These two screencasts show the current state of the Rubric module for Drupal 7. The first video shows how I'm working towards making rubrics, traditionally complex and structured documents, accessible using form fields of different types. It also shows a unique usage of the .element-invisible CSS class built into Drupal core, so that language that would bloat interfaces can be hidden from sighted users but convey implied meaning to screen readers.

