Planet Drupal

Subscribe to Planet Drupal feed
Drupal.org - aggregated feeds in category Planet Drupal

KnackForge: Drupal - Updating File name

Wed, 25/03/2015 - 08:45

If you know the file ID, it is really simple.
Code:

$file = file_load($fid);
file_move($file, 'public://new_file_name');

How it works:

We need a source file object in order to move the file to its new location and update the file's database entry. Moving a file is performed by copying the file to the new location and then deleting the original. To get the source file object, we can use the file_load() function.
file_load($fid) - loads a single file object from the database
$fid - the file ID
Returns an object representing the file, or FALSE if the file was not found.

After that, we can use the file_move() function to move the file to the new location and delete the original file.

file_move($file, 'public://new_file_name')

Parameter 1 - the source file object.
Parameter 2 - a string containing the destination that $source should be moved to. This must be a stream wrapper URI.
Parameter 3 (optional) - the replace behaviour (default value: FILE_EXISTS_RENAME).
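
As described above, a move is essentially a copy followed by a delete of the original. The idea can be sketched in plain PHP (this hypothetical naive_move() helper is an illustration only, not the Drupal API, which additionally updates the file's database entry and resolves stream wrapper URIs such as public://):

```php
<?php
// Illustration: what a "move" boils down to at the filesystem level.
// Drupal's file_move() does this plus database bookkeeping and
// collision handling (FILE_EXISTS_RENAME etc.).
function naive_move($source, $destination) {
  if (!copy($source, $destination)) {
    return FALSE;
  }
  unlink($source);
  return $destination;
}

// Demonstrate with a temporary file.
$src = tempnam(sys_get_temp_dir(), 'old_');
file_put_contents($src, 'example contents');
$dest = $src . '_renamed';
naive_move($src, $dest);
```

In Drupal code you should always use file_move() itself, so that the file's database entry stays in sync with the filesystem.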

For more details, check here.
 

Categories: Elsewhere

Kristian Polso: My talk on Drupal security in Drupal Days Milan

Wed, 25/03/2015 - 08:06
So I just came back from European Drupal Days in Milan. I had great fun at the event, it was well organized and filled with interesting talks. I'll be sure to attend it next year too!
Categories: Elsewhere

ThinkShout: The How and Why of Nonprofits Contributing to Open Source

Wed, 25/03/2015 - 01:00

Originally published on February 23rd, 2015 on NTEN.org. Republished with permission.

For the last 15 years or so, we’ve seen consistent growth in nonprofits’ appreciation for how open source tools can support their goals for online engagement. Rarely do we run across an RFP for a nonprofit website redesign that doesn’t specify either Drupal or WordPress as the preferred CMS platform. The immediate benefits of implementing an open source solution are pretty clear:

  • With open source tools, organizations avoid costly licensing fees.

  • Open source tools are generally easier to customize.

  • Open source tools often have stronger and more diverse vendor/support options.

  • Open source platforms are often better suited for integration with other tools and services.

The list goes on… And without going down a rabbit hole, I’ll simply throw out that the benefits of open source go well beyond content management use cases these days.

But the benefits of nonprofits supporting and contributing to these open source projects and communities are a little less obvious, and sometimes less immediate. While our customers generally appreciate the contributions we make to the larger community in solving their specific problems, we still often get asked the following in the sales cycle:

"So let me get this straight: First you want me to pay you to build my organization a website. Then you want me to pay you to give away everything you built for us to other organizations, many of whom we compete with for eyeballs and donations?"

This is a legitimate question! One of the additional benefits of using an open source solution is that you get a lot of functionality "for free." You can save budget over building entirely custom solutions with open source because they offer so much functionality out of the box. So, presumably, some of those savings could be lost if additional budget is spent on releasing code to the larger community.

There are many other arguments against open sourcing. Some organizations think that exposing the tools that underpin their website is a security risk. Others worry that if they open source their solutions, the larger community will change the direction of projects they support and rely upon. But most of the time, it comes down to that first argument:

"We know our organization benefits from open source, but we’re not in a position to give back financially or in terms of our time."

Again, this is an understandable concern, but one that can be mitigated pretty easily with proper planning, good project management, and sound and sustainable engineering practices.

Debunking the Myths of Contributing to Open Source

Myth #1: "Open sourcing components of our website is a security risk."

Not really true. Presumably the concern here is that if a would-be hacker were to see the code that underlies parts of your website, they could exploit security holes in that code. While yes, that could happen, the chances are that working with a software developer who has a strong reputation for contributing to an open source project is pretty safe. More importantly, most strong open source communities, such as the Drupal community, have dedicated security teams and thousands of developers who actively review and report issues that could compromise the security of these contributions. In our experience, unreviewed code and code developed by engineers working in isolation are much more likely to present security risks. And on the off chance that someone in the community does report a security issue, more often than not, the reporter will work with you, for free, to come up with a security patch that fixes the issue.

Myth #2: "If we give away our code, we are giving away our organization’s competitive advantage."

As a software vendor that’s given away code that powers over 45,000 Drupal websites, we can say with confidence: there is no secret sauce. Trust me, all of our competitors use Drupal modules that we’ve released - and vice versa.

By leveraging open source tools, your organization can take advantage of being part of a larger community of practice. And frankly, if your organization is trying to do something new, something that’s not supported by such a community, giving away tools is a great way to build a community around your ideas.

We’ve seen many examples of this. Four years ago, we helped a local nonprofit implement a robust mobile mapping solution on top of the Leaflet JavaScript library. At the time, there wasn’t an integration between this library and Drupal. So, as part of this project, we asked the client to invest 20 hours or so for us to release the barebones scaffolding of their mapping tool as a contributed Drupal module.

At first, this contributed module was simply a developer tool. It didn’t have an interface allowing Drupal site builders to use it. It just provided an easier starting point for custom map development. However, this 20-hour starting point lowered the cost for us to build mapping solutions for other clients, who also pitched in a little extra development time here and there to the open source project. Within a few months, the Leaflet module gained enough momentum that developers from other shops started giving back. Now the module is leveraged on over 5,700 websites and has been supported by code contributions from 37 Drupal developers.

What did that first nonprofit and the other handful of early adopters get for supporting the initial release? Within less than a year of initially contributing to this Drupal module, they opened the door to many tens of thousands of dollars worth of free enhancements to their website and mapping tools.

Did they lose their competitive advantage or the uniqueness of their implementation of these online maps? I think you know what I’m gonna say: No! In fact, the usefulness of their mapping interfaces improved dramatically as those of us with an interest in these tools collaborated and iterated on each other’s ideas and design patterns.

Myth #3: "Contributing to an open source project will take time and money away from solving our organization’s specific problems."

This perception may or may not be true, depending on some of the specifics of the problems your organization is trying to solve. More importantly, this depends upon the approach you use to contribute to an open source project. We’ve definitely seen organizations get buried in the weeds of trying to do things in an open source way. We’ve seen organizations contribute financially to open source projects on spec (on speculation that the project will succeed). This can present challenges. We’ve also seen vendors try to abstract too much of what they’re building for clients up front, and that can lead to problems as well.

Our preferred approach is generally to solve our clients’ immediate problems first, and then abstract useful bits that can be reused by the community towards the end of the project. There are situations when the abstraction, or the open source contribution, needs to come first. But for the most part, we encourage our clients to solve their own problems first, and in so doing provide real-life use cases for the solutions that they open source. Then, abstraction can happen later as a way of future-proofing their investment.

Myth #4: "If we open source our tools, we’ll lose control over the direction of the technologies in which we’ve invested."

Don’t worry, this isn’t true! In fact:

Contributing to an open source project is positively selfish.

By this I mean that by contributing to an open source project, your organization actually gets to have a stronger say in the direction of that project. Most open source communities are guided by those that just get up and do, rather than by committee or council.

Our team loves the fact that so many organizations leverage our Drupal modules to meet their own needs. It’s great showing up at nonprofit technology conferences and having folks come up to us to thank us for our contributions. But what’s even better is knowing that these projects have been guided by the direct business needs of our nonprofit clients.

How to Go About Contributing to Open Source

There are a number of ways that your nonprofit organization can contribute to open source. In most of the examples above, we speak to financial contributions towards the release of open source code. Those are obviously great, but meaningful community contributions can start much smaller:

  • Participate in an open source community event. By engaging with other organizations with similar needs, you can help guide the conversation regarding how a platform like Drupal can support your organization’s needs. Events like Drupal Day at the NTC are a great place to start.

  • Host a code sprint or hackathon. Sometimes developers just need a space to hack on stuff. You’d be surprised at the meaningful connections and support that can come from just coordinating a local hackathon. One of our clients, Feeding Texas, recently took this idea further and hosted a dedicated sprint on a hunger mapping project called SNAPshot Texas. As part of this sprint, four developers volunteered a weekend to help Feeding Texas build a data visualization of Food Stamp data across the state. This effort built upon the work of Feeding America volunteers across the country and became a cornerstone of our redesign of FeedingTexas.org. Feeding Texas believes so strongly in the benefits they received from this work that they felt comfortable open sourcing their entire website on GitHub.

Of course, if your organization is considering a more direct contribution to an open source project, for example, by releasing a module as part of a website redesign, we have some advice for you as well:

  • First and foremost, solve your organization’s immediate problems first. As mentioned earlier in the article, the failure of many open source projects is that their sponsors have to handle too many use cases all at once. Rest assured that if you solve your organization’s problems, you’re likely to create something that’s useful to others. Not every contribution needs to solve every problem.

  • Know when to start with abstraction vs. when to end with abstraction. We have been involved in client-driven open source projects, such as the release of RedHen Raiser, a peer-to-peer fundraising platform, for which the open source contribution needed to be made first, before addressing our client’s specific requirements. In the case of RedHen Raiser, the Capital Area Food Bank of Washington, DC came to us with a need for a Drupal-based peer-to-peer fundraising solution. Learning that nothing like that existed, they were excited to help us get something started that they could then leverage. In this case, starting with abstraction made the most sense, given the technical complexities of releasing such a tool on Drupal. However, for the most part, the majority of open source contributions come from easy wins that are abstracted after the fact. Of course, there’s no hard and fast rule about this - it’s just something that you need to consider.

  • Celebrate your contributions and the development team! It might sound silly, but many software nerds take great pride in just knowing that the stuff they build is going to be seen by their peers. By offering to open source even just small components of your project, you are more likely to motivate your development partners. They will generally work harder and do better work, which again adds immediate value to your project.

In conclusion, I hope that this article helps you better understand that there’s a lot of value in contributing to open source. It doesn’t have to be that daunting of an effort and it doesn’t have to take you off task.

Categories: Elsewhere

btmash.com: Using Dev Desktop with Composer Drush

Wed, 25/03/2015 - 00:55

Before recently settling into the path of using Vagrant + Ansible for site development (speaking of Ansible: I absolutely love it and need to blog about some of my fun with that), I had been using Acquia Dev Desktop. Even now, I'll use it from time to time since it is easy to work with.

planet drupal, drush
Categories: Elsewhere

Web Wash: Define Custom Validation using Field Validation in Drupal 7

Tue, 24/03/2015 - 22:45

Out of the box, Drupal offers only a single type of validation for fields: required or not required. For most use cases this is fine; however, it can be a little difficult to define your own custom validation logic. What if you need to validate a field value and make sure it's unique?

The Field validation module allows you to define custom validation rules using Drupal's administration interface. The module ships with a lot of its own validators: "Plain text", "Specific value(s)" and many more.

If none of the validators meets your requirements, you can write your own by implementing a validator plugin. Because validators are Ctools plugins, they are easy to maintain, as each one gets its own file.

In this tutorial, we'll use Field validation to validate a field value and make sure it's unique.

Categories: Elsewhere

Drupal for Government: Drupal, Tableau, and Media Tableau

Tue, 24/03/2015 - 20:31

Tableau is a helluva tool.  For data visualization it is at the top of its game... we recently had a 3 day seminar at UVa regarding Tableau, and I was stoked to see that they use Drupal heavily for their portal design.  

While I don't have the $$'s to run Tableau Server right now, it's a cool tool and something to consider...  Anyhow - here's a quick look at Tableau + Drupal + media tableau.  The upshot is that by integrating at the database level you can use views to display all of your workbooks... if all you want is to embed some visualizations in your content this isn't necessary as tableau provides iFrames for workbooks too... to use that just install media tableau, integrate with your wysiwyg editor of choice and move on... I wanted to test it all, so here's the kitchen sink :)

Here are the steps I've taken for the complete integration with views etc... if you just want to embed some tableau views in drupal skip down a bit :)

Categories: Elsewhere

Lullabot: Paying It Forward: Drupal 8 Accelerate

Tue, 24/03/2015 - 19:00

Have you heard about the Drupal 8 Accelerate fund? The Drupal Association is collaborating with Drupal 8 branch maintainers to provide grants for those actively working on Drupal 8, with the goal of accelerating its release.

Categories: Elsewhere

Mediacurrent: A Comparison of Drupal 7 Image Caption Methods using WYSIWYG Module with CKEditor

Tue, 24/03/2015 - 18:18

I recently had the opportunity to explore popular methods of adding captions to images inside the WYSIWYG editor using the setup of WYSIWYG module with CKEditor library. Our two main criteria were Media module integration and a styled caption in the WYSIWYG editor. As I discovered, we couldn’t have both without custom coding which the budget didn’t allow for. Media module integration won out and the File Entity with View Modes method was chosen. 

The modules/methods I reviewed were:

Categories: Elsewhere

Daniel Pocock: The easiest way to run your own OpenID?

Tue, 24/03/2015 - 17:57

A few years ago, I was looking for a quick and easy way to run OpenID on a small web server.

A range of solutions were available but some appeared to be slightly more demanding than what I would like. For example, one solution required a servlet container such as Tomcat and another one required some manual configuration of Python with Apache.

I came across the SimpleID project. As the name implies, it is simple. It is written in PHP and works with the Apache/PHP environment on just about any Linux web server. It allows you to write your own plugin for a user/password database or just use flat files to get up and running quickly with no database at all.

This seemed like the level of simplicity I was hoping for so I created the Debian package of SimpleID. SimpleID is also available in Ubuntu.

Help needed

Thanks to a contribution from Jean-Michel Nirgal Vourgère, I've just whipped up a 0.8.1-14 package that should fix Apache 2.4 support in jessie. I also cleaned up a documentation bug and the control file URLs.

Nonetheless, it may be helpful to get feedback from other members of the community about the future of this package:

  • Is it considered secure enough?
  • Have other people found it relatively simple to install or was I just lucky when I tried it?
  • Are there other packages that now offer such a simple way to get OpenID for a vanilla Apache/PHP environment?
  • Would anybody else be interested in helping to maintain this package?
  • Would anybody like to see this packaged in other distributions such as Fedora?
  • Is anybody using it for any online community?
Works with HOTP one-time-passwords and LDAP servers

One reason I chose SimpleID is because of dynalogin, the two-factor authentication framework. I wanted a quick and easy way to use OTP with OpenID so I created the SimpleID plugin for dynalogin, also available as a package.

I also created the LDAP backend for SimpleID, that is available as a package too.

Works with Drupal

I tested SimpleID for logging in to a Drupal account with Drupal's OpenID support enabled, and it worked seamlessly. I've also tested it with a few public web sites that support OpenID.

Categories: Elsewhere

Drupal core announcements: Drupal 8 beta 8 TOMORROW on Wednesday, March 25, 2015

Tue, 24/03/2015 - 17:24

The next beta for Drupal 8 will be beta 8! Here is the schedule for the beta release.

Wednesday, March 25, 2015: Drupal 8.0.0-beta8 released. Emergency commits only.
Categories: Elsewhere

Nuvole: A pluggable field-handling system for the Behat Drupal Extension

Tue, 24/03/2015 - 17:15
Write more complete Behat test scenarios for both Drupal 7 and Drupal 8.

One of the main goals of BDD (Behaviour Driven Development) is to be able to describe a system's behaviour using a single notation that is directly accessible to product owners and developers and testable using automatic conversion tools.

In the PHP world, Behat is the tool of choice. Behat allows you to write test scenarios using Gherkin step definitions and generates the corresponding PHP code to actually run and test the defined scenarios.

Thanks to the excellent Behat Drupal Extension, Drupal developers have been able to enjoy the benefits of Behaviour Driven Development for quite some time.

Essentially, the project provides an integration between Drupal and Behat, allowing the use of Drupal-specific Gherkin step definitions. For example, a scenario that tests node authorship would look like:

Scenario: Create nodes with specific authorship
  Given users:
  | name     | mail            | status |
  | Joe User | joe@example.com | 1      |
  And "article" content:
  | title          | author   | body             |
  | Article by Joe | Joe User | PLACEHOLDER BODY |
  When I am logged in as a user with the "administrator" role
  And I am on the homepage
  And I follow "Article by Joe"
  Then I should see the link "Joe User"

Dealing with complex content types

The Gherkin scenario above is pretty straightforward and it gets the job done for simple cases. In a real-life situation, though, it's very common to have content types with a high number of fields, often of different types and, possibly, referencing other entities.

The following scenario might be a much more common situation for a Drupal developer:

Scenario: Reference site pages from within a "Post" node
  Given "page" content:
    | title      |
    | Page one   |
    | Page two   |
    | Page three |
  When I am viewing a "post" content:
    | title                | Post title         |
    | body                 | PLACEHOLDER BODY   |
    | field_post_reference | Page one, Page two |
  Then I should see "Page one"
  And I should see "Page two"

While it is always possible to implement project-specific step definitions, as shown in this Gist dealing with field collections and entity references, having to do that for every specific content type might be an unnecessary burden.

Introducing field-handling for the Behat Drupal Extension

Nuvole recently contributed a field-handling system that allows the scenario above to be run out of the box, without having to implement any custom step definitions, and that works in both Drupal 7 and Drupal 8. The idea behind it is to allow a Drupal developer to work with fields when writing Behat test scenarios, regardless of the entity type or of any field-specific implementation.

The code is currently available on the master branches of both the Behat Drupal Extension and the Drupal Driver projects. If you want to try it out, follow the instructions at "Stand-alone installation" and make sure to grab the right code by specifying the right package versions in your composer.json file:

{
  "require": {
    "drupal/drupal-extension": "3.0.*@dev",
    "drupal/drupal-driver": "1.1.*@dev"
  }
}

The field-handling system provides an integration with several highly-used field types, like:

Date fields

Date field values can be included in a test scenario by using the following notation:

  • A single date field value can be expressed as 2015-02-08 17:45:00.
  • Start and end dates are separated by a dash (-), e.g. 2015-02-08 17:45:00 - 2015-02-08 19:45:00.
  • Multiple date field values are separated by a comma (,).
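
As a sketch of how this notation could be parsed, consider the following (the parse_date_values() helper is hypothetical and for illustration only; the real logic lives in the Drupal Driver's field handlers):

```php
<?php
// Hypothetical sketch: commas separate multiple field values,
// a dash separates a start/end date pair within one value.
function parse_date_values($raw) {
  $values = array();
  foreach (explode(', ', $raw) as $value) {
    $parts = explode(' - ', $value);
    $values[] = (count($parts) === 2)
      ? array('value' => $parts[0], 'value2' => $parts[1])
      : array('value' => $parts[0]);
  }
  return $values;
}

$single = parse_date_values('2015-02-08 17:45:00');
$range = parse_date_values('2015-02-08 17:45:00 - 2015-02-08 19:45:00');
```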

For example, the following Gherkin notation will create a node with 3 date fields:

When I am viewing a "post" content:
  | title       | Post title                                |
  | field_date1 | 2015-02-08 17:45:00                       |
  | field_date2 | 2015-02-08 17:45:00, 2015-02-09 17:45:00  |
  | field_date3 | 2015-02-08 17:45:00 - 2015-02-08 19:45:00 |

Entity reference fields

Entity reference field values can be expressed by simply specifying the referenced entity's label field (e.g. the node's title or the term's name). Such an approach keeps BDD's promise of describing system behaviour while abstracting away, as much as possible, any internal implementation.

For example, to reference a content item with the title "Page one" we can simply write:

When I am viewing a "post" content:
  | title           | Post title |
  | field_reference | Page one   |

Or, in the case of multiple values, titles are separated by a comma:

When I am viewing a "post" content:
  | title           | Post title         |
  | field_reference | Page one, Page two |

Link fields

A Link field in Drupal offers quite a wide range of options, such as an optional link title or internal/external URLs. We can use the following notation to work with links in our test scenarios:

When I am viewing a "post" content:
  | title       | Post title                                              |
  | field_link1 | http://nuvole.org                                       |
  | field_link2 | Link 1 - http://nuvole.org                              |
  | field_link3 | Link 1 - http://nuvole.org, Link 2 - http://example.com |

As you can see, we always use the same pattern: a dash (-) to separate parts of the same field value and a comma (,) to separate multiple field values.
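
To make the convention concrete, here is a hypothetical sketch of how link values could be split (parse_link_values() is invented for this example and is not the actual Drupal Driver code):

```php
<?php
// Hypothetical sketch: the comma/dash convention applied to link
// fields, where each value is either a bare URL or "Title - URL".
function parse_link_values($raw) {
  $links = array();
  foreach (explode(', ', $raw) as $value) {
    // Limit the split to 2 so dashes inside the URL are preserved.
    $parts = explode(' - ', $value, 2);
    $links[] = (count($parts) === 2)
      ? array('title' => $parts[0], 'url' => $parts[1])
      : array('title' => '', 'url' => $parts[0]);
  }
  return $links;
}

$links = parse_link_values('Link 1 - http://nuvole.org, Link 2 - http://example.com');
```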

Text fields with "Select" widget

We can also refer to a select list value simply by its label, making test scenarios much more readable. For example, given the following allowed values for a select field:

option1|Single room
option2|Twin room
option3|Double room

In our test scenario, we can simply write:

When I am viewing a "post" content:
  | title      | Post title               |
  | field_room | Single room, Double room |

Working with other entity types

Field-handling works with other entity types too, such as users and taxonomy terms. We can easily have a scenario that creates a bunch of users along with their field values by writing:

Given users:
  | name    | mail                | language | field_name | field_surname | field_country  |
  | antonio | antonio@example.com | it       | Antonio    | De Marco      | Belgium        |
  | andrea  | andrea@example.com  | it       | Andrea     | Pescetti      | Italy          |
  | fabian  | fabian@example.com  | de       | Fabian     | Bircher       | Czech Republic |

Contributing to the project

At the moment field-handling is still a work in progress and, while it does support both Drupal 7 and Drupal 8, it covers only a limited set of field types, such as:

  • Simple text fields
  • Date fields
  • Entity reference fields
  • Link fields
  • List text fields
  • Taxonomy term reference fields

If you want to contribute to the project by providing additional field type handlers, you will need to implement this very simple, core-agnostic interface:

<?php
namespace Drupal\Driver\Fields;

/**
 * Interface FieldHandlerInterface
 *
 * @package Drupal\Driver\Fields
 */
interface FieldHandlerInterface {

  /**
   * Expand raw field values in a format compatible with entity_save().
   *
   * @param array $values
   *   Raw field values array.
   *
   * @return array
   *   Expanded field values array.
   */
  public function expand($values);
}

If you need some inspiration, check the current handler implementations by inspecting the classes namespaced as \Drupal\Driver\Fields\Drupal7 and \Drupal\Driver\Fields\Drupal8.
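
As an illustration, here is a hypothetical handler for list text fields that expands human-readable labels into their allowed-value keys. The interface is repeated (without its namespace) so the sketch stands alone, and the class name ExampleListTextHandler is invented for this example; it is not one of the shipped handlers:

```php
<?php
// The contract from the Drupal Driver, repeated here so the
// example is self-contained.
interface FieldHandlerInterface {
  public function expand($values);
}

// Hypothetical handler: map select-list labels back to their keys.
class ExampleListTextHandler implements FieldHandlerInterface {
  protected $allowedValues;

  public function __construct(array $allowed_values) {
    // e.g. array('option1' => 'Single room', ...), as configured
    // on the field instance.
    $this->allowedValues = $allowed_values;
  }

  public function expand($values) {
    $return = array();
    foreach ($values as $label) {
      $key = array_search($label, $this->allowedValues);
      // Fall back to the raw value when the label is unknown.
      $return[] = array('value' => ($key !== FALSE) ? $key : $label);
    }
    return $return;
  }
}

$handler = new ExampleListTextHandler(array(
  'option1' => 'Single room',
  'option2' => 'Twin room',
  'option3' => 'Double room',
));
$expanded = $handler->expand(array('Single room', 'Double room'));
```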

Tags: Drupal Planet, Test Driven Development, Behat
Categories: Elsewhere

4Sitestudios.com Drupal Blog: Wordpress, Drupal and SEO.

Tue, 24/03/2015 - 17:00

Something that’s seen a resurgence of late is the importance of SEO and site/page ranking (though if your job revolves around content, you’d probably say it never left). It seems like after the initial crush of SEO and SEO specialists in the early-ish days of the web, things like backlink ranking, metadata, alt tags and all those other accessibility tools went the way of the dodo. Gone were the days where people massaged their content to smash in as many keywords as possible, where SEO specialists were getting paid obscene amounts of money to add in <meta> data into your html templates. Design reigned supreme.

But with the sheer amount of information available online, there remains a pressing and consistent need to ensure that your site rises above the fray; you want to be the expert, the forum for discussion, the end-all-be-all. So you’ve put together that lovely little site with the cool hamburger nav, the beautifully optimized content, the frequently updated blog and news sections--what’s next?

One of the key tools that people don’t really use is Google analytics; sure, everyone’s got that tracking code embedded in their templates or footer, and every so often, you’ll go in there and take a look at the cool line graph that tells you...well, nothing really. If you dig deeper into GA, you get an awesome picture of where, when, how, and even why people are visiting your site. You can see which content people like, which content is shared and liked, how often and by whom.

So that’s step one: use GA to its full potential.

They have enough tools to make you cry, but if you make the effort to wade through them, you can tailor GA not only to fit your content or website better, but also to make your website a better fit for the web itself. Both Drupal and Wordpress make it so easy to add in the GA tracking code that there’s really no excuse not to have it up and running, secreting away all those squirrely little details Google is so fond of.

But more importantly, on the subject of tools, is a wonderful little plugin for Wordpress called Yoast. If you’ve worked with WP, have a WP site, or even browse the internet, you’ve probably heard of this plugin, and for good reason...it’s stunningly easy to set up and use. It tells you what pages are awful, how many keywords you’re missing, when your content is poorly optimized, when your images don’t have tags and text, and a number of other issues that take the best stab at Google’s inscrutable SEO rules/ranking criteria as you can get. This plugin is so ubiquitous as to be almost mandatory if you care at all about getting your website seen--the makers claim that Yoast has been downloaded almost 15 million times. So what’s the option for Drupal?

The short answer is, there isn’t one.

Sure, when it comes down to it, you have the same options as WP sites--being customizable is, after all, kind of Drupal’s thing. You can edit your templates to add in your markup, you can install dozens of modules (XML sitemap, pathauto, Google analytics, redirect, metatag, token, page title, SEO checker, SEO checklist...and the list goes on), and you’ll eventually get the same functionality as one WP plugin that takes literally two minutes to add and install. And that’s including the time it takes to actually FTP in.

And I suppose this all fits in with Drupal’s more...Drupal-y...audience. It’s open source--you’re expected to code, to configure, to muck, to install and extend. It’s just one of those things where you shouldn’t have to install so many different, specialized modules to get the functionality you need to optimize. Sure-- there’s even a module called SEO Tools -- but it has a list of 18 additional “recommended/required modules,” and all that gets you is a screen that looks remarkably like the Google analytics dashboard.

I hesitate to use the word “lazy,” when what I really mean is “efficient,” or the even more adept “optimized,” but really--you just spent weeks/months and thousands of dollars getting a site up and running, UX’d, designed, sprinted and agile-d. The final step, the key step to actually getting your site seen, should not be a massive, dozen-module installing, template editing undertaking.

Obviously, the SEO ideal for a fresh or redesigned Drupal site is a thorough and well-planned strategy phase, and if there’s time, budget, interest (and willpower) to structure your content around key SEO strategies, it’s perfect. But if you’ve ever worked on websites, you know that sometimes you just need something that works well out of the box.

WP has Yoast. Where’s Drupal’s “something?” Consider the gauntlet thrown.

 
Categories: Elsewhere

Drupal Easy: Book Review: Drush for Developers (Second Edition)

Tue, 24/03/2015 - 14:48

Calling this book a "second edition" is more than a little bit curious, since there is no Drush for Developers (First Edition). I can only assume that the publisher considers Juampy's excellent Drush User's Guide (reviewed here) as the "first edition" of Drush for Developers (Second Edition), but that really minimizes this book, as it is so much more than an updated version of Drush User's Guide. It would be like saying The Godfather, Part 2 is an updated version of The Godfather - which is just crazy talk. Drush for Developers is more of a sequel - and (like The Godfather, Part 2) a darn good one at that.


read more

Categories: Elsewhere

Drupalize.Me: Tutorial: Vagrant Drupal 8 Development

Tue, 24/03/2015 - 14:04

Vagrant Drupal Development (VDD) is a ready-to-use development environment using a virtual machine. Why use it? It provides a standard hosting setup, contained in a virtual machine, for developing with Drupal. This allows you to get up and running really, really quickly, without knowing anything about server administration.

Categories: Elsewhere

ERPAL: Setting objectives in projects with these 3 rules

Tue, 24/03/2015 - 12:07

Many of you who work on projects every day have enjoyed great success with them, but then again, there are also those projects that don't achieve their financial goals, or slip over the timeline or don't deliver the results they should. In this article, I’ll describe three rules to help you consistently plan for project success. In no case should a project be left to chance, to the customer or to the supplier alone. Here I’ll explain how to control a project from its start to its successful end. You risk driving it into the ground if you don’t stick to the following points:

1) Understand why we have projects

Recently, at the booth of a project manager at an exhibition, I read the following: "Projects are not an end unto themselves." This sentence may seem trivial at first glance, but it incorporates several important truths:

  • After a successfully completed project, something should be different and improved, compared to before
  • Projects start with a fixed – and hopefully clearly defined – goal
  • Afterward, there should be positive changes and the project value should persist

2) Set a clear schedule to control limited resources

Taking the definition of a project into consideration, it can be described as:

  • Unique: we’re not selling a product that’s finished and can be sold and used as is; rather, we’re creating something that doesn’t yet exist in the desired form
  • Fixed scheduled start: unless a project has a clearly defined beginning, the project team will only take it semi-seriously during the course of the work
  • Scheduled end: a clear project end is part of the goal and is defined in terms of both date and content

The fact that there are only limited resources available for a project is self-evident and of utmost importance. This is where a project plan comes into play – as well as a project manager who’ll manage the resources to stick to the plan.

3) Set goals! Make them clear, measurable and known amongst the team

A project follows a vision that consists of one or more goals. These objectives should be formulated in a SMART (Specific, Measurable, Achievable, Realistic, Timely) way and be clearly understood by all project participants, as they can only work towards objectives that are known to them. Goals extend to certain areas, e.g. cost, duration, or benefits. Important in the formulation of objectives: the project requirements are not goals but conditions that must be met before the goal can ever be reached. All requirements and project milestones should be periodically checked against each objective. This can happen in a weekly retrospective meeting, in a monthly controlling meeting or even in a daily status meeting. The basic conditions and requirements can – and will – change, of course. This is normal. Even the goal, especially of long-running projects, can change if important conditions change. However, this should be done consciously and in a controlled fashion, with the agreement of all involved parties. Under no circumstances should unnoticed and thus uncontrolled changes be accepted. Dealing with changing environments and requirements is exactly what agile project management was made for.

So: stick to your overall goal. If, for instance, you want to create an e-commerce system to attract new customer groups and generate more revenue, but you’re spending days trying to find just the right position for banners ads, you’re forgetting your goals. Set priorities and stick to them during development.

If your goal is higher revenues, better quality or more efficiency, you should provide concrete metrics and a fixed date for those measurements. Otherwise, there will be unfulfilled expectations at the end of the project. What if someone thought 1% more sales was reaching the goal, although the sales manager expected an uplift of 10% – but he just assumed this without clearly communicating his expectation? And, in addition, the sales manager guessed the project would take six months, while the rest of the team planned for 12?

The topic of the next installment in this series will be "Agile projects for a fixed price? Yes you can!"

Categories: Elsewhere

Triquanta Web Solutions: What the fq? A short summary of Solr query fields.

Tue, 24/03/2015 - 08:24
And how to use them in Drupal

A popular search engine for Drupal is Apache Solr. Although installation and configuration of Solr can be done almost completely via the Drupal Admin UI, in some cases it can be very instructive to see and understand what data is sent to and from Solr when a search is done from Drupal.

The first place to look when using Solr inside Tomcat is Tomcat's log file, usually /var/log/tomcat6/catalina.out. If this file is not present in this directory on your system, use

locate catalina.out

or a similar command to find it.

If Solr is used in its own Jetty container and run as a separate service (which is the only option for Solr 5.x), log4j is used to implement logging and configuration is done in the log4j.properties file.

By default the logs are written to 'logs/solr' under the Solr root; this can be set in log4j.properties with the 'solr.log' option, for example:

solr.log=/var/solr/logs

For more information about log4j, see Apache Log4j 2.

In the log, each line like the following represents one search query:

INFO: [solrdev] webapp=/solr path=/select params={spellcheck=true& spellcheck.q=diam& hl=true& facet=true& f.bundle.facet.mincount=1& f.im_field_tag.facet.limit=50& f.im_field_tag.facet.mincount=1& fl=id,entity_id, entity_type, bundle, bundle_name, label, ss_language, is_comment_count, ds_created, ds_changed, score, path, url, is_uid, tos_name& f.bundle.facet.limit=50& f.content.hl.alternateField=teaser& hl.mergeContigious=true& facet.field=im_field_tag& facet.field=bundle& fq=(access_node_tfslk0_all:0+OR+access__all:0)& mm=1& facet.mincount=1& qf=content^40& qf=label^5.0& qf=tags_h2_h3^3.0& qf=taxonomy_names^2.0& qf=tos_content_extra^0.1& qf=tos_name^3.0& hl.fl=content& f.content.hl.maxAlternateFieldLength=256& json.nl=map& wt=json& rows=10& pf=content^2.0& hl.snippets=3& start=0& facet.sort=count& q=diam& ps=15} hits=10 status=0 QTime=12

NB: one way to get a clearer look at the log lines is to copy one of them into a text editor and replace '&' with '&\n' and ',' with ',\n' to get a more readable text.
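The same split can be scripted; here's a tiny, purely hypothetical Python helper (not part of any Drupal or Solr tooling) that breaks a log parameter blob into one parameter per line:

```python
def prettify_solr_params(param_blob):
    """Split a Solr log parameter blob into one parameter per line."""
    # Strip the surrounding {} from the log line, then break on '&'.
    blob = param_blob.strip().lstrip('{').rstrip('}')
    return '\n'.join(part.strip() for part in blob.split('&'))

print(prettify_solr_params('{q=diam&fl=id,label&rows=10&wt=json}'))
```

Run against a full log line like the one above, this makes the dozens of parameters much easier to scan.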

Here '[solrdev]' indicates the core the query was submitted to and 'path=/select' the path.

Everything between the {}-brackets is what is added to the query as parameter. If your Solr host is localhost, Solr is running on port 8080 and the name of your core is solrdev then you can make this same query in any browser by starting with:

http://localhost:8080/solr/solrdev/select?

followed by all the text between the {}-brackets.

This is no simple query, and in fact a lot is going on here: not only is the Solr/Lucene index searched for a specific term, we also tell Solr which fields to return, ask for spellcheck suggestions, highlight the search term in the returned snippet, return facets, etcetera.

For a better understanding of the Solr query we will break it down and discuss each (well, maybe not all) of the query parameters from the above log line.

Query breakdown

q: Search term

The most basic Solr query would only contain a q-field, e.g.

http://localhost:8080/solr/solrdev/select?q=diam

This would return all fields present in Solr for all matching documents. These will either be fields directly defined in the schema.xml (in these examples we use schemas based on the one included in the Search API Solr module) like bundle_name:

<field name="bundle_name" type="string" indexed="true" stored="true"/>

or dynamic fields, which are created according to the field definition in the schema.xml, eg:

<dynamicField name="ss_*" type="string" indexed="true" stored="true" multiValued="false"/>

The above query run on my local development environment would return a number of documents like this one:

<doc> <bool name="bs_promote">true</bool> <bool name="bs_status">true</bool> <bool name="bs_sticky">false</bool> <bool name="bs_translate">false</bool> <str name="bundle">point_of_interest</str> <str name="bundle_name">Point of interest</str> <str name="content">Decet Secundum Wisi Cogo commodo elit eros meus nisl turpis.(...) </str> <date name="ds_changed">2015-03-04T08:45:18Z</date> <date name="ds_created">2015-02-19T18:55:44Z</date> <date name="ds_last_comment_or_change">2015-03-04T08:45:18Z</date> <long name="entity_id">10</long> <str name="entity_type">node</str> <str name="hash">tfslk0</str> <str name="id">tfslk0/node/10</str> <long name="is_tnid">0</long> <long name="is_uid">1</long> <str name="label">Decet Secundum Wisi</str> <str name="path">node/10</str> <str name="path_alias">content/decet-secundum-wisi</str> <str name="site">http://louis.atlantik.dev/nl</str> <arr name="sm_field_subtitle"> <str>Subtitle Decet Secundum Wisi</str> </arr> <arr name="spell"> <str>Decet Secundum Wisi</str> <str>Decet Secundum Wisi Cogo commodo elit eros meus nisl turpis. (...) </str> </arr> <str name="ss_language">nl</str> <str name="ss_name">admin</str> <str name="ss_name_formatted">admin</str> <str name="teaser"> Decet Secundum Wisi Cogo commodo elit eros meus nisl turpis. Abluo appellatio exerci exputo feugiat jumentum luptatum paulatim quibus quidem. Decet nutus pecus roto valde. Adipiscing camur erat haero immitto nimis obruo pneum valetudo volutpat. Accumsan brevitas consectetuer fere illum interdico </str> <date name="timestamp">2015-03-05T08:30:11.7Z</date> <str name="tos_name">admin</str> <str name="tos_name_formatted">admin</str> <str name="url"> http://louis.atlantik.dev/nl/content/decet-secundum-wisi </str> </doc>

(In this document I have left out most of the content of the fields content and spell).

Obviously, when searching we are not interested in all fields; for example, the aforementioned field content contains a complete view (with HTML tags stripped) of the node in Drupal, which most of the time is not relevant when showing search results.

fl: Fields

So the first thing we want to do is limit the number of fields Solr is returning by adding the fl parameter, in which we name the fields we want returned from Solr:

http://localhost:8080/solr/solrdev/select?q=diam&fl=id,entity_id,entity_type,bundle,bundle_name,label,ss_language,score,path,url

This would return documents like:

<doc> <float name="score">0.1895202</float> <str name="bundle">point_of_interest</str> <str name="bundle_name">Point of interest</str> <long name="entity_id">10</long> <str name="entity_type">node</str> <str name="label">Decet Secundum Wisi</str> <str name="path">node/10</str> <str name="ss_language">nl</str> <str name="url"> http://localhost/nl/content/decet-secundum-wisi </str> </doc>

Here we not only use fields which are directly present in the index (like bundle) but also a generated field score which indicates the relevance of the found item. This field can be used to sort the results by relevance.

By the way, from Solr 4.0 on, fl can be added multiple times to the query, with each parameter naming one field. However, the "old" way of a comma-separated field list is still supported (also in Solr 5).

So in Solr 4 the query could (or should I say: should?) be written as:

http://localhost:8080/solr/solrdev/select?q=diam&fl=id&fl=entity_id&fl=entity_type&fl=bundle&fl=bundle_name&fl=label&fl=ss_language&fl=score&fl=path&fl=url

NB: in Solr 3 this would give a wrong result, with only the first fl field returned.

Please note that you cannot query all dynamic fields at once with an fl-parameter like 'fl=ss_*': you must specify the actual fields which are created while indexing: fl=ss_language,ss_name,ss_name_formatted... etc.
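If you're generating such URLs from a script rather than typing them, a list of key/value pairs keeps the repeated fl parameters intact. A small illustrative sketch (Python here for brevity; not Drupal code, and the field names simply mirror the example above):

```python
from urllib.parse import urlencode

# Repeated 'fl' entries, in the Solr 4+ style shown above.
params = [('q', 'diam')] + [('fl', f) for f in ('id', 'entity_id', 'bundle', 'score')]
query = urlencode(params)
print(query)  # q=diam&fl=id&fl=entity_id&fl=bundle&fl=score
```

The same effect can be had in PHP with http_build_query() and an array of pairs; the point is simply that each field becomes its own fl parameter.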

fq: Filter queries

One thing we do not want is users finding unpublished content which they are not allowed to see. When using the Drupal scheme, this can be accomplished by filtering on the dynamic fields created from

<dynamicField name="access_*" type="integer" indexed="true" stored="false" multiValued="true"/>

To filter, we add a fq-field like this:

http://localhost:8080/solr/solrdev/select?q=diam&fl=id,entity_id,entity_type,bundle,bundle_name,label,ss_language,score,path,url&fq=(access_node_tfslk0_all:0+OR+access__all:0)

The queries in fq are cached independently from the other Solr queries and so can speed up complex queries. The query can also contain range queries; e.g., to limit the returned documents by a popularity field present in the Drupal node (and of course, indexed) to between 50 and 100, one could use

fq=+popularity:[50 TO 100]

For more info see the Solr wiki, CommonQueryParameters
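Note that when pasting such a filter into a browser URL, the '+', ':', brackets and spaces must be percent-encoded. A quick sketch (Python, purely illustrative):

```python
from urllib.parse import urlencode

# The range filter from the example above, percent-encoded for use in a URL.
print(urlencode({'fq': '+popularity:[50 TO 100]'}))
# fq=%2Bpopularity%3A%5B50+TO+100%5D
```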

To add filter queries programmatically in Drupal when using the Apache Solr Search-module, implement

hook_apachesolr_query_alter()

and use

$query->addFilter

to add filters to the query.

For example, to filter the query on the current language, use:

function mymodule_apachesolr_query_alter(DrupalSolrQueryInterface $query) { global $language; $query->addFilter("ss_language", $language->language); }

If using the Solr environment with Search API Solr Search, implement

hook_search_api_solr_query_alter(array &$call_args, SearchApiQueryInterface $query)

In this hook you can alter or add items to the

$call_args['params']['fq']

to filter the query.

 

In both cases the name of the field to use can be found via the Solr admin. In this case it is a string field created from the dynamic ss_*-field in 'schema.xml'.

rows and start: The rows returned

The 'rows'-parameter determines how many documents Solr will return, while 'start' determines how many documents to skip. This basically implements pager functionality.

wt: Type of the returned data

The 'wt'-parameter determines how the data from Solr is returned. Most used are:

  • xml (default)
  • json

See https://cwiki.apache.org/confluence/display/solr/Response+Writers for a complete list of available formats and their options.

qf: Boosting

The DisMax and EDisMax plugins have the ability to boost the score of documents. In the default Drupal requestHandler ("pinkPony") the edismax parser is set as query plugin:

<requestHandler name="pinkPony" class="solr.SearchHandler" default="true"> <lst name="defaults"> <str name="defType">edismax</str>

Boosting can be done by adding one or more qf-parameters which define the fields ("query field") and their boost value in the following syntax:

[fieldName]^[boostValue]

E.g:

qf=content^40& qf=label^5.0& qf=tags_h2_h3^3.0& qf=taxonomy_names^2.0&

In Drupal this is done by implementing the relevant (Search API or Apache Solr) hook and adding the boost.

For example, let's say you want to boost the result with a boost value of "$boost" if a given taxonomy term with id "$tid" is present in the field "$solr_field".

For Apache Solr module we should use:

function mymodule_apachesolr_query_alter(DrupalSolrQueryInterface $query) { $boost_q = array(); $boost_q[] = $solr_field . ':' . $tid . '^' . $boost; $query->addParam('bq', $boost_q); }

For Search API Apache Solr:

function mymodule_search_api_solr_query_alter(array &$call_args, SearchApiQueryInterface $query) { $call_args['params']['bq'][] = $solr_field . ':' . $tid . '^' . $boost; }

NB: in both examples we ignore any boost values set by other modules for the same field. In real life you should merge the existing boost array with our new one.

Negative boosting

Negative boosting is not possible in Solr. It can, however, be simulated by boosting all documents that do not have a specific value. In the above example of boosting on a specific taxonomy term, we can use:

$boost = abs($boost); $boost_q[] = '-' . $solr_field . ':' . $tid . '^' . $boost;

or, for Search API

$boost = abs($boost); $call_args['params']['bq'][] = '-' . $solr_field . ':' . $tid . '^' . $boost;

where the '-' before the field name is used to indicate that we want to boost the items that do not have this specific value.
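The string being built is simple enough to sketch as a tiny helper (hypothetical, for illustration only, in Python rather than PHP):

```python
def negative_boost_clause(field, value, boost):
    """Simulate a negative boost by boosting documents NOT matching the value."""
    boost = abs(boost)  # Solr rejects negative boost values outright
    return '-{0}:{1}^{2}'.format(field, value, boost)

print(negative_boost_clause('im_field_tag', 42, -3))  # -im_field_tag:42^3
```

The resulting clause is what would end up in the 'bq' parameter in either of the Drupal hooks shown above.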

mm: minimum should match

The EDisMax parser also supports querying phrases (as opposed to single words). The 'mm' parameter gives the minimum number of query words that must be present in the Solr document. For example: if the query is "little red corvette" and 'mm' is set to '2', only documents which contain at least:

  • "little" and "red"
  • "little" and "corvette"
  • "red" and "corvette"

are returned, and documents which contain only one of the words are not.
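The rule can be mimicked in a few lines (an illustrative sketch of the matching logic only, not how Solr implements it internally):

```python
def matches_mm(query_terms, document_text, mm):
    """True if at least mm of the query terms occur in the document."""
    doc_words = set(document_text.lower().split())
    hits = sum(1 for term in query_terms if term.lower() in doc_words)
    return hits >= mm

terms = ['little', 'red', 'corvette']
print(matches_mm(terms, 'a little red wagon', 2))  # True: "little" and "red"
print(matches_mm(terms, 'a big red wagon', 2))     # False: only "red"
```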

q.alt: for empty queries

If specified, this query will be used when the main query string is not specified or blank. This parameter is also specific for the EdisMax query handler.

And even more

Of course the above mentioned fields are not all fields used in a Solr query. Much more can be done with them and there are a lot of other parameters to influence the output of Solr.

To mention just a few:

Facets

Facets are one of the strong assets of Solr. A complete description of all the possibilities and settings for facets would take us too far in this scope, but a number of useful parameters are discussed here.

The Drupal Facet API module and its range of plugins have a plethora of settings and possibilities; see for example the Facet Pretty Paths module or, for an example of the numerous possibilities of the Facet API, the Facet API Taxonomy Sort module.

The most important Solr parameters in relation to facets are:

facet

Turn on the facets by using "facet=true" in the query.

facet.field

The field to return facets for; it can be added more than once.

facet.mincount

This indicates the minimum number of results for which to show a facet.

If this is set to '1', all facet items which would return documents are shown; if set to '0', a facet item for each value in the field will be returned, even if clicking on the item would return zero documents.

facet.sort

How to sort the facet items. Possible values are:

facet.sort=count

Sort by number of returned documents

facet.sort=index

Amounts to sorting by alphabet, or, in the Solr-wiki style: return the constraints sorted in their index order (lexicographic by indexed term). For terms in the ascii range, this will be alphabetically sorted.

facet.limit

The maximum number of facet items returned; defaults to 100.

facet.offset

The number of facet items skipped. Defaults to '0'.

'facet.limit' and 'facet.offset' can be combined to implement paging of facet items.
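The combination amounts to standard pager arithmetic; a quick hypothetical sketch:

```python
def facet_page_params(page, per_page):
    """Translate a zero-based facet page into facet.limit / facet.offset."""
    return {'facet.limit': per_page, 'facet.offset': page * per_page}

print(facet_page_params(2, 50))  # {'facet.limit': 50, 'facet.offset': 100}
```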

Highlighting hl

Highlighting is turned on with 'hl=true' and enables highlighting of the keywords in the search snippet Solr returns. Like faceting and spellchecking, highlighting is an extensive subject; see http://wiki.apache.org/solr/HighlightingParameters for more info.

hl.formatter

According to the Solr wiki: currently the only legal value is "simple". So just use 'simple'.

hl.simple.pre/hl.simple.post

Set the tags used to surround the highlight; defaults to <em> / </em>. To wrap the highlight in, for example, bold tags, use:

hl.simple.pre=<b>&hl.simple.post=</b>

Spellchecking

spellcheck

Spellchecking is enabled with 'spellcheck=true'.

Because spellchecking is a complicated and language-dependent process, it is not discussed here in full; see http://wiki.apache.org/solr/SpellCheckComponent for more information about the spellcheck component.

If the query for the spellchecker is given in a separate 'spellcheck.q'-parameter like this:

spellcheck.q=<word>

this word is used for spell checking. If the 'spellcheck.q'-parameter is not set, the default 'q'-parameter is used as input for the spellchecker. Of course the word in 'spellcheck.q' should bear a strong relation to the word in the 'q'-parameter, otherwise incomprehensible spelling suggestions would be given.

One can also make separate requests to the spellchecker:

http://localhost:8080/solr/solrdev/spell?q=<word>&spellcheck=true&spellcheck.collate=true&spellcheck.build=true

Where <word> in the 'q'-parameter is the word to use as input for the spellchecker.

One important subject related to spellchecking is the way content is analyzed before it is written into the index or read from it. See SpellCheckingAnalysis for the default settings for the spellcheck field.

In Drupal there is a spellcheck module for Search API, Search API Spellcheck, which can be used with multiple search backends.

Conclusion

Although most of the parameters mentioned above are more or less hidden by the Drupal admin interface, a basic understanding of them can help you understand why your Solr search does (or, more usually, does not) return the results you expected.

As said in the introduction: looking at the queries in the Tomcat log can help a lot when debugging Solr.

Categories: Elsewhere

Drupal.org frontpage posts for the Drupal planet: Announcing The Aaron Winborn Award to honor amazing community members

Tue, 24/03/2015 - 05:17

In honor of long-time Drupal contributor Aaron Winborn (see his recent Community Spotlight), whose battle with Amyotrophic lateral sclerosis (ALS) (also referred to as Lou Gehrig's Disease) is coming to an end later today, the Community Working Group, with the support of the Drupal Association, would like to announce the establishment of the Aaron Winborn Award.

This will be an annual award recognizing an individual who demonstrates personal integrity, kindness, and above-and-beyond commitment to the Drupal community. It will include a scholarship and stipend to attend DrupalCon and recognition in a plenary session at the event. Part of the award will also be donated to the special needs trust to support Aaron's family on an annual basis.

Thanks to Hans Riemenschneider for the suggestion, and the Drupal Association executive board for approving this idea and budget so quickly. We feel this award is a fitting honor to someone who gave so much to Drupal both on a technical and personal level.

Thank you so much to Aaron for sharing your personal journey with all of us. It’s been a long journey, and a difficult one. You and your family are all in our thoughts.

Front page news: Planet Drupal
Categories: Elsewhere

lakshminp.com: The Drupal 8 plugin system - part 4

Tue, 24/03/2015 - 02:53

We defined what a plugin is, discussed some plugins used in core and wrote our own custom plugin previously. We shall tune it up a bit in this post.

Real-world plugins have a lot more properties than the label property mentioned in our breakfast plugin. To make our plugin more "real world", we introduce two properties, image and ingredients. It makes more sense now to have a custom annotation for breakfast instead of using the default Plugin annotation.

How different are custom annotations from the usual Plugin annotation?

1) They convey more information about a plugin than the generic @Plugin annotation does. Here's a custom annotation for a views display plugin from search_api, taken from here.

/** * Defines a display for viewing a search's facets. * * @ingroup views_display_plugins * * @ViewsDisplay( * id = "search_api_facets_block", * title = @Translation("Facets block"), * help = @Translation("Display the search result's facets as a block."), * theme = "views_view", * register_theme = FALSE, * uses_hook_block = TRUE, * contextual_links_locations = {"block"}, * admin = @Translation("Facets block") * ) */

2) Custom annotations are a provision to document the metadata/properties used for a custom plugin.
Check out this snippet from FieldFormatter annotation code for instance:

/** * A short description of the formatter type. * * @ingroup plugin_translatable * * @var \Drupal\Core\Annotation\Translation */ public $description; /** * The name of the field formatter class. * * This is not provided manually, it will be added by the discovery mechanism. * * @var string */ public $class; /** * An array of field types the formatter supports. * * @var array */ public $field_types = array(); /** * An integer to determine the weight of this formatter relative to other * formatter in the Field UI when selecting a formatter for a given field * instance. * * @var int optional */ public $weight = NULL;

That gave us a lot of information about the plugin and its properties.
Now that you are convinced of the merits of custom annotations, let's create one.

Checkout the module code if you haven't already.

$ git clone git@github.com:badri/breakfast.git

Switch to the custom-annotation-with-properties tag.

$ git checkout -f custom-annotation-with-properties

and enable the module. The directory structure should look like this:

The new Annotation directory is of interest here. Custom annotations are defined here.

/** * A glimpse of how your breakfast looks. * * @var string */ public $image; /** * An array of ingredients used to make it. * * @var array */ public $ingredients = array();

Now, the plugin definition is changed accordingly. Here's the new annotation of the idli plugin.

/** * Idly! can't imagine a south Indian breakfast without it. * * * @Breakfast( * id = "idly", * label = @Translation("Idly"), * image = "https://upload.wikimedia.org/wikipedia/commons/1/11/Idli_Sambar.JPG", * ingredients = { * "Rice Batter", * "Black lentils" * } * ) */

The other change is to inform the Plugin Manager of the new annotation we are using.
This is done in BreakfastPluginManager.php.

// The name of the annotation class that contains the plugin definition. $plugin_definition_annotation_name = 'Drupal\breakfast\Annotation\Breakfast'; parent::__construct($subdir, $namespaces, $module_handler, 'Drupal\breakfast\BreakfastInterface', $plugin_definition_annotation_name);

Let's tidy the plugin by wrapping it around an interface. Though this is purely optional, the interface tells the users of the plugin what properties it exposes.
This also allows us to define a custom function called servedWith() whose implementation is plugin specific.

With the plugin class hierarchy looking like this now:

The servedWith() is implemented differently for different plugin instances.

// Idly.php public function servedWith() { return array("Sambar", "Coconut Chutney", "Onion Chutney", "Idli podi"); } // MasalaDosa.php public function servedWith() { return array("Sambar", "Coriander Chutney"); }

We make use of the interface functions we wrote in the formatter while displaying details about the user's favorite breakfast item.

// BreakfastFormatter.php foreach ($items as $delta => $item) { $breakfast_item = \Drupal::service('plugin.manager.breakfast')->createInstance($item->value); $markup = '<h1>'. $breakfast_item->getName() . '</h1>'; $markup .= '<img src="'. $breakfast_item->getImage() .'"/>'; $markup .= '<h2>Goes well with:</h2>'. implode(", ", $breakfast_item->servedWith()); $elements[$delta] = array( '#markup' => $markup, ); }

And the user profile page now looks like this.

Derivative plugins

We have Idly plugin instance mapped to the Idly class and Uppuma instance mapped to the Uppuma class. Had all the plugin instances been mapped to a single class, we would have got derivative plugins. Derivative plugins are plugin instances derived from the same class.
They are employed under these scenarios:
1. If one plugin class per instance is overkill.
There are times where you don't want to define a class for each plugin instance. You just say that it is an instance of a particular class that is already defined.
2. You need to dynamically define plugin instances.
The Flag module defines a different Flag Type plugin instance for different entities. The entities in a Drupal site are not known beforehand and hence we cannot define one instance each for every entity. This calls for a plugin derivative implementation.

Lets add derivative plugins to our breakfast metaphor.

$ git checkout -f plugin-derivatives

Here's a new user story for the breakfast requirement. We are going to add desserts to our menu now. All desserts are of type Sweet. So, we define a derivative plugin called Sweet which is based on breakfast.

This calls for 3 changes as shown in the new directory structure outlined below:

1) Define the Sweet plugin instance class on which all our desserts are going to be based on.

/** * A dessert or two would be great! * * * @Breakfast( * id = "sweet", * deriver = "Drupal\breakfast\Plugin\Derivative\Sweets" * ) */ class Sweet extends BreakfastBase { public function servedWith() { return array(); } }

Note the "deriver" property in the annotation.

2) Next, we define the deriver in the Derivative directory.

/** * Sweets are dynamic plugin definitions. */ class Sweets extends DeriverBase { /** * {@inheritdoc} */ public function getDerivativeDefinitions($base_plugin_definition) { $sweets_list = drupal_get_path('module', 'breakfast') . '/sweets.yml'; $sweets = Yaml::decode(file_get_contents($sweets_list)); foreach ($sweets as $key => $sweet) { $this->derivatives[$key] = $base_plugin_definition; $this->derivatives[$key] += array( 'label' => $sweet['label'], 'image' => $sweet['image'], 'ingredients' => $sweet['ingredients'], ); } return $this->derivatives; } }

3) The derivative gets sweets info from the sweets.yml present in the module root directory. This could even be an XML/JSON or any file format which holds metadata. I've used a YAML file for consistency's sake.

mysore_pak: label: 'Mysore Pak' image: 'https://upload.wikimedia.org/wikipedia/commons/f/ff/Mysore_pak.jpg' ingredients: - Ghee - Sugar - Gram Flour jhangri: label: 'Jhangri' image: 'https://upload.wikimedia.org/wikipedia/commons/f/f8/Jaangiri.JPG' ingredients: - Urad Flour - Saffron - Ghee - Sugar
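Stripped of Drupal specifics, the deriver's loop simply overlays each YAML entry on the base plugin definition. An illustrative sketch of that merge (Python with made-up data, not the module's actual code):

```python
base_definition = {'id': 'sweet', 'provider': 'breakfast'}

sweets = {
    'mysore_pak': {'label': 'Mysore Pak', 'ingredients': ['Ghee', 'Sugar', 'Gram Flour']},
    'jhangri': {'label': 'Jhangri', 'ingredients': ['Urad Flour', 'Saffron', 'Ghee', 'Sugar']},
}

# Each derivative inherits the base definition and overlays its own metadata;
# Drupal addresses the results as "<base id>:<derivative id>".
derivatives = {'sweet:' + key: dict(base_definition, **info) for key, info in sweets.items()}
print(sorted(derivatives))  # ['sweet:jhangri', 'sweet:mysore_pak']
```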

The class hierarchy for derivative plugins looks a little bit bigger now.

Clear your cache and you should be able to see the sweets defined in the YAML file in the breakfast choice dropdown of the user profile.

Enjoy your dessert!

That concludes our tour of Drupal 8 plugins. Hope you liked it and learnt something. Stay tuned for the next series about services.

Categories: Elsewhere

Colan Schwartz: Drupal-specific hosting: Choose a provider from those offering comprehensive platforms

Tue, 24/03/2015 - 02:12
Topics: 

There are many Web hosting companies claiming their ability to host Drupal sites in the enterprise space. However, most of the time, these providers simply provide the hardware or a virtual machine (VM/VPS) with an operating system (OS) capable of hosting Drupal if you build the application stack, configure it and manage it all yourself. They may even claim that they can set all of this up for you, but they'll charge extra for the labour. They don't have a comprehensive platform where instances can be automatically deployed as needed.

What do I mean by a "platform"?

I'm considering a somewhat complex requirement: a solution that includes a fully-managed Drupal application stack with several environments where it is trivial to move code, databases and files between them.

It might seem difficult to find such a service, but there are now several available. While these options may seem expensive relative to generic Web-site hosting, they'll save you immensely on labour costs, as you won't have to pay to do any of the following yourself:

  • Installation of the application stack on each environment (Dev, Staging and Prod; see below).
  • Managing all OS software updates and security patches.
  • Managing the configuration between environments (a.k.a. configuration management).
  • Self-hosting and managing a deployment tool such as Aegir, or creating and managing your own deployment scripts.
  • Developing, maintaining and documenting processes for your development workflow.

The environments in question are Dev (Development: the integration server where code from all developers is combined and tested), Staging (QA: where quality-assurance testing is done with the content and configuration from Production and the code from Dev) and Prod (Production: your live, user-facing site). Typically, these three server environments are set up for you, and the hosting provider supplies a simple command-line (CLI) and/or Web interface to deploy code to Dev, then to Staging and finally to Prod, while letting you easily move site data (content and configuration) from Prod back down to Staging and Dev. Your developers then work in their own sandboxes on their laptops, pushing code up through the environments and pulling site data down from them.
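To make that workflow concrete, here is a sketch of what one release cycle might look like with Drush site aliases. The alias names (@dev, @staging, @prod) and Git remotes are hypothetical, and real providers usually wrap these steps in their own CLI, so the script only prints each command (a dry run) rather than executing it:

```shell
#!/bin/sh
# Illustrative only: @dev/@staging/@prod are hypothetical Drush site
# aliases, and dev/staging/prod are hypothetical Git remotes.
# run() echoes each step instead of executing it, so the sequence
# can be reviewed safely.
run() { echo "+ $*"; }

# 1. Code moves "up": deploy the tested branch to each environment.
run git push dev main
run git push staging main
run git push prod main

# 2. Site data (content + configuration) moves "down" for QA.
run drush sql-sync @prod @staging            # copy the live database
run drush rsync @prod:%files @staging:%files # copy uploaded files

# 3. Rebuild caches on the target after each sync.
run drush @staging cache-clear all
```

Swapping `run` for direct execution (or for your provider's deployment commands) turns the outline into a working script.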

This allows you to significantly reduce or eliminate the resources you'll need for system administration and DevOps. You can focus your project resources on application development, that is, on getting your content management system up and running as quickly as possible, while leaving infrastructure issues to the experts. Even if these tasks are outsourced rather than handled in-house, they still add costs to your project.

However, you may have specific requirements which preclude you from going with such a managed solution. You may need to keep the environments behind your corporate firewall (for example, to communicate securely with other internal back-end systems such as CRMs, authentication servers or other data stores), or your country or jurisdiction may have strict privacy legislation preventing you from physically locating user data in regions where electronic communication is not guaranteed to be kept private. For example, in Canada, personal information can only be used for purposes that users have consented to (see PIPEDA). As a good number of the described hosting services rely on data centres in the United States, the PATRIOT Act there (which gives the US government broad access to electronic communication) could make it difficult to enforce such privacy protections.

At the time of this writing, the following Drupal hosting companies are available to provide services as described above. They're listed in no particular order.

I haven't had a chance to try all of these yet as I have clients on only some of them. It would be great if someone who's evaluated all of them were to publish a comparison.

This article, Drupal-specific hosting: Choose a provider from those offering comprehensive platforms, appeared first on the Colan Schwartz Consulting Services blog.

Categories: Elsewhere
