Planet Drupal

Drupal.org - aggregated feeds in category Planet Drupal

OSTraining: There Will Never Be a Drupal 9

Thu, 04/08/2016 - 15:09

Yes, that's a big statement in the title, so let me explain.

Lots of OSTraining customers are looking into Drupal 8, and they have questions about Drupal 8's future. If they invest in the platform today, how long will that investment last?

This is just my personal opinion, but I think an investment in Drupal 8 will last a long, long time.

Drupal 8 took five years to build. It was a mammoth undertaking, and no one in the Drupal community has the energy for a similar rewrite.

Categories: Elsewhere

Frederic Marand: How to display time and memory use for Drush commands

Thu, 04/08/2016 - 14:08

When you use Drush, especially in crontabs, you may sometimes be bitten by RAM or duration limits. Of course, running Drush with the "-d" option will provide this information, but it will only do so at the end of an annoyingly noisy output debugging the whole command run.

On the other hand, just running the Drush command within a time command won't provide fine-grained memory reporting. Luckily, Drush implements hooks that make acquiring this information easy, so here is a small gist you can use as a standalone Drush plugin or add to a module of your own:
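The gist itself is not reproduced in this excerpt. Purely as an illustrative sketch of the hook-based approach (the file and function names are hypothetical, following Drush 8 plugin conventions; the author's actual gist may differ):

```php
<?php
// example.drush.inc — hypothetical standalone Drush plugin.
// Records the start time when Drush boots, then reports duration
// and peak memory after the command finishes.

/**
 * Implements hook_drush_init().
 */
function example_drush_init() {
  $GLOBALS['example_timer_start'] = microtime(TRUE);
}

/**
 * Implements hook_drush_exit().
 */
function example_drush_exit() {
  $elapsed = microtime(TRUE) - $GLOBALS['example_timer_start'];
  drush_print(dt('Duration: @sec s, peak memory: @mem MB', array(
    '@sec' => round($elapsed, 2),
    '@mem' => round(memory_get_peak_usage() / 1048576, 1),
  )));
}
```

Dropped into a Drush include path (or a module), this prints a single summary line after every command, which is far quieter than running everything under `-d`.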



lakshminp.com: Drupal composer workflow - part 2

Thu, 04/08/2016 - 09:41

In the previous post, we saw how to add and manage modules and module dependencies in Drupal 8 using Composer.

In this post we shall see how to use an exclusively composer-based Drupal 8 workflow. Let's start with a vanilla Drupal install. The recommended way to go about it is to use the Drupal Composer project.

$ composer create-project drupal-composer/drupal-project:8.x-dev drupal-8.dev

If you are a careful observer (unlike me), you will notice that a downloaded Drupal 8 package ships with the vendor/ directory. In other words, we need not install the composer dependencies when we download it from d.o. On the other hand, if you "git cloned" Drupal 8, it won't contain the vendor/ directory, hence the extra step of running `composer install` in the root directory. The top-level directory contains a composer.json, and the name of the package is drupal/drupal, which is more of a wrapper for the drupal/core package inside the core/ directory. The drupal/core package installs Drupal core and its dependencies. The drupal/drupal package helps you build a site around Drupal core and maintains dependencies related to your site, modules, etc.

The Drupal Composer project takes a slightly different project structure. It installs core and its dependencies similarly to drupal/drupal, and also installs the latest stable versions of Drush and Drupal Console.

$ composer create-project drupal-composer/drupal-project:8.x-dev d8dev --stability dev --no-interaction

New directory structure

Everything Drupal related goes in the web/ directory, including core, modules, profiles and themes. Contrast this with the usual structure where there is a set of top level directories named core, modules, profiles and themes.
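This mapping comes from the composer/installers settings in the top-level composer.json. As a rough sketch (the exact entries in your generated file may differ), the "extra" section routes each Drupal package type under web/:

```json
{
    "extra": {
        "installer-paths": {
            "web/core": ["type:drupal-core"],
            "web/modules/contrib/{$name}": ["type:drupal-module"],
            "web/profiles/contrib/{$name}": ["type:drupal-profile"],
            "web/themes/contrib/{$name}": ["type:drupal-theme"]
        }
    }
}
```

Because the paths are plain configuration, you can point them anywhere you like; web/ is simply the Drupal Composer project's convention.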

Drush and Drupal Console (both latest stable versions) get installed inside the vendor/bin directory. The reason they are packaged on a per-project basis is to avoid the dependency issues we might normally face if they were installed globally.

How to install Drupal

Drupal can be installed using the typical site-install command provided by drush.

$ cd d8dev/web
$ ../vendor/bin/drush site-install --db-url=mysql://<db-user-name>:<db-password>@localhost/<db-name> -y

Downloading modules

Modules can be downloaded using composer. They get downloaded into the web/modules/contrib directory.

$ cd d8dev
$ composer require drupal/devel:8.1.x-dev

The following things happen when we download a module via composer.

  1. Composer updates the top-level composer.json and adds drupal/devel:8.1.x-dev as a dependency.

"require": {
    "composer/installers": "^1.0.20",
    "drupal-composer/drupal-scaffold": "^2.0.1",
    "cweagans/composer-patches": "~1.0",
    "drupal/core": "~8.0",
    "drush/drush": "~8.0",
    "drupal/console": "~1.0",
    "drupal/devel": "8.1.x-dev"
},

  2. Composer dependencies (if any) for that module get downloaded into the top-level vendor directory. These are specified in the composer.json file of that module. At the time of writing, the Devel module does not have any composer dependencies.

"license": "GPL-2.0+",
"minimum-stability": "dev",
"require": { }
}

Most modules in Drupal 8 were (and are) written without taking composer into consideration. Traditionally we use the drush dl command, which parses our request and downloads the appropriate version of the module from drupal.org servers. Downloading a module via composer requires, at a minimum, that the module have a composer.json. So how does composer download all Drupal contrib modules if they don't have a composer.json? The answer lies in a not-so-secret sauce ingredient we added in our top-level composer.json:

"repositories": [
    {
        "type": "composer",
        "url": "https://packagist.drupal-composer.org"
    }
],

Composer downloads all packages from a central repository called Packagist; it is PHP's equivalent of npmjs. Drupal provides its own flavour of Packagist to serve modules and themes hosted exclusively at Drupal.org. Drupal packagist ensures that contrib maintainers need not add a composer.json to their projects.

Let's take another module which does not have a composer.json, like Flag (at the time of writing). Let's try and download Flag using composer.

$ composer require drupal/flag:8.4.x-dev
./composer.json has been updated
> DrupalProject\composer\ScriptHandler::checkComposerVersion
Loading composer repositories with package information
Updating dependencies (including require-dev)
  - Installing drupal/flag (dev-8.x-4.x 16657d8)
    Cloning 16657d8f84b9c87144615e4fbe551ad9a893ad75
Writing lock file
Generating autoload files
> DrupalProject\composer\ScriptHandler::createRequiredFiles

Neat. Drupal Packagist parses contrib modules and serves the one which matches the name and version we gave when we ran that "composer require" command.

Specifying package sources

There is one other step you need to do to complete your composer workflow: switching to the official Drupal.org composer repository. The stock composer.json ships with Drupal packagist as the default repository.

"repositories": [
    {
        "type": "composer",
        "url": "https://packagist.drupal-composer.org"
    }
],

Add the Drupal.org composer repo using the following command:

$ composer config repositories.drupal composer https://packages.drupal.org/8

Now, your repositories entry in composer.json should look like this:

"repositories": {
    "0": {
        "type": "composer",
        "url": "https://packagist.drupal-composer.org"
    },
    "drupal": {
        "type": "composer",
        "url": "https://packages.drupal.org/8"
    }
}

To ensure that composer indeed downloads from the new repo we specified above, let's remove the drupal packagist entry from composer.json.

$ composer config --unset repositories.0

The repositories config looks like this now:

"repositories": {
    "drupal": {
        "type": "composer",
        "url": "https://packages.drupal.org/8"
    }
}

Now, let's download a module from the new repo.

$ composer require drupal/token -vvv

As a part of the verbose output, it prints the following:

...
Loading composer repositories with package information
Downloading https://packages.drupal.org/8/packages.json
Writing /home/lakshmi/.composer/cache/repo/https---packages.drupal.org-8/packages.json into cache
...

which confirms that we downloaded from the official package repo.

Custom package sources

Sometimes, you might want to specify your own package source for a custom module you own, say, on GitHub. This follows the usual conventions for adding VCS package sources in Composer, but I'll show how to do it in a Drupal context.

First, add your GitHub URL as a VCS repository using the composer config command.

$ composer config repositories.restful vcs "https://github.com/RESTful-Drupal/restful"

Your composer.json will look like this after the above command is run successfully:

"repositories": {
    "drupal": {
        "type": "composer",
        "url": "https://packages.drupal.org/8"
    },
    "restful": {
        "type": "vcs",
        "url": "https://github.com/RESTful-Drupal/restful"
    }
}

If you want to download a package from your custom source, you might want it to take precedence over the official package repository, as order really matters to composer. I haven't found a way to do this via the CLI, but you can edit the composer.json file and swap the two package sources to look like this:

"repositories": {
    "restful": {
        "type": "vcs",
        "url": "https://github.com/RESTful-Drupal/restful"
    },
    "drupal": {
        "type": "composer",
        "url": "https://packages.drupal.org/8"
    }
}

Now, let's pick up restful 8.x-3.x. We can specify a GitHub branch by prefixing it with "dev-".

$ composer require "drupal/restful:dev-8.x-3.x-not-ready"

Once restful is downloaded, composer.json is updated accordingly.

"require": {
    "composer/installers": "^1.0.20",
    "drupal-composer/drupal-scaffold": "^2.0.1",
    "cweagans/composer-patches": "~1.0",
    "drupal/core": "~8.0",
    "drush/drush": "~8.0",
    "drupal/console": "~1.0",
    "drupal/devel": "8.1.x-dev",
    "drupal/flag": "8.4.x-dev",
    "drupal/mailchimp": "8.1.2",
    "drupal/token": "1.x-dev",
    "drupal/restful": "dev-8.x-3.x-not-ready"
},

Updating Drupal core

Drupal core can be updated by running:

$ composer update drupal/core
> DrupalProject\composer\ScriptHandler::checkComposerVersion
Loading composer repositories with package information
Updating dependencies (including require-dev)
  - Removing drupal/core (8.1.7)
  - Installing drupal/core (8.1.8)
    Downloading: 100%
Writing lock file
Generating autoload files
> DrupalProject\composer\ScriptHandler::createRequiredFiles

As the output reads, we updated core from 8.1.7 to 8.1.8. We will revisit the "Writing lock file" part in a moment. After this step succeeds, we have to run drush updatedb to apply any database updates. This applies to updating modules as well.

$ cd d8dev/web
$ ../vendor/bin/drush updatedb

Updating modules

One of the things I like about the composer workflow is that I can update selected modules, or even a single module. This is not possible using drush. The command for updating a module, say, devel, is:

$ composer update drupal/devel
> DrupalProject\composer\ScriptHandler::checkComposerVersion
Loading composer repositories with package information
Updating dependencies (including require-dev)
Nothing to install or update
Generating autoload files
> DrupalProject\composer\ScriptHandler::createRequiredFiles

Hmmm. Looks like devel is already at the latest bleeding-edge version. To quickly review which composer-related artifacts we need to check in to version control:

Should you check in the vendor/ directory?

Composer recommends that you shouldn't, but some environments don't support composer (e.g., Acquia Cloud), in which case you have to check in your vendor/ folder too.

Should you check in the composer.json file?

By now, you should know the answer to this question :)

Should you check in the composer.lock file?

Damn yes. composer.lock contains the exact versions of the dependencies that are installed. For example, if your project depends on Acme 1.* and you install 1.1.2, a co-worker running composer install a month later might get Acme 1.1.10, which could introduce version discrepancies in your project. To prevent this, composer install checks whether a lock file exists and installs only the specific versions recorded, or "locked" down, in the lock file. The only time the lock file changes is when you run composer update to update your project dependencies to their latest versions. When that happens, composer updates the lock file with the newer versions that got installed.
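As a rough illustration of that relationship (the package name and versions here are invented, and both files are heavily abbreviated): the constraint you author lives in composer.json, while the lock file records the exact version composer resolved:

```json
// composer.json — the constraint you author:
{ "require": { "acme/acme": "1.*" } }

// composer.lock — the exact version resolved at install time:
{ "packages": [ { "name": "acme/acme", "version": "1.1.2" } ] }
```

As long as the lock file is committed, every `composer install` on every machine yields 1.1.2, regardless of what newer 1.x releases exist.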


Metal Toad: Avoiding Drupal 7 #AJAX Pitfalls

Thu, 04/08/2016 - 04:10
August 3rd, 2016, by Marcus Bernal

Rather than provide a basic how-to tutorial on Drupal's form API #AJAX functionality, I decided to address a few pitfalls that often frustrate developers, both junior and senior alike. To me, it seems that most of the problems arise from the approach rather than the direct implementation of the individual elements.

TL;DR
  • Try to find a reasonable argument for not using #ajax.
  • Do not do any processing in the callback function; it's too late, I'm sorry.
  • Force button names that are semantic and scalable.
  • Template buttons and remove unnecessary validation from #ajax actions.
  • Use '#theme_wrappers' => array('container') rather than '#prefix' and '#suffix'.
Is AJAX Even Needed?

Since #ajax hinders accessibility and adds that much more complexity, reconsider other approaches before committing to this one. Drupal will automatically handle the "no js" accessibility issue by providing full page refreshes with unsubmitted forms, but issues will still exist for those using screen readers. Because the time to request and receive the new content is indeterminate, screen readers will fail at providing users with audible descriptions of the new content. Simply by choosing to use #ajax, you will automatically exclude those needing visual assistance. So, if the task is simply hiding/showing another field or set of fields, then #states would be a better fit. If the task is to select something out of a large selection, a multiple-page approach or even an entity reference with an autocomplete field could suffice.

This example is a simplified version of a new field type used to select data from a Solr index of another site's products. The number of products was in the 200k's, and the details needed to decide on a selection were more than just the product names, so building checkboxes/radios/a select box would be too unwieldy and an autocomplete could not provide enough information. Also, the desired UX was to use a modal rather than multiple pages.

Callback is a Lie

A misconception that many developers, my past self included, have is that the AJAX callback function is the place to perform the bulk of the logic. I have come to approach this function as just one that returns the portion of the form that I want. Any logic that changes the structure or data of a form should be handled in the form building function, because there it will be persistent: Drupal will store those changes but ignore any made within the AJAX callback. So, the role of the callback function is simply a getter for a portion of the $form array. At first, it may seem easier to just hardcode the logic to return the sub-array, but I recommend a dynamic solution that relies on the trigger's nested position relative to the AJAX container.

function product_details_selector_field_widget_form(&$form, &$form_state, $field, $instance, $langcode, $items, $delta, $element) {
  // ...
  // Add a property to nested buttons to declare the relative depth
  // of the trigger to the AJAX targeted container.
  $form['container']['modal']['next_page']['#nested_depth'] = 1;
  // ...
}

Then, for the callback, some "blind" logic can easily find and return the portion of the form to render.

/**
 * AJAX callback to replace the container of the product_details_selector.
 */
function product_details_selector_ajax_return($form, $form_state) {
  // Trim the array of array parents for the trigger down to the container.
  $array_parents = $form_state['triggering_element']['#array_parents'];
  $pop_count = 1; // The trigger is always included, so always have to pop.
  if (isset($form_state['triggering_element']['#nested_depth'])) {
    $pop_count += $form_state['triggering_element']['#nested_depth'];
  }
  for ($i = 0; $i < $pop_count; $i++) {
    if (empty($array_parents)) {
      break; // Halt the loop whenever there are no more items to pop.
    }
    array_pop($array_parents);
  }
  // Return the nested array.
  return drupal_array_get_nested_value($form, $array_parents); // This function is so awesome.
}

With this approach, any future modifications to the $form array outside of the container are inconsequential to this widget. And if this widget's array is modified outside of the module, the modifier will just have to double check the #nested_depth values rather than completely overriding the callback function.

Name the Names

For clarity, from here on name will refer to what will be used for the HTML attributes id and name for containers (divs) and buttons, respectively.

As with everything in programming, naming is the initial task that can make development, current and future, a simple walk through the business logic or a spaghetti mess of "oh yeahs". This is especially true for #ajax, which requires HTML ID attributes both to place the new content and to handle user actions (triggers). For most developers, this step is brushed over because the idea of their work being used in an unconventional or altered way is completely out of their purview. But a solid approach will reduce the frustration of future developers, including yourself, with this #ajax widget right now.

In this example and most cases these triggers will be buttons, but Drupal 7 also allows other triggering elements, such as select boxes or radio buttons. This leaves a weird situation where those other triggers have semantic names, but buttons will simply be named 'op'. For a simple form, this is no big deal, but for something complex, determining which action to take relies on comparing button values. That gets much harder when you have multiple fields of the same type, bring in translation, and/or the client decides to change the wording later in the project. So, my suggestion is to override the button names and base the logic on them.

// drupal_html_class() converts _ to - as well as removing dangerous characters.
$trigger_prefix = drupal_html_class($field['field_name'] . '-' . $langcode . '-' . $delta);

// Short trigger names.
$show_trigger = $trigger_prefix . '-modal-open';
$next_trigger = $trigger_prefix . '-modal-next';
$prev_trigger = $trigger_prefix . '-modal-prev';
$search_trigger = $trigger_prefix . '-modal-search';
$add_trigger = $trigger_prefix . '-add';
$remove_trigger = $trigger_prefix . '-remove';
$cancel_trigger = $trigger_prefix . '-cancel';

// Div wrapper.
$ajax_container = $trigger_prefix . '-ajax-container';

The prefix in this example is built for a field form widget. It is unique to the field's name, language, and delta so that multiple instances can exist in the same form. But if your widget is not a field, it is still best to start with something that is dynamically unique. Then, semantics are used to fill out the rest of the trigger names needed as well as the container's ID.

Button Structure

Ideally, every button within the #ajax widget should simply cause a rebuild of the same container, regardless of the changes triggered within the nested array. Since the callback is reduced to a simple getter for the container's render array, the majority of trigger properties can be templated. Now, all buttons that are built off of this template, barring intentional overrides, will prevent validation of elements outside of the widget, prevent submission, and have the same #ajax command to run.

$ajax_button_template = array(
  '#type' => 'button', // Not 'submit'.
  '#value' => t('Button Template'), // To be replaced.
  '#name' => 'button-name', // To be replaced.
  '#ajax' => array(
    'callback' => 'product_details_selector_ajax_return',
    'wrapper' => $ajax_container,
    'method' => 'replace',
    'effect' => 'fade',
  ),
  '#validate' => array(),
  '#submit' => array(),
  '#limit_validation_errors' => array(array()), // Prevent standard Drupal validation.
  '#access' => TRUE, // Display will be conditional based on the button and the state.
);

// Limit the validation errors down to the specific item's AJAX container.
// Once again, the field could be nested in multiple entity forms
// and the errors array must be exact. If the widget is not a field,
// then use the '#parents' key if available.
if (!empty($element['#field_parents'])) {
  foreach ($element['#field_parents'] as $field_parent) {
    $ajax_button_template['#limit_validation_errors'][0][] = $field_parent;
  }
}
$ajax_button_template['#limit_validation_errors'][0][] = $field['field_name'];
$ajax_button_template['#limit_validation_errors'][0][] = $langcode;
$ajax_button_template['#limit_validation_errors'][0][] = $delta;
$ajax_button_template['#limit_validation_errors'][0][] = 'container';

Limiting the validation errors will prevent other, unrelated fields from affecting the modal's functionality. If certain fields are a requirement, though, they can be specified here. This example will validate any defaults, such as required fields, that exist within the container.

$form['container']['modal']['page_list_next'] = array(
  '#value' => t('Next'),
  '#name' => $next_trigger,
  '#access' => FALSE,
  '#page' => 1, // For page navigation of Solr results.
) + $ajax_button_template; // Keys not defined in the first array will be set from the values in the second.

// Fade effect within the modal is disorienting.
$element['container']['modal']['search_button']['#ajax']['effect'] = 'none';
$element['container']['modal']['page_list_prev']['#ajax']['effect'] = 'none';
$element['container']['modal']['page_list_next']['#ajax']['effect'] = 'none';

The #page key is arbitrary and is simply used to keep track of the page state without cluttering up $form_state, especially since the entire array of the triggering element is already stored in that variable. No buttons within the widget other than previous and next need to track the page: clicking the search button should result in the first page of a new search, while the cancel and selection buttons will close the modal anyway.

Smoking Gun

Determining the widget's state can now start easily with checks on the name and data of the trigger.

$trigger = FALSE;
if (!empty($form_state['triggering_element']['#name'])) {
  $trigger = $form_state['triggering_element'];
}
$trigger_name = $trigger ? $trigger['#name'] : FALSE;

$open_modal = FALSE;
// Check whether the trigger's name starts with the modal prefix.
if ($trigger_name && strpos($trigger_name, $trigger_prefix . '-modal') === 0) {
  $open_modal = TRUE;
}

// ...
// Hide or show modal.
$form['container']['modal']['#access'] = $open_modal;

// ...
// Obtain page number regardless of next or previous.
$search_page = 1;
if (isset($trigger['#page'])) {
  $search_page = $trigger['#page'];
}

// ...
// Calculate if a next page button should be shown.
$next_offset = ($search_page + 1) * $per_page;
if ($next_offset < $search_results['total']) {
  $form['container']['modal']['next_page']['#access'] = TRUE; // Or '#disabled' if the action is too jerky.
}

Theming

Now, to where most developers start their problem solving: how to build the AJAX-able portion. Drupal requires an element with an ID attribute to target where the new HTML is inserted. Ideally, it is best to make the target element and the AJAX content one and the same. There are a couple of ways of doing this; the most common one I see is far too static and therefore difficult to modify or extend.

// Div wrapper for AJAX replacing.
$element['container'] = array(
  '#prefix' => '<div id="' . $ajax_container . '">',
  '#suffix' => '</div>',
);

This does solve the problem for the time being. It renders any child elements properly while wrapping them with the appropriate HTML. But if another module, function, or developer wants to add other information, classes for instance, they would have to recreate the entire #prefix string. What I propose is to use the #theme_wrappers key instead.

// Div wrapper for AJAX replacing.
$element['container'] = array(
  '#theme_wrappers' => array('container'),
  '#attributes' => array(
    'id' => $ajax_container,
  ),
);

if (in_array($trigger_name, $list_triggers)) {
  $element['container']['#attributes']['class'][] = 'product-details-selector-active-modal';
}

// Div inner-wrapper for modal styling.
$element['container']['modal'] = array(
  '#theme_wrappers' => array('container'),
  '#attributes' => array(
    'class' => array('dialog-box'),
  ),
);

$element['container']['product_details'] = array(
  '#theme_wrappers' => array('container'),
  '#attributes' => array(
    'class' => array('product-details'),
  ),
  '#access' => TRUE,
);

I have experienced in the past that using #theme causes the form elements to be rendered "wrong," losing their names and their relationships with the data. The themes declared within #theme_wrappers render later in the pipeline, so form elements will not lose their identity and the div container can be built dynamically. That is, to add a class, one just needs to add another element to $element['container']['#attributes']['class'].

Conclusion

I do not propose the above as hard-set rules, but they should be helpful ideas that let you focus on the important logic rather than basic functional logistics. View the form as transforming over time as the user navigates, with the AJAX functionality simply a way to refresh a portion of that form, and the complexity of building your form widget will reduce down to the business logic needed.


Mediacurrent: Pixel 'Perfection' front-end development. Or, Avoiding awkward conversations with the Quality Assurance team

Wed, 03/08/2016 - 21:32
What is 'pixel perfect'?

Pixel perfection is when you put the finished coded page and the design file next to each other and cannot tell them apart. To quote Brandon Jones:

...so precise, so pristine, so detailed that the casual viewer can’t tell the difference.


Chromatic: In Search of a Better Local Development Server

Wed, 03/08/2016 - 19:26
The problem with development environments

If you're a developer and you're like me, you have probably tried out a lot of different solutions for running development web servers. A list of the tools I've used includes:

That's not even a complete list — I know I've also tried other solutions from the wider LAMP community, still others from the Drupal community, and I've rolled my own virtual-machine based servers too.

All of these tools have their advantages, but I was never wholly satisfied with any of them. Typically, I would encounter problems with stability when multiple sites on one server needed different configurations, or problems customizing the environment enough to make it useful for certain projects. Even the virtual-machine based solutions often suffered from the same kinds of problems — even when I experimented with version-controlling critical config files such as vhost configurations, php.ini and my.cnf files, and building servers with configuration management tools like Chef and Puppet.

Drupal VM

Eventually, I found Drupal VM, a very well-thought-out virtual machine-based development tool. It’s based on Vagrant and another configuration management tool, Ansible. This was immediately interesting to me, partly because Ansible is the tool we use internally to configure project servers, but also because the whole point of configuration management is to reliably produce identical configuration whenever the software runs. (Ansible also relies on YAML for configuration, so it fits right in with Drupal 8).

My VM wishlist

Since I've worked with various VM-based solutions before, I had some fairly specific requirements, some to do with how I work, some to do with how the Chromatic team works, and some to do with the kinds of clients I'm currently working with. So I wanted to see if I could configure Drupal VM to work within these parameters:

1. The VM must be independently version-controllable

Chromatic is a distributed team, and I don't think any two of us use identical toolchains. Because of that, we don't currently want to include any development environment code in our actual project repositories. But we do need to be able to control the VM configuration in git. By this I mean that we need to keep every setting on the virtual server outside of the server in version-controllable text files.

Version-controlling a development server in this way also implies that there will be little or no need to perform administrative tasks such as creating or editing virtual host files or php.ini files (in fact, configuration of the VM in git means that we must not edit config files in the VM since they would be overridden if we recreate or reprovision it).

Furthermore, it means that there's relatively little need to actually log into the VM, and that most of our work can be performed using our day-to-day tools (i.e. what we've configured on our own workstations, and not whatever tools exist on the VM).

2. The VM must be manageable as a git submodule

On a related note, I wanted to be able to add the VM to the development server repository and never touch its files—I'm interested in maintaining the configuration of the VM, but not so much the VM itself.

It may help to explain this in Drupal-ish terms; when I include a contrib module in a Drupal project, I expect to be able to interact with that module without needing to modify it. This allows the module to be updated independently of the main project. I wanted to be able to work with the VM in the same way.

3. The VM must be able to be recreated from scratch at any time

This is a big one for me. If I somehow mess up a dev server, I want to be able to check out the latest version of the server in git, boot it and go back to work immediately. Specifically, I want to be able to restore (all) the database(s) on the box more or less automatically when the box is recreated.

Similarly, I usually work at home on a desktop workstation. But when I need to travel or work outside the house, I need to be able to quickly set up the project(s) I'll be working on on my laptop.

Finally, I want the VM configuration to be easy to share with my colleagues (and sometimes with clients directly).

4. The VM must allow multiple sites per server

Some of the clients we work with have multiple relatively small, relatively similar sites. These sites sometimes require similar or identical changes. For these clients, I much prefer to have a single VM that I can spin up to work on one or several of their sites at once. This makes it easier to switch between projects, and saves a great deal of disk space (the great disadvantage to using virtual machines is the amount of disk space they use, so putting several sites on a single VM can save a lot of space).

And of course if we can have multiple sites per server, then we can also have a single site per server when that's appropriate.
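In Drupal VM, this multi-site setup lives in config.yml. As a hedged sketch (the hostnames and paths are hypothetical, and the exact keys should be checked against the default.config.yml shipped with your Drupal VM version):

```yaml
# Hypothetical config.yml excerpt: two client sites on one VM.
vagrant_hostname: client.dev

apache_vhosts:
  - servername: "site-one.client.dev"
    documentroot: "/var/www/site-one/web"
  - servername: "site-two.client.dev"
    documentroot: "/var/www/site-two/web"

mysql_databases:
  - name: site_one
  - name: site_two
```

Adding a third site is then a matter of appending one vhost and one database entry and reprovisioning, rather than building a whole new VM.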

5. The VM must allow interaction via the command line

I've written before about how I do most of my work in a terminal. When I need to interact with the VM, I want to stay in the terminal, and not have to find or launch a specific app to do it.

6. The VM must create drush aliases

The single most common type of terminal command for me to issue to a VM is drush @alias {something}. And when running the command on a separate server (the VM!), the command must be prefixed with an alias, so a VM that can create drush aliases (or help create them) is very, very useful (especially in the case where there are multiple sites on a single VM).
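Drupal VM can help generate these aliases, but purely for illustration, a hand-written Drush 8 alias for a site on the VM might look roughly like this (the hostname, paths, and SSH details are all hypothetical):

```php
<?php
// Hypothetical ~/.drush/client.aliases.drushrc.php
$aliases['site-one.client.dev'] = array(
  'uri' => 'site-one.client.dev',
  'root' => '/var/www/site-one/web',
  'remote-host' => 'site-one.client.dev',
  'remote-user' => 'vagrant',
  'ssh-options' => '-o PasswordAuthentication=no -i ~/.vagrant.d/insecure_private_key',
);
```

With something like this in place, `drush @site-one.client.dev status` runs the command inside the VM from the host's terminal.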

7. The VM must not be too opinionated about the stack

Given the variations in clients' production environments, I need to be able to use any current version of PHP, use Apache or Nginx, and vary the server OS itself.

My VM setup

Happily, it turns out that Drupal VM satisfies all these requirements: it either supports them out of the box or makes it very straightforward to incorporate the required functionality. Items 4, 5, 6, and 7, for example, are stock.

But before I get into the setup of items 1, 2, and 3, I should note that this is not the only way to do it.

Drupal VM is a) extensively documented, and b) flexible enough to accommodate several very different workflows and project structures from the one I'm going to describe here. If my configuration doesn't work with your workflow, or my workflow won't work with your configuration, you can probably still use Drupal VM if you need or want a VM-based development solution.

For Drupal 8 especially, I would simply use Composer to install Drupal, and install Drupal VM as a dependency.

Note also that if you just need a quick Drupal box for more generic testing, you don't need to do any of this, you can just get started immediately.

Structure

We know that Drupal VM is based on Ansible and Vagrant, and that both of those tools rely on config files (YAML and Ruby respectively). Furthermore, we know that Vagrant can keep folders on the host and guest systems in sync, so we also know that we'll be able to handle item 1 from my wishlist--that is, we can maintain separate repositories for the server and for projects.

This means we can have our development server as a standalone directory, and our project repositories in another. For example, we might set up the following directory structure, where example.com contains the project repository and devserver contains the Drupal VM configuration.

Servers
└── devserver/
Sites
└── example.com/

Configuration files

Thanks to some recent changes, Drupal VM can be configured with an external config.yml file, a local config.yml file, and an external Vagrantfile.local file using a delegating Vagrantfile.

The config.yml file is required in this setup, and can be used to override any or all of the default configuration in Drupal VM's own default.config.yml file.

The Vagrantfile.local file is optional, but useful in case you need to alter Drupal VM's default Vagrant configuration.

The delegating Vagrantfile is the key to tying together our main development server configuration and the Drupal VM submodule. It defines the directory where configuration files can be found, and loads the Drupal VM Vagrantfile.
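Based on the pattern in Drupal VM's documentation, a delegating Vagrantfile can be as short as the following. Treat the exact environment variable names as version-dependent, and the directory names as assumptions matching the layout used here:

```ruby
# Delegating Vagrantfile: point Vagrant at the Drupal VM submodule and at
# the directory holding config.yml / Vagrantfile.local, then hand off to
# Drupal VM's own Vagrantfile.

# The absolute path to the root directory of the project.
ENV['DRUPALVM_PROJECT_ROOT'] = "#{__dir__}"

# The relative path from the project root to the config directory.
ENV['DRUPALVM_CONFIG_DIR'] = "config"

# The relative path from the project root to the Drupal VM submodule.
ENV['DRUPALVM_DIR'] = "drupal-vm"

# Load the real Vagrantfile, which reads the configuration above.
load "#{__dir__}/#{ENV['DRUPALVM_DIR']}/Vagrantfile"
```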

This makes it possible to create the structure we need to satisfy item 2 from my wishlist--that is, we can add Drupal VM as a git submodule to the dev server configuration:

Server/
├── Configuration/
|   ├── config.yml
|   └── Vagrantfile.local
├── Drupal VM/
└── Vagrantfile

Recreating the VM

One motivation for all of this is to be able to recreate the entire development environment quickly. As mentioned above, this might be because the VM has become corrupt in some way, because I want to work on the site on a different computer, or because I want to share the site—server and all—with a colleague.

Mostly, this is simple. To the extent that the entire VM (along with the project running inside it!) is version-controlled, I can just ask my colleague to check out the relevant repositories and (at most!) override the vagrant_synced_folders option in a local.config.yml with their own path to the project directory.

In checking out the server repository (i.e. we are not sharing an actual virtual disk image), my colleague will get the entire VM configuration including:

  • Machine settings,
  • Server OS,
  • Databases,
  • Database users,
  • Apache or Nginx vhosts,
  • PHP version,
  • php.ini settings,
  • Whatever else we've configured, such as Solr, Xdebug, Varnish, etc.

So, with no custom work at all—even the delegating Vagrantfile comes from the Drupal VM documentation—we have set up everything we need, with two exceptions:

  1. Entries for /etc/hosts file, and
  2. Databases!

For these two issues, we turn to the Vagrant plugin ecosystem.

/etc/hosts entries

The simplest way of resolving development addresses (such as example.dev) to the IP of the VM is to create entries in the host system's /etc/hosts file:

192.168.99.99 example.dev

Managing these entries, if you run many development servers, is tedious.

Fortunately, there's a plugin that manages these entries automatically: Vagrant Hostsupdater. It simply adds the relevant entries when the VM is created, and removes them again (configurably) when the VM is halted or destroyed.

Databases

Importing the database into the VM is usually a one-time operation, but since I'm trying to set up an easy process for working with multiple sites on one server, I sometimes need to do this multiple times — especially if I've destroyed the actual VM to save disk space.

Similarly, exporting the database isn't an everyday action, but again I sometimes need to do this multiple times and it can be useful to have a selection of recent database dumps.

For these reasons, I partially automated the process with the help of a Vagrant plugin. "Vagrant Triggers" is a Vagrant plugin that allows code to be executed "…on the host or guest before and/or after Vagrant commands." I use this plugin to dump all non-system databases on the VM on vagrant halt, delete any dump files over a certain age, and to import any databases that can be found in the dump location on the first vagrant up.
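The halt-time dump logic can be sketched in shell. This is a simplified illustration of the approach rather than the actual scripts from the repository; the system-schema list and the 14-day retention window are assumptions:

```shell
#!/bin/sh
# Keep only "site" databases: filter MySQL's system schemas out of a
# database list. In a real script the list would come from
# `mysql -N -e 'SHOW DATABASES'`, and each survivor would be fed to
# mysqldump.
filter_site_dbs() {
  grep -Ev '^(information_schema|performance_schema|mysql|sys)$'
}

# Simulated `SHOW DATABASES` output:
printf 'information_schema\nmysql\nexample_dev\nsys\n' | filter_site_dbs
# -> example_dev

# Prune dumps older than 14 days so only a recent selection is kept
# (demonstrated here on a throwaway directory).
DUMP_DIR=$(mktemp -d)
touch "$DUMP_DIR/fresh.sql.gz"
find "$DUMP_DIR" -name '*.sql.gz' -mtime +14 -delete
ls "$DUMP_DIR"
# -> fresh.sql.gz
```

Hooking functions like these into `vagrant halt` and the first `vagrant up` is then a matter of registering them with the Vagrant Triggers plugin.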

Note that while I use these scripts for convenience, I don't rely on them to safeguard critical data.

With these files and a directory for database dumps to reside in, my basic server wrapper now looks like this:

Server/
├── Vagrantfile
├── config/
├── db_dump.sh
├── db_dumps/
├── db_import.sh
└── drupal-vm/

My workflow

New projects

All of the items on my wishlist were intended to help me achieve a specific workflow when I needed to add a new development server, or move it to a different machine:

  1. Clone the project repo.
  2. Clone the server repo.
  3. Change config.yml:
    • Create/modify one or more vhosts.
    • Create/modify one or more databases.
    • Change VM hostname.
    • Change VM machine name.
    • Change VM IP.
    • Create/modify one or more cron jobs.
  4. Add a database dump (if there is one) to the db_dumps directory.
  5. Run vagrant up.
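As a sketch, the config.yml changes in step 3 might look like the following. The key names follow Drupal VM's default.config.yml; the hostnames, paths, and database names are made up for illustration:

```yaml
vagrant_hostname: client.dev
vagrant_machine_name: client
vagrant_ip: 192.168.99.99

# One vhost per site (multiple sites on one VM).
apache_vhosts:
  - servername: "site-one.dev"
    documentroot: "/var/www/site-one"
  - servername: "site-two.dev"
    documentroot: "/var/www/site-two"

# One database per site.
mysql_databases:
  - name: site_one
    encoding: utf8
    collation: utf8_general_ci
  - name: site_two
    encoding: utf8
    collation: utf8_general_ci
```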
Sharing projects

If I share a development server with a colleague, they have a similar workflow to get it running:

  1. Clone the project repo.
  2. Clone the server repo.
  3. Customize local.config.yml to override my settings:
    • Change VM hostname (in case of VM conflict).
    • Change VM machine name (in case of VM conflict).
    • Change VM IP (in case of VM conflict).
    • Change vagrant synced folders local path (if different from mine).
  4. Add a database dump to the dumps directory.
  5. Run vagrant up.
Winding down projects

When a project wraps up and either has no maintenance phase, or I won't be involved in the ongoing maintenance, I like to remove the actual virtual disk that the VM is based on. This saves ≥10GB of hard drive space (!):

$ vagrant destroy

But since a) every aspect of the server configuration is contained in config.yml and Vagrantfile.local, and b) since we have a way of automatically importing a database dump, resurrecting the development server is as simple as pulling down a new database and re-provisioning the VM:

$ scp remotehost:/path/to/dump.sql.gz /path/to/Server/db_dumps/dump.sql.gz
$ vagrant up

Try it yourself

Since I wanted to reuse this structure for each new VM I need to spin up, I created a git repository containing the code. Download and test it--the README contains detailed setup instructions for getting the environment ready if you don't already use Vagrant.

Categories: Elsewhere

Miloš Bovan: Final code polishing of Mailhandler

Wed, 03/08/2016 - 17:54
Final code polishing of Mailhandler

This blog post summarizes week #11 of the Google Summer of Code 2016 project - Mailhandler. 

Time flies, and it is already the last phase of this year’s Google Summer of Code 2016. The project is not over yet, and I would like to update you on the progress I made last week. In the last blog post, I wrote about the problems I faced in week 10 and how we decided to do code refactoring instead of UI/UX work. The plan for last week was to update Mailhandler with the newly introduced changes in Inmail, as well as to work on new user-interface-related issues. Since this was the last week of issue work before writing the project documentation, I used the time to polish the code as much as possible.

As you may know, Inmail got new features on the default analyzer result. Since this change was suggested by Mailhandler, the idea was to remove the Mailhandler-specific analyzer result and use the default one instead. This allows the core module (and any other Inmail-based module) to use a standardized result across all enabled analyzers. The main benefit is better collaboration between analyzer plugins.
Even though the Mailhandler updates were not expected to take much time, the opposite turned out to be true. Happily, the long patch passed all the tests, and the work landed in the "Use DefaultAnalyzerResult instead of Mailhandler specific one" issue.
It was necessary not only to replace the Mailhandler-specific analyzer result, but also to adopt the user context and the context concept in general. Each of the five Mailhandler analyzers was updated to “share” its result. In addition, the non-standard features of each analyzer are available as contexts. Later, in the handler processing phase, handler plugins can access those contexts and extract the needed information.

The second part of the available time was spent on user interface issues, mostly improving Inmail. Mailhandler as a module is a set of Inmail plugins and configuration files, and in discussion with my mentors, we agreed that improving the user interface of Inmail is actually an improvement to Mailhandler too.
IMAP (Internet Message Access Protocol), a standard message protocol, is supported by Inmail. It is the main Inmail deliverer, and UI/UX improvements were really needed there. In order to use it, valid credentials are required. One common DX pattern is to validate those credentials via a separate “Test connection” button.
 

IMAP test connection button

In previous blog posts, I mentioned the power of the Monitoring module. It provides overall monitoring of a Drupal website via a nice UI. Since it is highly extensible, making Inmail support it would be a nice feature. Among the most important things to monitor is the quota of the IMAP plugin. This allows an administrator to see the "health state" of the plugin and react in time. The relevant issue needs a few corrections, but it is close to being finished too.

Seeing that some of the issues mentioned above are still in the “Needs review” or “Needs work” state, I will spend additional time this week finishing them. The plan for the following week is to finish the remaining issues we started and focus on the module documentation. The documentation work consists of improving the plugin documentation (similar to api.drupal.org), the Drupal.org project page, adding a GitHub README, installation manuals, code comments, demo article updates, and most likely everything else related to describing the features of the module.

 

 

 

Milos Wed, 08/03/2016 - 17:54 Tags Google Summer of Code Drupal Open source Drupal Planet
Categories: Elsewhere

OSTraining: How to Validate Field Submissions in Drupal

Wed, 03/08/2016 - 16:25

An OSTraining member asked us how to validate fields in Drupal 8.

In this particular example, they wanted to make sure that every entry in a text field was unique. 

For this tutorial, you will need to download, install and enable the following modules.

Categories: Elsewhere

Drop Guard: 1..2..3 - continuous security! A business guide for companies & individuals

Wed, 03/08/2016 - 15:00

A lot of Drupal community members who are interested in or already use Drop Guard were waiting for this ultimate guide on continuous security in Drupal. Using Drop Guard in a daily routine has improved update workflows and increased the efficiency of website support for all of our users. But there were still a lot of blind spots and unexplored capabilities, such as using Drop Guard as an "SLA catalyser". So we've put our heads together and figured out how to share this information with you in a professional and condensed way.

Drupal Drupal Planet Drupal Community Security Drupal shops Business
Categories: Elsewhere

GVSO Blog: [GSoC 2016: Social API] Week 10: A Social Post implementer

Wed, 03/08/2016 - 07:42
[GSoC 2016: Social API] Week 10: A Social Post implementer

Week 10 is over, and we are only two weeks away from the Google Summer of Code final evaluation. During these ten weeks, we have been rebuilding the social networking ecosystem in Drupal. To that end, we created the Social API project and divided it into three components: Social Auth, Social Post and Social Widgets.

gvso Wed, 08/03/2016 - 01:42 Tags Drupal Drupal Planet GSoC 2016
Categories: Elsewhere

Galaxy: GSoC’ 16: Port Search Configuration module; coding week #10

Wed, 03/08/2016 - 00:47

Google Summer of Code 2016 is in its final lap. I have been porting the Search Configuration module to Drupal 8 as part of this program, and I am in the last stage: fixing some of the reported issues and improving the port I have been working on for the past two months.
I have also finally set up my Drupal blog. I should have done it much earlier. Quick advice to those interested in creating a blog powered by Drupal and running it online: I used OpenShift to host mine, which lets you run a maximum of three applications for free. Select your favorite Drupal theme and start blogging.
So, now let's come back to my project status. If you would like a glance at my past activities on this port, please refer to these posts.
Last week I mainly concentrated on fixing some of the issues reported in the module port. It was a wonderful learning experience: creating new issues, getting the patches reviewed, and updating the patches when required. The happiness of finally getting a patch committed is a different feeling. Moreover, I also got suggestions from developers who are not directly part of my project, which I find to be the real blessing of being part of this wonderful community.

The module is now shaping up well and moving ahead at the right pace. Last week, I faced some issues with Twig and resolved them; the module is currently available for testing. I also worked on some key aspects of the module, in particular namespace issues. Some functions were not working as intended due to wrong usage of PSR namespaces, and I fixed several of these problems. Basically, PSR namespaces help reuse standard classes and functions of the Drupal API framework. They are imported with the 'use' keyword, which maps a short class name to its full location.
For instance, if I want to use the Html::escape() function to convert special characters to HTML entities:
use Drupal\Component\Utility\Html;
Now, $role_options = array_map('Html::escape', user_role_names());
Hope you got the idea. Here I could have written the fully qualified path of the escape function every time. But by declaring the namespace once at the top of the file, the short class name can be reused any number of times.
The user_role_names() call retrieves the names of the available roles. This is an illustration of the usage of namespaces, and it is really an area to explore more. Please do read more on this for better implementation of the Drupal concepts.

In the coming days, I would like to test the various units of the ported module, fix any remaining issues, and bring up a secure, user-friendly Search Configuration module for Drupal.
Hope all the students are enjoying the process and exploring Drupal concepts. Stay tuned for future updates on this porting process.

Tags: drupal-planet
Categories: Elsewhere

Cocomore: „Memories“ and more: These new features make Snapchat even more attractive for businesses

Wed, 03/08/2016 - 00:00

Until recently, one of the biggest contradictions in social media was Snapchat and permanence. In early July, Snapchat put an end to this: the new "Memories" feature now allows users to save images. Alongside "Memories", Snapchat has also developed the platform in other areas. We show what opportunities the new changes offer for businesses.

Categories: Elsewhere

Janez Urevc: Release of various Drupal 8 media modules

Tue, 02/08/2016 - 22:36
Release of various Drupal 8 media modules

Today we released new versions of many Drupal 8 media modules. This release is especially important for the Entity browser and Entity embed modules, since we released the last planned alpha versions of those modules. If no critical bugs are reported in the next two weeks, we'll release the first beta versions.

List of all released modules:

slashrsm Tue, 02.08.2016 - 22:36 Tags Drupal Media


Categories: Elsewhere

Phponwebsites: Multiple URL alias for a node in pathauto - drupal 7

Tue, 02/08/2016 - 16:38
As we discussed in my previous post, clean URLs are one way to improve SEO. In Drupal 7, we have the Pathauto module to clean up URLs. It allows us to set aliases for content types, files, users & taxonomies. But we can set only one URL alias pattern per content type in Drupal 7, at admin/config/search/path/patterns. It looks like the image below:




Suppose you need two paths for a piece of content. For instance, the URL alias for an article needs to be both the node title and article/node-title. Is it possible to set multiple path aliases for a content type in Drupal 7? Yes, it is. We can set multiple URL aliases for a content type programmatically using the Pathauto module. We need to insert our path alias into the "url_alias" table when inserting & updating a node, and remove the alias when deleting a node.

Add a URL alias programmatically when inserting and updating a node using the Pathauto module in Drupal 7:
    For instance, I've chosen the article content type. We insert & update the URL alias in the "url_alias" table using hook_node_insert() & hook_node_update() in Drupal 7.


/**
 * Implements hook_node_insert().
 */
function phponwebsites_node_insert($node) {
  if ($node->type == 'article') {
    // Save the additional node alias.
    _phponwebsites_insert_update_alias($node);
  }
}

/**
 * Implements hook_node_update().
 */
function phponwebsites_node_update($node) {
  if ($node->type == 'article') {
    // Update the additional node alias.
    _phponwebsites_insert_update_alias($node);
  }
}

/**
 * Inserts or updates the extra alias for an article node.
 */
function _phponwebsites_insert_update_alias($node) {
  module_load_include('inc', 'pathauto');
  $title = pathauto_cleanstring($node->title);

  $values['source'] = 'node/' . $node->nid . '/article';
  $values['alias'] = 'article/' . $title;

  $all_values = array($values);

  foreach ($all_values as $all) {
    db_merge('url_alias')
      ->fields(array('source' => $all['source'], 'alias' => $all['alias'], 'language' => LANGUAGE_NONE))
      ->key(array('source' => $all['source']))
      ->execute();
  }
}



Where,
 pathauto_cleanstring() applies the Pathauto module's rules configured at admin/config/search/path/settings. To learn more about pathauto_cleanstring(), please visit http://www.drupalcontrib.org/api/drupal/contributions!pathauto!pathauto.inc/function/pathauto_cleanstring/7

After adding the above code to your custom module (and clearing caches), create an article. You can then check your URL in Pathauto's list at admin/config/search/path. It looks like the image below:




Now you can access the article via both node-title and article/node-title.





Delete URL aliases programmatically when deleting a node using the Pathauto module in Drupal 7:
     We've inserted two URL aliases for the node, so we need to delete them from the "url_alias" table when the node is deleted. We can do this using hook_node_delete() in Drupal 7. Consider the code below:



/**
 * Implements hook_node_delete().
 */
function phponwebsites_node_delete($node) {
  if ($node->type == 'article') {
    // Delete the additional node alias.
    module_load_include('inc', 'pathauto');
    $source[0] = 'node/' . $node->nid . '/article';

    foreach ($source as $s) {
      $path = path_load(
        array('source' => $s)
      );
      path_delete($path['pid']);
    }

  }
}

Where,
  path_load() returns the details of a URL alias: source, alias, path ID & language. To learn more about path_load(), please visit https://api.drupal.org/api/drupal/includes!path.inc/function/path_load/7.x.

After adding the above code to your custom module (and clearing caches), delete a node and check the URL aliases at admin/config/search/path. They should no longer be listed.

I hope you now know how to set multiple URL aliases for a content type.

Related articles:
Remove speical characters from URL alias using pathauto module in Drupal 7
Add new menu item into already created menu in Drupal 7
Add class into menu item in Drupal 7
Create menu tab programmatically in Drupal 7
Add custom fields to search api index in Drupal 7
Clear views cache when insert, update and delete a node in Drupal 7
Create a page without header and footer in Drupal 7
Login using both email and username in Drupal 7
Categories: Elsewhere

Zivtech: You Don't Know Git!

Tue, 02/08/2016 - 15:57
I’m going to put it out there. This blog is not for senior developers or git gurus; I know you know git. This post is for the noobs, the career-changers like me. If I could go back in time, after I had graduated from my three month web development bootcamp, I would tell myself, “You don’t know git!”

I can hear myself saying, “But I know the workflow. I know to pull from master before starting a new branch. I know to avoid merge conflicts. I git add -A. I know what’s up.”

No. No. No.

Fix Your Workflow

If there's one command you want to know, it's this: <code>git add -p</code>

This command changed my entire workflow and was tremendously helpful. In bootcamp, you learn the basics of git and move on. You generally learn <code>git add -A</code> or <code>git add .</code>, which stages all the changes you've made to the repository, or all the changes you've made from your current directory. This worked during bootcamp because the changes were small, and I was often just committing to my own repository. Once I switched over to Drupal and started working with Features, I realized that after I made updates, not all of the files showing up in the diff were things I had changed. How could that be?!

Work with Your Team

I was working on a project with other developers who were also working on the same feature. I had to learn -p so that I could be a responsible member of the team and only commit what I had changed. That's why it's so important to use this command: <code>git add -p</code>

If you're ever unsure about a command in git, just type: <code>git add --help</code>

The git manual will then show you all the options you can use, like this:

-p, --patch
    Interactively choose hunks of patch between the index and the work tree and add them to the index. This gives the user a chance to review the difference before adding modified contents to the index.
Essentially, it allows you to review each file to determine what changed, and if you want to stage it or not.
In the example above, I made changes to my .gitignore file. I deleted the line in red, and added the line in green. Then it asks you what you want to do with those changes. If you type in ‘?’ and push enter, it will explain what your options are.

Not only does it help by preventing you from staging code that isn’t yours, it’s also helpful as a new developer to see what changed. In Drupal, you can think that you’re making a small change in the UI, but then see a ton of altered files. Using -p has helped me figure out how Drupal works, and I’m a lot more confident about what I’m staging now.

Now go out there and <code>git add -p</code> all of your changes and be the granular committer I know you can be!
Categories: Elsewhere

OSTraining: How to Use Entity Print in Drupal 8

Tue, 02/08/2016 - 14:04

An OSTraining member asked us how to configure the Entity Print module with Drupal 8.

This module allows you to make a PDF version of your nodes.

I would recommend that you install Entity Print using Drush, because you will also need to install a Composer package. This package contains the library used to create PDFs.

Categories: Elsewhere

ComputerMinds.co.uk: Drupal 8 Namespaces - class aliasing

Tue, 02/08/2016 - 14:00

Class Aliasing is the simple, but very useful solution to the problem of needing to use two classes (from different namespaces) with the same name.

Categories: Elsewhere

Talha Paracha: GSoC'16 – Pubkey Encrypt – Week 10 Report

Tue, 02/08/2016 - 02:00

I started this week’s work by finishing the integration of cookies into my module. To give you some context, Pubkey Encrypt now uses cookies to temporarily store the private key for any user upon login. Previously, we used sessions for this purpose, but we have shifted to this new approach because Pubkey Encrypt aims to protect a website’s data-at-rest on compromised servers. Since sessions get stored on the servers too, the module cannot rely on them to keep any secret information.

Categories: Elsewhere

Jeff Geerling's Blog: Hide the page title depending on a checkbox field in a particular content type

Tue, 02/08/2016 - 00:55

In Drupal 8, many small things have changed, but my willingness to quickly hack something out in a few lines of code/config instead of installing a relatively large module to do the same thing hasn't :-)

I needed to add a checkbox to control whether the page title should be visible in the rendered page for a certain content type on a Drupal 8 site, and there are a few different ways you can do this (please suggest alternatives—especially if they're more elegant!), but I chose to do the following:

  1. Add a 'Display Title' boolean field (checkbox, using the field label as the title, and setting off to 0 and on to 1 in the field settings) to the content type (page in this example).

Categories: Elsewhere

myDropWizard.com: A Survey! Is Drupal Hard?

Tue, 02/08/2016 - 00:04

I attended Drupal Camp WI at the University of Wisconsin-Madison this weekend.

There was a fantastic presentation called "Why Is Drupal So Hard?" by Joe Shindelar of Drupalize.Me.

It got me thinking about myDropWizard, our clients, and what path people are taking at this crossroads of Drupal 6 -> Drupal 7 -> Drupal 8, versus going a different way entirely. Sometimes when you are too close to a question, you shouldn't be the one answering it. So, I'd like to ask "you," the world!

I'll share the results in a future blog post, and I'll share my thinking about what the results mean.

If you have any criticisms of the survey, please share those with me! I think this is just the first of a few surveys that we can do.

I hope this is fun, and I know it's not "scientific". However, I am hoping that it continues the discussion within the Drupal community about what we should do.

Without further ado: Here is the survey!

Categories: Elsewhere

Pages