Planet Drupal

Drupal.org - aggregated feeds in category Planet Drupal

DrupalEasy: Relaunching DrupalEasy.com on Drupal 8

Thu, 18/02/2016 - 21:16

It's been a long time coming, so we are really happy to announce that DrupalEasy.com recently relaunched on Drupal 8 (8.0.3, to be precise)! Our previous site was also our original - built on Drupal 6 with more than its fair share of cruft. As we thought about what our new site should be, we made the easy decision that it should be focused on two things: our Drupal Career Online 12-week training program and an expanded focus on tutorials, videos, and quicktips for Drupal developers looking to up their game. It also provides what we hope is easy access to our other training and consulting services.

Data migration

We’ve got some solid experience with data migration - we've worked on numerous migration projects for clients, we offer a (Drupal 7, for now) Getting Your Stuff into Drupal workshop, and we have some experience working on Drupal 8's core migration system. For the migration of DrupalEasy.com, we decided to use the core migration path combined with the Migrate Upgrade module to migrate everything possible. Our data model was rather simple and didn't require any major changes, so we felt this was the best and most efficient option. It worked like a charm - other than changes we wanted to make, all of our content and most of our configuration (which, granted, we modified after the migration) came through just fine.

At this point in time, continuous migration isn't readily possible with the core migration path. To overcome this, we decided to re-run the migration, in-full, every two weeks. This required that we basically start with a fresh Drupal 8 database each time, so after each migration we had to re-enable a number of modules and our new custom theme, redo a bunch of configuration settings, and tweak some content. While we used configuration management as much as possible to alleviate the repetitiveness, we still ended up with a multi-page manual checklist that needed to be performed with each migration.

Text formats

One of the main issues we ran into after migrating content was that on our old site, we had way too many text formats, including the dreaded "PHP code" format. We knew we wanted to reduce the number of available text formats in the new site - this meant reassigning text formats for existing content. As an exercise in module development, we wrote a custom action plugin that we could use on the main admin/content page to select which node bodies we wanted to assign a new text format to. We ended up with more text formats than we originally planned, mainly because we found that we really like the default "Basic HTML" format and wanted to use it as much as possible. Rather than updating old content to use this new format, we simply re-labeled two previously migrated text formats as "legacy". (Yes, we did remove the "PHP code" text format.)

Blocks

One of the major sources of work for the new site was with Blocks. While the core migration path migrated all of our blocks to the new site, our design called for a whole new set of blocks, configurations, and placements. So, post-migration, we removed all migrated blocks from display, and created new blocks for the site. You can see many of these new blocks on the home page as well as several of the landing pages on the site. As of this moment, all of the block placement is done with the tools provided by Drupal core (even though we were very tempted to use Block Visibility Groups).

Forms

We have forms. Contact forms, Drupal Career Online application forms, post-training survey forms. On the old site, we used Webform, but Webform isn't available yet for Drupal 8 - and (sadly) we may not miss it. The core Contact forms combined with the Contact storage module are an adequate replacement. So much so, in fact, that we wrote a quicktip about it.

Contributed modules

Perhaps the biggest win on the new site is how few contributed modules are (currently) required. As of this moment, we are using just 11 contributed modules:

  • Google Analytics
  • Honeypot
  • BigPipe (soon to be in core)
  • Redirect
  • Libraries
  • Contact storage
  • Configuration Update Manager
  • Features
  • Pathologic
  • Markdown filter
  • Field group

This compares favorably to over 50(!) contributed modules enabled on the Drupal 6 version of the site. Granted, more than a few of those 50 modules are now part of Drupal 8 core, but it nonetheless makes the new site much easier to maintain.

There are still a handful of modules that we'll be adding once we're comfortable with their release status, including Pathauto, EU Cookie Compliance, Metatag, and XML Sitemap.

Custom Theme

One of the most important changes to the site is that we're finally responsive. Quite frankly, it was a bit embarrassing that our old site wasn't responsive. After researching available base themes, as well as considering building a custom theme with core's "Classy" theme as the base, we selected the Neato base theme. Neato includes all the Sassy goodness that we were looking for, and uses the Bourbon library and Bourbon Neat grid framework. While our only complaint about Neato is that it isn't based on Classy, we managed to overcome that limitation and build a theme that we're proud of.

Custom module

In addition to the aforementioned custom action plugin, we also wrote a small custom module for the site that provides a custom field formatter for link fields that outputs them as an HTML5 audio tag. We'll be doing a separate blog post on this module in the future. In the meantime, you can check out the code.

Feedback

So, what do you think? We know there are still a few rough edges (the HTML5 audio tag breaks out of its block in some responsive modes), but we also know it's about 1000 times better than the old site. We're looking forward to using the new site to provide more and better resources back to the community, as well as to help spread the word about our world-class training and consulting services!

Categories: Elsewhere

Commercial Progression: 3 Helpful Hints for using Drupal 8 REST APIs

Thu, 18/02/2016 - 19:48

A recent internal Commercial Progression project allowed us the opportunity to set up a Drupal 8 site and get REST working on it. In the process, and because the stack is new and the documentation is still catching up, I learned a few tricks that might be useful for enterprising developers out there.

1. Entities are POST-ed to a different path than the REST UI says

If you install the REST UI module and enable an endpoint, you'll see a screen like:

From this display, you might be convinced that the URL you use to HTTP POST a new entity (if the bundle were named "update", for instance) would be

http://yourname.tld/admin/structure/eck/entity/component/update

But you would be wrong. The actual post path is:

http://yourname.tld/entity/component

...where "component" is the entity type. This seems to be the case for pretty much any entity. Don't forget the

?_format=hal_json

on all your stuff, for good measure.
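To make the path distinction concrete, here is a minimal Python sketch (the host and entity type names are the hypothetical ones from the examples above) that builds the actual POST URL from an entity type:

```python
# Build the REST POST URL for an entity type. Note that the bundle
# ("update") does not appear in the path -- only the entity type does.
# The host is a placeholder matching the examples above.
BASE = "http://yourname.tld"

def entity_post_url(entity_type, fmt="hal_json"):
    """Return the URL for POST-ing a new entity of the given type."""
    return "{}/entity/{}?_format={}".format(BASE, entity_type, fmt)

# The tempting-but-wrong admin path vs. the actual POST path:
wrong = BASE + "/admin/structure/eck/entity/component/update"
right = entity_post_url("component")
print(right)  # http://yourname.tld/entity/component?_format=hal_json
```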

2. There's a thing called a CSRF token, and you can get one with cURL

The documentation lets you know that risky operations require a CSRF token and how to get one. Yuen Ying Kit even has the necessary Guzzle-based PHP code (posted in 2013!) for grabbing a CSRF token on the fly.

Let's say you wanted to write it for a cURL request in PHP without any external dependencies. That would look something like:

// Log in.
$fields = array(
  'name' => 'username',
  'pass' => 'this_is_where_the_password_goes',
  'form_id' => 'user_login_form',
);
// Cheesy way to urlencode a POST string.
$fields_string = '';
foreach ($fields as $key => $value) {
  $fields_string .= $key . '=' . urlencode($value) . '&';
}
$fields_string = rtrim($fields_string, '&');

$process = curl_init('http://yourname.tld/user');
curl_setopt($process, CURLOPT_TIMEOUT, 30);
curl_setopt($process, CURLOPT_RETURNTRANSFER, TRUE);
curl_setopt($process, CURLOPT_POST, count($fields));
curl_setopt($process, CURLOPT_POSTFIELDS, $fields_string);
$login = curl_exec($process);

// Fetch a new CSRF token.
curl_setopt($process, CURLOPT_URL, 'http://yourname.tld/rest/session/token');
$csrf_token = curl_exec($process);
curl_close($process);

The $csrf_token variable now contains a working token for POST-ing and doing other risky operations.

For a cURL request to do such an operation, your HTTP header should include the following:

curl_setopt($process, CURLOPT_HTTPHEADER, array(
  'Content-Type: application/hal+json',
  'Accept: application/json',
  'X-CSRF-Token: ' . $csrf_token,
));
curl_setopt($process, CURLOPT_USERPWD, $fields['name'] . ':' . $fields['pass']);

As long as you have an acceptable payload in your CURLOPT_POSTFIELDS, you should be off to the races!
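The same request assembly can be sketched outside of PHP; here is a hypothetical Python equivalent (no request is actually sent, and the host, token and payload are placeholders) showing which headers an authenticated POST needs:

```python
import base64
import json
from urllib import request

# Placeholders mirroring the PHP example above.
csrf_token = "TOKEN_FROM_/rest/session/token"
name, password = "username", "this_is_where_the_password_goes"

# A minimal HAL payload; real payloads carry the entity's fields too.
payload = json.dumps({
    "_links": {"type": {"href": "http://yourname.tld/rest/type/component/update"}},
})
credentials = base64.b64encode("{}:{}".format(name, password).encode()).decode()

req = request.Request(
    "http://yourname.tld/entity/component?_format=hal_json",
    data=payload.encode(),
    headers={
        "Content-Type": "application/hal+json",
        "Accept": "application/json",
        "X-CSRF-Token": csrf_token,
        # The equivalent of CURLOPT_USERPWD (HTTP Basic auth).
        "Authorization": "Basic " + credentials,
    },
    method="POST",
)
# request.urlopen(req) would actually send it; omitted here.
```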

3. Entity reference fields work how you would expect, except not

If you GET an entity with an entityreference (Reference? Entity reference?) field as HAL JSON, the result will include a number of interesting tidbits that indicate the reference:

{
  "_links": {
    "http://yourname.tld/rest/relation/component/component_update/field_update_site": [
      {
        "href": "http://yourname.tld/node/3?_format=hal_json"
      }
    ]
  },
  "_embedded": {
    "http://yourname.tld/rest/relation/component/component_update/field_update_site": [
      {
        "_links": {
          "self": {
            "href": "http://yourname.tld/node/3?_format=hal_json"
          },
          "type": {
            "href": "http://yourname.tld/rest/type/node/site"
          }
        },
        "uuid": [
          {
            "value": "0a2cbd4a-1bc2-40f0-a502-2e3bb0917b2b"
          }
        ]
      }
    ]
  }
}

These are all useful pieces of the payload for interacting with the referenced entities when ingesting, but...

If you format a HAL JSON entity for POST with this structure, the reference field will not populate (at least it didn't for me). The trick is to also include a direct value for the desired field, like:

// This is the addition to the JSON that actually populates the entity.
$entity['field_update_site'] = [[
  'target_id' => $nid,
]];

This would add JSON that looks like:

{
  "field_update_site": [
    {
      "target_id": "3"
    }
  ]
}

...which the backend will process into the reference. I keep the other HAL stuff along for good measure.
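As a sketch of the workaround (the field name, node ID and relation URL are the hypothetical ones from above), merging the direct target_id value into the HAL payload looks like:

```python
import json

nid = "3"  # hypothetical node ID of the referenced entity

# Start from the HAL structure like the one returned by the GET above.
entity = {
    "_links": {
        "type": {"href": "http://yourname.tld/rest/type/component/component_update"}
    }
}

# The addition that actually populates the reference on POST.
entity["field_update_site"] = [{"target_id": nid}]

payload = json.dumps(entity)
print(payload)
```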

In closing

So that's the stuff I found in my first go at RESTful work in Drupal 8. There's bound to be way more where that came from, and we'll be sure to keep you POSTed with all the gotchas and fun tweaks we find along the way.

Looking to make a Drupal 8 site with lots of web services? Get in touch with the Drupal experts!

Categories: Elsewhere

Mediacurrent: Supplying Thumbnails to your Headless Drupal Front End

Thu, 18/02/2016 - 19:15

The Drupal 8 dev team decided to include the REST module in core because of the growing movement towards decoupled architecture, which we often refer to as “Headless Drupal.” Along these lines, entities have built in REST support, and furthermore it is extremely simple to expose custom lists of entities as REST resources by using the Views module REST display type. Drupal 8 is thoroughly headless-ready.

Categories: Elsewhere

Modules Unraveled: Drush Not Working With MAMP? Here's How To Fix It!

Thu, 18/02/2016 - 19:00
Here's the issue

If you're using MAMP, you might have experienced an issue where a website loads just fine in the browser, but when you try to use a Drush command, you get an error like the following:

exception 'PDOException' with message 'SQLSTATE[HY000] [2002] No such file or directory' in core/lib/Drupal/Core/Database/Driver/mysql/Connection.php:146

To be completely honest, I'm not sure what causes this issue. I think it has to do with the way Drush accesses MySQL. As far as I can tell, Drush is trying to access the system MySQL, instead of the one that comes with MAMP.

Here's the fix!

Luckily, this can be easily fixed by changing:

'host' => 'localhost',

to

'host' => '127.0.0.1',

and/or by adding the following line to the database credentials in your settings.php (or settings.local.php) file.

'unix_socket' => '/Applications/MAMP/tmp/mysql/mysql.sock',

Example

So, for example, if your database credentials look like this in settings.php:

$databases['default']['default'] = array (
  'database' => 'drupal',
  'username' => 'root',
  'password' => 'root',
  'prefix' => '',
  'host' => 'localhost',
  'port' => '3306',
  'namespace' => 'Drupal\\Core\\Database\\Driver\\mysql',
  'driver' => 'mysql',
);

Change it to this (differences highlighted for clarity):

$databases['default']['default'] = array (
  'database' => 'drupal',
  'username' => 'root',
  'password' => 'root',
  'prefix' => '',
  'host' => '127.0.0.1',
  'port' => '3306',
  'namespace' => 'Drupal\\Core\\Database\\Driver\\mysql',
  'driver' => 'mysql',
  'unix_socket' => '/Applications/MAMP/tmp/mysql/mysql.sock',
);

With those updates in the database settings, Drush should work as expected!

Hope that helps!

Tags: Drush, MAMP, planet-drupal
Categories: Elsewhere

SitePoint PHP Drupal: Quick Tip: Set up Drupal 8 with Composer!

Thu, 18/02/2016 - 18:00

The recommended approach to getting started with Drupal 8 is now via Composer. An official project template has been created for this. We will create our project directly using the template, which is also available on Packagist.

To create a new project based on this template we can run the following Composer command:

composer create-project drupal-composer/drupal-project:8.x-dev my_project --stability dev --no-interaction

Continue reading Quick Tip: Set up Drupal 8 with Composer!

Categories: Elsewhere

Acquia Developer Center Blog: Creating a Multimedia Installation Using Drupal 8

Thu, 18/02/2016 - 14:58
DC Denison

It started when Genuine was selected by IPG, our parent company and one of the "big four" global advertising companies, to creatively support their move into new offices.

Tags: acquia drupal planet
Categories: Elsewhere

PreviousNext: Decorated services in Drupal 8

Thu, 18/02/2016 - 10:12

One of the aspects of the new object oriented architecture in Drupal 8 is the introduction of services. Services can be provided by any module and provide a mechanism for exposing reusable functionality by way of interfaces and classes. All services are instantiated via the container which is responsible for injecting a service’s dependencies.

Since services implement interfaces and are always instantiated via the container, we have the opportunity to alter what the container returns, ultimately allowing us to swap any existing service with a new one.
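The post's full example isn't included in this excerpt, but the underlying mechanism can be sketched in a services file. The module, class and service names below are hypothetical; in Drupal 8, a module's module_name.services.yml can register a decorator for an existing service:

```yaml
# Hypothetical mymodule.services.yml: decorating core's breadcrumb service.
services:
  mymodule.breadcrumb:
    class: Drupal\mymodule\MyBreadcrumbBuilder
    decorates: system.breadcrumb
    # The original service remains available under the ".inner" suffix,
    # so the decorator can delegate to it.
    arguments: ['@mymodule.breadcrumb.inner']
```

The container then returns the decorator anywhere the original service was requested, while the decorator itself receives the original implementation as a constructor argument.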

Categories: Elsewhere

Chromatic: Backup Your Drupal 8 Database to S3 with Drush & Jenkins

Wed, 17/02/2016 - 23:36

There are many different ways to handle offsite database backups for your Drupal sites, from host provider automations to contrib modules like Backup and Migrate and everywhere in between. This week, I was looking to automate this process on a Drupal 8 site. Since Backup and Migrate is being rewritten from the ground up for Drupal 8, I decided to whip up a custom shell script using Drush.

I knew I wanted my backups to not only be automated, but to be uploaded somewhere offsite. Since we already had access to an S3 account, I decided to use that as my offsite location. After doing a bit of Googling, I discovered s3cmd, a rather nifty command line tool for interacting with Amazon S3. From their README.md:

S3cmd (s3cmd) is a free command line tool and client for uploading, retrieving and managing data in Amazon S3 and other cloud storage service providers that use the S3 protocol, such as Google Cloud Storage or DreamHost DreamObjects. It is best suited for power users who are familiar with command line programs. It is also ideal for batch scripts and automated backup to S3, triggered from cron, etc.

It works like a charm and basically does all of the heavy lifting needed to interact with S3 files. After installing and setting it up on my Drupal 8 project's server, I was able to easily upload a file like so: s3cmd put someDatabase.sql.gz s3://myBucket/someDatabase.sql.gz.

With that bit sorted, it was really just a matter of tying it together with Drush's sql-dump command. Here's the script I ended up with:

# Switch to the docroot.
cd /var/www/yourProject/docroot/
# Backup the database.
drush sql-dump --gzip --result-file=/home/yourJenkinsUser/db-backups/yourProject-`date +%F-%T`.sql.gz
# Switch to the backups directory.
cd /home/yourJenkinsUser/db-backups/
# Store the recently created db's filename as a variable.
database=$(ls -t | head -n1)
# Upload to Amazon S3, using s3cmd (https://github.com/s3tools/s3cmd).
s3cmd put $database s3://yourBucketName/$database
# Delete databases older than 10 days.
find /home/yourJenkinsUser/db-backups/ -mtime +10 -type f -delete

With the script working, I created a simple Jenkins job to run it nightly, (with Slack notifications of course) and voilà: automated offsite database backups with Jenkins and Drush!

Categories: Elsewhere

Chromatic: Drupal 8 Deployments with Jenkins, GitHub & Slack

Wed, 17/02/2016 - 23:36

We recently launched our first Drupal 8 site--actually it’s this very site that you’re reading! While this wasn’t our first time using or developing for Drupal 8, it was our first full site build and launch on the new platform. As such, it was the first time we needed to handle Drupal 8 code deployments. While I’ve previously covered the benefits of using Jenkins, this post will take you through the steps to create a proper Drupal 8 deployment and how to integrate GitHub and Slack along the way. In other words, you’ll see our current recipe for deploying code automatically, consistently and transparently.

First Things First: Some Assumptions

This post assumes you already have a Jenkins server up and running with the following plugins installed:

  • GitHub plugin
  • Slack Notification plugin

If you don’t yet have these things ready to go, getting a Jenkins server set up is well documented here, as is how to install Jenkins plugins. For us, we typically use a Linode instance running Ubuntu LTS for our Jenkins servers. This post also assumes that the external environment you’re trying to deploy to already has Drush for Drupal 8 installed.

Example Deployment Script with Drush

Before we dive into setting up the Jenkins job to facilitate code deployments, I’d like to take a look at what exactly we’re trying to automate or delegate to our good friend Jenkins. At the heart of virtually any of our Drupal deployments (be they Drupal 7, 8 or otherwise) is a simple bash script that executes Drush commands in succession. At a macro level, this typically means doing the following, regardless of version:

  1. SSH to the server
  2. Change directory to repository docroot
  3. Pull down latest code on the master branch
  4. Clear Drush cache
  5. Run database updates
  6. Update production configuration
  7. Clear Drupal caches

In Drupal 7, where we relied heavily on Features to deploy configuration, we would typically do something like this:

echo ""
echo "Switching to project docroot."
cd /var/www/drupal-7-project/docroot
echo ""
echo "Pulling down latest code."
git pull origin master
echo ""
echo "Clearing drush cache"
drush cc drush
echo ""
echo "Run database updates."
drush updb -y
echo ""
echo "Reverting features modules."
drush fra -y
echo ""
echo "Clearing caches."
echo ""
drush cc all
echo ""
echo "Deployment complete."

In Drupal 8, we have the magical unicorn that is the Configuration Management System, so our deployments scripts now look something like this:

If you’re familiar with creating Jenkins jobs already and are just looking for a Drupal 8 deploy script, these next lines are for you.

echo ""
echo "Switching to project docroot."
cd /var/www/chromatichq.com/docroot
echo ""
echo "Pulling down the latest code."
git pull origin master
echo ""
echo "Clearing drush caches."
drush cache-clear drush
echo ""
echo "Running database updates."
drush updb -y
echo ""
echo "Importing configuration."
drush config-import -y
echo ""
echo "Clearing caches."
drush cr
echo ""
echo "Deployment complete."

Seriously, configuration management in Drupal 8 is amazing. Hat tip to all of those who worked on it. Bravo.

Another notable difference is that with Drupal 8, clearing caches uses the cache-rebuild Drush command or drush cr for short. drush cc all has been deprecated. R.I.P. little buddy. ⚰

If you have a site that needs to be put into "Maintenance mode" during deployments, you can handle that in Drupal 8 with drush sset system.maintenance_mode 1 to enable and drush sset system.maintenance_mode 0 to disable.

Creating our Jenkins Slave & Job

Now that we’ve covered what it is we want Jenkins to handle automatically for us, let’s quickly run down the punch list of things we want to accomplish with our deployment before we dive into the actual how-to:

  1. Automatically kickoff our deployment script when merges to the master branch occur in GitHub
  2. Run our deployment script from above (deploys latest code, imports config, clears caches, etc.)
  3. Report deployment results back to Slack (success, failure, etc.)
Create Your Jenkins Slave

For Jenkins to orchestrate anything on a remote box, it first needs to know about said box. In Jenkins parlance, this is known as a "node". In our case, since we’re connecting to a remote machine, we’ll use a “Dumb Slave”. Navigate to Manage Jenkins > Manage Nodes > New Node

At Chromatic our naming convention matches whatever we’ve named the machine in Ansible. For the purposes of this article, you can just name this something that makes sense to you. Example: **Drupal-8-Prod-01**

As part of the creation of this node, you’ll need to specify the Host and the Credentials Jenkins should use to access the box remotely. If you don’t yet have credentials added to Jenkins, you can do so at Jenkins > Credentials > Global credentials (unrestricted). From there things are pretty self-explanatory.

Setup the Basics for our Jenkins Job

Now that we have a way for Jenkins to target a specific server (our slave node) we can start building our deployment job from scratch. Start by navigating to: Jenkins > New Item > Freestyle Project.

From there press "OK" and move on to setting up some basic information about your job, including Project Name, Description and the URL to your GitHub repository. Pro tip: take the time to add as much detail as you can here, especially in the Description field. You’ll thank yourself later when you have loads of jobs.

Configure Slack Notification Settings (optional)

Assuming you’re interested in tying your deployment status messages to Slack and you’ve installed the Slack Notification Plugin, the next step is to tell Jenkins how/where to report to Slack. You do this under the Slack Notifications options area. As far as notifications go, we prefer to use only the "Notify Failure", “Notify Success” and “Notify Back To Normal” options. This is the right mix of useful information without becoming noisy. To allow Jenkins to connect to your Slack channels, you’ll need to follow these steps for adding a Jenkins integration. Then just fill in your Slack domain, the integration token from Slack and the channel you’d like to post to. These settings are hidden under “Advanced…”.

Configure Where this Job Can Run

This is where we instruct Jenkins on which nodes the job is allowed to run. In this case, we’ll limit our job to the slave we created in step one: Drupal-8-Prod-01. This ensures that the job can’t run, even accidentally, on any other nodes that Jenkins knows about. Jenkins allows this to be one node, multiple nodes, or a group.

Configure GitHub Repository Integration

Under "Source Code Management" we’ll specify our version control system, where are repository lives, the credentials used to access the repo and the branches to “listen” to. In our example, the settings look like this:

Here we’re using a jenkins system user on our servers that has read access on our GitHub repositories. You’ll want to configure credentials that make sense for your architecture. Our "Branch Specifier" (*/master) tells Jenkins to look for changes on the master branch of any remote, using the wildcard “*” to match any remote name.

Configure Your Build Triggers

This is where the rubber meets the road in terms of automation. At Chromatic, we typically opt for smaller, more frequent deployments instead of larger releases where there is a higher probability of regressions. Since we rely heavily on the GitHub pull request model, we often have many merges to master on any given day of development for an active project. So we configure our deployments to coincide with these merges. The following setup (provided via the GitHub Jenkins Plugin) allows us to automate this by selecting "Build when a change is pushed to GitHub".

Setup Your Deployment Script

Here’s where we’ll implement the example deployment script I wrote about earlier in the post. This is the meat and potatoes of our job, or simply put, this is what Jenkins is going to do now that it finally knows how/when to do it.

Under "Build" choose “Add build step” and select “Execute shell”. Depending on your installed plugins, your list of options might vary.

Then add your deployment script to the textarea that Jenkins exposed to you. If you want somewhere to start, here is a gist of my job from above. When you’ve added your script it should look something like this:

Last Step! Enable Slack Notifications

Although earlier in the job we configured our Slack integration, we still need to tell Jenkins to send any/all notifications back to Slack when a build is complete. You do this sort of thing under the "Add post-build action" menu. Select “Slack Notifications” and you’re good to go.

Our deployment job for Drupal 8 is now complete! Click "Save" and you should be able to start testing your deployments. To test the job itself, you can simply press “Build Now” on the following screen OR you can test your GitHub integration by making any change on the master branch (or whichever branch you configured). With the setup I’ve covered here, Jenkins will respond automatically to merges and hot-fix style commits to master. That is to say, when a PR is merged or when someone commits directly to master. Of course no one on your team would ever commit directly to master, would they?!

Wrapping Up

Assuming everything is setup properly, you should now have a robust automatic deployment system for your Drupal 8 project! Having your deployments automated in this way keeps them consistent and adds transparency to your entire team.

Categories: Elsewhere

Chromatic: Creating Links Within Twig Templates Using path() and url()

Wed, 17/02/2016 - 23:36

Drupal 8 comes packed with loads of great new features, APIs and developer tools. There are sweeping changes aplenty. Not the least of which is a brand new templating system called Twig. Twig is the bee's knees and a welcome improvement over the much maligned PHPTemplate. However, many front-end developers who've grown accustomed to the templating ways of old might feel a bit lost as they enter the strange, new world of Drupal 8: Twig and render arrays. Furthermore, the Drupal community is still learning, documentation is still being written, etc.

As we approach the completion of our first official Drupal 8 project here at Chromatic, I thought it would be helpful to start sharing a bit of what I've learned along the way, starting with some examples on linking from within Twig templates.

If you want to just see how the hell you link to things from within Twig templates, skip these next bits on context.

Some Context

In Drupal 7 and versions prior, we used paths to define destinations for our content, APIs, etc. In Drupal 8, these are abstracted into routes. Where in Drupal 7 you would define a custom page via hook_menu(), like so:

<?php
function chromatic_contact_menu() {
  $items['contact'] = array(
    'title' => 'Chromatic Contact',
    'page callback' => 'contact_page',
    'access arguments' => array('access content'),
    'type' => MENU_SUGGESTED_ITEM,
  );
  return $items;
}
?>

In Drupal 8, you define this in a special routing file that follows this naming convention: module_name.routing.yml. Here's a similar example, where chromatic_contact_contact is our route name and contact is the internal path where it can be accessed.

chromatic_contact_contact:
  path: 'contact'
  defaults:
    _form: '\Drupal\chromatic_contact\Form\ChromaticContactForm'
    _title: 'Contact Us!'
  requirements:
    _permission: 'access content'

Here's how Drupal 8 sets up the route for node type creation:

node.type_add:
  path: '/admin/structure/types/add'
  defaults:
    _entity_form: 'node_type.add'
    _title: 'Add content type'
  requirements:
    _permission: 'administer content types'

Got it. Routes are the new hotness in D8. Now why does this matter to me? I'm a front-ender.

Glad you asked. Routes matter because if you want to generate URLs to custom pages from your templates and you want to do it properly, you need to understand routes. The old methods of using paths are "dead in Drupal 8".

The url() and path() functions are how we handle this type of linking in D8. These functions don't expect an internal path like you're probably familiar with in prior versions of Drupal. Instead, they expect proper system routes. The type of thing you now see in module_name.routing.yml.

So what's the difference?
  • path() - Generates a [relative] URL path given a route name and parameters.
  • url() - Generates an absolute URL given a route name and parameters.
Examples

{# Link to the default frontpage content listing view: #}
<a href="{{ path('view.frontpage') }}">{{ 'View all content'|t }}</a>

{# Link to a specific node page: #}
<a href="{{ path('entity.node.canonical', {'node': node.id}) }}">{{ 'Read more'|t }}</a>

{# Link to a specific user profile page: #}
<a href="{{ path('entity.user.canonical', {'user': user.id}) }}">{{ 'View user profile'|t }}</a>

{# Link to a view, and throw in some additional query string parameters: #}
<a href="{{ path('view.articles.page_1', {'page': 2}) }}">{{ 'Go to page 2'|t }}</a>

{# Link to a view and pass in some arguments to the view: #}
<a href="{{ path('view.recent_articles_by_author.page_1', {'arg_0': user.field_display_name.value|drupal_escape }) }}">{{ 'See all the articles written by'|t }} {{ user.field_display_name.value }}</a>

The source code for the path() and url() Twig extensions can be found here within Drupal 8 core: /core/lib/Drupal/Core/Template/TwigExtension.php.

If you're curious about how all of this came to pass, here's the D.O issue on which the work occurred: https://www.drupal.org/node/2073811. It's long. Really long. The tl;dr is basically a lot of back and forth over how routes are a barrier to entry for beginners and that there should still be an easy way to use paths. I honestly see both sides, but in the end, routes won out in the name of consistency and compatibility. The original patch contained a urlFromPath() function but it has since been removed. :-/. Beginner or not, you should now understand how to generate URLs within your templates. Now go! Commence Twigging!

One more thing!

The Drupal.org documentation page that explained these Twig functions (and others) was marked as incomplete, so I took this opportunity to finish it up. If you have other examples to share, please do so there!

Categories: Elsewhere

Chromatic: Drupal Camp Chattanooga 2015

Wed, 17/02/2016 - 23:36

A few weekends ago I was fortunate enough to attend my first Drupal Camp ever. What was even more fortunate for me was that it was located near where I grew up in Chattanooga, TN. I’ve been to several leadership/business conferences in my life, but this is the first one that I’ve been to where it felt like everyone was genuinely glad to be there, even on a rainy fall day. In the spirit of open source, it wouldn’t feel right to keep all of the helpful info I learned to myself, so I wanted to take a moment and share some of my key take-aways.

Selling Drupal

Bob Snodgrass from net2Community gave a great seminar on sales. He started his presentation on selling Drupal by stating, "Don’t sell Drupal, sell results!" So many times we become entrenched in the technology that we use and sell - and that’s OK! - but we have to remember that clients are more interested in the results we can provide than in the medium we use to get there. A good example given was a donut in a donut box. People rarely care about what box it comes in; they just want the donut!

Bob also brought attention to how each team member contributes to "selling", whether they realize it or not. The interactions, work and relationships that we manage on a daily basis help clients decide if they will continue to work with us or recommend us to their work colleagues.

Becoming a PM Ninja

Justin Rhodes from Commerce Guys led a good discussion on increasing efficiency as a Project Manager. One of the first things we discussed was Parkinson’s Law: tasks will swell or shrink according to the amount of time given to them. If you use this law to your advantage, you can create short-window deadlines to create efficiencies in your development process.

When it comes to planning and estimating phases for a project, Justin shared the success he has had using relative sizing to help the team give input. For example, rather than trying to nail down hours for each process step right away, you can have everyone estimate the size of each task using shirt sizes (S, M, L, XL), aligning the team around relative sizes. This can give the resource planner a clear visual of which parts of the project are going to take the longest. Of course, there is no complete replacement for old-fashioned hourly estimates. The key to success on estimates is to check the team’s accuracy by comparing estimates against actual spend.

Contributing to Open Source

Our keynote speaker was Mike Anello from DrupalEasy. He gave a great talk on why we should be contributing back to the Drupal community. It wasn’t the typical "do it because it is the right thing to do" lecture. Mike made the case that contributing back to the community not only grows your network and visibility, but also has a very real probability of increasing your bottom line. It is easy to say that we are going to contribute, but unless you deliberately set aside time, it probably isn’t going to happen. One way to make contributing a priority is to work it into your marketing budget. Switching your perspective on contributing from a way to earn karma points to a powerful marketing tool will change your priorities around where it fits into the company’s strategy.

The last take away I got from Mike’s talk was that even those of us who are nontechnical team members can contribute back to the Drupal community. Reaching out to module owners and asking how you can help, cleaning up ticket queues on modules, confirming bugs and assisting in documentation, are all ways to help move the ball forward (and I’m told they are greatly appreciated). Don’t forget that Drupal events don’t just happen by themselves. There is always a need for folks to help with the planning and coordination of meetups, camps and conventions.

Final Thoughts

Although I was unable to attend the pre and post party, I have no doubt that the enthusiasm of the group spilled into every conversation that was had. Who knows where I will be living next year (the benefits of working for a distributed company), but if I am able to, I’ll be returning to DrupalCamp Chattanooga!

(photo credit goes to @DrupalNooga)

Categories: Elsewhere

Chromatic: TheaterMania: Lessons Learned on Localization

Wed, 17/02/2016 - 23:36

We recently launched a new site for an existing client, TheaterMania. We helped launch and currently maintain and develop The Gold Club, which is a subscription-based discount theater club in New York City. The new site is the same thing, but in London - same language, same codebase, new database, different servers. We only had to migrate users, which were already exported for us, so nothing exceptional there. Shouldn't be a big deal, right? We learned that's not always the case.

Architectural Decisions

One of our first problems, besides the obvious localization issues (currency, date formats, language), was to decide what we were shipping. Were we just building another site? Were we packaging software? There will most likely be more sites in other cities in the future - how far did we want to go in terms of making this a product that we could ship? In the end, we wound up going somewhere in the middle. We had to decide initially if we would use Organic Groups to have one site with multiple "clubs," one Drupal multisite installation, or multiple Drupal installations. The final decision was to combine the latter two choices - we created multisite-style directories so that if we need to take the site in a multi-site direction, we can easily do that. The sites each have a site-specific settings file, full of various configuration variables.

Now that the site has been launched, we're not sure if this list of variables will be developer-friendly moving forward, and we have been keeping in mind that we may want a more elegant solution. The best part about this setup is that we have one codebase, one master branch, and each site is configured to use the appropriate settings. The most important thing is that this is all very thoroughly documented: in the code, in README files, and on the repo wiki.
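To make the setup concrete, here is a minimal sketch of the kind of site-specific settings file described above; the variable names are hypothetical, not the actual project configuration:

```php
<?php
// sites/london/settings.php -- hypothetical site-specific overrides.
// Each multisite-style directory carries its own file of configuration
// variables, layered on top of the shared codebase.
$conf['gc_currency_symbol'] = '£';
$conf['gc_default_country'] = 'GB';
$conf['gc_date_format'] = 'd/m/Y';
```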

Currency & Recurly: Easier than Expected

One of the issues I thought would be very problematic was currency, but it wasn't actually an issue. All of the existing transactions are set up in cents (i.e., 100 instead of 1.00 for a dollar), and that translates perfectly from dollars to pounds. We use Recurly, an external payment and subscription processor, so we didn't have to worry about any localization issues on that front. Most of the currency abstraction I did was to remove any hard-coded references to the dollar sign and create functions and variables to get the appropriate currency symbol.
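A rough sketch of that abstraction (the function and variable names here are ours, not the project's): since amounts are stored in minor units, only the symbol needs to vary per site.

```php
<?php
// Hypothetical helper: amounts are stored in minor units (100 = 1.00),
// so the arithmetic is identical for dollars and pounds; only the
// symbol comes from per-site configuration.
function gc_format_amount($cents) {
  $symbol = variable_get('gc_currency_symbol', '$');
  return $symbol . number_format($cents / 100, 2);
}
```

With `gc_currency_symbol` set to `£` on the London site, `gc_format_amount(1250)` would render `£12.50`.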

Dealing with Dates; Ugh.

Date formats were something I expected to be easy, but that wound up being more complex. I discovered hook_date_combo_process_alter() to change the display of the date in calendar popup fields. This made what I’d thought was going to be a difficult series of view handlers really simple. We have several fields using the date combo box on both content types and entities, and this function took care of them.

/**
 * Implements hook_date_combo_process_alter().
 *
 * Changes the date format.
 */
function gc_display_date_combo_process_alter(&$element, &$form_state, $context) {
  if (isset($element['#entity']->type)) {
    switch ($element['#entity']->type) {
      case 'event':
        $element['value']['#date_format'] = variable_get('date_format_short');
        break;

      case 'partner':
        $element['value']['#date_format'] = variable_get('date_format_short');
        $element['value2']['#date_format'] = variable_get('date_format_short');
        break;

      case 'promo_offer':
        $element['value']['#date_format'] = variable_get('date_format_short');
        $element['value2']['#date_format'] = variable_get('date_format_short');
        break;

      default:
        break;
    }
  }
  elseif (isset($element['#entity']->field_name)) {
    if ($element['value']['#instance']['widget']['type'] == 'date_popup' && $element['#entity']->field_name == 'field_user_csr_notes') {
      $element['value']['#date_format'] = variable_get('date_format_short');
    }
  }
}

I took the dozen or so existing date formats from Drupal, altered some of them to meet our needs, and added a few more. My head also started spinning during testing: I'm so used to M/D/Y formats that D/M/Y formats look really strange after a while. Code changes needed to be tested on both the US and UK sites, so I had to be really careful when visually testing a page to make sure that a US page was showing 9/1/15 and the UK page was showing 1/9/15. In the future, I’d definitely advocate for a testing suite on a project like this. Overall, making sure all of the dates were changed was somewhat tedious, but not difficult. It required a lot of attention to detail and familiarity with PHP date formats, and vigorous testing by the whole team to make sure nothing had been missed.
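The ambiguity is easy to demonstrate with plain PHP date format strings (this snippet is ours, not from the project):

```php
<?php
// The same timestamp rendered with US-style and UK-style short formats.
$ts = mktime(0, 0, 0, 9, 1, 2015); // 1 September 2015

echo date('n/j/y', $ts); // US reading: 9/1/15
echo date('j/n/y', $ts); // UK reading: 1/9/15
```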

Proper Use of t() Early == Wins Later

This project made me extremely grateful for the t() function. Since both sites were in English, we didn't have a need for site-wide translation, but we did need to localize a handful of strings, both for language issues (words like 'personalize' vs 'personalise'), and the general language preference of the stakeholders. It was easy enough to find the strings and list them in locale_custom_strings_en to switch them out. One gotcha we came across that I wasn't familiar with - you cannot use t() in your settings files. The function isn't available at that point in the bootstrapping. You can use get_t(), but we opted to remove the translation strings from any variables and make sure that t() was used when the variable was called. This wasn't something I had run into before, and it caused some problems before we figured it out.
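For reference, the override lives in settings.php and looks roughly like this; the strings here are illustrative, not the project's actual list:

```php
<?php
// settings.php -- replace selected English source strings without a full
// translation setup. The outer '' key is the (empty) translation context.
$conf['locale_custom_strings_en'][''] = array(
  'Personalize your account' => 'Personalise your account',
);
```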

Miscellany

A few tricky miscellaneous problems cropped up, too. A geolocation feature in Recurly was defaulting the country field to the US, and we were unable to change the setting. We also didn't notice this when testing in the US, and we scratched our heads when the London team told us the field was defaulting to the US, until we came across the culprit. We were able to fix it and put in a patch for the library causing the issue.

I also realized how many various settings default to the US when working on this project - a lot of the location-related work was just abstracting out country defaults. Something to keep in mind if you're working on a project with locations. Don't make more work for developers who live or work on projects outside of the US. Plan for the future! Assume nothing!

Looking Back

I'm really glad that I worked on this project, because it's made me develop with a better eye for abstraction of all kinds, and making sure that it's easy for developers or users to work with my code anywhere. In the future, I’d put more thought into managing our configurations from the start, as well as automating the testing process, both for time-saving and better QA.

If you’ve ever worked on a site with challenges like these, I’d love to hear how you handled them! What are your best practices for managing custom locale strings and other site-specific variables? To what extent do you abstract things like dates and currency when developing a site, even when you don’t know if those will ever change?

Categories: Elsewhere

Chromatic: BADCamp 2015: Transitioning From theme() and Theme Functions to Render Arrays and Templates

Wed, 17/02/2016 - 23:36

I was fortunate to attend and speak at BADCamp for the first time this year. BADCamp is the Bay Area Drupal Camp, held annually in Berkeley, CA. I don't know the official numbers, but I believe over 1,000 were in attendance, giving it the feel of a smaller DrupalCon. The camp was well organized, the sessions were high quality, and I met and got to know quite a few great people.

Anyone who wasn't able to attend can watch my session, 7 to 8: Transitioning From theme() and Theme Functions to Render Arrays and Templates, here:

My slides are also available online. The video and slides include in-depth examples, but for the TL;DW crowd more interested in the key takeaways:

Render Arrays > theme()

The theme() function no longer exists in Drupal 8. Instead, render markup by passing render arrays to the template. Start using render arrays instead of theme() in Drupal 7 right now. Not only will it make the code easier to port to Drupal 8, but there are other advantages.

Using render arrays allows for the data to remain an array until the template renders it into markup. Waiting to render markup allows other modules or themes to intercept and alter the data before it is rendered. This is typically done with preprocess hooks.
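A minimal Drupal 7 sketch of the pattern: the structure below remains an alterable array until drupal_render() turns it into markup.

```php
<?php
// Build a render array instead of calling theme() directly.
$build = array(
  '#theme' => 'item_list',
  '#items' => array(t('First item'), t('Second item')),
);

// Other modules and themes can still intercept and alter $build here
// (typically via preprocess hooks) before rendering happens.
$html = drupal_render($build);
```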

Templates > Theme Functions

When Drupal 7 goes to render markup, it does so with either a theme function, such as theme_image(), or a template. Theme functions contain a lot of logic and are a tough way to write and alter markup. Overriding theme functions also involves copying a significant amount of code to make changes.

Instead of writing theme functions, write templates. Keep the logic in preprocess and process functions. This separation will make altering the logic and markup much easier. Other modules or themes can easily use preprocess hooks or override templates as needed.
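Sketched in Drupal 7 terms (the 'example' module, template, and variable names are hypothetical):

```php
<?php
/**
 * Implements hook_theme().
 *
 * Registers a template (example-card.tpl.php) rather than a theme function.
 */
function example_theme() {
  return array(
    'example_card' => array(
      'variables' => array('title' => NULL),
      'template' => 'example-card',
    ),
  );
}

/**
 * Keep the logic here; the template stays markup-only and can be
 * overridden by any theme without copying this code.
 */
function template_preprocess_example_card(&$variables) {
  $variables['title'] = check_plain($variables['title']);
}
```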

Drupal 8 has taken this approach. While theme functions can still exist in Drupal 8, all core theme functions were converted to templates.

More information

For more on this topic, check out these resources on drupal.org:

There were also a number of other sessions at BADCamp related to this topic. They included:

Thanks again to the BADCamp organizers. I hope to see everyone again next year!

Categories: Elsewhere

Chromatic: Drupal 8 Configuration Management - Solving the Configuration Conundrum

Wed, 17/02/2016 - 23:36

A common difficulty in web development is keeping configuration consistent between environments. Drupal keeps most configuration settings in the database and it can be a pain to keep databases across environments synchronized. Solving this conundrum was one of Drupal 8's goals and the focus of D8's Configuration Management Initiative (CMI). All site configuration is saved in YAML files (hooray for version control!) and can be shared between site environments.

The Way We Were

Let's take a classic example from before D8, using the beloved Views module. When you build a great new view on your local machine, you still need to get it onto the production site. You could repeat what you did on your local machine and build the view by hand on prod, but that's just repeating work you have already completed - not to mention prone to user error. Most developers are too lazy to do that anyway, so they depend on Drupal's Features module. The Features module works by exporting database configuration settings into Drupal module code, which can be shared amongst environments. It usually works pretty well, but it turns out the Features module was not built for handling deployment and configuration; these tasks have fallen into its lap because, until now, it was the only game in town.

The Way We're Going

With every Drupal 8 installation, the configuration system creates a configuration directory*, sync, to hold the aforementioned YAML configuration files. The directory is actually created as a sub-directory in sites/default/files, into a folder with a long, random hash as its name, prefaced by config_. (Including a random hash makes it near impossible for bad guys to guess the configuration folder name.)

* At the time of writing, the default installation actually created two directories: active and staging. That was later changed to just one, the staging directory, which was subsequently renamed sync. This blog post has been updated to reflect the change.

In picture form...a folder with a really, really long, random name

So in the above screenshot the configuration directory's name is

config_Z1fHk5YnKXiTdJl9lAfz_dqGmrMTzqT9lNnbUF6z4kwKxglnC8srJZBTcI1dIMSCOmOwvEMZ5g

and the full path of the sync directory is:

sites/default/files/config_Z1fHk5YnKXiTdJl9lAfz_dqGmrMTzqT9lNnbUF6z4kwKxglnC8srJZBTcI1dIMSCOmOwvEMZ5g/sync

So that's where the site configuration files live, but by default, for reasons of speed and security, the actual configuration settings are saved in the database. So all the configuration from those YAML files actually gets imported into the database.
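If the hashed default bothers you, the sync directory location can be overridden in settings.php. The path below is only an example, and the exact array key shifted between early 8.0.x releases (see the note above about staging becoming sync), so check your version's default.settings.php:

```php
<?php
// settings.php -- point the sync directory somewhere predictable,
// ideally outside the web root so it is never served directly.
$config_directories['sync'] = '../config/sync';
```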

The configuration management system provides a UI for uploading and double-checking changes before applying them to the live site. (Of course, there are Drush commands, drush config-export and drush config-import, that allow you to import/export through the command line too.)
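The drush equivalents of the UI workflow, run from each environment's docroot (these require a bootstrapped Drupal 8 site, so they are shown here for illustration):

```shell
# On dev: write the active configuration out to the sync directory.
drush config-export -y

# On prod, after moving the exported files over: apply the changes.
drush config-import -y
```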

How do you Feel About Clones?

Now hold on, you can't upload one site's configuration to any old site; you can only synchronize configuration settings between cloned instances of a site. This restriction was a conscious decision made by the Drupal CMI team. So you need to start with one environment and then clone it. If you don't do this, Drupal simply will not allow configuration settings to be imported.

Steps for cloning a site:
  1. Do a db dump of the original database and import it into an empty database for your new clone environment.

  2. Assuming you are using git, (you are using git, right?) you can run git clone to clone the original site files into the clone environment.

  3. Check folder permissions (Did .htaccess come across? Does sites/default/files have sufficient permissions?).

  4. Change settings.php in cloned site to point to the fresh db from step 1.
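The steps above might look like this on the command line; database names, users, and paths are placeholders:

```shell
# 1. Dump the original database and load it into an empty clone database.
mysqldump -u dbuser -p original_db > original.sql
mysql -u dbuser -p clone_db < original.sql

# 2. Clone the original site files into the clone environment.
git clone /path/to/original-site.git clone-site

# 3. Check that .htaccess came across and files/ is writable.
ls -la clone-site/.htaccess clone-site/sites/default/files

# 4. Finally, edit clone-site/sites/default/settings.php to point at clone_db.
```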

Assuming your cloned site is working, you are ready to import settings from one environment to another. In my case, I created a new view called bookshelf that needs to get to prod. Below are screenshots of what the process looks like.

Export and Download from Dev

In this screenshot, I have just exported and downloaded my dev environment's configuration tar.gz file.

Upload into Prod

Having switched to my prod environment, I select the downloaded tar.gz file and upload it.

Successful Upload

After a successful upload you can see that there are changes. Note that my new bookshelf view is there. I can click the View differences button to compare the changes.

Vive la Difference

Having clicked the View differences button, I see the changes that are about to be made.

Import All

Satisfied with what I see, I can click Import all to apply the changes.

Import Progress

A progress bar displays as Drupal imports the settings.

A Site With a View

Voilà, the new view is on my prod site.

Whew, so there were a bunch of steps to get things set up, but once you have your environments in place, it's easy to move settings between them. Of course, in a full-on, professional development environment you would harness the power of Drush and use scripts to do all the dirty work. But that's a tutorial for another time.

Categories: Elsewhere

Chromatic: Programmatically Creating and Storing WordPress Migrate Migrations in Drupal

Wed, 17/02/2016 - 23:36

Migrations are never glamorous, but doing them right and verifying their integrity is essential to their success. The WordPress Migrate module gives you an easy, turnkey solution for migrating content into Drupal from WordPress. It allows you to create each migration through an interactive admin form, configuring the migration entirely through the UI. This is great, but it does not make the resulting migrations easy to create or store across multiple environments, since they are not defined in code like a typical Migrate class. Short of copying database tables or re-entering the configuration through the admin forms, developers are stuck with migrations stored in a single database, which makes moving to other environments for testing or further development difficult.

Copying data tables is almost always the wrong solution, and manually re-entering all of the migrations would be far too time consuming, so our solution was to create the migrations programmatically. To do this, we hooked into the existing WordPress Migrate codebase and used its logic to build programmatically what it normally builds from admin-form input. We can then define all of our migration sources in code and instantly create all of our migrations in a new environment, or recreate them after something fails during development.

As mentioned, this solution relies upon programmatically submitting admin forms, which is often not an ideal scenario. Additionally, there is the almost inevitable request to add customizations beyond what WordPress Migrate supports out of the box. Sometimes this makes WordPress Migrate more of a hindrance than a help. So why not just create a custom Migrate class from the outset and avoid all of these issues? Here are some factors to consider:

  • Writing a custom Migrate class for your WordPress content always sounds more appealing until you run into problems and realize WordPress Migrate already solved those issues.
  • The WordPress Migrate module offers a lot of functionality, including file transfer, author migration, embedded video processing, internal link rewriting, comment migration, etc.
  • You might not need much custom code and just tweaking the WordPress Migrate functionality by extending one of its classes will easily do the trick.
  • You might not have the resources (time, knowledge, etc.) to write a custom Migrate class.
  • Running and testing the migrations on multiple environments might not be in your workflow, although I would argue it should be.
  • You might only have one or two WordPress sites to migrate content from, so manually re-creating them is not an issue.

If after weighing all of the factors, you decide using the WordPress Migrate module is in your best interest and manually recreating the migrations is not an option, then follow along as we walk you through our approach to creating and storing WordPress Migrate migrations programmatically.

Our Solution

First we need to define the list of source blogs. The keys of each item in this array can be added to as needed to override the default values we assign later.

/**
 * Define the WordPress blogs to be imported.
 */
function example_wordpress_migrate_wordpress_blogs() {
  // Any key not set here will default to the values set in the
  // $blog_default_settings variable in the drush command.
  $blogs = array(
    array(
      'domain' => 'www.example.com/site-one/',
    ),
    array(
      'domain' => 'www.example.com/site-two/',
    ),
    array(
      'domain' => 'www.test.com/',
    ),
  );
  return $blogs;
}

Next we'll create a custom drush command so that we can easily trigger the creation of our migrations from the command line.

/**
 * Implements hook_drush_command().
 */
function example_wordpress_migrate_drush_command() {
  $items = array();

  // Creates WordPress migrations.
  $items['example-migrate-create-wordpress-migrations'] = array(
    'description' => 'Creates the WordPress migrations.',
    'aliases' => array('mcwm'),
  );

  return $items;
}

Be sure to note the example_migrate_wordpress_password variable below, as you will need to ensure you set that in settings.php before creating the migrations. The WordPress Migrate code needs to be able to login to your site to download the source XML file, and a password is paramount to the success of that operation!

/**
 * Callback for WordPress migration creation drush command.
 */
function drush_example_wordpress_migrate_create_wordpress_migrations() {
  // Reset the file_get_stream_wrappers static cache so the 'wordpress' stream
  // wrapper created by the wordpress_migrate module is available.
  $wrappers_storage = &drupal_static('file_get_stream_wrappers', NULL, TRUE);

  // The wordpress_migrate module's UI is a multi-step form that collects all
  // configuration needed to migrate a given blog. As this form's steps are
  // submitted and validated, an export file is downloaded for each blog and its
  // contents are migrated. There is no easy way to export these settings or use
  // code to provide that configuration and then trigger a migration, so the
  // best bet is to simulate the submission of those form steps with the needed
  // data.
  module_load_include('inc', 'migrate_ui', 'migrate_ui.wizard');

  // Get a list of blogs to migrate.
  $blogs = example_wordpress_migrate_wordpress_blogs();
  $blog_default_settings = array(
    'source_select' => '1',
    'domain' => '',
    'username' => 'admin',
    'password' => variable_get('example_migrate_wordpress_password', ''),
    'wxr_file' => NULL,
    'do_migration' => 0,
    'default_author' => 'admin',
    'page_type' => '',
    'blog_post_type' => 'story',
    'path_action' => 1,
    'tag_field' => '',
    'category_field' => '',
    'attachment_field' => '',
    'text_format' => 'filtered_html',
    'text_format_comment' => 'filtered_html',
  );

  // Import each of the blogs.
  foreach ($blogs as $blog_settings) {
    // Combine the default settings and the custom per blog settings.
    $blog_settings = array_merge($blog_default_settings, $blog_settings);

    // Skip the import if no username or password was found.
    if (empty($blog_settings['username']) || empty($blog_settings['password'])) {
      $message = 'The :site-name migration was not created since no username and/or password could be found. Verify that the example_migrate_wordpress_password variable has been set.';
      $replacements = array(
        ':site-name' => $blog_settings['domain'],
      );
      drupal_set_message(t($message, $replacements), 'warning');
      continue;
    }

    // Set the form state values.
    $form_state['values'] = $blog_settings;

    // Store the values so we can use them again since $form_state is
    // a reference variable.
    $form_state_values = $form_state['values'];

    // Build the import form.
    $form = drupal_get_form('migrate_ui_wizard', 'WordPressMigrateWizard');
    $form = migrate_ui_wizard($form, $form_state, 'WordPressMigrateWizard');

    // Create a Migrate Wizard object.
    $form_state['wizard'] = new WordPressMigrateWizard();

    // Set the number of steps in the form.
    $form_steps = 6;

    // Go through all of the steps.
    foreach (range(1, $form_steps) as $step) {
      // Validate the form data.
      $form_state['wizard']->formValidate($form_state);

      // Submit the form page.
      migrate_ui_wizard_next_submit($form, $form_state);

      // Put any values removed from the array back in for the next step.
      $form_state['values'] = array_merge($form_state_values, $form_state['values']);
    }

    // Submit the form.
    drupal_form_submit('migrate_ui_wizard', $form_state);

    // Save the settings into the wizard object.
    $form_state['wizard']->formSaveSettings();

    // Notify the user that the migration was created successfully.
    $replacements = array(
      '@site-name' => $blog_settings['domain'],
    );
    $message = t('The @site-name migration was successfully created.', $replacements);
    drupal_set_message($message, 'success');
  }
}

With all of this in place, the source WordPress sites and the configuration needed to import them are now fully defined in code, along with a custom Drush command to create the required migrations. No longer will each individual site need to be re-entered through the UI, introducing opportunities for mistakes and wasted time.

Now when you are in a new environment or after you reset your migrations, you can simply run drush mcwm.

Following its successful completion, the following are done for you:

  • A new Migrate group is created for each individual blog.
  • The actual Migrate classes within each group that migrate authors, content, terms, and attachments are created and configured as defined in the code.
  • The source WordPress XML file is downloaded for each site and stored in wordpress://.

Then simply run drush ms to verify everything was created successfully, and you are ready to migrate some content!

Now that you have the tools and knowledge to evaluate your unique migration needs, you can make a well informed decision if this approach is right for you. However, we think that more often than not, all of the incredible functionality you get pre-built with the WordPress Migrate module will outweigh the issues that arise from not being able to fully build and store your migrations in code, especially when you add the functionality outlined above that gets you the best of both worlds. So have your cake and eat it too, and define your migrations in code and utilize the WordPress Migrate module while you are at it!

If you decide to go this route, all of the code referenced here is available in this gist. Please note that this was all built for WordPress Migrate 7.x-2.3, so future updates to the module could break this functionality.

Categories: Elsewhere

DrupalEasy: Contact + Contact Storage (contrib) Module as a Drupal 8 Webform/Entityform Replacement

Wed, 17/02/2016 - 23:06

Looking to migrate your Drupal 6 or 7 site to Drupal 8, but can't do it yet because your site is too reliant on the Webform (or Entityform) module? There's actually a very elegant solution that is ready to go today. The core Drupal 8 "Contact" module is now a fully-fledged fieldable entity - meaning you can create various "contact form" types with different sets of fields. Like Entityform, you can use any field types provided by Drupal core as well as contributed modules. When a contact form is submitted, the data is emailed to the assigned addresses.

The Contact Storage module provides the functionality missing from the core Contact form - the ability to save user submissions in the database. Furthermore, Contact Storage stores all submissions in a way that makes them automatically compatible with Views!

The Drupal 8 version of DrupalEasy.com uses exactly this technique for all of our forms, including our main contact form, our Drupal Career Online application form, as well as our various training student surveys. 

Categories: Elsewhere

OSTraining: Using Entity Reference Fields in Drupal Views

Wed, 17/02/2016 - 22:29

An OSTraining member came to us with a question. The member had two content types that were linked by an Entity Reference field. They wanted to use Views to automatically show content from one content type on the other.

In this tutorial, I'm going to show you how to use Entity Reference fields inside Views.

Categories: Elsewhere

Jason Pamental's RWT.io Blog: Beep Edition comes to Drupal8

Wed, 17/02/2016 - 16:42

I’ve never had particularly lofty ambitions for Beep Edition, the responsive base theme I started a few years ago. Rather than make a new ‘all singing, all dancing’ kind of theme, I wanted this to be my starting point, and include all of my collected experience in responsive design, accessibility, typography and general good practice.

Categories: Elsewhere

Promet Source: On the Road to DrupalCon Asia

Wed, 17/02/2016 - 16:41

Read some more updates from Promet's journeys across the Drupal-verse:

Categories: Elsewhere

Acquia Developer Center Blog: Drupal 8 Module of the Week: Metatag

Wed, 17/02/2016 - 16:19
Jeffrey A. "jam" McGuire

Each day, more Drupal 7 modules are being migrated over to Drupal 8; new modules are also being created for the Drupal community’s latest major release. In this series, the Acquia Developer Center is profiling some of the most prominent, useful modules available for Drupal 8. This week: Metatag.

Tags: acquia drupal planet, metatag, modules
Categories: Elsewhere

Pages