Planet Drupal

Drupal.org - aggregated feeds in category Planet Drupal

Mediacurrent: The Real Value of Drupalcon

Tue, 17/06/2014 - 16:50

I bet most people who have ever attended a DrupalCon would agree that it takes a full week to process (and recoup from) all the community synergy and new information consumed during this epic annual event. The 2014 DrupalCon in Austin, TX was jam-packed with awesome, boasting the largest DrupalCon yet and also one of the most diverse: there were nearly 3,500 people from 60 countries in attendance! The Austin Convention Center was the perfect venue, and downtown Austin became the Drupal community's stomping grounds all week as we filled restaurants, bars and hotels with Drupal chic.

Categories: Elsewhere

Drupal.org Featured Case Studies: Campagna Center Responsive, Ecommerce Website

Tue, 17/06/2014 - 16:50
Completed Drupal site or project URL: http://www.campagnacenter.org

The Campagna Center is a non-profit organization located in Alexandria, Virginia centered on delivering superior academic and social development programs for individuals of all ages to inspire a commitment to learning and achievement. As with many non-profits, their website is an integral platform for keeping donors, volunteer members, and program attendees engaged and informed.

The company behind the development and design is New Target, Inc., based out of Alexandria, Virginia. New Target is a full-service web company that frequently partners with associations, non-profits, and mission-driven organizations to inspire and engage audiences on the web.

Key modules/theme/distribution used: Omega, Respond.js, Views Nivo Slider, Superfish, Redirect, Mollom, Google Analytics, Block reference, Commerce, Shortcode
Organizations involved: New Target, Inc.
Team members: Brian Newsome, hayliej, castedo, hak55, pgrujic, paige.elena
Categories: Elsewhere

Chromatic: Converting Drupal Text Formats with Pandoc

Tue, 17/06/2014 - 16:36

Switching the default text format of a field is easy. Manually converting existing content to a different input format is not. What about migrating thousands of nodes to use a different input format? That isn't anyone's idea of fun!

For example, let's say that all of a site's content is currently written in markdown. However, the new editor not only wants to write all future content in the textile format, but also wants all previous content converted to textile for a consistent editing experience. Or perhaps you are migrating content from a site that was written in the MediaWiki format, but standard HTML is the desired input format for the new site and all of the content needs to be converted. Either way, a lot of tedious work lies ahead if an automated solution is not found.

Thankfully there is an amazing command line utility called Pandoc that will help us do just that. Pandoc converts text from one input syntax to another, freeing you to spend your time on less mind-numbing activities. Let's take a look at how Pandoc can integrate with Drupal to let you migrate your content from one input format to another with ease.
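As a taste of what the utility does on its own, here is what a single conversion looks like from the shell (the file names are illustrative):

pandoc -f markdown -t textile input.md -o output.textile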

After installing Pandoc on your environment(s), you can use the basic function below to provide Pandoc functionality to Drupal. It accepts a string of text to convert, a from format and a to format, and returns the re-formatted text. It's that simple.

/**
 * Convert text from one format to another.
 *
 * @param string $text
 *   The string of text to convert.
 * @param string $from
 *   The current format of the text.
 * @param string $to
 *   The format to convert the text to.
 *
 * @return string|FALSE
 *   The re-formatted text, or FALSE on failure.
 */
function text_format_converter_convert_text($text, $from, $to) {
  // Create the command, escaping the format arguments for the shell.
  $command = sprintf('pandoc -f %s -t %s --normalize', escapeshellarg($from), escapeshellarg($to));
  // Build the settings.
  $descriptorspec = array(
    // Create the stdin as a pipe.
    0 => array("pipe", "r"),
    // Create the stdout as a pipe.
    1 => array("pipe", "w"),
  );
  // Set some command settings.
  $cwd = getcwd();
  $env = array();
  // Create the process.
  $process = proc_open($command, $descriptorspec, $pipes, $cwd, $env);
  // Verify that the process was created successfully.
  if (is_resource($process)) {
    // Write the text to stdin.
    fwrite($pipes[0], $text);
    fclose($pipes[0]);
    // Get stdout stream content.
    $text_converted = stream_get_contents($pipes[1]);
    fclose($pipes[1]);
    // Close the process.
    $return_value = proc_close($process);
    // A valid response was returned.
    if ($text_converted) {
      return $text_converted;
    }
    // Invalid response returned.
    return FALSE;
  }
  // The process could not be created.
  return FALSE;
}
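Calling the function is then a one-liner; a quick sketch, assuming the pandoc binary is installed and on the web server's PATH:

<?php
// Convert a string of Markdown to HTML.
$markdown = "# Hello\n\nThis is *markdown* content.";
$html = text_format_converter_convert_text($markdown, 'markdown', 'html');
if ($html === FALSE) {
  // Conversion failed; leave the original text untouched and log the problem.
  watchdog('text_format_converter', 'Pandoc conversion failed.');
}
?>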

We've written a barebones module around that function that makes our conversions much easier. It has a basic administration page that accepts from and to formats, as well as the node type to act upon. It will then run that conversion on the Body field of every node of that type. It should be noted that this module makes no attempt to adjust the input format settings or to ensure that the modules required for parsing the new/old format are even installed on the site. So treat this module as a migration tool, not a seamless production ready solution!

Pandoc has quite a few features and options, so check out the documentation to see how it can best help you. You can also see the powers of Pandoc in action with this online demo. Let us know if you use our module and, as always, test any text conversions in a development environment before doing so on a live site! Please note: neither CHROMATIC nor I bear any liability for this module's usage.

Categories: Elsewhere

CTI Digital: Creating and using a public/private SSH key-pair in Mac OS X 10.9 Mavericks

Tue, 17/06/2014 - 16:34
In the following article, we're going to run through the process of creating a public/private SSH key-pair in OS X 10.9. Once this is done, we'll configure our GitHub account to use the public key, create a new repository and finally pull this repository down onto our machine via SSH.

Before setting up an SSH key on our system we first need to install Git. If you've already installed Git please proceed to the next section - otherwise let's get started.

Installation and configuration of Git

To install Git on Mac OS X 10.9, navigate to http://git-scm.com/downloads and click the "Download for Mac" button.

Fig 1: Download options available at http://git-scm.com/downloads.

Once the *.dmg file has downloaded, double-click the file to mount it and, in the new Finder window that pops up, double-click on the file "git-1.9.2-intel-universal-snow-leopard.pkg" (the file name will likely have changed somewhat by the time you read this article, but aside from the version number it should still be quite similar).

If you get the error highlighted in "Fig 2" when trying to open the file, simply right-click on the *.pkg file and click "Open". You should then see a new dialogue window similar to the one displayed in "Fig 3", which will allow you to continue on to the installation process.

Fig 2: The error an end-user will see when trying to open a non-identified file if the "Allow apps downloaded from" section of "Security & Privacy" is set to "Mac App Store and identified developers" within "System Preferences".

Fig 3: When right-clicking the *.pkg file and clicking "Open", the end-user is given a soft warning but now, unlike "Fig 2", we're able to bypass this dialogue by clicking "Open".

The installation process for Git is fairly self-explanatory, so I won't go into too much detail. In a nutshell, you will be asked to install Git for all users of the computer (I suggest leaving this at its default value) and you'll be asked if you want to change the location of the installer (unless you have good reason to change the Git install location, this should be left at the default value).

Finally, as part of the installation process you'll be prompted to enter your system password to allow the installer to continue, as shown in "Fig 4" - type your password and click "Install Software". If all goes well, at the end of the installation process you should see the message "The installation was successful.". At this stage you can click "Close" to close the installer.

Fig 4: Prior to installation, the Git installer will require you to enter your system password to allow it to write files to the specified locations.

After the Git installation process we need to open a new instance of the Terminal application. This can be accomplished by opening the Finder, clicking the "Applications" shortcut in the sidebar, scrolling to the bottom of the applications listing in the main window, double-clicking "Utilities" and finally double-clicking "Terminal".

Pro tip: A much quicker way of accessing the Terminal is by pressing “Cmd+Space” to bring up Spotlight, typing “Terminal” and hitting the enter key. Once you become familiar with Spotlight it becomes indispensable!

Once the Terminal window is open, type "git --version" and hit enter. If you're running a fresh install of Mac OS X 10.9, at this stage you will likely be shown a message telling you that Developer Tools was not found, and a popup will appear requesting that you install the tools. Click "Install" on the first dialogue window and, when the next popup is displayed, click "Agree".

Fig 5: The message most users will receive with a fresh install of OS X 10.9 when typing "git --version" into the terminal.

After the installation of Developer Tools, restart the Terminal application and type the command "git --version" followed by hitting enter. This time you should see the version number of the Git application installed.

Fig 6: Terminal displaying the version number of the installed Git application.

Finally, for the installation and configuration of Git, we're going to configure some user-level settings (specifically your name and email address). These configuration settings will be stored in your home directory in a file named ".gitconfig".

To configure these settings, type the following into the terminal (replacing my name and email address with your own, obviously!):

git config --global user.name "Craig Perks"
git config --global user.email c.perks@test.com

Once done, type "git config --list" and you should see a list of user configuration settings analogous to those shown in "Fig 7".

Fig 7: A Terminal instance showing the configuration settings for the logged-in user.

Now that we have Git successfully installed, let's create our public/private key-pair and add them to our GitHub account in the next section.

Creating an SSH public/private key-pair!

In the Terminal, let's ensure we're in our home directory. We can navigate to it by typing the following command:

cd ~/

From here we want to create a folder to store our SSH keys in. My preference is to store them in a hidden folder called ".ssh".

Pro tip: By prefixing a folder or file name with a dot, you're essentially saying to the system "hide this" by default.

To create our SSH directory, type the following command into the Terminal window:

mkdir .ssh

Next, type the command "cd .ssh" and hit enter, followed by the command "pwd". At this point you should see that you've successfully navigated into the ".ssh" folder.

Fig 8: By typing "pwd" into the Terminal we're shown a literal path to our present working directory, which as displayed is /Users/<username>/.ssh.

Now, let's create our public/private key-pair. Type "ssh-keygen" into the Terminal and hit enter. At this point you'll be asked to enter a name for your public/private key-pair. This name can be anything, but for this tutorial I'll use my first name with a suffix of _rsa.

Fig 9: Creation of a public/private key-pair with the name "craig_rsa.pub/craig_rsa".

The creation of a passphrase is an optional step, but a recommended one. Enter a passphrase (a short password of your choosing), hit enter, and enter the same passphrase again. Once your public/private key-pair has been generated, you'll see a message similar to the one highlighted in "Fig 10".

Fig 10: The message shown to an end-user upon successful creation of a public/private key-pair.

Now that we have a public/private key-pair, we want to add our newly created key to the ssh-agent.
This can be achieved by typing the following command (remembering to amend the private key file name to your own file):

ssh-add -K ~/.ssh/craig_rsa

If you created a passphrase in the previous step, you'll be prompted to enter it now. If you successfully add your key to the agent, you'll see a message similar to the following: "Identity added: /Users/craigperks/.ssh/craig_rsa (/Users/craigperks/.ssh/craig_rsa)".

Once your key is added to the ssh-agent, type the command "ssh-add -l" into the Terminal and you'll see it displayed in the list of known keys.

Fig 11: Our newly created key listed in the ssh-agent.

Now that we have our public/private key-pair successfully created, let's add our public key to our GitHub account, create a repository and clone the repository.

Creating a repository on GitHub and cloning it onto our machine

I'm not going to go through the GitHub registration in this guide. If you haven't already done so, register an account on http://github.com and log in.

Before we do anything on the GitHub website, we want to copy our public key. To do so, type the following command in the Terminal window (again substituting "craig_rsa" for whatever name you decided to give your key-pair):

pbcopy < ~/.ssh/craig_rsa.pub

Once done, navigate over to GitHub and click the "Account Settings" icon in the toolbar as pictured.

Fig 12: The "Account Settings" icon as shown to logged-in GitHub users.

On the "Account Settings" page, "SSH keys" should be listed in the left-hand sidebar. Click it, and on the next page that loads click "Add SSH key".

Fig 13: The "Add SSH key" button, which allows you to add public keys to your GitHub account.

On the next page, give your key a name and paste the contents of your key (that we previously copied with the pbcopy command) into the "Key" field.

Note: Although I'm showing the contents of a public key here, it's a dummy key and will be deleted upon completion of this guide. You should only share your public key with trusted sources.

Fig 14: Form displayed to GitHub account holders when adding a new key to the site.

Now that we have our public key loaded into GitHub, let's create a new repository by clicking the "+" icon displayed next to our username (located in the top-right of the toolbar when logged in). From the menu that pops up, click "New repository" and you'll be directed to https://github.com/new.

From here, give the repository a name of "test" and ensure "Initialize this repository with a README" is checked.

Fig 15: Page displayed to GitHub account holders when creating a new repository.

Finally, click the "Create repository" button.

In the right-hand sidebar displayed on your newly created repository, "SSH clone URL" should be visible.

Fig 16: SSH clone URL link, which allows users to clone the Git repository.

Click the "copy to clipboard" icon under "SSH clone URL" and return to the Terminal application.

Type the command "cd ~/Desktop" into the Terminal window and hit enter. Now that we're in the Desktop folder, type the command "mkdir git" and hit enter. If you go to your Mac OS X desktop at this point, you'll see that a folder called "git" has been created.

Back in the Terminal window, type "cd git" to move into this directory. Finally, type "git clone" followed by pasting the URL copied from the GitHub repository's "SSH clone URL" (for me this would be: git clone git@github.com:craigweb/test.git).
Hit enter when you're ready and the repository will begin to clone.

If you've never cloned a repository from GitHub before, you may receive the message "The authenticity of host 'github.com (192.30.252.129)' can't be established". To continue, type "yes" and hit enter, and GitHub.com will be added to the list of known hosts.

Finally, once the cloning is complete, type "cd test" to navigate into the newly created repository directory, then type "ls -la" to display a listing of the folder (including hidden files).

If you see README.md listed, you've just successfully cloned your Git repository!

Fig 17: Our successfully cloned Git repository displaying its contents.

If you spot an error in this tutorial, or have any questions, please feel free to get in touch with me on Twitter at @craigperks.
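For quick reference, the Terminal steps above boil down to a handful of commands. This recap assumes the key name craig_rsa and the test repository used throughout the tutorial:

mkdir ~/.ssh && cd ~/.ssh                    # create a hidden folder for your keys
ssh-keygen                                   # generate the key-pair; name it craig_rsa
ssh-add -K ~/.ssh/craig_rsa                  # add the private key to the ssh-agent (OS X keychain)
pbcopy < ~/.ssh/craig_rsa.pub                # copy the public key, then paste it into GitHub
git clone git@github.com:craigweb/test.git   # clone the repository over SSH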
Categories: Elsewhere

Advomatic: Fajitas, Front End Meets Design, and Remembering to Shower: A DrupalCon Recap

Tue, 17/06/2014 - 16:27

Like many of you, the AdvoTeam hit Austin a couple of weeks ago for DrupalCon 2014. Now that we've had some time to digest all the knowledge dropped (pun intended), we're sharing our favorite DrupalCon takeaways. We've even included links to the DrupalCon sessions, so you can share in the joy.

 

Amanda, Front-End Developer:

Design and front end are figuring out how to fit together. Do designers need to know how to code? Should they really uninstall Photoshop? And while front-enders are benefiting from all the dev work coming down the pipe, we’re a bit overloaded waiting to see what emerges when the dust settles.

Despite our gripes, it is pretty satisfying that our teams now recognize that the front-end is infinitely more complex than it was just a few years ago.

Lastly, I was blown away that conference attendance was 20% women! For me, that's an indication that the community is doing something right in terms of attracting women in ways that other software communities don't. Kudos!

Monica-Lisa, Director of Web Development

Loved the session on running a shop of remote workers. The big takeaways: Be in constant, positive, fun communication with one another. Find the best tools to keep in touch. If you can’t work all the same hours, choose a few hours a day, every day to overlap. Have a lot of trust in your people. And of course, don’t forget to take a shower every once in a while.

Dave, Web Development Technical Manager

This was one of the best DrupalCons I've been to. Top of the list of sessions: Adam Edgerton's talk on scaling a dev shop. This year also brought one of the best DriesNotes (the keynote by Dries Buytaert, founder and lead developer of Drupal).

And in past years, I've spent a lot of time hanging out with anyone I bumped into. But this year, I spent almost all my time with the Advoteam: going biking and swimming - we even went to Monica's mom's house for fajitas one night.

Jim, Senior Web Developer

The Core Sprint was inspiring, because everyone was getting the help they needed while also giving help to others. Everyone knew different things, so as a group, we were all able to share our collective knowledge to get people set up on Drupal 8, review patches, and to commit new ones.

Jack, Front End Developer

Once again, Drupalcon has shown me that it's not safe (or fun) to get comfortable.  The tools we use to make our work go faster and smoother are constantly changing.  What’s all the rage this year will probably be obsolete next year.  Don't fall in love with any one way of doing things.

Front-end development and theming has never felt more "sink or swim", and that's probably a good thing.  However, as things get more and more complicated, the single front-end developer that knows everything becomes more of a mythological creature.  As new worlds of specialization open up, it becomes more important to have new specialists available.

Lastly, it was awesome to get some face time with the Advoteam.  It's good for the remote team's morale, and also nice to be reminded you work with other human beings that have other things to talk about besides technical Drupal talk.

Did you make it to DrupalCon? What sessions did we miss?

 

Categories: Elsewhere

Drupalize.Me: Drupal 8 Survey Insights

Tue, 17/06/2014 - 15:30

Last month we asked the Drupal community a few questions. We received 243 responses to our survey, and we'd like to share some of the information. While we're not making scientific generalizations from this data, it nonetheless paints an interesting picture of our community. A big thank you to everyone who participated in the survey.

Here are 4 things we learned:

Categories: Elsewhere

Open Source Training: Building a Business Directory With Drupal

Tue, 17/06/2014 - 14:43

Over the last couple of weeks, several different OSTraining members have asked me about creating a directory in Drupal.

I'm going to recommend a 4-step process for creating a basic directory.

Using default Drupal, plus the Display Suite and the Search API modules, we can create almost any type of directory.

Categories: Elsewhere

Liran Tal's Enginx: Drupal Performance Tuning for Better Database Utilization – Introduction

Tue, 17/06/2014 - 08:13
This entry is part 1 of 1 in the series Drupal Performance Tuning for Better Database Utilization

Drupal is a great CMS or CMF, whichever your take on it, but it can definitely grow up to be a resource hog with all of those contributed modules implementing hooks to no avail. It is even worse when developers aren't always performance-oriented (or security-oriented, god save us all), and this can unknowingly take its toll on your web application's performance.

Drupal performance tuning has seen its share of presentation decks, tutorials, and even dedicated books such as PacktPub's Drupal 6 Performance Tips, but getting great performance seems to be a never-ending task, so here are some thoughts on where you should start looking.

 

Here is a checklist for peering further down Drupal's rabbit hole and getting insights on tuning your web application for better performance:

  1. Enable the MySQL slow query log to trace all queries which take a long time (usually >1 second is enough; with later versions of MySQL, or compatible databases like Percona Server and MariaDB, you can also specify sub-second thresholds in milliseconds). A minimal configuration sketch follows this list.
  2. Enable the MySQL slow query log to also log any queries which don't use indexes.
  3. Make sure to review all of those logged queries with EXPLAIN to figure out which queries can be better constructed to employ good use of indexes. Where indexes are missing, it's worth reviewing whether the database would benefit from modifying existing indexes (without breaking older queries).
  4. Use percona-toolkit to review outstanding queries.
  5. Use New Relic's PHP agent, which can hook into your web application and provide great analysis of function call time, wall time, and overall execution pipelines. While it's not a must, I've personally experienced it and it's a great SaaS offering for an immediate solution, without needing to install alternatives like XHProf or Webgrind.
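Points 1 and 2 map to a few settings in the MySQL configuration file; a minimal my.cnf sketch (the log file path and one-second threshold are example values, adjust to taste):

[mysqld]
# Log queries slower than long_query_time seconds; fractional values are allowed.
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 1
# Also log queries that do not use indexes at all (point 2 above).
log_queries_not_using_indexes = 1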

 


The post Drupal Performance Tuning for Better Database Utilization – Introduction appeared first on Liran Tal's Enginx.

Categories: Elsewhere

AGLOBALWAY: Drupal to Excel with Views

Mon, 16/06/2014 - 23:06

Every now and then we need to export a bunch of content from our Drupal site into a spreadsheet. This can easily be accomplished through Views with modules like Views Excel Export, which allows you to add a new display to your view. You can also add it as an attachment to your existing view, creating a button: click, and you download a spreadsheet (or CSV - there are options!) just like that. Modules like this one work really well if your data structure is straightforward and there is no need to format your spreadsheet.

What happens if you want it to look a certain way, or add borders to columns or rows? What if your view references Entities? Suddenly Views Excel Export doesn't quite cut it anymore. Enter PHP Excel. While this module has no UI to speak of, it does add the PHPExcel library to the Libraries folder of your Drupal installation. The PHPExcel library gives you a ton of functions that allow you to format an Excel spreadsheet, as well as write to it directly.

Let's say we wish to output our "Events" view to Excel. By creating our own custom module, we can call our view programmatically after loading the PHPExcel library:


<?php
function my_custom_spreadsheet_view() {
  // Load the PHPExcel library via the Libraries API.
  libraries_load('PHPExcel');
  // Load and execute the 'events' view programmatically.
  $view = views_get_view('events');
  $view->set_display('page');
  $view->pre_execute();
  $view->execute();
  ...
}

?>

Rather than calling $view->render(), we can use dpm($view->results) to find and extract the exact information we want. Assign your data to variables so that you can loop through your Events and write them to Excel in the format you like.


<?php
// Grab the result rows from the executed view.
$events = $view->results;
// Start writing at row 5 of your spreadsheet.
$rowID = 5;
foreach ($events as $event => $e) {
  // Do magic with your data.
  ...
  $objPHPExcel->setActiveSheetIndex(0)
    ->setCellValue('A' . $rowID, $data)
    ->setCellValue('B' . $rowID, $data1)
    ->setCellValue('C' . $rowID, $data2)
  ...
  // Move to the next row for the next Event.
  $rowID++;
}
?>
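The snippet above assumes an $objPHPExcel workbook already exists and that the spreadsheet is eventually delivered to the user. A minimal sketch of that surrounding boilerplate (class names come from the PHPExcel library; the header row and filename are invented for illustration):

<?php
// Create an empty workbook and write a header row.
$objPHPExcel = new PHPExcel();
$objPHPExcel->setActiveSheetIndex(0)
  ->setCellValue('A1', 'Event name')
  ->setCellValue('B1', 'Date');

// ... loop over $view->results as shown above ...

// Stream the finished workbook to the browser as an .xlsx download.
header('Content-Type: application/vnd.openxmlformats-officedocument.spreadsheetml.sheet');
header('Content-Disposition: attachment; filename="events.xlsx"');
$objWriter = PHPExcel_IOFactory::createWriter($objPHPExcel, 'Excel2007');
$objWriter->save('php://output');
?>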

PHP Excel comes with a number of examples, simple and complex. Use these to learn how to format and generate your spreadsheet when you call your custom function.

Tags: drupal, drupal planet
Categories: Elsewhere

Acquia: Search API Drupal 8 Sprint June 2014

Mon, 16/06/2014 - 22:50

From the 13th until the 15th of June we had a very successful Search API D8 sprint, as announced at http://drupalsear.ch/node/681 and on Intracto's site. The sprint was organized with the Drupalfund.us money, but even more valuable than money is time.

Categories: Elsewhere

Acquia: Migrate in 8, DrupalCon post-mortem

Mon, 16/06/2014 - 22:16

At DrupalCon Austin, I led a Core Conversation panel, Status of Migrate in 8, along with chx, ultimike, Ryan Weal, and bdone. The conversation was well-received, and we fielded some good questions. To summarize what we went over, updated for continued developments:

Categories: Elsewhere

Acquia: First patch! Matt Moen and a village of contributors

Mon, 16/06/2014 - 19:23

Every DrupalCon, there's a moment I especially look forward to: the First-Patch "ritual". This time around, a patch written by Matt Moen, Technical Director at Kilpatrick Design, was selected for fast-track testing, approval, and was committed to Drupal 8 core in front of a few hundred of us at the Austin convention center. In this podcast, I talk with Matt about becoming a core contributor; we hear from Angie "webchick" Byron about how it takes a village to commit a patch; and I've included a quick refresher on how version control works with the Gitty Pokey from the DrupalCon Austin pre-keynote. You can see the full patch approval and commit process in the 2nd video embedded on this page.

Categories: Elsewhere

SitePoint PHP Drupal: Building a Drupal 8 Module: Blocks and Forms

Mon, 16/06/2014 - 18:00

In the first installment of this article series on Drupal 8 module development we started with the basics. We've seen what files are needed to let Drupal know about our module, how the routing process works, and how to create menu links programmatically as configuration.

In this tutorial we are going to go a bit further with our sandbox module found in this repository and look at two new important pieces of functionality: blocks and forms. To this end, we will create a custom block that returns some configurable text. After that, we will create a simple form used to print out user submitted values to the screen.

Drupal 8 blocks

A cool new change to the block API in D8 has been a switch to making blocks more prominent, by making them plugins (a brand new concept). What this means is that they are reusable pieces of functionality (under the hood) as you can now create a block in the UI and reuse it across the site - you are no longer limited to using a block only one time.

Let's go ahead and create a simple block type that prints Hello World! to the screen by default. All we need to work with is one class file located in the src/Plugin/Block folder of our module's root directory. Let's call our new block type DemoBlock, and naturally it needs to reside in a file called DemoBlock.php. Inside this file, we can start with the following:

<?php

namespace Drupal\demo\Plugin\Block;

use Drupal\block\BlockBase;
use Drupal\Core\Session\AccountInterface;

/**
 * Provides a 'Demo' block.
 *
 * @Block(
 *   id = "demo_block",
 *   admin_label = @Translation("Demo block"),
 * )
 */
class DemoBlock extends BlockBase {

  /**
   * {@inheritdoc}
   */
  public function build() {
    return array(
      '#markup' => $this->t('Hello World!'),
    );
  }

  /**
   * {@inheritdoc}
   */
  public function access(AccountInterface $account) {
    return $account->hasPermission('access content');
  }

}

Like with all other class files we start by namespacing our class. Then we use the BlockBase class so that we can extend it, as well as the AccountInterface class so that we can get access to the currently logged in user. Then follows something you definitely have not seen in Drupal 7: annotations.

Continue reading Building a Drupal 8 Module: Blocks and Forms on SitePoint.

Categories: Elsewhere

Friendly Machine: Quick Tip: Syncing Databases Between Drupal Websites with Backup and Migrate

Mon, 16/06/2014 - 16:48

A common scenario that Drupal developers and site builders run into is the challenge of keeping the database in sync between the dev, testing and production versions of a site. Web hosts like Pantheon (highly recommended) make this a snap, but what if you're using a VPS or some other hosting that doesn't have that functionality? One popular option is to use Drush, but that isn't a good fit for everyone.

Backup and Migrate (BaM) can be a great tool for helping with this sort of problem. In this post we'll talk about using BaM for this task and include a very handy companion service that makes things even easier. What I often see with site builders who are using Backup and Migrate is the manual downloading of backup files and then doing a manual restore from the downloaded file.

A great alternative to that process is setting up an Amazon S3 bucket (cloud storage) where you can directly place your backups from Backup and Migrate. Once each version of the site has the S3 bucket set up, keeping the database in sync becomes a snap.

Setting up an account with Amazon Web Services is free - you get 5 GB of Amazon S3 storage, 20,000 GET requests, 2,000 PUT requests, and 15 GB of data transfer each month for one year with the free account. If you start to use the service more heavily, you pay for what you use, but for most dev scenarios you probably won't incur fees.

Amazon has a nice tutorial to walk you through the process of getting started with the service.

Once you have an account with AWS, you head back to Backup and Migrate on your Drupal site - /admin/config/system/backup_migrate. Go to the Destinations tab and you'll see an 'Add Destination' link as in the image below. Click that link and in the list on the next page, select 'Amazon S3 Bucket' as the destination.

You'll most likely be prompted to add a PHP library - simply follow the link provided in the prompt and download the library to the 'libraries' folder of your site. It's really easy, so don't be put off.

The next step is to fill in the form you see below with the information from your S3 bucket.

That's pretty much all there is to it. Make sure everything is working by running a manual backup to the new S3 bucket destination. A good next step is to configure scheduled backups on the target machine so you can periodically restore a fresh copy on your other environment(s).

To restore your database from the S3 bucket in the UI, you just go to the 'Destination' you have configured in Backup and Migrate for the bucket and click the 'restore' link next to the copy you want to use. A nice twist is using Drush to help with some of this.

If you're brand new to Drush but would like to learn, here's a good place to start. If you'd just like to know how to use Drush and Backup and Migrate together, here's a good tutorial on that topic.
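On that note, Backup and Migrate also exposes its own Drush commands, which pair nicely with the S3 destination. A quick sketch (command names come from the module's Drush integration; run drush help on your install to confirm the exact arguments for your version):

# List the configured backup sources and destinations.
drush bam-sources
drush bam-destinations
# Back up the database, then list and restore existing backups.
drush bam-backup
drush bam-backups
drush bam-restore    # see drush help bam-restore for the required arguments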

If you have any comments on this post, you may politely leave them below.

Categories: Elsewhere

LightSky: CKEditor in Core and Why that is a Big Deal

Mon, 16/06/2014 - 16:37

I can remember back to the first time I installed Drupal. I ran the install through cPanel (I know, right, don't hold it against me) and bingo, there was my brand new website… or something like that. Drupal 6 just didn't offer me much out of the box to be excited about, and while Drupal 7 was certainly an improvement, it still didn't offer much more than a platform to get started with what I had just found out was going to be a long process. I was building a site for my newly formed curling club, and I had just learned that it was going to take a bit of work to get the site where it needed to be; even then, I started to fear that I might not be able to teach anyone else how to update it, which would mean even more work.

I fought the urge, the strong urge, to use a platform that was a little more, how do you say, out-of-the-box ready. But here we are several years later and I am glad I stuck with Drupal; the curling club is as well. The functionality that I have been able to add over the years, and the payment and registration integration, just wouldn't be quite possible under some of the other available systems.

Drupal has long taken the approach that core should be very lightweight and compact, and should allow site builders the flexibility to decide what to use in their sites. I applaud this, as a bloated core is just as bad as one that doesn't accomplish the intended task, but usability has to be taken into account. As Drupal's user base has grown, though, so has the need for a higher level of out-of-the-box functionality.

Drupal's somewhat philosophical shift toward adding some key features to core, like CKEditor, is a huge step in the right direction. Now Drupal is a step closer after install to being ready to run, and it positions itself to be used by more site builders and developers by decreasing the amount of work needed to get a simple site up and running. This is an indicator that you don't have to load down core to the point where it interferes with enterprise-level projects in order to concede a bit to the smaller players. Bravo, Drupal!

What features do you seem to install on every Drupal site?

For more tips like these, follow us on Twitter, LinkedIn, or Google+. You can also contact us directly or request a consultation.

Categories: Elsewhere

Appnovation Technologies: 5 Best Drupal Starter Themes

Mon, 16/06/2014 - 16:16

Learning and working on a Drupal starter theme can not only speed up workflows significantly for 'themers', but also enable you to focus solely on building custom layouts without having to worry about browser incompatibilities an

Categories: Elsewhere

Drupal Watchdog: Drupal 7 Content Types from Code

Mon, 16/06/2014 - 16:04

One key feature of Drupal 7 that makes it one of the most flexible content management frameworks available is the ability for administrators to build new types of content – beyond the two built into a standard installation of Drupal 7, “Article” and “Basic page”. Content types are typically created in the administration section of the website, clicking through a series of pages and manually entering information such as the names of new fields. This is sufficient for most situations, but can become tedious and error-prone. There are several advantages to automating this process:

  1. You could define new content types for your own use without having to step through all the screens and mouse clicks.
  2. You could define them for use by other people and on other websites without having to document the steps.
  3. It would expand your module-writing capabilities, since oftentimes module functionality calls for one or more custom content types. On a related note, this can be valuable in setting up testing harnesses (e.g., in test/modules/node/node.test, the class NodeWebTestCase).

Fortunately, Drupal allows for the programmatic building of content types, using its Fields application programming interface (API). We have already noted two useful examples, "Article" and "Basic page", which we can build upon.

Consult the Core

During a standard installation, Drupal runs the PHP code in the file profiles/standard/standard.install, which consists of a single function, standard_install(). It includes the code to create the two aforesaid content types:
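A trimmed sketch of that code, using Drupal 7's node type API (the array follows the pattern in profiles/standard/standard.install; only one of the two types is shown here):

<?php
$types = array(
  array(
    'type' => 'page',
    'name' => st('Basic page'),
    'base' => 'node_content',
    'description' => st("Use <em>basic pages</em> for your static content, such as an 'About us' page."),
    'custom' => 1,
    'modified' => 1,
    'locked' => 0,
  ),
);
foreach ($types as $type) {
  // Fill in any missing keys with sensible defaults, then save the type.
  $type = node_type_set_defaults($type);
  node_type_save($type);
  // Attach the standard Body field to the new content type.
  node_add_body_field($type);
}
?>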

Categories: Elsewhere

Web Omelette: Drupal 8 Dependency Injection, Service Container And All That Jazz

Mon, 16/06/2014 - 09:07

With the move from a mostly procedural to a mostly OOP based architecture in Drupal 8, many Drupal developers have been introduced to new concepts we don't fully understand. I myself was terrified of notions such as dependency injection and service container, making me worry about the future of my Drupal development.

So I turned to Symfony (from which many components have been borrowed) and then to the Drupal 8 alpha releases, and it turns out it's not a big deal. All you need in order to understand them is a grasp of basic OOP principles. Check out Larry Garfield's timeless introductory article on Object Oriented Programming in Drupal Watchdog for a good start.

In this article I am going to talk a bit about what dependency injection is and why one would use a container for managing these dependencies. In Symfony and Drupal 8 this is called a service container (because we refer to these global objects as services). Then, we will take a look at how these are applied in Drupal 8. Briefly, because you don't need much to understand them.

So what is dependency injection?

Take the following simple class:

class Car {
  protected $engine;
  public function __construct() {
    $this->engine = new Engine();
  }
  /* ... */
}

When you instantiate a new class Car you go like this:

$car = new Car();

And now you have an object handler ($car) that has an $engine property containing the handler of another object. But what if this car class needs to work with another engine? You'd have to extend the class and overwrite its constructor for each new car with a different engine. Does that make sense? No.

Now consider the following:

class Car {
  protected $engine;
  public function __construct($engine) {
    $this->engine = $engine;
  }
  /* ... */
}

To instantiate an object of this class, you go like this:

$engine = new Engine();
$car = new Car($engine);

Much better. So now if you need to create another car using another engine, you can do so easily without caring about the Car class too much since it is supposedly equipped to work with all the engines in your application.

$turbo = new TurboEngine();
$car2 = new Car($turbo);

And that is dependency injection. The Car class depends on an engine to run (dooh), so we inject one into its constructor, which then does what it needs to do, rather than hardcoding the engine into the Car class, which would make the engine non-swappable. Constructor injection is the most common kind, but you'll also find other types, such as setter injection, by which we pass in the engine through a setter method (sketched below).
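A minimal sketch of the setter-injection variant, using the same illustrative Car and Engine classes:

class Car {
  protected $engine;
  // The engine is supplied after construction, through a setter.
  public function setEngine(Engine $engine) {
    $this->engine = $engine;
  }
  /* ... */
}

$car = new Car();
$car->setEngine(new TurboEngine());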

So what is this container business?

So far we've seen a very simple class example. But imagine (rightfully) that the Car has many other potentially swappable components (dependencies), like a type of gear shift, brakes or wheels. You'd have to manually instantiate all these dependent objects just so you can pass them to the one you actually need. This is what the container is for: to do all that for you.

Basically, it works like this: you first register your classes and their dependencies with the container. Then, at various points in your application, you can access the container and request an instance of a particular class (or service, as we call them in Symfony and Drupal 8). The container instantiates an object of that class, as well as one of each of its dependencies, and returns you that service object. But what is the difference between the services we usually access through the container and other PHP classes?

A very good definition that makes this distinction comes from the Symfony book:

As a rule, a PHP object is a service if it is used globally in your application. A single Mailer service is used globally to send email messages whereas the many Message objects that it delivers are not services. Similarly, a Product object is not a service, but an object that persists Product objects to a database is a service.

Understanding how the container works under the hood is, I believe, not crucial for using it. It's enough to know how to register classes and how to access them later. There are multiple ways to register services, but in Drupal 8 we use YAML files (a small sketch follows below). In Symfony, you can use PHP, YAML, or even XML. To learn more about how to do this in Drupal 8, check out this documentation page. Accessing the services in Drupal 8, on the other hand, is done in one of two ways: statically, or using, yet again, dependency injection.
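For illustration, a hypothetical mymodule.services.yml in that format (the module, class, and service names are invented; '@database' references core's database service):

services:
  mymodule.mailer:
    class: Drupal\mymodule\Mailer
    arguments: ['@database']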

Statically, it's very simple. We use the global \Drupal class and its service() method, which returns the service with the name we pass to it.

$service = \Drupal::service('my_service');

This approach is mostly used when we need a service in our .module files, where we are still working with procedural code. If we are in a class (such as a form, controller, entity, etc.), we should always inject the service as a dependency of the class. Since I covered it elsewhere, and the Drupal documentation mentioned above provides a good starting point, I won't go into the exact steps you need to take in order to inject dependent services into your Drupal 8 classes. However, you can check out my introductory series on Drupal 8 module development on Sitepoint.com, where I covered the process of creating services and injecting them as dependencies (in the third part).

Conclusion

So there you go. Dependency injection is a very simple concept that has to do with decoupling functionality between classes. By passing dependencies to objects we can isolate their purpose and easily swap them with others. Additionally, it makes it much easier to unit test the classes individually by passing in mock objects.

The service container is basically there to manage classes when things get overwhelming, that is, when the number of services grows and the number of their dependencies also increases. It keeps track of what a certain service needs before getting instantiated, does the instantiation for you, and all you have to do is access the container to request that service.

Hope it's clear.

Categories: Elsewhere

groups.drupal.org frontpage posts: Modernizing Testbot: Drupalcon Austin Update

Sun, 15/06/2014 - 07:08
Background:

"Modernizing Testbot" is a Drupal community initiative with the goal of rebuilding Drupal.org's Automated Testing / Continuous Integration infrastructure, leveraging modern practices and tools in order to enhance the capabilities and functionality provided by the platform and better meet the changing needs of the Drupal community. The initiative first took root with the vetting of a potential design and architecture during the DevOps Summit at BADCamp 2013, which then led to the development of a functional Proof of Concept during DevDays Szeged. DrupalCon Austin saw a number of further refinements to the model, and launch of the official project pages/repositories on Drupal.org. This post is intended to provide a snapshot of what was accomplished.

Drupal.org Project Pages

The week before DrupalCon saw the launch of the 'DrupalCI' project on Drupal.org, which is intended as the umbrella project for related 'Modernizing Testbot' work. The initiative has been broken down into five sub-projects under the same DrupalCI namespace. The initiative is still young, and code in the following repositories is not necessarily ready for use, but this is where you will find things during development and after the platform goes live:

  1. drupalci_drupalorg: Responsible for integration between drupal.org and the DrupalCI platform
  2. drupalci_jenkins: Responsible for the scripts and configuration files used on the jenkins master/slave servers which serve as the central job dispatcher for the environment
  3. drupalci_testbot: Responsible for the job runners performing the actual work, but also functional as a standalone project for local testing
  4. drupalci_results: Responsible for the long-term archival, storage, and exposure of build artifacts and job results, similar to qa.d.o today
  5. drupalci_puppet: Responsible for the puppet configuration scripts used to build out the various servers which make up the environment.
Development Infrastructure

Also in the weeks leading up to DrupalCon Austin, we set up a full development environment with the help of the Drupal.org Infrastructure Team. This environment has been set up entirely on Drupal.org infrastructure, and is intended to closely mimic that which would be used for a production launch of the platform. A special thanks goes out to nick_shuch (PreviousNext) for his help in setting up the jenkins server and initial drupalci_puppet scripts.

Multiple Environment Support

One of the primary testbot sprint goals for the week was to enhance the proof of concept build, enabling support for testing on multiple database types and php versions. In addition to his contributions as sponsor of the DrupalCon sprints, user jaredsmith (Bluehost) also joined the testbot sprint team; where he started with developing a Docker build which would add support for a Postgres 9.1 test environment to the platform ... and then followed that up with builds for MariaDB 5.5 and 10 as well. With these Docker builds now available in the repository, we are ready to begin integration and testing of these environments with the actual run scripts; and any assistance the community could provide with this task would certainly be welcome.

Initial Drupal.org Integration

While it will be quite some time before the DrupalCI platform is actually integrated with Drupal.org itself, we anticipate the need for some means of triggering tests from a Drupal.org dev site, in order to demonstrate the full end-to-end communications path as envisioned for the final deployment. User Psikik dropped by our table during the Friday sprints to express an interest in the project and look for some way to chip in. Despite admitting to never having built a Drupal module before, Psikik took on the task of developing the initial drupalci_drupalorg implementation, providing the ability to trigger jobs on our remote Jenkins server from a form located on Drupal.org (or, in this case, a D.o dev instance), with his final result committed directly as-is (other than namespace changes) to the drupalci_drupalorg repository.

Test Runner Refinements

Throughout the week, and especially during the Friday/extended sprints, users dasrecht and ricardoamaro (original author of the Szeged proof of concept, sprinting remotely from Portugal) made a number of refinements to the test runner bash scripts; including refactoring to a new directory structure which should better position the project for an expanded set of functionality in the near future. Dasrecht also contributed an initial framework for a future 'drupalci' console command, which will eventually serve as the primary end-user interface for local interaction with the platform.

Jenkins scripts

Also during the Friday sprints, user geerlingguy and I both made progress on the development of the Jenkins scripts which will be triggered to kick off new jobs/test runs.

Where to from here ... What's next?

Starting next week, I intend to establish a rotating weekly "Modernizing Testbot" call, alternating each week between North/South America and Europe/Asia/Australia-friendly timezones. Progress and updates on the initiative will be shared via the 'Drupal.org Testing Infrastructure' group on groups.drupal.org, so anyone interested in the ongoing progress of the initiative is encouraged to sign up there. Our immediate priority will be further testing and stabilizing the Postgres and MariaDB environment support for local testing purposes, after which we'll turn our attention towards investigating issue queue/patch workflow improvements which could be supported by the platform.

As always, testing and contributions are welcome; if you would like to get involved, please feel free to contact me via my Drupal.org contact form.

Bonus Points: QA.Drupal.Org "Test Complete" Notifications

While not directly related to the 'Modernizing Testbot' activities, I simply cannot leave without a huge shout-out to justafish, who approached me during the extended sprints asking what it would take to enable JSON support for qa.d.o test results. After I provided her with an admittedly not-so-helpful answer (i.e. "If someone else does the work, I'll deploy it; but it otherwise isn't a priority") and a qa.d.o dev site, she had a working JSON feed, PIFR patch, and new view definitions completed in no time. This work was deployed live to qa.d.o on Tuesday, so end users can now access the JSON view of their test by appending '/json' to the 'view test' URL. This in turn enabled the development this week of a Chrome extension (also by justafish), which can monitor your tests in progress and notify you as soon as the qa.d.o test run is complete ... a project that I anticipate will very quickly find its way into the toolset of Drupal core developers everywhere.

Categories: Elsewhere
