Planet Drupal

Subscribe to the Planet Drupal feed
Drupal.org - aggregated feeds in category Planet Drupal
Updated: 36 min 54 sec ago

Daniel Pocock: Working to pass GSoC

Wed, 08/06/2016 - 19:11

GSoC students have officially been coding since 23 May (about 2.5 weeks) and are almost halfway to the mid-summer evaluation (20 - 27 June). Students who haven't completed some meaningful work before that deadline don't receive payment, and in such a large program there is no way to give students extensions or let them try to catch up later.

Every project and every student is different: some are still getting to know their environment, while others have already done enough to pass the mid-summer evaluation.

I'd like to share a few tips to help students ensure they don't inadvertently fail the mid-summer evaluation.

Kill electronic distractions

As a developer of real-time communications projects, many people will find it ironic or hypocritical that this is at the top of my list.

Switch off the mobile phone or put it in silent mode so it doesn't even vibrate. Research has suggested that physically turning it off and putting it out of sight has significant benefits. Disabling the voicemail service can be an effective way of making sure no time is lost listening to a bunch of messages later. Some people may grumble at first but if they respect you, they'll get into the habit of emailing you and waiting for you to respond when you are not working.

Get out a piece of paper and make a list of all the desktop notifications on your computer, whether they are from incoming emails, social media, automatic updates, security alerts or whatever else. Then figure out how to disable them all one-by-one.

Use email to schedule fixed times for meetings with mentors. Some teams/projects also have fixed daily or weekly times for IRC chat. For a development project like GSoC, it is not necessary or productive to be constantly on call for 3 straight months.

Commit every day

Habits are a powerful thing. Successful students have a habit of making at least one commit every day. The "C" in GSoC is for Code and commits are a good way to prove that coding is taking place.

GSoC is not a job; it is more like a freelance project. There is no safety net for students who get sick or have an accident, and mentors are not bosses: each student is expected to be their own boss. Although Google has started recommending that students work full time, 40 hours per week, it is unlikely that mentors have any way to validate these hours. Mentors can look at the commit log, however, and simply won't be able to pass a student if there isn't code.

There may be one day per week where a student writes a blog post or investigates a particularly difficult bug and puts a detailed report in the bug tracker, but by the time we reach the second or third week of GSoC, most students are making at least one commit in 3 days out of every 5.

Consider working away from home/family/friends

Can you work without anybody interrupting you for at least five or six hours every day?

Do you feel pressure to help with housework, cooking, siblings or other relatives? Even if there is no pressure to do these things, do you find yourself wandering away from the computer to deal with them anyway?

Do family, friends or housemates engage in social activities, games or other things in close proximity to where you work?

All these things can make a difference between passing and failing.

Maybe these things were tolerable during high school or university. GSoC, however, is a stepping stone into professional life, and that means making a conscious decision to shut those things out and focus. Some students have the ability to manage these distractions well, but it is not for everybody. Think about how leading sports stars or musicians find a time and space to be "in the zone" when training or rehearsing; this is where great developers need to be too.

Some students find the right space in a public library or campus computer lab. Some students have been working in hacker spaces or at empty desks in local IT companies. These environments can also provide great networking opportunities.

Managing another summer job concurrently with GSoC

It is no secret that some GSoC students have another job as well. Sometimes the mentor is aware of it, sometimes it has not been disclosed.

The fact is, some students have passed GSoC while doing a summer job or internship concurrently, but some have also failed badly at both GSoC and their summer job. Choosing one or the other is the best way to succeed, get the best results, and maximize the quality of learning and community interaction. For students in this situation, it is not yet too late to decide to withdraw from GSoC or from the other job.

If doing a summer job concurrently with GSoC is unavoidable, the chance of success can be greatly increased by doing the GSoC work in the mornings, before starting the other job. Some students have found that they actually finish more quickly and produce better work when GSoC is constrained to a period of 4 or 5 hours each morning and their other job is only in the afternoon. On the other hand, if a student doesn't have the motivation or energy to get up and work on GSoC before the other job then this is a strong sign that it is better to withdraw from GSoC now.

Categories: Elsewhere

ImageX Media: Lions, Tigers, and Bears, Oh My!

Wed, 08/06/2016 - 18:54

DrupalCon brings together thousands of people from the Drupal community who use, design for, develop for, and support the platform. It is the heartbeat of the Drupal community, where advancements in the platform are announced, learnings from its users are shared, and where connections that strengthen the community are made. 


DrupalCon News: Frontend fatigue? Share your story

Wed, 08/06/2016 - 18:27

Are you fatigued as a frontender? You are not alone. Frontend developers are moving in rapid waters all of the time. The explosion of frameworks and tools during the last 3 years was supposed to help us, but it is easy to end up feeling overwhelmed.

The polyglot frontend-er

Our mother tongues are HTML, CSS, and JavaScript, with SVG added to the mix. But to help us in our tasks, we've added Sass/Less, multiple templating languages, Markdown, JavaScript transpilers, testing languages, and whatnot.


Lullabot: Adventures with eDrive: Accelerated SSD Encryption on Windows

Wed, 08/06/2016 - 18:01

As we enter the age of ISO 27001, data security becomes an increasingly important topic. Most of the time, we don’t think of website development as something that needs tight security on our local machines. Drupal websites tend to be public, have a small number of authenticated users, and, in the case of a data disclosure, sensitive data (like API and site keys) can be easily changed. However, think about all of the things you might have on a development computer. Email? Saved passwords that are obscured but not encrypted? Passwordless SSH keys? Login cookies? There are a ton of ways that a lost computer or disk drive can be used to compromise you and your clients.

If you’re a Mac user, FileVault 2 is enabled by default, so you’re likely already running with an encrypted hard drive. It’s easy to check and enable in System Preferences. Linux users usually have an encrypted disk option during install, as shown in the Ubuntu installer. Like both of these operating systems, Windows supports software-driven encryption with BitLocker.

I recently had to purchase a new SSD for my desktop computer, and I ended up with the Samsung 850 EVO. Most new Samsung drives support a new encryption technology called "eDrive".

But wait - don’t most SSDs already have encryption?

The answer is… complicated.

SSDs consist of individual cells, and each cell has a limited number of program/erase (PE) cycles. As cells reach their maximum number of PE cycles, they are replaced by spare cells. In a naive scenario, write activity can be concentrated on a small set of sectors on disk, which could use up the spare cells prematurely. Once all of the spare blocks are used, the drive is effectively dead (though you might be able to read data off of it). Drives can last longer if they spread writes across the entire disk automatically. So you have data that must be distributed pseudo-randomly across the disk as it is written, and reassembled as it is read back. Another word for that? Encryption! As the poster on Stack Overflow says, it truly is a ridiculous and awesome hack to use encryption this way.

What most SSDs do is maintain an internal encryption key that secures the data but is in no way accessible to the end user. Some SSDs might let you access this through the ATA password, but there are concerns about that level of security. In general, if you have possession of the drive, you can read the data. The one feature you do get "for free" with this security model is secure erase. You don’t need to overwrite data on a drive anymore to erase it. Instead, simply tell the drive to regenerate its internal encryption key (via the ATA secure erase command), and BAM! the data is effectively gone.
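The "secure erase by key regeneration" trick can be sketched in a few lines. This is purely illustrative PHP (no real drive exposes an API like this): the drive stores only ciphertext, so discarding the old key makes every block unreadable without touching the data itself.

```php
<?php
// Illustrative only: model a self-encrypting drive as ciphertext plus an
// internal AES key. An ATA secure erase regenerates the key; the ciphertext
// stays on the flash, but it can no longer be decrypted.
function write_block(string $plaintext, string $key): array {
    $iv = random_bytes(16);
    return [$iv, openssl_encrypt($plaintext, 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv)];
}

function read_block(array $stored, string $key) {
    [$iv, $ciphertext] = $stored;
    // Returns the plaintext, or false if the key is wrong.
    return openssl_decrypt($ciphertext, 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);
}

$key = random_bytes(32);                 // the drive's internal key
$block = write_block('passwordless SSH key', $key);
echo read_block($block, $key), "\n";     // readable while the key survives

$key = random_bytes(32);                 // "secure erase": new key, old one discarded
// read_block($block, $key) now fails (or yields garbage): the data is
// effectively gone, even though nothing was overwritten.
```

A real drive does this in hardware with zero I/O, which is why an ATA secure erase completes in seconds rather than the hours a full overwrite would take.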

All of this means that if you’re using any sort of software-driven encryption (like OS X’s FileVault, Windows BitLocker, or dm-crypt on Linux), you’re effectively encrypting data twice. It works, but it’s going to be slower than just using the AES chipset your drive is already using.

eDrive is a Microsoft standard based on TCG Opal and IEEE 1667 that gives operating systems access to manage the encryption key on an SSD. This gives you all of the speed benefits of disk-hosted encryption, with the security of software-driven encryption.

Using eDrive on a Windows desktop has a pretty strict set of requirements. Laptops are much more likely to support everything automatically. Unfortunately, this article isn’t going to end in success (which I’ll get to later), but it turns out that removing eDrive is much more complicated than you’d expect. Much of this is documented in parts on various forums, but I’m hoping to collect everything here into a single resource.

The Setup
  • An SSD supporting eDrive and "ready" for eDrive
  • Windows 10, or Windows 8 Professional
  • A UEFI 2.3.1 or higher motherboard, without any CSMs (Compatibility Support Modules) enabled, supporting EFI_STORAGE_SECURITY_COMMAND_PROTOCOL
  • A UEFI installation of Windows
  • (optionally) a TPM to store encryption keys
  • No additional disk drivers like Intel’s Rapid Storage Tools for software RAID support
  • An additional USB key to run secure erases, or an alternate boot disk
  • If you need to disable eDrive entirely, an alternate Windows boot disk or computer

I’m running Windows 10 Professional. While Windows 10 Home supports BitLocker, it forces encryption keys to be stored with your Microsoft account in the cloud. Honestly for most individuals I think that’s better than no encryption, but I’d rather have solid backup strategies than give others access to my encryption keys.

Determining motherboard compatibility can be very difficult. I have a Gigabyte GA-Z68A-D3-B3, which was upgraded to support UEFI with a firmware update. However, there was no way for me to determine what version of UEFI it used, or whether EFI_STORAGE_SECURITY_COMMAND_PROTOCOL was supported. The best I can suggest at this point is to try it with a bare Windows installation, and if BitLocker doesn’t detect eDrive support, revert back to a standard configuration.

The Install

Samsung disk drives do not ship with eDrive enabled out of the box. That means you need to connect the drive and install Samsung’s Magician software to turn it on before you install Windows to the drive. You can do this from another Windows install, or install bare Windows on the drive knowing it will be erased. Install the Magician software, and set eDrive to "Ready to enable" under “Data Security”.

After eDrive is enabled, you must run a secure erase on the disk. Magician can create a USB or CD drive to boot with, or you can use any other computer. If you get warnings about the drive being "frozen", don’t ignore them! It’s OK to pull the power on the running drive. If you skip the secure erase step, eDrive will not be enabled properly.

Once the disk has been erased, remove the USB key and reboot with your Windows install disk. You must remove the secure erase USB key, or Windows’ boot loader will fail (#facepalm). Make sure that you boot with UEFI and not BIOS if your system supports both booting methods. Install Windows like normal. When you get to the drive step, it shouldn’t show any partitions. If it does, you know secure erase didn’t work.

After Windows is installed, install Magician again, and look at the security settings. It should show eDrive as "Enabled". If not, something went wrong and you should secure erase and reinstall again. However, it’s important to note that “Enabled” here does not mean secure. Anyone with physical access to your drive can still read data on it unless you turn on BitLocker in the next step.

Turning on BitLocker

Open up the BitLocker control panel. If you get an error about TPM not being available, you can enable encryption without a TPM by following this How-To Geek article. As an aside, I wonder if there are any motherboards without a TPM that have the proper UEFI support for hardware BitLocker. If not, the presence of a TPM (and SecureBoot) might be an easy way to check compatibility without multiple Windows installs.

Work your way through the BitLocker wizard. The make or break moment is after storing your recovery key. If you’re shown the following screen, you know that your computer isn’t able to support eDrive.

You can still go ahead with software encryption, but you will lose access to certain ATA features like secure erase unless you disable eDrive. If you don’t see this screen, go ahead and turn on BitLocker. It will be enabled instantly, since all it has to do is encrypt the eDrive key with your passphrase or USB key instead of rewriting all data on disk.

Turning off eDrive

Did you see that warning earlier about being unable to turn off eDrive? Samsung in particular hasn’t publicly released a tool to disable eDrive. To disable eDrive, you need physical access to the drive so you can use the PSID printed on the label. You are supposed to enter this number into a manufacturer-supplied tool, which will disable eDrive and erase any data. I can’t see any reason to limit access to these tools, given that you need physical access to the disk. There’s also a Free Software implementation of these standards, so it’s not like the API is hidden. The Samsung PSID Revert tool is out there thanks to a Lenovo customer support leak (hah!), but I can’t link to it here. Samsung won’t provide the tool directly, and requires drives to be RMA’ed instead.

For this, I’m going to use open-source Self Encrypting Drive tools. I had to manually download the 2010 and 2015 VC++ redistributables for it to work. You can actually run it from within a running system, which leads to hilarious Windows-crashing results.

C:\Users\andre\msed> msed --scan
C:\Users\andre\msed> msed --yesIreallywanttoERASEALLmydatausingthePSID <YOURPSID> \\.\PhysicalDrive?

At this stage, your drive is in the "Ready" state and still has eDrive enabled. If you install Windows now, eDrive will be re-enabled automatically. Instead, use another Windows installation with Magician to disable eDrive. You can now install Windows as if you’ve never used eDrive in the first place.

Quick Benchmarks

After all this, I decided to run with software encryption anyways, just like I do on my MacBook with FileVault. On an i5-2500K, 8GB of RAM, with the aforementioned Samsung 850 EVO:

Benchmark screenshots (not reproduced here) compare three configurations: before turning on BitLocker, after BitLocker, and after enabling RAPID in Magician.

RAPID is a Samsung provided disk filter that aggressively caches disk accesses to RAM, at the cost of increased risk of data loss during a crash or power failure.

As you can see, enabling RAPID (6+ GB a second!) more than makes up for the slight IO performance hit with BitLocker. There’s a possible CPU performance impact using BitLocker as well, but in practice with Intel’s AES crypto extensions I haven’t seen much of an impact on CPU use.

A common question about BitLocker performance is if there is any sort of impact on the TRIM command used to maintain SSD performance. Since BitLocker runs at the operating system level, as long as you are using NTFS TRIM commands are properly passed through to the drive.

In Closing

I think it’s fair to say that if you want robust and fast SSD encryption on Windows, it’s easiest to buy a system pre-built with support for it. In a build-your-own scenario, you still need at least two Windows installations to configure eDrive. Luckily Windows 10 installs are pretty quick (10-20 minutes on my machine), but it’s still more effort than it should be. It’s a shame MacBooks don’t have support for any of this yet. Linux support is functional for basic use, with a new release coming out as I write. Otherwise, falling back to software encryption like regular BitLocker or FileVault 2 is certainly the best solution today.

Header photo is a Ford Anglia Race Car, photographed by Kieran White


Acquia Developer Center Blog: Drupal 8 Module of the Week: Workbench Moderation

Wed, 08/06/2016 - 15:29

Each day, between migrations and new projects, more and more features are becoming available for Drupal 8, the Drupal community’s latest major release. In this series, the Acquia Developer Center is profiling some prominent, useful, and interesting projects--modules, themes, distros, and more--available for Drupal 8. This week: Workbench Moderation.

Tags: acquia drupal planet, workbench, moderation, state change, workflow

Cheppers blog: Exploring Behat Ep. 2: Behat Scenario Selectors

Wed, 08/06/2016 - 10:57

In our previous post on the topic, we used output formatters to determine how Behat displays test results. Now we continue with exploring our possibilities on what tests to run together with Behat’s scenario selectors.


Virtuoso Performance: wordpress_migrate now has a Drupal 8 release

Wed, 08/06/2016 - 06:04
wordpress_migrate now has a Drupal 8 release

So, last week the next arena for me to work on came up: XML/JSON source plugins (see my companion piece for more on that). My intention had been to hold off on tackling wordpress_migrate until "nailing down" the XML parser plugin it depends on, but I decided that at least trying to prototype a WordPress migration would be a good test of the XML plugin. The initial attempt was promising enough that I kept going... and going - we've now got a (very) basic D8 dev release!

The UI

The user interface works generally as it did under Drupal 7 - upload the XML file on the first page of a wizard, configure taxonomies, then content, then review. It's important to note that this UI is for configuring the WordPress migration process - it creates the migrations, which you can then run using migrate_tools.

To begin, visit the migration dashboard at /admin/structure/migrate:

Clicking "Add import from WordPress" starts the wizard, where you can upload your XML file:

Clicking Next provides options for handling authors:

Next you can select which Drupal vocabularies (if any) to use for your WordPress tags and categories:

Then, choose the Drupal content types to use for WordPress posts and pages:

You may omit either one (actually, you could omit both if all you wanted to import were authors, tags and/or vocabularies!). Following that, for each content type you selected you can choose the Drupal text format to use for the body content:

In the review step, you can choose the machine name of the migration group containing your WordPress migrations, and also a prefix added to each generated migration's original machine name (if you were to import multiple WordPress blogs into your site, choosing distinct values here will keep them straight).

When you click Finish, you are brought to the migration dashboard for your new group:

Soon, you should be able to run your migration from this dashboard - for now, you'll need to use drush (which, really, you should use anyway for running migrations).

The drush command

Speaking of drush, you can configure your migrations through drush instead of stepping through the UI. Most of these options should be self-evident - do drush help wordpress-migrate-configure for more information.

drush wordpress-migrate-generate private://wordpress/nimportable.wordpress.2016-06-02.xml --group-id=test --prefix=my_ --tag-vocabulary=tags --category-vocabulary=wordpress_categories --page-type=page --page-text-format=restricted_html --post-type=article --post-text-format=full_html

Using drush for configuration has some advantages:

  1. If you're testing the import process, particularly tweaking the settings, it's much quicker to reissue the command line (possibly with minor edits) than to step through the UI.
  2. Scriptability!
  3. If your WordPress site is large, uploading the XML through the UI may run into file upload limits or timeout issues - alternatively, you can copy the XML file directly to your server and configure the migration to point to where you put it.
Ctools Wizard

This was my first time using the Ctools wizard API, and it's really easy to create step-by-step UIs - even dynamic ones (where not all the steps are determined up-front). Basically:

  1. Set up two routes in example.routing.yml, one for the landing page of your wizard, and one to reflect the specific steps (containing a {step} token).
  2. Create a class extending FormWizardBase.
  3. Implement getRouteName(), returning the step route from above.
  4. The key - implement getOperations() to tell the wizard what your steps are (and their form classes):

  public function getOperations($cached_values) {
    $steps = [
      'source_select' => [
        'form' => 'Drupal\wordpress_migrate_ui\Form\SourceSelectForm',
        'title' => $this->t('Data source'),
      ],
      'authors' => [
        'form' => 'Drupal\wordpress_migrate_ui\Form\AuthorForm',
        'title' => $this->t('Authors'),
      ],
      'vocabulary_select' => [
        'form' => 'Drupal\wordpress_migrate_ui\Form\VocabularySelectForm',
        'title' => $this->t('Vocabularies'),
      ],
      'content_select' => [
        'form' => 'Drupal\wordpress_migrate_ui\Form\ContentSelectForm',
        'title' => $this->t('Content'),
      ],
    ];
    // Dynamically add the content migration(s) that have been configured by
    // ContentSelectForm.
    if (!empty($cached_values['post']['type'])) {
      $steps += [
        'blog_post' => [
          'form' => 'Drupal\wordpress_migrate_ui\Form\ContentTypeForm',
          'title' => $this->t('Posts'),
          'values' => ['wordpress_content_type' => 'post'],
        ],
      ];
    }
    if (!empty($cached_values['page']['type'])) {
      $steps += [
        'page' => [
          'form' => 'Drupal\wordpress_migrate_ui\Form\ContentTypeForm',
          'title' => $this->t('Pages'),
          'values' => ['wordpress_content_type' => 'page'],
        ],
      ];
    }
    $steps += [
      'review' => [
        'form' => 'Drupal\wordpress_migrate_ui\Form\ReviewForm',
        'title' => $this->t('Review'),
        'values' => ['wordpress_content_type' => ''],
      ],
    ];
    return $steps;
  }

Particularly note how the content-type-specific steps are added based on configuration set in the content_select step, and how they use the same form class with an argument passed to reflect the different content types they're handling.
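For reference, the two routes from step 1 could look something like the following in example.routing.yml. The module, path, and class names here are hypothetical, and my understanding is that the `_wizard` route default is how the Ctools route enhancer locates your wizard class:

```yaml
# Landing page of the wizard.
example.wizard:
  path: '/admin/example/wizard'
  defaults:
    _wizard: '\Drupal\example\Wizard\ExampleWizard'
    _title: 'Example wizard'
  requirements:
    _permission: 'administer site configuration'

# Step route - getRouteName() on the wizard class should return this name.
example.wizard.step:
  path: '/admin/example/wizard/{step}'
  defaults:
    _wizard: '\Drupal\example\Wizard\ExampleWizard'
    _title: 'Example wizard'
  requirements:
    _permission: 'administer site configuration'
```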

Your form classes should look pretty much like any other form classes, with one exception - you need to put the user's choices where the wizard can find them. For example, in the VocabularySelectForm class:

  public function submitForm(array &$form, FormStateInterface $form_state) {
    $cached_values = $form_state->getTemporaryValue('wizard');
    $cached_values['tag_vocabulary'] = $form_state->getValue('tag_vocabulary');
    $cached_values['category_vocabulary'] = $form_state->getValue('category_vocabulary');
    $form_state->setTemporaryValue('wizard', $cached_values);
  }

Next steps

Now, don't get too excited - wordpress_migrate is very basic at the moment, and doesn't yet support importing files or comments. I had a couple of people asking how they could help move this forward, which was difficult when there was nothing there yet - now that we have the foundation in place, it'll be much easier for people to pick off one little (or big!) bit to work on. Having spent more time than I intended on this last week, I need to catch up in other areas, so I won't be putting much more time into wordpress_migrate immediately, but I'm hoping I can come back to it in a couple of weeks and find a few community patches to review and commit.

mikeryan Tue, 06/07/2016 - 23:04 Tags

Virtuoso Performance: Drupal 8 plugins for XML and JSON migrations

Wed, 08/06/2016 - 06:03
Drupal 8 plugins for XML and JSON migrations

I put some work in last week on implementing wordpress_migrate for Drupal 8 (read more in the companion piece). So, this seems a good point to talk about the source plugin work it's based on, supporting XML and JSON sources in migrate_plus.

History and status of the XML and JSON plugins

Last year Mike Baynton produced a basic D8 version of wordpress_migrate, accompanied by an XML source plugin. Meanwhile, Karen Stevenson implemented a JSON source plugin for Drupal 8. The two source plugins had distinct APIs and differing configuration settings, but when you think about it they really only differ in the parsing of the data. Unlike SQL sources, both are file-oriented (the data may be read via HTTP or from a local filesystem); both require a means to specify how to select an item ("row") from within the data; and both require a means to specify how to select fields from within an item. I felt there should be some way to share at least a common interface between the two, if not much of the implementation.

So, in migrate_plus I have implemented a Url source plugin (please weigh in with suggestions for a better name!) which (ideally) separates the retrieval of the data (using a fetcher plugin) from the parsing of the data (using a parser plugin). There are currently XML and JSON parser plugins (based on Mike Baynton's and Karen Stevenson's original work), along with an HTTP fetcher plugin. All of the former migrate_source_xml functionality is in migrate_plus now, so that module should be considered deprecated. Not everything from migrate_source_json is yet in migrate_plus - for example, the ability to specify HTTP headers for authentication, which in the new architecture should be part of the HTTP fetcher and thus available for both XML and JSON sources. Since no new work is going into migrate_source_json at this point, the best way forward for JSON migration support is to contribute to beefing up the migrate_plus version of this support.
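To make the fetcher/parser separation concrete, here is a stripped-down sketch of the idea. This is illustrative PHP only: the interface and class names are not the actual migrate_plus API, and the real JSON parser selects items by depth rather than by key.

```php
<?php
// Illustrative sketch: fetching bytes and parsing them into items are
// independent concerns, so any fetcher can be paired with any parser.
interface FetcherInterface {
    public function fetch(string $url): string;
}

interface ParserInterface {
    /** @return array[] One associative array of source fields per item. */
    public function parseItems(string $data, string $itemSelector): array;
}

class FileFetcher implements FetcherInterface {
    public function fetch(string $url): string {
        // A real HTTP fetcher would also handle headers, authentication, etc.
        return file_get_contents($url);
    }
}

class JsonParser implements ParserInterface {
    public function parseItems(string $data, string $itemSelector): array {
        // Simplified: treat the selector as the key holding the item list.
        return json_decode($data, TRUE)[$itemSelector] ?? [];
    }
}

// The same FileFetcher could just as easily feed an XmlParser implementing
// the same ParserInterface.
$parser = new JsonParser();
$items = $parser->parseItems('{"tags":[{"tag_slug":"music"}]}', 'tags');
echo $items[0]['tag_slug'], "\n"; // music
```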

Using the Url source plugin with the XML parser plugin

The migrate_example_advanced submodule of migrate_plus contains simple examples of both XML and JSON migrations from web services. Here, though, we'll look at a more complex real-world example - migration from a WordPress XML export.

The outermost element of a WordPress export is <rss> - within that is a <channel> element, which contains all the exported content - authors, tags and categories, and content items (posts, pages, and attachments). Here's an example of how tags are represented:

<rss>
  <channel>
    ...
    <wp:tag>
      <wp:term_id>6859470</wp:term_id>
      <wp:tag_slug>a-new-tag</wp:tag_slug>
      <wp:tag_name><![CDATA[A New Tag]]></wp:tag_name>
    </wp:tag>
    <wp:tag>
      <wp:term_id>18</wp:term_id>
      <wp:tag_slug>music</wp:tag_slug>
      <wp:tag_name><![CDATA[Music]]></wp:tag_name>
    </wp:tag>
    ...
  </channel>
</rss>

The source plugin configuration to retrieve this data looks like the following (with comments added for annotation). The configuration for a JSON source would be nearly identical.

source:
  # Specifies the migrate_plus url source plugin.
  plugin: url
  # Specifies the http fetcher plugin. Note that the XML parser does not actually use this,
  # see below.
  data_fetcher_plugin: http
  # Specifies the xml parser plugin.
  data_parser_plugin: xml
  # One or more URLs from which to fetch the source data (only one for a WordPress export).
  # Note that in the actual wordpress_migrate module, this is not built in to the wordpress_tags.yml
  # file, but rather saved to the migration_group containing the full set of WP migrations
  # from which it is merged into the source configuration.
  urls: private://wordpress/nimportable.wordpress.2016-06-03.xml
  # For XML, item_selector is the xpath used to select our source items (tags in this case).
  # For JSON, this would be an integer depth at which importable items are found.
  item_selector: /rss/channel/wp:tag
  # For each source field, we specify a selector (xpath relative to the item retrieved above),
  # the field name which will be used to access the field in the process configuration,
  # and a label to document the meaning of the field in front-ends. For JSON, the selector
  # will be simply the key for the value within the selected item.
  fields:
    -
      name: term_id
      label: WordPress term ID
      selector: wp:term_id
    -
      name: tag_slug
      label: Analogous to a machine name
      selector: wp:tag_slug
    -
      name: tag_name
      label: 'Human name of term'
      selector: wp:tag_name
  # Under ids, we specify which of the source fields retrieved above (tag_slug in this case)
  # represent our unique identifier for the item, and the schema type for that field. Note
  # that we use tag_slug here instead of term_id because posts reference terms using their
  # slugs.
  ids:
    tag_slug:
      type: string

Once you've fully specified the source in your .yml file (no PHP needed!), you simply map the retrieved source fields normally:

process:
  # In wordpress_migrate, the vid mapping is generated dynamically by the configuration process.
  vid:
    plugin: default_value
    default_value: tags
  # tag_name was populated via the source plugin configuration above from wp:tag_name.
  name: tag_name

Above we pointed out that the XML parser plugin does not actually use the fetcher plugin. In an ideal world, we would always separate fetching from parsing - however, in the real world, we're making use of existing APIs which do not support that separation. In this case, we are using PHP's XMLReader class in our parser - unlike other PHP XML APIs, this does not read and parse the entire XML source into memory, and thus is essential for dealing with potentially very large XML files (I've seen WordPress exports upwards of 200MB). This class processes the source incrementally, and completely manages both fetching and parsing, so as consumers of that class we are unable to make that separation. There is an issue in the queue to add a separate XML parser that would use SimpleXML - this will be more flexible (providing the ability to use file-wide xpaths, rather than just item-specific ones), and also will permit separating the fetcher.
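As a rough illustration of what a SimpleXML-based parser could do with the configuration above (illustrative code, not the migrate_plus implementation; note that a real WordPress export declares the wp: namespace on the <rss> element, which the earlier excerpt abbreviated away):

```php
<?php
// Apply the item_selector xpath to a WordPress-style export, then apply each
// field's relative selector to the matched items.
$xml = <<<'XML'
<rss xmlns:wp="http://wordpress.org/export/1.2/">
  <channel>
    <wp:tag>
      <wp:term_id>6859470</wp:term_id>
      <wp:tag_slug>a-new-tag</wp:tag_slug>
      <wp:tag_name><![CDATA[A New Tag]]></wp:tag_name>
    </wp:tag>
    <wp:tag>
      <wp:term_id>18</wp:term_id>
      <wp:tag_slug>music</wp:tag_slug>
      <wp:tag_name><![CDATA[Music]]></wp:tag_name>
    </wp:tag>
  </channel>
</rss>
XML;

$doc = new SimpleXMLElement($xml);
$doc->registerXPathNamespace('wp', 'http://wordpress.org/export/1.2/');

$slugs = [];
// item_selector: /rss/channel/wp:tag
foreach ($doc->xpath('/rss/channel/wp:tag') as $item) {
    $item->registerXPathNamespace('wp', 'http://wordpress.org/export/1.2/');
    // Field selector relative to the item: wp:tag_slug
    $slugs[] = (string) $item->xpath('wp:tag_slug')[0];
}
echo implode(', ', $slugs), "\n"; // a-new-tag, music
```

The trade-off discussed above applies here: SimpleXMLElement loads the whole document into memory, which is exactly what XMLReader avoids on 200MB exports.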

Much more to do!

What we have in migrate_plus today is (almost) sufficient for WordPress imports, but there's still a ways to go. The way fetchers and parsers interact could use some thought; we need to move the logically HTTP-specific stuff out of the general fetcher base class, etc. Your help would be much appreciated - particularly with JSON sources, since I don't have handy real-world test data for that case.

mikeryan Tue, 06/07/2016 - 23:03 Tags

Gizra.com: Drupal 8: Migrate Nodes with Attachments Easily

Wed, 08/06/2016 - 06:00

Drupal-8-mania is at its peak. Modules are being ported, blog posts are being written, and new sites are being coded, so we at Gizra decided to join the party.

We started with a simple site that will replace an existing static site. But we needed to migrate node attachments, and we just couldn’t find an existing solution. Well, it was time to reach out to the community:

Any example of #Drupal 8 migration of files/ images out there? (including copy from source into public:// )

— Amitai Burstein (@amitaibu) April 8, 2016

A few minutes after the tweet was published, we received a great hint from the fine folks at Evolving Web. They were already migrating files into Drupal 8 from Drupal 7, and were kind enough to blog about it.

However, we were still missing another piece of the puzzle, as we also wanted to migrate files from an outside directory directly into Drupal. I gave my good friend @jsacksick a poke (it’s easy, as he sits right in front of me), and he gave me the answer on a silver platter.

The post has a happy ending - we were able to migrate the files and attach them to a node!
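For readers curious what such a migration can look like, here is a hypothetical sketch of a Drupal 8 migration that copies files from an outside directory into public:// (the ids, field names, and the source plugin here are placeholders, not Gizra's actual code; it assumes core's file_copy process plugin):

```yaml
# Hypothetical migration sketch - all names are placeholders.
id: legacy_attachments
label: Import legacy file attachments
source:
  plugin: legacy_directory_listing   # custom source plugin listing files on disk
process:
  filename: filename
  uri:
    # Copies each file from the outside directory into public://.
    plugin: file_copy
    source:
      - source_path        # e.g. the absolute path on disk
      - destination_uri    # e.g. a public:// URI
destination:
  plugin: entity:file
```

The resulting file entities can then be referenced from a node migration's file field.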

Continue reading…

Categories: Elsewhere

DrupalEasy: DrupalEasy Podcast 178 - Rifftrax - Erik Peterson

Wed, 08/06/2016 - 04:40

Direct .mp3 file download.

Rifftrax is the movie commentary web product from former members of Mystery Science Theater 3000 (powered by Drupal). Erik Peterson (torgospizza) is the lead web architect, art “director”, and sometimes-writer (marketing and social media copy) for the site. Mike and Ryan interview Erik about the cultural phenomenon which is also a verb (to "MiSTy" a film is to watch it while riffing).

Interview

Erik is a contributor to Drupal Commerce, and to Ubercart before it. He is also a co-maintainer of Commerce Stripe and the creator of Commerce Cart Link.

DrupalEasy News
Three Stories
Sponsors
Picks of the Week
Upcoming Events
Follow us on Twitter

Five Questions (answers only)
  1. Photography such as this Raven photo
  2. PhpStorm 10 (EAP)
  3. Giraffe @ SD Zoo
  4. Manage a team, teach for passion. Play music in my spare time. (What’s that?)
  5. Matt Glaman: he’s well-versed in pretty much everything, and wrote a D8 cookbook.
Intro Music

MST3K Theme Song, California Lady (Gravy)

Subscribe

Subscribe to our podcast on iTunes, Google Play or Miro. Listen to our podcast on Stitcher.

If you'd like to leave us a voicemail, call 321-396-2340. Please keep in mind that we might play your voicemail during one of our future podcasts. Feel free to call in with suggestions, rants, questions, or corrections. If you'd rather just send us an email, please use our contact page.

Categories: Elsewhere

Acquia Developer Center Blog: 5 Mistakes to Avoid on Your Drupal Website - Number 1: Architecture

Tue, 07/06/2016 - 20:40

Drupal is one of the most flexible content management systems in existence. In this blog series, I'll go over five mistakes to avoid on your Drupal website: architecture, security, performance, infrastructure, and website lifecycle management.

From an architecture standpoint, these are the most vital decisions you'll make to ensure the success and performance of your Drupal website.

Tags: acquia, drupal planet
Categories: Elsewhere

Evolving Web: Drupal North Summit is Coming to Montreal - June 16-19

Tue, 07/06/2016 - 17:23

If you haven't heard, Drupal North, the regional summit for all things Drupal, is coming up June 16-19 in Montreal. As long-time Drupal community organizers, we're really excited to be sponsoring the 4-day summit at the diamond level.

read more
Categories: Elsewhere

Drupal.org frontpage posts for the Drupal planet: Matthew Lechleider Community Spotlight

Tue, 07/06/2016 - 16:38

Matthew Lechleider (Slurpee) has been active in the Drupal community for over a decade, and his hard work has directly led to an incredible amount of community growth. The founder of a Chicago Drupal User Group and our community’s chief advocate for the Google Summer of Code and Google Code-In programs, Matthew has been a key part of growing the Drupal project and our global community. Here's his Drupal story.

“In 2005, I was a full-time university student working at an internet service provider so I could put myself through school,” Matthew said. “I was working as a network/systems person, and since I was at an ISP we had a lot of people calling us and asking the same questions over and over. At the time, I knew a bit about web development and programming, and I thought, ‘I bet I could make a website that would answer these people’s questions.’ And that’s how I found Drupal. I proposed it to my boss, and the next thing I knew I was working on a full-time project getting paid to work with Drupal 4. I built the website and it was really popular — and we noticed that the phone calls went down. We were tracking our support calls at the 24-hour call center, and when people called for help, we would refer them to the website as a resource. So it really was a big help."

After that, the next steps were logical for Matthew. He put together a Drupal meet-up at his Chicago-based company. The group grew quickly each month, and in no time at all, people were asking about training and “Introduction to Drupal” classes. "I started teaching those classes,” Matthew said, "and then next thing you know, people were asking for private trainings and businesses were asking me to come to their offices and train new Drupal developers. When the people I was training came back with advanced questions, I realized how much money they were making, so in 2008 I went from being a network engineer to focusing on Drupal full-time. Since then, I’ve started a Drupal business and worked on some very big projects."

"I never thought I would be a web developer, but I fell into Drupal, saw how great and easy it was, and decided it was a good thing to be a part of,” Matthew added.

Over his time in Drupal, Matthew has converted a lot of Chicagoan web developers into Drupal users. “It's pretty cool to be part of something bigger than yourself,” Matthew said. “It's like a big tidal wave — I feel like I’ve been riding this Drupal wave for a long time. I didn’t think I’d still be working with Drupal this many years later."

Why Slurpee?

Many people in the community know Matthew only by his username, Slurpee. But how did he come by that handle?

"I was probably eight or nine years old, learning about computers, and I had some nicknames I was playing around with. But it’s like that movie ‘Hackers’: you have to have your handle, you have to have your identity. It was the middle of a hot July, and as I was figuring out what I should call myself, I realized I had about 20 empty slurpee cups surrounding my computer. I really do like slurpees. So that’s where that came from."

Drupal 8

As a long-time Drupal user and evangelist, Matthew is incredibly excited for Drupal 8.

"I have a traditional programming background in computer science, and Drupal wasn’t always the most professional CMS,” Matthew said. “Now, I’m very excited about Drupal 8 — it’s like Drupal grew up and went to college and got a graduate degree. Drupal 8 is the way Drupal should have been a long time ago. I’ve built some of the biggest Drupal projects in the world, and when you’re talking to those kinds of massive clients, it’s hard to sell systems like Drupal four, five, and six. They’re out paying the big money for the huge enterprise solutions, and Drupal 8 is big. It’s ready to go, and I really think it's on par with everything else these days."

When asked about his favorite Drupal 8 feature, Matthew said, “As I work with big sites, it’s been a big struggle to deal with continuous integration, which is resolved in Drupal 8. As a person with a sysadmin background, I think integration is probably the thing that will save the most headaches. It’s going to be a very cool tool to work with."

Google Summer of Code and Google Code-In

Matthew is also heavily involved in several of Google’s student coding programs, and runs point on keeping Drupal in the Google Summer of Code (GSoC) and Google Code-In (GCI) programs.

“I’m a bit younger than other people in the community, and I started with Drupal when I was 19 years old or so. I was going to college when I got into Drupal, and I remember talking to people about it at school... and nobody had any clue about it, not even my teachers,” Matthew said. “Fast forward several years: I was working in the community, specifically on the VoIP Drupal module suite. At the time, we had a student come to us about GSoC —this was in 2012— and he said, ‘hey, I want to work on this project for you guys as part of the GSoC. Can you be a mentor?’ I said sure, and that’s all I was that first year. The next year, nobody wanted to organize GSoC for Drupal, so I stepped up and said that I liked the program and would get the ball rolling. I spent a lot of time revamping the things that Drupal does with Google, got us accepted back into the program, and we’ve been participating every year since, both in the Google Summer of Code, which is for university students, and the Google Code-In, which is for students ages 13 through 17.

“Getting back to what I said earlier about being a student, when nobody knew what Drupal was, it was a bit harder to get involved. Knowing that there’s a program like this in schools around the world, pushing these projects to students, I think it’s critical for us to participate. If we want this project to continue to be successful, we have to focus on younger people. They’re the ones who are adapting and changing the tech as we know it, and if they don’t know about Drupal, they won’t use it. But if we embrace these kids and show them how awesome Drupal can be — we have impressive students doing impressive stuff— it’s great for everyone. That’s why I think it’s super important that we spend so much time on the GSoC and GCI programs."

For those who are interested in getting involved, there are both GSoC and GCI Drupal Groups, a documentation guide for GCI, and a guide for the GSoC students who are just getting started. There is also a #drupal-google IRC channel on Freenode.

“It’s fairly easy to get involved,” Matthew said. “If you’re interested in helping, join the groups — we send out lots of updates — or you can contact me directly. You can find us on IRC, or if you know of a student who wants to get involved, all they need to do is read the documentation. It covers everything. We spent a lot of time on that documentation,” Matthew said.

Being part of something bigger

When asked what drives him to participate and organize the community, Matthew’s answer was simple.

"Honestly, it’s all about being part of something bigger than myself,” he said. "I’ve been participating in computer hacker nerd groups since I was a kid, and back then (in the 90s) it was fairly difficult to find out about these kinds of groups. I attended something called 2600 — does anyone else remember that? — I thought it was so cool to be a part of something like that. I’ve been in IRC every day for over 20 years participating in communities, and it’s so exciting to be part of a huge thing bigger than myself. Now I’m getting recognized for it in Drupal, which is cool,” he added.

"Drupal is by far the largest community I’ve participated in from that point of view, and it’s been an exciting ride. Honestly, I thought it would have been over by now — I’ve been part of the community for over 10 years and that’s a long time — but I’m excited to see where it goes. For me, Drupal is more of a lifestyle than a career. Technically I’m a web developer, but I spend so much of my time volunteering and going to events. I can go to a DrupalCon in another country and see my friends from all over the world. Seeing how mature our community has become over the past decade continues to excite me, and I plan to be part of this for as long as possible."

“The money’s not bad either,” he added. “I live a comfortable life working from home. I like to travel a lot, go to music festivals, or go on Phish tour for weeks at a time, but I can still work. I have a mobile hotspot, and just bring my laptop with me. I can’t tell you how many times I’ve been set up at a camping spot in the middle of nowhere, working on my laptop, or how many times I’ve been in a random country looking for internet so I can get some work done.”

“Working remotely, or working from home, gives me a lot of freedom. It lets me go to the skate park when all the kids are at school— I skateboard, and bring my board with me when I travel — and there’s time for me to go running every day. It’s actually one of my favorite activities, my running break — halfway through the day, I get up and I go running. It helps me keep my sanity."

"I also like to snowboard,” Matthew continued. “I’ve been skateboarding since I was 5 — since I could walk and talk I’ve been skateboarding — and I started snowboarding a couple years after that. Going to DrupalCon Denver was exciting, and there was a DrupalCon snowboard trip. Lots of people went to Breckenridge together, and that’s the other cool thing about the community: these people are fun, they like to go out and do things together."

Helping camps and communities around the world

After a decade in the Drupal community, Matthew has a lot of great memories of traveling, teaching, and sharing.

“I was invited to organize the first DrupalCamp in Sri Lanka and hold training classes there,” Matthew said. “It’s an island just off the coast of India, and when I went in 2012, they had just finished having a civil war. I was one of the only foreigners there, and there was military presence walking around. I wasn’t allowed to leave my hotel unless the Drupal people came and picked me up, and they were absolutely wonderful. They were so nice and appreciative, and so many people were incredibly smart and really talented. After I did a training or presentation, they all had a million questions and we ended up talking for hours. And that was when I had an ‘aha!’ moment and really felt like I was part of something bigger. It drove home that it’s my responsibility to spread the word of Drupal as much as possible."

“That was actually one of the trainings that I did when I was traveling a lot,” Matthew added. “I literally traveled for years, living out of hostels and couch surfing. I was being contracted at the time to go teach Drupal in Europe, Asia, and Australia."

And for Matthew, one of his proudest accomplishments is Drupal’s participation in the GSoC and GCI programs.

"I think it’s cool that I maintain Drupal’s relationship with Google,” he said. "It’s Google. They’re the biggest tech company ever, and I have this relationship with multiple people at their offices. They’re all super nice and it’s great that they’re pushing this open source stuff. They want our community's feedback, and I think it’s really cool that I can represent Drupal with Google. They’re trying to give us money and get us to do more with the students. It’s something I think is really great."

The importance of networking

When asked if he had any advice for new Drupal users, Matthew had several thoughts to share.

“Find your local communities and participate,” he advised. “Go to as many events as you can. If you have to drive a couple of hours to go to a Drupal camp, do it. If there isn’t a local community in your area, start one. If you go to meetup.com and post a similar topic, even if just one other person shows up, that’s cool and the community will grow. Drupal is software, but it helps to get some face time in with other people. Network. Go out. Have some coffee (or beer). Hang out. Bring your laptop and start sharing ideas — you’ll be surprised at how welcoming the Drupal community is. I can’t tell you how many times I’ve sat at the bar until the bar kicked us out, just helping people fix websites and giving free training.

“If you’re struggling, it’s important to network with other groups — non-Drupal groups — like a web-dev group or an open source group or a Linux group or something. That’s what helped me when I did the first DrupalCamp in Chicago in 2008. We were thinking 100 people might show up, but we did a lot of networking. I contacted every tech user group in Milwaukee, Detroit, southern Illinois, and Minnesota. I found every group that was related to web or open source or Drupal, and I contacted every one of their organizers. Within a couple weeks we had over 200 registrations, and people were emailing us about hotel accommodations. It was a big wow moment for us."

As for those who are involved in the community and want to do more, Matthew has a few tips.

“If you want to teach training, my advice is not to focus on the curriculum. Everywhere I go, everyone wants me to submit my curriculum. I’ve taught a lot of classes, and I’ve learned that every group of students is going to be different. I can’t tell you how many times I had a curriculum and thought I was going to go through the whole thing, and instead I wind up talking about something else. Make sure you understand your students, and teach them what they want to learn about.

“And it’s not even all that difficult,” he added. “Here’s what I do for Global Training Days. You know what happens when you make an event free? You get all sorts of people registering. So I keep the classes as small as possible — whenever anybody registers for the event, I don’t give them the address unless they give me their phone number. When someone registers, I call them and ask, 'What do you want to get from this training?' 'What is your experience?' — and I can’t tell you how many people have thanked me for that. Requiring a phone call with registration helped keep the classes smaller and limited them to people who were actually new to Drupal and the right audience for a GTD. So, that’s my advice — know your audience. Don’t assume everyone knows about Drupal. Don’t worry about a set curriculum — go with what your students are actually looking to gain from the experience."

For those who are looking for more tips, tricks, and knowledge from Matthew, his wisdom will soon be available at your fingertips in paperback form.

“I’m working with Packt publishing on a collection of ‘Drupal 8 Administration Cookbook Recipes.’ They’re step-by-step tutorials that teach admins to do everything from getting started with Drupal to doing advanced site building stuff, and it's coming out later this summer. I’m pretty excited about it! I’ve always read a lot of the Drupal books and I’m excited to make a contribution. I started on my own but have had a Drupal business for several years, with several people who have worked for me. The book is literally just a collection of all the same questions every client always asks. It’s almost like a tell-all: here’s step-by-step documentation for how to do everything. Now stop asking us,” he added with a laugh.

Front page news: Planet Drupal
Categories: Elsewhere

OpenLucius: How to build a Drupal 8 website | Part 1: initialize theme

Tue, 07/06/2016 - 15:34

In this blog series I will explain how we built a new multilingual Drupal 8 site. During the process, the required HTML, CSS and JavaScript were supplied statically and used to build a custom Drupal 8 theme; we did not use a Drupal core or contrib theme.

Categories: Elsewhere

Platform.sh: Drupal 8 starter kit now with PHP 7 and Composer!

Tue, 07/06/2016 - 14:00

Drupal 8.1 has made a significant shift toward embracing Composer as the standard way to install and manage a Drupal site. Starting today, with Drupal 8.1.2, Platform.sh’s Drupal 8 templates are also Composer-based, and default to using PHP 7.
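For readers new to Composer-managed Drupal, a project's composer.json pins core as an ordinary dependency, roughly like this (a minimal sketch; the exact constraints and structure of Platform.sh's template may differ):

```json
{
  "require": {
    "php": ">=7.0",
    "drupal/core": "~8.1"
  }
}
```

With core declared this way, `composer update` handles Drupal upgrades the same way it handles any other package.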

Categories: Elsewhere

Drop Guard: How Drupal shops can sell SLAs with 40% more profit & recurring revenue

Tue, 07/06/2016 - 13:00

Most Drupal shops depend on a transactional business model that requires hunting for new projects every month. Building Drupal applications is a great base for adding more value to your business: selling support contracts grows your recurring revenue and delivers continuous value for clients who have built their online business with Drupal. Used strategically as a lead-in to support contracts, the transactional project business can help Drupal shops grow quickly and sustainably.

You will learn from real examples how other Drupal shops grow recurring revenue from their project business and deliver support SLAs at 40% lower personnel cost.

 

Tags: Drupal, Drupal shops, SLA, Drupal Planet, Business
Categories: Elsewhere

Arpit Jalan: Second week of Google Summer of Code

Tue, 07/06/2016 - 12:26
TL;DR I have already created services for the functions that bridge the module to the API, implementing the features offered by the Google Cloud Vision API, thus completing my first step towards integrating the Google Cloud Vision API into Drupal 8. This week I worked on generating error reports if the API key is not set by the user, and on developing tests to check the API key configuration and whether the key is stored successfully.
The first step towards the integration of the Google Cloud Vision API into Drupal 8 was completed with the functions moved to services. I posted the patch for review by my mentors. They provided their suggestions on the patch, which I worked through, each step resulting in better, cleaner code.
I would also like to share that this week our team expanded from three to four members. Yes! Eugene Ilyin, the original maintainer of the Google Vision API module, has joined us to mentor me through the project.
Now, coming to the progress of the project: the schedule says I need to configure the Google Cloud Vision API at the taxonomy field level, so that end users may use taxonomy terms to get the desired response from the API. However, the module already used this configuration for Label Detection, and in a discussion with my mentors it was agreed that the current configuration does not need any changes, as the present behaviour is clear and obvious enough for developers to use easily; rather, we should work on implementing runtime verification and storage of the API key supplied by end users. I was required to write and implement code that would report an error if the API key was not saved prior to use of the module, and also to write tests verifying the configuration and ensuring the storage of the key.
I created an issue for the new task, Implement a runtime requirement checking if API key is not set, in the module's issue queue, and started coding the requirement. I created patches and posted them in the issue for review by my mentors. I made the suggested changes and have now submitted the patch implementing the required functionality. Meanwhile, the previous issue, Moving the common functions to services, was also under review. I worked on that issue too, implementing minor suggestions before it was ready to be accepted and committed. And finally, my first patch in this project has been accepted and the changes are reflected in the module.
At the end of these two weeks, I learnt about services and dependency injection, which prove to be very useful concepts in Drupal 8. I also gained experience writing tests that check the runtime functionality of the module.
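As a hedged illustration of the services pattern discussed above (the module, class, and service names here are placeholders, not necessarily the real Google Vision API module's), a bridge to an external API is registered in the module's *.services.yml, and its dependencies are injected rather than constructed by hand:

```yaml
# mymodule.services.yml - hypothetical names throughout.
services:
  mymodule.vision_api:
    class: Drupal\mymodule\VisionApi
    # Core's HTTP client and config factory are injected, so the class
    # never calls \Drupal::service() itself and is easy to unit test.
    arguments: ['@http_client', '@config.factory']
```

The injected config factory is also what lets such a class read the stored API key at runtime and report an error when it is missing.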
Categories: Elsewhere

Pronovix: Brightcove Video Connect for Drupal 8 - Part 4: Including Videos & Playlists in Drupal content

Tue, 07/06/2016 - 12:25

This post is a step-by-step guide to the following tasks:

  1. Add Brightcove Video field
  2. Browse Brightcove Videos using the Entity Browser module
  3. Upload Brightcove Videos inside Drupal content using the Inline Entity Form module
  4. Browse and upload Brightcove Videos inside Drupal content from the same field using the Entity Browser IEF submodule
  5. Browse and upload Brightcove Videos from CKEditor
Categories: Elsewhere

Into my Galaxy: GSoC’ 16: Coding Week #2

Tue, 07/06/2016 - 09:05

Google Summer of Code (GSoC) serves as a venue for students to get in touch with new technologies and become part of many interesting open source organisations. Thanks to Google for coordinating this initiative.

The last week was a really productive one for me in all aspects. I could manage my time better and focus more on my project. The climate here has also improved a lot: it’s now rainy, which has reduced the heat and humidity to a large extent. My home, Kerala, in the southern part of India, usually enjoys a fair climate.

If you are searching for a flashback of my previous GSoC’ 16 ventures, please have a look at these posts.

So, as you were expecting, now let’s talk about my activities in the second week of GSoC. The second week commenced with more elaborate planning of the tasks to be carried out in the coming days. My main aim for the week was to discover more Drupal hooks and apply them in my project code.

Wondering, what are hooks?

Hooks, in simple terms, are PHP functions with special names that Drupal recognizes and invokes, letting modules interact with core and extend its functionality. api.drupal.org gives wonderful explanations of the various hooks in action and how they have changed across Drupal versions.

Topics I have covered:

I would like to take this opportunity to share with you some of the concepts I could grasp from the previous week of GSoC.

  • hook_install
    • This performs setup tasks when the module is installed.
  • hook_schema
    • This defines the database schema for the module. It is invoked once the module is enabled, and resides in the module's .install file.
  • hook_theme
    • This registers the module's theme implementations.
  • hook_permission
    • This defines the module's user permissions, granting and restricting access by role.
  • Configuration API
    • Drupal variables are replaced by the Configuration API; you define the properties and default values in the new format.
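The hooks above can be sketched roughly as follows. This is a minimal Drupal 7-style illustration (hook_permission was replaced by *.permissions.yml in Drupal 8), and every module, table, and permission name here is a placeholder:

```php
<?php
// Hypothetical mymodule.install sketch - all names are placeholders.

/**
 * Implements hook_schema(). Declares the module's tables; Drupal
 * creates them automatically when the module is installed.
 */
function mymodule_schema() {
  $schema['mymodule_item'] = array(
    'description' => 'Stores one row per tracked item.',
    'fields' => array(
      'id' => array('type' => 'serial', 'not null' => TRUE),
      'created' => array('type' => 'int', 'not null' => TRUE, 'default' => 0),
    ),
    'primary key' => array('id'),
  );
  return $schema;
}

/**
 * Implements hook_permission(). Defines permissions that can be
 * granted to or withheld from roles on the permissions page.
 */
function mymodule_permission() {
  return array(
    'administer mymodule' => array(
      'title' => t('Administer My Module'),
    ),
  );
}
```

Drupal discovers these implementations purely by their names (module name + hook name), which is what makes hooks an extension mechanism rather than an explicit registration API.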

Hoping to learn more Drupal concepts in the days ahead. I will be posting the updates regularly. Stay tuned for more Drupal concepts.


Categories: Elsewhere

Jeff Geerling's Blog: DrupalCamp St. Louis 2016 will be September 10-11

Tue, 07/06/2016 - 04:40

I wanted to post this as a 'save the date' to any other midwestern Drupalists—here in St. Louis, we'll be hosting our third annual DrupalCamp on September 10 and 11 (Saturday and Sunday) in St. Louis, MO. We'll have sessions on Saturday, and a community/sprint day Sunday, and just like last year, we'll record all the sessions and post them to the Drupal STL YouTube channel after the Camp.

We're still working on a few details (nailing down the location, getting things set up so we can accept session submissions and registrations, etc.), but if you're interested in coming, please head over to the official DrupalCamp STL 2016 website and sign up to be notified of further information!

Categories: Elsewhere
