Elsewhere

Tanguy Ortolo: GNU/Linux graphic sessions: suspending your computer

Planet Debian - Wed, 23/07/2014 - 14:45

Major desktop environments such as Xfce or KDE have a built-in computer suspend feature, but when you use a lighter alternative, things are a bit more complicated, because basically: only root can suspend the computer. There used to be a standard solution to that, using a D-Bus call to a running daemon upowerd. With recent updates, that solution first stopped working for obscure reasons, but it could still be configured back to be usable. With newer updates, it stopped working again, but this time it seems it is gone for good:

$ dbus-send --system --print-reply \
    --dest='org.freedesktop.UPower' \
    /org/freedesktop/UPower org.freedesktop.UPower.Suspend
Error org.freedesktop.DBus.Error.UnknownMethod: Method "Suspend" with signature "" on interface "org.freedesktop.UPower" doesn't exist

The reason seems to be that upowerd is not running, because it no longer provides an init script, only a systemd service. So, if you do not use systemd, you are left with one simple and stable solution: defining a sudo rule to start the suspend or hibernation process as root. In /etc/sudoers.d/power:

%powerdev ALL=NOPASSWD: /usr/sbin/pm-suspend, \
    /usr/sbin/pm-suspend-hybrid, \
    /usr/sbin/pm-hibernate

That allows members of the powerdev group to run sudo pm-suspend, sudo pm-suspend-hybrid and sudo pm-hibernate, which can be used with a key binding manager such as your window manager's or xbindkeys. Simple, efficient, and contrary to all that ever-changing GizmoKit and whatsitd stuff, it has worked and will keep working for years.
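
As an illustration (not from the original post), an xbindkeys binding that pairs with the sudo rule above might look like the following; the XF86Sleep keysym is an assumption, so substitute whatever your keyboard actually reports (xbindkeys -k will tell you):

# ~/.xbindkeysrc -- sketch only; pairs with the %powerdev sudoers rule above
# XF86Sleep is assumed; run "xbindkeys -k" and press your sleep key to confirm
"sudo pm-suspend"
    XF86Sleep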

Categories: Elsewhere

Code Karate: Drupal 7 Splashify

Planet Drupal - Wed, 23/07/2014 - 13:36
Episode Number: 158

In this episode we cover the Splashify module. This module is used to display splash pages or popups. There are multiple configuration options available to fit your site needs.

In this episode you will learn:

  • How to set up Splashify
  • How to configure Splashify
  • How to get Splashify to use the Mobile Detect plugin
  • How Splashify displays to the end user
  • How to be awesome
Tags: Drupal, Drupal 7, Drupal Planet, UI/Design, Javascript, Responsive Design
Categories: Elsewhere

Francesca Ciceri: Adventures in Mozillaland #3

Planet Debian - Wed, 23/07/2014 - 13:04

Yet another update from my internship at Mozilla, as part of the OPW.

A brief one, this time, sorry.

Bugs, Bugs, Bugs, Bacon and Bugs

I've continued with my triaging/verifying work and I feel now pretty confident when working on a bug.
On the other hand, I think I've learned more or less what was to be learned here, so I must think (and ask my mentor) where to go from now on.
Maybe focus on a specific Component?
Or steadily work on a specific channel for both triaging/poking and verifying?
Or try my hand at patches?
Not sure, yet.

Also, I'd like to point out that, while working on bug triaging, the developer's answers on the bug report are really important.
Comments like this help me as a triager to learn something new, and be a better triager for that component.
I do realize that developers cannot always take the time to put basic information on how to better debug their component/product into comments, but trust me: this will make you happy in the long run.
A wiki page with basic information on how to debug problems for your component is also a good idea, as long as that page is easy to find ;).

So, big shout-out for MattN for a very useful comment!

Community

After much delaying, we finally managed to pick a date for the Bug Triage Workshop: it will be on July 25th. The workshop will be an online session focused on what triaging is, why it is important, how to reproduce bugs, and what information to ask the reporter for to make a bug report as complete and useful as possible.
We will do it in two different time slots, to accommodate various timezones, and it will be held on #testday on irc.mozilla.org.
Take a look at the official announcement and subscribe on the event's etherpad!

See you on Friday! :)

Categories: Elsewhere

Steinar H. Gunderson: The sad state of Linux Wi-Fi

Planet Debian - Wed, 23/07/2014 - 12:45

I've been using 802.11 on Linux now for over a decade, and to be honest, it's still a pretty sad experience. It works well enough that I mostly don't care... but when I care, and try to dig deeper, it always ends up in the answer “this is just crap”.

I can't say exactly why this is; between the Intel cards I've always been using, the Linux drivers, the firmware, the mac80211 layer, wpa_supplicant and NetworkManager, I have no idea who is supposed to get all these things right, and I have no idea how hard or easy they actually are to pull off. But there are still things annoying me frequently that we should really have gotten right after ten years or more:

  • Why does my Intel card consistently pick 2.4 GHz over 5 GHz? The 5 GHz signal is just as strong, and it gives a less crowded 40 MHz channel (twice the bandwidth, yay!) instead of the busy 20 MHz channel the 2.4 GHz one has to share. The worst part is, if I use an access point with band-select (essentially forcing the initial connection to be to 5 GHz—this is of course extra fun when the driver sees ten APs and tries to connect to all of them over 2.4 in turn before trying 5 GHz), the driver still swaps onto 2.4 GHz a few minutes later!
  • Rate selection. I can sit literally right next to an AP and get a connection on the lowest basic rate (which I've set to 11 Mbit/sec for the occasion). OK, maybe I shouldn't trust the output of iwconfig too much, since rate is selected per-packet, but then again, when Linux supposedly has a really good rate selection algorithm (minstrel), why are so many drivers using their own instead? (Yes, hello “iwl-agn-rs”, I'm looking at you.)
  • Connection time. I dislike OS X pretty deeply and think that many of its technical merits are way overblown, but it's got one thing going for it; it connects to an AP fast. RFC4436 describes some of the tricks they're using, but Linux uses none of them. In any case, even the WPA2 setup is slow for some reason, it's not just DHCP.
  • Scanning/roaming seems to be pretty random; I have no idea how much thought really went into this, and I know it is a hard problem, but it's not unusual at all to be stuck at some low-speed AP when a higher-speed one is available. (See also 2.4 vs. 5 above.) I'd love to get proper support for CCX (Cisco Client Extensions) here, which makes this tons better in a larger Wi-Fi setting (since the access point can give the client a lot of information that's useful for roaming, e.g. “there's an access point on channel 52 that sends its beacons every 100 ms with offset 54 from mine”, which means you only need to swap channel for a few milliseconds to listen instead of a full beacon period), but I suppose that's covered by licensing or patents or something. Who knows.

With now a billion mobile devices running Linux and using Wi-Fi all the time, maybe we should have solved this a while ago. But alas. Instead we get access points trying to layer hacks upon hacks to try to force clients into making the right decisions. And separate ESSIDs for 2.4 GHz and 5 GHz.
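
For the 2.4/5 GHz problem above, one blunt client-side workaround (my addition, not something the post suggests) is to pin a network to 5 GHz in wpa_supplicant, assuming you talk to wpa_supplicant directly rather than through NetworkManager; the SSID and passphrase below are placeholders:

# /etc/wpa_supplicant/wpa_supplicant.conf -- sketch; SSID and passphrase are placeholders
network={
    ssid="ExampleNet"
    psk="example-passphrase"
    # limit scanning/association to 5 GHz channels so the driver cannot fall back to 2.4 GHz
    freq_list=5180 5200 5220 5240 5745 5765 5785 5805 5825
}

This papers over the symptom rather than fixing band, rate or roaming selection, which is rather the point of the complaint.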

Augh.

Categories: Elsewhere

Andrew Pollock: [tech] Going solar

Planet Debian - Wed, 23/07/2014 - 07:36

With electricity prices in Australia seeming to be only going up, and solar being surprisingly cheap, I decided it was a no-brainer to invest in a solar installation to reduce my ongoing electricity bills. It also paves the way for getting an electric car in the future. I'm also a greenie, so having some renewable energy happening gives me the warm and fuzzies.

So today I got solar installed. I've gone for a 2 kW system, consisting of eight 250 watt Seraphim panels (I'm not entirely sure which model) and an Aurora UNO-2.0-I-OUTD inverter.

It was totally a case of decision fatigue when it came to shopping around. Everyone claims the particular panels they want to sell are the best, and it's pretty much impossible to make a decent assessment of their claims. In the end, I went with the Seraphim panels because they scored well on the PHOTON tests. That said, I've had other solar companies tell me the PHOTON tests aren't indicative of Australian conditions, so it's hard to know who to believe. I chose Seraphim because of the PHOTON test results, and because they're also apparently one of the few panels that pass the Thresher test, which tests for durability.

The harder choice was the inverter. I'm told that yield varies wildly by inverter, and narrowed it down to Aurora or SunnyBoy. Jason's got a SunnyBoy, and the appeal with it was that it supported Bluetooth for data gathering, although I don't much care for the aesthetics of it. Then I learned that there was a WiFi card coming out soon for the Aurora inverter, and that struck me as better than Bluetooth, so I went with the Aurora inverter. I discovered at the eleventh hour that the model of Aurora inverter that was going to be supplied wasn't supported by the WiFi card, but was able to switch models to the one that was. I'm glad I did, because the newer model looks really nice on the wall.

The whole system was up and running just in time to catch the setting sun, so I'm looking forward to seeing it in action tomorrow.

Apparently the next step is for Energex to come out and replace my analog power meter with a digital one.

I'm grateful that I was able to get Body Corporate approval to use some of the roof. Being on the top floor helped make the installation more feasible too, I think.

Categories: Elsewhere

Matthew Palmer: Per-repo update hooks with gitolite

Planet Debian - Wed, 23/07/2014 - 06:45

Gitolite is a popular way to manage collections of git repositories entirely from the command line – it’s configured using configuration stored in a git repo, which is nicely self-referential. Providing per-branch access control and a wide range of addons, it’s quite a valuable system.

In recent versions (3.6), it added support for configuring per-repository git hooks from within the gitolite-admin repo itself – something which previously required directly jiggering around with the repo metadata on the filesystem. It allows you to “chain” multiple hooks together, too, which is a nice touch. You can, for example, define hooks for “validate style guidelines”, “submit patch to code review” and “push to the CI server”. Then for each repo you can pick which of those hooks to execute. It’s neat.
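
As a rough sketch of what that configuration can look like (the repo, group and hook-script names here are invented; the scripts themselves live under local/hooks/repo-specific in the admin repo, and the feature has to be enabled in .gitolite.rc):

# conf/gitolite.conf in the gitolite-admin repo -- sketch only, names are placeholders
repo widgets
    RW+                      =   @developers
    option hook.post-receive =   check-style notify-ci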

There’s one glaring problem, though – you can only use these chained, per-repo hooks on the pre-receive, post-receive, and post-update hooks. The update hook is special, and gitolite wants to make sure you never, ever forget it. You can hook into the update processing chain by using something called a “virtual ref”; they’re stored in a separate configuration directory, use a different syntax in the config file, and if you’re trying to learn what they do, you’ll spend a fair bit of time on them. The documentation describes VREFs as “a mechanism to add additional constraints to a push”. The association between that and the update hook is one you get to make for yourself.

The interesting thing is that there’s no need for this gratuitous difference in configuration methods between the different hooks. I wrote a very small and simple patch that makes the update hook configurable in exactly the same way as the other server-side hooks, with no loss of existing functionality.

The reason I’m posting it here is that I tried to submit it to the primary gitolite developer, and was told “I’m not touching the update hook […] I’m not discussing this […] take it or leave it”. So instead, I’m publicising this patch for anyone who wants to locally patch their gitolite installation to have a consistent per-repo hook UI. Share and enjoy!

Categories: Elsewhere

Drupal Association News: Building the Drupal Community in Vietnam: Seeds for Empowerment and Opportunities

Planet Drupal - Wed, 23/07/2014 - 06:38

With almost 90 million people, Vietnam has the 13th largest population of any nation in the world. It's home to a young generation that is very active in adopting innovative technologies, and in the last decade, the country has been steadily emerging as an attractive IT outsourcing and staffing location for many Western software companies.

Yet amidst this clear trend, Drupal has emerged only slowly in Vietnam, and across Asia, as a leading, enterprise-ready content management framework (CMF). However, this is changing as one Drupalista works hard to grow the regional user base.

How it all started

Tom Tran, a German with Hanoian roots, discovered Drupal in 2008. He was overwhelmed by the technological power and flexibility that make Drupal such a highly competitive platform, and was amazed by the friendliness and vibrancy of the global community. He realized that introducing the framework and the Drupal community to Vietnam would give local people the opportunity to access the following three benefits:

  • Steady Income: Drupal won’t make you an overnight millionaire, however if you become a Drupal expert and commit to helping clients to achieve their goals, you will never be short of work. Quality Drupal specialists are in huge demand across the world and this demand won’t stop any time soon as Drupal adoption grows.
  • Better Lifestyle: You are free and able to design a work/lifestyle balance on your terms. You can work from home or contribute remotely while traveling, as long as you continue to deliver sustainable value to your client. Many professionals in less developed countries like Vietnam have never imagined this opportunity-- and learning about this lifestyle can be very empowering and inspirational.
  • Cross Cultural Friendships: In spite of national borders and cultural differences, Tom has established fruitful partnerships between his development team in Vietnam and clients from across the globe. Whether clients are based in California, Berlin, Melbourne or Tokyo, his team has successfully collaborated on many projects and often become good friends beyond just project mates. These relationships can only grow thanks to the open Drupal community spirit and the way it connects people from all regions and cultures around the world.

Tom started by organizing a Drupal 7 release party in Hanoi in January 2011. Afterwards, he reached out to Drupal enthusiasts in the region and organized informal coffee sessions, which have contributed to the growth of a solid, cohesive community in Vietnam.

Drupal Vietnam College Tour

With help from a Community Cultivation Grant, Tom put on workshops every three months at Vietnamese universities and colleges in 2012. By showcasing the big brands and institutions using Drupal, the workshops demonstrate that demand for Drupal is high and that the Drupal industry is a great place to be. A three-hour hands-on session walks students through the basics of site building with Drupal-- and it's at this point that most students get hooked.

  • March 2012: First ever Drupal Hanoi Conference at VTC Academy, with 120 visitors (Facebook gallery)
  • June 2012: Hello Drupal workshop @ Tech University Danang (gallery)
  • July 2012: Drupal Workshop @ FPT-Aptech (Facebook gallery, FPT Aptech news)
  • September 2012: Drupal Workshop @ NUCE (gallery, NUCE news)
  • November 2012: Drupal Workshop @ FPT University (gallery)
  • December 2012: Drupal Workshop @ Aiti-Aptech (gallery)
  • December 2012: Drupal talk & sponsorship for PHPDay.vn 2012 (local images 2x)

The result was an overall increase in members, and the community keeps growing every day. Stats in 2014:

What’s next?

Tom is currently planning to organize the first DrupalCamp in Hanoi, Vietnam in late 2014. Today Drupal Vietnam has only roughly 1,300 members (fewer than the LA DUG), but with a growing pool of software engineers graduating each year, this country is set to become a significant source of highly skilled developers, provided high-quality training is affordable and access to jobs can be facilitated. Things look very bright in Vietnam!

About

Tom is founder of Geekpolis, a software company with a development center based in Hanoi, Vietnam. Geekpolis focuses on high-quality managed Drupal development services for bigger consultancy agencies. The team currently comprises 25 engineers.

To get involved, contact Tom at:
Categories: Elsewhere

Drupal core announcements: Drupal 7.30 release this week to fix regressions in the Drupal 7.29 security release

Planet Drupal - Wed, 23/07/2014 - 06:06
Start: 2014-07-23 (All day) - 2014-07-25 (All day) America/New_York
Sprint Organizers: David_Rothstein

The Drupal 7.29 security release contained a security fix to the File module which caused some regressions in Drupal's file handling, particularly for files or images attached to taxonomy terms.

I am planning to release Drupal 7.30 this week to fix as many of these regressions as possible and allow more sites to upgrade past Drupal 7.28. The release could come as early as today (Wednesday July 23).

However, to do this we need more testing and reviews of the proposed patches to make sure they are solid. Please see #2305017: Regression: Files or images attached to certain core and non-core entities are lost when the entity is edited and saved for more details and for the patches to test, and leave a comment on that issue if you have reviewed or tested them.

Thank you!

Categories: Elsewhere

Mediacurrent: Understanding the Role of the Enterprise in Drupal

Planet Drupal - Wed, 23/07/2014 - 03:53

There is a trending topic I am seeing discussed a lot more in the open-source software and Drupal community. The conversation focuses on what the role of enterprise organizations should be, especially those that have adopted or are adopting Drupal as their web platform of choice.

Categories: Elsewhere

Jonathan McCrohan: Git remote helpers

Planet Debian - Wed, 23/07/2014 - 03:19

If you follow upstream Git development closely, you may have noticed that the Mercurial and Bazaar remote helpers (use git to interact with hg and bzr repos) no longer live in the main Git tree. They have been split out into their own repositories, here and here.

git-remote-bzr had been packaged (as git-bzr) for Debian since March 2013, but was removed in May 2014 when the remote helpers were removed upstream. There had been a wishlist bug report open since Mar 2013 to get git-remote-hg packaged, and I had submitted a patch, but it was never applied.

The upstream split of these remote helpers has allowed Vagrant Cascadian and myself to pick up these packages, and both are now available in Debian.

apt-get install git-remote-hg git-remote-bzr
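
Once installed, the helpers are picked up through git's remote-helper URL syntax; for instance (the repository URLs here are only illustrative):

git clone hg::https://example.org/some-hg-repo
git clone bzr::https://example.org/some-bzr-branch
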
Categories: Elsewhere

Tim Retout: Cowbuilder and Tor

Planet Debian - Tue, 22/07/2014 - 23:31

You've installed apt-transport-tor to help prevent targeted attacks on your system. Great! Now you want to build Debian packages using cowbuilder, and you notice these are still using plain HTTP.

If you're willing to fetch the first few packages without using apt-transport-tor, this is as easy as:

  • Add 'EXTRAPACKAGES="apt-transport-tor"' to your pbuilderrc.
  • Run 'cowbuilder --update'
  • Set 'MIRRORSITE=tor+http://http.debian.net/debian' in pbuilderrc.
  • Run 'cowbuilder --update' again.

Now any future builds should fetch build-dependencies over Tor.
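
For reference, the resulting ~/.pbuilderrc ends up with just these two lines; the two-step order above matters, since the tor+http mirror only works once apt-transport-tor is inside the base.cow:

# ~/.pbuilderrc -- end state after the steps above
EXTRAPACKAGES="apt-transport-tor"
MIRRORSITE="tor+http://http.debian.net/debian"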

Unfortunately, creating a base.cow from scratch is more problematic. Neither 'debootstrap' nor 'cdebootstrap' actually relies on apt acquire methods to download files - they look at the URL scheme themselves to work out where to fetch from. I think it's a design point that they shouldn't need apt, anyway, so that you can debootstrap on non-Debian systems. I don't have a good solution beyond using some other means to route these requests over Tor.

Categories: Elsewhere

Neil Williams: Validating ARMMP device tree blobs

Planet Debian - Tue, 22/07/2014 - 23:18

I’ve done various bits with ARMMP and LAVA on this blog already, usually waiting until I’ve got all the issues ironed out before writing it up. However, this time I’m just going to do a dump of where it’s at, how it works and what can be done.

I’m aware that LAVA can seem mysterious at first, the package description has improved enormously recently, thanks to exposure in Debian: LAVA is a continuous integration system for deploying operating systems onto physical and virtual hardware for running tests. Tests can be simple boot testing, bootloader testing and system level testing, although extra hardware may be required for some system tests. Results are tracked over time and data can be exported for further analysis.

The LAVA documentation has a glossary of terms like result bundle and all the documentation is also available in the lava-server-doc package.

The goal is to validate the dtbs built for the Debian ARMMP kernel. One of the most accessible ways to get the ARMMP kernel onto a board for testing is tftp using the Debian daily DI builds. Actually using the DI initrd can come later, once I’ve got a complete preseed config so that the entire install can be automated. (There are some things to sort out in LAVA too before a full install can be deployed and booted but those are at an early stage.) It’s enough at first to download the vmlinuz which is common to all ARMMP deployments, supply the relevant dtb, partner those with a minimal initrd and see if the board boots.

The first change comes when this process is compared to how boards are commonly tested in LAVA – with a zImage or uImage and all/most of the modules already built in. Packaged kernels won’t necessarily raise a network interface or see the filesystem without modules, so the first step is to extend a minimal initramfs to include the armmp modules.

apt install pax u-boot-tools

The minimal initramfs I selected is one often used within LAVA:

wget http://images.armcloud.us/lava/common/linaro-image-minimal-initramfs-genericarmv7a.cpio.gz.u-boot

It has a u-boot header added, as most devices using this would be using u-boot and this makes it easier to debug boot failures as the initramfs doesn’t need to have the header added, it can simply be downloaded to a local directory and passed to the board as a tftp location. To modify it, the u-boot header needs to be removed. Rather than assuming the size, the u-boot tools can (indirectly) show the size:

$ ls -l linaro-image-minimal-initramfs-genericarmv7a.cpio.gz.u-boot
-rw-r--r-- 1 neil neil 5179571 Nov 26  2013 linaro-image-minimal-initramfs-genericarmv7a.cpio.gz.u-boot
$ mkimage -l linaro-image-minimal-initramfs-genericarmv7a.cpio.gz.u-boot
Image Name:   linaro-image-minimal-initramfs-g
Created:      Tue Nov 26 22:30:49 2013
Image Type:   ARM Linux RAMDisk Image (gzip compressed)
Data Size:    5179507 Bytes = 5058.11 kB = 4.94 MB
Load Address: 00000000
Entry Point:  00000000

Referencing http://www.omappedia.com/wiki/Development_With_Ubuntu, the header size is the file size minus the data size listed by mkimage.

5179571 - 5179507 == 64

So, create a second file without the header:

dd if=linaro-image-minimal-initramfs-genericarmv7a.cpio.gz.u-boot of=linaro-image-minimal-initramfs-genericarmv7a.cpio.gz skip=64 bs=1

decompress it

gunzip linaro-image-minimal-initramfs-genericarmv7a.cpio.gz

Now for the additions

dget http://ftp.uk.debian.org/debian/pool/main/l/linux/linux-image-3.14-1-armmp_3.14.12-1_armhf.deb

(Yes, this process will need to be repeated when this package is rebuilt, so I’ll want to script this at some point.)

dpkg -x linux-image-3.14-1-armmp_3.14.12-1_armhf.deb kernel-dir
cd kernel-dir

Pulling in the modules needed for most purposes comes thanks to a script written by the Xen folks. The set is basically disk, net, filesystems and LVM.

find lib -type d -o -type f -name modules.\* -o -type f -name \*.ko \( -path \*/kernel/lib/\* -o -path \*/kernel/crypto/\* -o -path \*/kernel/fs/mbcache.ko -o -path \*/kernel/fs/ext\* -o -path \*/kernel/fs/jbd\* -o -path \*/kernel/drivers/net/\* -o -path \*/kernel/drivers/ata/\* -o -path \*/kernel/drivers/scsi/\* -o -path \*/kernel/drivers/md/\* \) | pax -x sv4cpio -s '%lib%/lib%' -d -w >../cpio
gzip -9f cpio

original Xen script (GPL-3+)

I found it a bit confusing that i is used for extract by cpio, but that’s how it is. Extract the minimal initramfs to a new directory:

sudo cpio -id < ../linaro-image-minimal-initramfs-genericarmv7a.cpio

Extract the new cpio into the same location. (Yes, I could do this the other way around and pipe the output of find into the already extracted location but that's for when I get a script to do this):

sudo cpio --no-absolute-filenames -id < ../ramfs/cpio

CPIO Manual

Use newc format, the new (SVR4) portable format, which supports file systems having more than 65536 i-nodes. (4294967295 bytes)
(41M)

find . | cpio -H newc -o > ../armmp-image.cpio

... compress the result (gzip -9 armmp-image.cpio) and add the u-boot header back:

mkimage -A arm -T ramdisk -C none -d armmp-image.cpio.gz debian-armmp-initrd.cpio.gz.u-boot

Now what?

Now send the combination to LAVA and test it.

Results bundle for a local LAVA test job using this technique. (18k)

submission JSON - uses file:// references, so would need modification before being submitted to LAVA elsewhere.

complete log of the test job (72k)

Those familiar with LAVA will spot that I haven't optimised this job, it boots the ARMMP kernel into a minimal initramfs and then expects to find apt and other tools. Actual tests providing useful results would use available tools, add more tools or specify a richer rootfs.

The tests themselves are very quick (the job described above took 3 minutes to run) and don't need to be run particularly often, just once per board type per upload of the ARMMP kernel. LAVA can easily run those jobs in parallel and submission can be automated using authentication tokens and the lava-tool CLI. lava-tool can be installed without lava-server, so can be used in hooks for automated submissions.
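
As an illustration (the server URL, username and job file name are placeholders), a hook could submit a job with something like:

lava-tool auth-add https://user@validation.example.org/RPC2/                       # store the API token once
lava-tool submit-job https://user@validation.example.org/RPC2/ armmp-wandboard.json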

Extensions

That's just one DTB and one board. I have a range of boards available locally:

* iMX6Q Wandboard (used for this test)
* iMX.53 Quick Start Board (needs updated u-boot)
* Beaglebone Black
* Cubie2
* CubieTruck
* arndale (no dtb?)
* pandaboard

Other devices available could involve ARMv7 devices hosted at www.armv7.com and validation.linaro.org - as part of a thank you to the Debian community for providing the OS which is (now) running all of the LAVA infrastructure.

That doesn't cover all of the current DTBs (and includes many devices which have no DTBs) so there is plenty of scope for others to get involved.

Hopefully, the above will help get people started with a suitable kernel+dtb+initrd and I'd encourage anyone interested to install lava-server and have a go at running test jobs based on those so that we start to build data about as many of the variants as possible.

(If anyone in DI fancies producing a suitable initrd with modules alongside the DI initrd for armhf builds, or if anyone comes up with a patch for DI to do that, it would help enormously.)

This will at least help Debian answer the question of what the Debian ARMMP package can actually support.

For help on LAVA, do read through the documentation and then come to us at #linaro-lava or the linaro-validation mailing list or file bugs in Debian: reportbug lava-server.

You can also ask me in person: I'm giving one talk on the LAVA software and there will be a BoF on validation and CI in Debian.

Categories: Elsewhere

Greater Los Angeles Drupal (GLAD): Drupal Migrate using xml 0 to 35

Planet Drupal - Tue, 22/07/2014 - 20:55

Using Drupal Migrate is a great way to move your content into Drupal. Unfortunately, the documentation for XML import can be obscure. This happens when those who developed the module try to explain what they did to someone who did not do the work; things that seem obvious to them are not obvious to someone else.

I have spent some time recently importing content using XML. I am in no way an expert speeding down the fast lane; I'm more cruising around town at a comfortable 35 mph.

To use Drupal Migrate you need to define your own class. A class is PHP code, used in object-oriented programming, that defines your data and how you can manipulate it. Most of the actual migration work is done by the classes provided by the migrate module; you simply have to define the details of your migration.

Constructor - The constructor configures the migrate module's classes to describe your specific data. I was able to follow the SourceList method: this uses one XML file or feed that contains the ID numbers of all the content you want to import, and a second file or feed that contains the content itself. The migrate module's wine example shows this, but understanding what it really wants is harder than it should be.

Below is my class file explained:
=====================
<?php

/**
* @file
* Vision Article migration.
*/

/**
* Vision Article migration class.
*/
class VisionArticleMigration extends XMLMigration {
public function __construct() {
parent::__construct();
$this->description = t('XML feed of Ektron Articles.');

---------------
So far pretty easy. You need to name your class, extend the proper migration class, and give it a description.

-----------------

// There isn't a consistent way to automatically identify appropriate
// "fields" from an XML feed, so we pass an explicit list of source fields.
$fields = array(
'id' => t('ID'),
'lang_type' => t('Language'),
'type' => t('Type'),
'image' => t('Image'),
'authors' => t('Authors'),
'article_category' => t('Article Category'),
'article_series_title' => t('Article Series Title'),
'article_part_no' => t('Article Series Part Number'),
'article_title' => t('Article Title'),
'article_date' => t('Article Date'),
'article_display_date' => t('Article Display Date'),
'article_dropheader' => t('Article Dropheader'),
'article_body' => t('Article Body'),
'article_author_name' => t('Article Author Name'),
'article_author_url' => t('Article Author Email Address'),
'article_authors' => t('Article Additional Authors'),
'article_postscript' => t('Article Postscript'),
'article_link_text' => t('Article Link text'),
'article_link' => t('Article Link'),
'article_image' => t('Article Image'),
'article_image_folder' => t('Article Image Folder'),
'article_image_alt' => t('Article Image Alt'),
'article_image_title' => t('Article Image Title'),
'article_image_caption' => t('Article Image Caption'),
'article_image_credit' => t('Article Image Credit'),
'article_sidebar_element' => t('Article Side Bar Content'),
'article_sidebar_element_margin' => t('Article Margin between Sidebar Content'),
'article_archived_html_content' => t('Article HTML Content from old system'),
'article_video_id' => t('Article ID of Associated Video Article'),
'metadata_title' => t('Metadata Title'),
'metadata_description' => t('Metadata Description'),
'metadata_keywords' => t('Metadata Keywords'),
'metadata_google_sitemap_priority' => t('Metadata Google Sitemap Priority'),
'metadata_google_sitemap_change_frequency' => t('Metadata Google Sitemap Change Freequency'),
'metadata_collection_number' => t('Metadata Collection Number'),
'title' => t('Title'),
'teaser' => t('Teaser'),
'alias' => t('Alias from old system'),
'taxonomy' => t('Taxonomy'),
'created_date' => t('Date Created')
);

-------------------
So what does this mean?
Each entry is a source field name that you will use in the mappings below. It has nothing to do with your XML structure yet; you simply need one field for each piece of information you want to import, such as article_image_alt for the image's alt text. Later you will define the xpath used to load each one. This will start to come together below; just remember that each unique piece of information needs a field.

---------------------

// The source ID here is the one retrieved from the XML listing URL, and
// used to identify the specific item's URL.
$this->map = new MigrateSQLMap($this->machineName,
array(
'ID' => array(
'type' => 'int',
'unsigned' => TRUE,
'not null' => TRUE,
'description' => 'Source ID',
)
),
MigrateDestinationNode::getKeySchema()
);

---------------------
This sets up the migration map table in the database. The source ID is the field in the listing file that points to each data record. My source file looks like:

567
1054

So we need a map table with a field for the ID, which is an integer.

-----------------------

// Source list URL.
$list_url = 'http://www.vision.org/visionmedia/generateexportlist.aspx';
// Each ID retrieved from the list URL will be plugged into :id in the
// item URL to fetch the specific objects.
// @todo: Add langtype for importing translated content.
$item_url = 'http://www.vision.org/visionmedia/generatecontentxml.aspx?id=:id';

// We use the MigrateSourceList class for any source where we obtain the
// list of IDs to process separately from the data for each item. The
// listing and item are represented by separate classes, so for example we
// could replace the XML listing with a file directory listing, or the XML
// item with a JSON item.
$this->source = new MigrateSourceList(new MigrateListXML($list_url),
new MigrateItemXML($item_url), $fields);

$this->destination = new MigrateDestinationNode('vision_article');

-----------------

Now we are setting up the magic. We set up a list URL that contains the IDs of all the content to import, then another URL that uses each ID to fetch the details for that item. Then you tell Migrate to use MigrateListXML to find the items to import and MigrateItemXML to fetch them. Finally, MigrateDestinationNode tells Migrate which content type to use. This means we need a separate migration class for each content type to import. I have been creating each class in its own .inc file and adding it to the files section of the .info file, as sketched below.
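
A sketch of the corresponding .info entries; the module name vision_migrate comes from the hook at the end of this post, but the file names here are guesses, so adjust them to whatever you actually called your class files:

; vision_migrate.info -- sketch; each files[] line registers one migration class file
name = Vision Migrate
core = 7.x
dependencies[] = migrate
files[] = vision_article.migration.inc
files[] = vision_issue.migration.inc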

-----------------

// TIP: Note that for XML sources, in addition to the source field passed to
// addFieldMapping (the name under which it will be saved in the data row
// passed through the migration process) we specify the Xpath used to retrieve
// the value from the XML.
$this->addFieldMapping('created', 'created_date')
->xpath('/content/CreateDate');

------------------
Now we map the source field to the destination field. created is the field name in the content type (vision_article); created_date is from our fields section above. Remember I said we needed a definition for each part of the content we want to import. The xpath then points to the data in the XML feed. So this says: take the content of /content/CreateDate in the XML file, load it into the source variable created_date, then store it in the created field of a new vision_article content item. I phrase it this way because if you do like me, cut and paste, and forget to change the source variable, that source variable will end up holding whatever the xpath at the bottom of the statement points to.

------------------

$this->addFieldMapping('field_category', 'article_category')
->defaultValue(1)
->xpath('/content/html/root/article/Category');

-------------------

You can set a default value in case the xml does not contain any data

----------

$this->addFieldMapping('field_series_title', 'article_series_title')
->xpath('/content/html/root/article/ArticleSeriesTitle');
$this->addFieldMapping('field_part_number', 'article_part_no')
->xpath('/content/html/root/article/ArticlePartNo');
$this->addFieldMapping('field_h1_title', 'article_title')
->arguments(array('format' => 'filtered_html'))
->xpath('/content/html/root/article/Title');
$this->addFieldMapping('field_display_date', 'article_display_date')
->xpath('/content/html/root/article/DisplayDate');
$this->addFieldMapping('field_drophead', 'article_dropheader')
->arguments(array('format' => 'filtered_html'))
->xpath('/content/html/root/article/Dropheader');

-------------

Another field argument: the default text format is plain text, so if your content contains HTML you need to set the correct format here.

---------------

$this->addFieldMapping('body', 'article_body')
->arguments(array('format' => 'filtered_html'))
->xpath('/content/html/root/article/Body');
$this->addFieldMapping('body:summary', 'teaser')
->arguments(array('format' => 'filtered_html'))
->xpath('/content/Teaser');

-----------

Note you can set the teaser as a part of the body. One of the drush migrate commands makes it easy to discover the additional parts (subfields) of your destination fields: drush mfd (Migrate Field Destinations). This will display all the destination fields and their options.
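
For example, using the migration name registered in hook_migrate_api() at the end of this post:

drush mfd VisionArticle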

------------

$this->addFieldMapping('field_author', 'article_author_email')
->xpath('/content/html/root/article/AuthorURL');
$this->addFieldMapping('field_author:title', 'article_author_name')
->xpath('/content/html/root/article/AuthorName');
$this->addFieldMapping('field_ext_reference_title', 'article_postscript')
->arguments(array('format' => 'filtered_html'))
->xpath('/content/html/root/article/Postscript');

---------
see explanation below
--------
$this->addFieldMapping('field_article_images:file_replace')
->defaultValue(MigrateFile::FILE_EXISTS_REUSE); //FILE_EXISTS_REUSE is in the MigrateFile class
$this->addFieldMapping('field_article_images', 'article_image')
->xpath('/content/html/root/article/Image/File/img/file_name');
$this->addFieldMapping('field_article_images:source_dir', 'article_image_folder')
->xpath('/content/html/root/article/Image/File/img/file_path');
$this->addFieldMapping('field_article_images:alt', 'article_image_alt')
->xpath('/content/html/root/article/Image/File/img/@alt');
$this->addFieldMapping('field_article_images:title', 'article_image_title')
->xpath('/content/html/root/article/Image/File/img/@alt');

--------------

This section gets tricky. You are importing an image or other file. The default migration for a file is MigrateFileUrl. You can migrate all your files ahead of time or, as I am doing, do it inline. The main components for this are the main field, which is the file name, and the source_dir, which is the path to the image. Drupal 7 has a database table for the files it uses, with the URI of each file. MigrateFile then uploads the file to the public folder and creates an entry in the file_managed table to record the URI. What I did was copy all the images to a public location on S3 storage, so I did not want Migrate to create a new file but to use the existing one; thus the file_replace setting with the constant MigrateFile::FILE_EXISTS_REUSE. This tells Migrate to use the existing file and make an entry in the file_managed table for it.

Later in the PrepareRow method I will show how we separate this and add it to the xml.

------------

$this->addFieldMapping('field_archive', 'article_archived_html_content')
->xpath('/content/archive_html');
$this->addFieldMapping('field_ektron_id', 'id')
->xpath('/content/ID');
$this->addFieldMapping('field_ektron_alias', 'alias')
->xpath('/content/html/Alias');
$this->addFieldMapping('field_sidebar', 'article_sidebar_element')
->arguments(array('format' => 'filtered_html'))
->xpath('/content/html/root/article/SidebarElement/SidebarElementInformation');
$this->addFieldMapping('field_slider_image:file_replace')
->defaultValue(MigrateFile::FILE_EXISTS_REUSE); //FILE_EXISTS_REUSE is in the MigrateFile class
$this->addFieldMapping('field_slider_image', 'image')
->xpath('/content/Image/file_name');
$this->addFieldMapping('field_slider_image:source_dir', 'image_folder')
->xpath('/content/Image/file_path');
$this->addFieldMapping('field_slider_image:alt', 'image_alt')
->xpath('/content/Title');
$this->addFieldMapping('field_slider_image:title', 'image_title')
->xpath('/content/Title');
$this->addFieldMapping('title', 'title')
->xpath('/content/Title');
$this->addFieldMapping('title_field', 'title')
->xpath('/content/Title');

// Declare unmapped source fields.
$unmapped_sources = array(
'article_author_url',
'article_authors',
'article_sidebar_element_margin',
'article_video_id',
'metadata_title',
'metadata_description',
'metadata_keywords',
'metadata_google_sitemap_priority',
'metadata_google_sitemap_change_frequency',
'metadata_collection_number',

);

-------------

If you are not using a source field, best practice is to declare it in the unmapped sources array.

------------

$this->addUnmigratedSources($unmapped_sources);

// Declare unmapped destination fields.
$unmapped_destinations = array(
'revision_uid',
'changed',
'status',
'promote',
'sticky',
'revision',
'log',
'language',
'tnid',
'is_new',
'body:language',
);

----------------------

If you are not using a destination field, best practice is to declare it in the unmapped destinations array. Note that if you later use one of these fields, you need to remove it from the unmapped array.

---------------------

$this->addUnmigratedDestinations($unmapped_destinations);

if (module_exists('path')) {
$this->addFieldMapping('path')
->issueGroup(t('DNM'));
if (module_exists('pathauto')) {
$this->addFieldMapping('pathauto')
->issueGroup(t('DNM'));
}
}
if (module_exists('statistics')) {
$this->addUnmigratedDestinations(array('totalcount', 'daycount', 'timestamp'));
}
}

------------

The rest of the constructor is from the example. Did not cause me a problem so did not worry about it.

------------
/**
* {@inheritdoc}
*/

---------------

Now we can add our own magic: we can modify the row data before it is saved into the content item.

-----------------

public function prepareRow($row) {
if (parent::prepareRow($row) === FALSE) {
return FALSE;
}
$ctype = (string)$row->xml->Type;
//set variable for return code
$ret = FALSE;
//dpm($row);

------------

You will see these scattered through the prepareRow function. They are the devel module's dpm() calls, which print to the screen for debugging. They should be commented out, but you can see the process I went through to debug my particular prepareRow. Also note this is a great use of the Migrate UI: these print statements only help you in the web interface; if you use Drush you will not see these diagnostic prints.

---------------

if ($ctype == '12'){

---------------

This is specific to my migration. The following code only applies to a content type of 12; the other content types have a different data structure. If prepareRow() returns FALSE, the row will be skipped.

------------------

// Map the article_postscript source field to the new destination fields.
//if((string)$row->xml->root->article->Title == ''){
// $row->xml->root->article->Title = $row->xml->root->Title;
//}
$postscript = $row->xml->html->root->article->Postscript->asXML();
// strip the wrapping <Postscript> tags that asXML() includes, leaving just the inner HTML
$postscript = str_replace('<Postscript>', '', $postscript);
$postscript = str_replace('</Postscript>', '', $postscript);
$row->xml->html->root->article->Postscript = $postscript;

-------------------

Again, this is something unique to my migration. The content structure is contained in XML, so the HTML is recognized by SimpleXML as XML. The asXML() function returns a string containing the XML of the node; when I save that string back to the node it becomes a plain string node and is back to straight HTML. I need to do this for all the nodes that contain HTML. Most of the time you will be able to pass the HTML string as a node and will not have to do this transform.

-------------------

//converts html nodes to string so they will load.
$body = $row->xml->html->root->article->Body->asXML();
$body = str_replace('<Body>', '', $body);
$body = str_replace('</Body>', '', $body);
$row->xml->html->root->article->Body = $body;
$title = $row->xml->html->root->article->Title->asXML();
$title = str_replace('<Title>', '', $title);
$title = str_replace('</Title>', '', $title);
$row->xml->html->root->article->Title = $title;
$drophead = $row->xml->html->root->article->Dropheader->asXML();
$drophead = str_replace('<Dropheader>', '', $drophead);
$drophead = str_replace('</Dropheader>', '', $drophead);
//If Dropheader is empty
$drophead = str_replace('<Dropheader/>', '', $drophead);
$row->xml->html->root->article->Dropheader = $drophead;
//Array to allow conversion of Category text to IS
$cat_tax = array(
'Science and Environment' => 1,
'History' => 2,
'Social Issues' => 3,
'Family and Relationships' => 4,
'Life and Health' => 5,
'Religion and Spirituality' => 6,
'Biography' => 7,
'Ethics and Morality'=> 8,
'Society and Culture' => 9,
'Current Events and Politics' => 10,
'Philosophy and Ideas' => 11,
'Personal Development' => 12,
'Reviews' => 13,
'From the Publisher' => 14,
'Interviews' => 17,
);
//Convert additional taxonomies to tags
//$tax_id_in = (string)$row->xml->Taxonomy;
//$tax_id_array = explode(',',$tax_id_in);
//$tax_in_array = array();
//foreach($tax_id_array as $tax){
// If(is_null($cat_tax[tax]))
// $tax_in_array[] = $cat_tax[$tax];
//}
//$new_tax = implode(',',$tax_in_array);
//dpm($new_tax);
//dpm($row);
//$row->xml->Taxomomy = $new_tax;
// Change category text to ID
$category = (string)$row->xml->html->root->article->Category;
//Specify unknown category if we do not recognize the category
//This allows the migrate and allow us to fix later.
$tax_cat = $cat_tax[trim($category)];
//dpm($category);
if(is_null($tax_cat)) {$tax_cat = 18;}
//dpm($tax_cat);
$row->xml->html->root->article->Category = $tax_cat;

-------------

The category field in the source is a text field. The categories in Drupal are an entity reference to a taxonomy term, which requires an ID rather than text. I manually set up the categories ahead of time, so I created an array that has the text as the key and the ID as the value. You can then use this to quickly look up the ID for the text in the category field and replace the text in Category with the ID. This works; another way to do it is to migrate the categories first and then use that migration to translate the value for you, which is a feature built into Migrate. The explanation of this will come later.

----------------

//modify the image file node.
//dpm((string)$row->xml->ID);
if((string)$row->xml->html->root->article->Image->File->asXML() != '<File/>'){
//dpm((string)$row->xml->html->root->article->Image->File->asXML());
$src = (string)$row->xml->html->root->article->Image->File->img->attributes()->src;
$src_new = str_replace('/visionmedia/uploadedImages/','http://assets.vision.org/uploadedimages/',$src);
$row->xml->html->root->article->Image->File->img->attributes()->src = $src_new;
$file_name = basename($src_new);
$file_path = rtrim(str_replace($file_name,'', $src_new), '/');;
$row->xml->html->root->article->Image->File->img->addChild('file_name',$file_name);
$row->xml->html->root->article->Image->File->img->addChild('file_path',$file_path);
}

--------------

There is a lot of stuff here. Remember that for MigrateFile you need to present the file name and the source directory. The Image/File node contains an img tag, so we need to get the src attribute and extract the file name and source directory from it. So why the if? Migrate will import a null node as null, but this is PHP code running on the row, and if you try to get the src attribute on a null node it will throw an error. So the if statement checks whether the File node is empty (only contains the empty <File/> element) and skips this transformation; Migrate will then simply import a null or empty field.

The src is a path relative to the website, so the first thing we do is change it to the full URL of the S3 content storage. The path is basically the same, except that in the database the I in uploadedImages is uppercase; this was a Windows server so it did not make a difference there, but the S3 URL is case sensitive. We then use basename() to extract the file name, strip that file name off to get the file path, and create new children in the XML row to store both. I did not point this out earlier, but these are the xpaths used in the field mapping above.

--------------

$email = (string)$row->xml->html->root->article->AuthorURL;
if (!empty($email)){
$email = 'mailto:'.$email;
$row->xml->html->root->article->AuthorURL = $email;
}

-------------

The author URL is the email address of the article's author. We turn it into a mailto: link so that it will generate a link to send the author an email.

---------------

$archive_html = (string)$row->xml->html->asXML();
$row->xml->addChild('archive_html',$archive_html);
$sidebar_element = (string)$row->xml->html->root->article->SidebarElement->SidebarElementInformation->asXML();
$row->xml->html->root->article->SidebarElement->SidebarElementInformation = $sidebar_element;
$slider_src = (string)$row->xml->Image;
$slider_src_new = str_replace('/visionmedia/uploadedImages/','http://assets.vision.org/uploadedimages/',$slider_src);
$row->xml->Image = $slider_src_new;
$slider_file_name = basename($slider_src_new);
$slider_file_path = rtrim(str_replace($slider_file_name,'', $slider_src_new), '/');;
$row->xml->Image->addChild('file_name',$slider_file_name);
$row->xml->Image->addChild('file_path',$slider_file_path);
//dpm($row);

---------------

The rest is repetition of the above techniques. Note that we return TRUE if we want to process the row and FALSE if we do not.

-----------------

$ret=TRUE;
//dpm($src);
}
//Need to add processing for other Article Content types especially 0 (HTML content)
//dpm($row);
return $ret;
}

}

----------

This is the class I use for one of the imports. I told you that I would show the use of another migration in the field mappings. Below is a snippet of code from the issues migration; an issue contains entity references to the vision_article nodes that were imported above.

-------------

$this->addFieldMapping('field_articles', 'article_id')
->sourceMigration('VisionArticle')
->xpath('/item/articles/article/ID');

--------------

So this says: use the VisionArticle migration (I will show you where to find this name next); Migrate knows to look up the source ID, relate it to the destination ID, and store that in the field_articles field.

---------------

Migrate has been around for a while. Initially the documentation said that classes would automatically be registered, and that you could manually register them if needed. It has since changed to say that classes will not be registered automatically and that you should register your classes yourself. So, as part of your migration module, you should have the following to register your classes. Note that the name of the array element is the name used above.

----------------

function vision_migrate_migrate_api() {
$api = array(
'api' => 2,
// Give the group a human readable title.
'groups' => array(
'vision' => array(
'title' => t('Vision'),
),
),
'migrations' => array(
'VisionArticle' => array('class_name' => 'VisionArticleMigration'),
'VisionIssue' => array('class_name' => 'VisionIssueMigration'),
'VisionVideoArticle' => array('class_name' => 'VisionVideoArticleMigration'),
'VisionFrontpage' => array('class_name' => 'VisionFrontpageMigration'),
),
);

return $api;
}

----------------

I hope this makes things a little easier to understand. You will need some basic module building skills, knowing the file names and things like that, but this should help you through the more obscure parts of creating your migration class.

Tags: Planet Drupal
Categories: Elsewhere

Drupal Association News: Why we moved Drupal.org to a CDN

Planet Drupal - Tue, 22/07/2014 - 19:54

As of a little after 19:00 UTC on 2 July 2014, Drupal.org is now delivering as many sites as possible via our EdgeCast CDN.

Why a CDN?

We are primarily concerned with the network level security that a CDN will provide Drupal.org.

The CDN enables us to restrict access to our origin servers and disallow directly connecting to origin web nodes (which is currently possible). The two big advantages are:

  1. Accelerate cacheable content (static assets, static pages, etc).
  2. Allow us to easily manage network access and have a very large network in front of ours to absorb some levels of attacks.

Here are some examples of how the CDN helps Drupal.org:

  • We were having issues with a .js file on Drupal.org. The network was having routing issues to Europe and people were complaining about Drupal.org stalling on page loads. There was basically nothing we could do but wait for the route to get better. This should never be a problem again with EdgeCast's global network.
  • We constantly have reports of updates.drupal.org being blacklisted because it serves a ton of traffic coming in and out of a small number of IP addresses. This should also not happen again because the traffic is distributed through EdgeCast's network.
  • A few months ago we were under consistent attack from a group of IPs that was sub-HTTP and was saturating the origin network's bandwidth. We now have EdgeCast's large network in front of us that can 'take the beating'.
updates.drupal.org

By enabling EdgeCast's raw logs, rsync, and caching features, we were able to offload roughly 25 Mbps of traffic from our origin servers to EdgeCast. This change resulted in a drastic drop in origin network traffic, which freed up resources for Drupal.org. The use of rsync and the raw log features of EdgeCast enabled us to continue using our current project usage statistics tools. We do this by syncing the access logs from EdgeCast to Drupal.org’s utility server that processes project usage statistics.

Drupal.org

Minutes after switching www.drupal.org to use the CDN, there were multiple reports of faster page load times from Europe and North America.

A quick check from France / webpagetest.org:
Pre-CDN results: first page load = 4.387s, repeat view = 2.155s
Post-CDN results: first page load = 3.779s, repeat view = 1.285s

Why was the www.drupal.org rename required?

Our CDN uses a combination of Anycast IP addresses and DNS trickery. Each region (Asia, North America, Europe, etc.) has an Anycast IP address associated with it. For example cs73.wac.edgecastcdn.net might resolve to 72.21.91.99 in North America, and 117.18.237.99 in Japan.

Since 72.21.91.99, 117.18.237.99, etc. are Anycast IPs, generally their routes are as short as possible, and the IP will route to whatever POP is closest. This improves network performance globally.

Why can't drupal.org be a CNAME?

The DNS trickery above works by using a CNAME DNS record. Drupal.org must be an A record because the root domain cannot be a CNAME: the DNS RFCs do not allow a CNAME to coexist with MX records or any other records at the same name. To work around this DNS limitation, Drupal.org URLs are now redirected to www.drupal.org.
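
A rough sketch of the resulting records; the apex address and mail host below are placeholders rather than Drupal.org's real values, and the CDN hostname is the one from the example above:

drupal.org.      IN  A      192.0.2.10                  ; apex stays an A record so MX etc. can coexist (placeholder IP)
drupal.org.      IN  MX     10 mail.example.org.        ; placeholder mail host
www.drupal.org.  IN  CNAME  cs73.wac.edgecastcdn.net.   ; CNAME is fine on the www subdomain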

 

 

Related issues
https://www.drupal.org/node/2087411
https://www.drupal.org/node/2238131

Categories: Elsewhere

Stanford Web Services Blog: Cherry Picking - Small Git lesson

Planet Drupal - Tue, 22/07/2014 - 18:56

Small commits allow for big wins.

Something that I have been using a lot lately is git's cherry-pick command. I find the command very useful and it saves me bunches of time. Here is a quick lesson on what it does and an example use case.

What is git cherry-pick? (man page)

Git cherry-pick allows you to apply a single commit from one branch onto another. To use the cherry-pick command, follow these steps:
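
As a minimal illustration (the branch name and commit SHA are placeholders), the flow looks roughly like this:

git checkout master                  # switch to the branch that should receive the commit
git log feature/widget --oneline     # find the commit to copy, say a1b2c3d
git cherry-pick a1b2c3d              # apply just that commit onto master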

Categories: Elsewhere

2bits: Improve Your Drupal Site Performance While Reducing Your Hosting Costs

Planet Drupal - Tue, 22/07/2014 - 17:00
We were recently approached by a non-profit site that runs on Drupal.

Major Complaints

Their major complaint was that the "content on the site does not show up". The other main complaint is that the site is very slow.

Diagnosis First ...

In order to troubleshoot the disappearing content, we created a copy of the site in our lab and proceeded to test it, to see if we could replicate the issues.

read more

Categories: Elsewhere

Drupalize.Me: Drupal 8 Has All the Hotness, but So Can Drupal 7

Planet Drupal - Tue, 22/07/2014 - 15:30

Drupal 8 is moving along at a steady pace, but not as quickly as we all had hoped. One great advantage of this is that it gives developers time to backport lots of the features Drupal 8 has in core as modules for Drupal 7. My inspiration (and blatant rip-off) for this blog came from the presentation fellow Lullabot Dave Reid did at DrupalCon Austin about how to Future-Proof Your Drupal 7 Site. Dave's presentation was more about what you can do to make your Drupal 7 site "ready", whereas this article is more about showing off Drupal 8 "hotness" that we can use in production today.

Categories: Elsewhere

Drupal Easy: DrupalEasy Podcast 135: Deltron 3030 (Ronan Dowling, Backup and Migrate 3.0)

Planet Drupal - Tue, 22/07/2014 - 15:09
Download Podcast 135

Ronan Dowling (ronan), lead developer at Gorton Studios, joins Ted and Mike to talk about all the new features in Backup and Migrate 3.0, including file and code backup and an improved plugin architecture. We also get up to speed with Drupal 8 development, review some Drupal-y statistics, make our picks of the week, and ask Ronan 5-ish questions.

read more

Categories: Elsewhere

Acquia: Enforcing Drupal Coding Standards During the Software Versioning Process

Planet Drupal - Tue, 22/07/2014 - 14:18

Cross-posted with permission from Genuine Interactive

Les is a web applications engineer at Genuine Interactive. He is a frequent Drupal community contributor. Genuine’s PHP team works on projects in a range of industries from CPG, B2B, financial services, and more.

Categories: Elsewhere
