Feed aggregator

Appnovation Technologies: Export Data From Views to CSV File

Planet Drupal - Tue, 17/02/2015 - 17:55

It is sometimes useful to be able to save our view results into a document to allow non-technical people to manipulate the data.

Categories: Elsewhere

John Goerzen: “Has Linux lost its way?” comments prompt a Debian developer to revisit FreeBSD after 20 years

Planet Debian - Tue, 17/02/2015 - 17:11

I’ll admit it. I have a soft spot for FreeBSD. FreeBSD was the first Unix I ran, and it was somewhere around 20 years ago that I did so, before I switched to Debian. Even then, I still used some of the FreeBSD Handbook to learn Linux, because Debian didn’t have the great Reference that it does now.

Anyhow, some comments in my recent posts (“Has modern Linux lost its way?” and Reactions to that, and the value of simplicity), plus a latent desire to see how ZFS fares in FreeBSD, prompted me to try it out. I installed it both in VirtualBox under Debian, and on an old 64-bit Thinkpad sitting in my basement that previously ran Debian.

The results? A mixture of amazing and disappointing. I will say that I am quite glad that both exist; there is plenty of innovation happening everywhere and neat features exist everywhere, too. But I can also come right out and say that the statement that FreeBSD doesn’t have issues like Linux does is false and misleading. In many cases, it’s running the exact same stack. In others, it’s better, but there are also others where it’s worse. Perhaps this article might dispel a bit of the FUD surrounding jessie, while also showing off some of the nice things FreeBSD does. My conclusion: Both jessie and FreeBSD 10.1 are awesome Free operating systems, but both have their warts. This article is more about FreeBSD than Debian, but it will discuss a few of Debian’s warts as well.

The experience

My initial reaction to FreeBSD was: wow, this feels so familiar. It reminds me of a commercial Unix, or maybe of Linux from a few years ago. A minimal, well-documented base system, everything pretty much in logical places in the filesystem, and solid memory management. I felt right at home. It was almost reassuring, even.

Putting together a FreeBSD box is a lot of package installing and config file editing. The FreeBSD Handbook, describing how to install X, talks about editing this or that file for this or that feature. I like being able to learn directly how things fit together by doing this.

But then you start remembering the reasons you didn’t like Linux a few years ago, or the commercial Unixes: maybe it’s that programs like apache are still not as well supported, or maybe it’s that the default vi has this tendency to corrupt the terminal periodically, or perhaps it’s that root’s default shell is csh. Or perhaps it’s that I have to do a lot of package installing and config file editing. It is not quite the learning experience it once was, either; now there are things like “paste this XML file into some obscure polkit location to make your mouse work” or something.

Overall, there are some areas where FreeBSD kills it in a way no other OS does. It is unquestionably awesome in several areas. But there are a whole bunch of areas where it’s about 80% as good as Linux, a number of areas (even polkit, dbus, and hal) where it’s using the exact same stack Linux is (so all these comments about FreeBSD being so differently put together strike me as hollow), and frankly some areas that need a lot of work and make it hard to manage systems in a secure and stable way.

The amazing

Let’s get this out there: I’ve used ZFS too much to use any OS that doesn’t support it or something like it. Right now, I’m not aware of anything like ZFS that is generally stable and doesn’t cost a fortune, so pretty much: if your Unix doesn’t do ZFS, I’m not interested. (btrfs isn’t there yet, but will be awesome when it is.) That’s why I picked FreeBSD for this, rather than NetBSD or OpenBSD.

ZFS on FreeBSD is simply awesome. They have integrated it extremely well. The installer supports root on zfs, even encrypted root on zfs (though neither is a default). top on a FreeBSD system shows a line of ZFS ARC (cache) stats right alongside everything else. The ZFS defaults for maximum cache size, readahead, etc. auto-tune themselves at boot (unless overridden) based on the amount of RAM in a system and the system type. Seriously, these folks have thought of everything and it just reeks of solid. I haven’t seen ZFS this well integrated outside the Solaris-type OSs.
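
For illustration, the auto-tuned defaults can be overridden with loader tunables. A minimal sketch, assuming /boot/loader.conf; the value is arbitrary, not a recommendation:

# /boot/loader.conf
vfs.zfs.arc_max="4G"    # cap the ARC; left unset, FreeBSD sizes it from installed RAM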

I have been using ZFSOnLinux for some time now, but it is just not as mature as ZFS on FreeBSD. ZoL, for instance, still has some memory tuning issues, and is not really suggested for 32-bit machines. FreeBSD just nails it. ZFS on FreeBSD even supports TRIM, which is not available in ZoL and is, I think, fairly unique even among OpenZFS platforms. It also supports delegated administration of the filesystem, both to users and to jails on the system, seemingly very similar to Solaris zones.

FreeBSD also supports beadm, modeled on a similar tool from Solaris. This basically lets you use ZFS snapshots to make lightweight “boot environments”, so you can select which one to boot into. This is useful, say, before doing upgrades.
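
A sketch of the pre-upgrade workflow this enables (the environment name is arbitrary):

# beadm create pre-upgrade     # save the current system state as a boot environment
# beadm list                   # verify it shows up
(perform the upgrade; if it goes badly:)
# beadm activate pre-upgrade   # mark the saved environment for the next boot
# shutdown -r now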

Then there are jails. Linux has tried so hard to get this right, and fallen on its face so many times, a person just wants to take pity sometimes. We’ve had linux-vserver, openvz, lxc, and still none of them match what FreeBSD jails have done for a long time. Linux’s current jail-du-jour is LXC, though it is extremely difficult to configure in a secure way. Even its author comments that “you won’t hear any of the LXC maintainers tell you that LXC is secure” and that it pretty much requires AppArmor profiles to achieve reasonable security. These are still rather in flux, as I found out last time I tried LXC a few months ago. My confidence in LXC being as secure as, say, KVM or FreeBSD is simply very low.

FreeBSD’s jails are simple and well-documented where LXC is complex and hard to figure out. Their security is fairly transparent and easy to control, and they just work well. I do think LXC is moving in the right direction and might even get there in a couple of years, but I am quite skeptical that even Docker is getting the security completely right.

The simply different

People have been throwing around the word “distribution” with respect to FreeBSD, PC-BSD, etc. in recent years. There is an analogy there, but it’s not perfect. In the Linux ecosystem, there is a kernel project, a libc project, a coreutils project, a udev project, a systemd/sysvinit/whatever project, etc. You get the idea. In FreeBSD, there is a “base system” project. This one project covers the kernel and the base userland. Some of what they use in the base system is code pulled in from elsewhere but maintained in their tree (ssh), some is completely homegrown (kernel), etc. But in the end, they have a nicely-integrated base system that always gets upgraded in sync.

In the Linux world, the distribution makers are responsible for integrating the bits from everywhere into a coherent whole.

FreeBSD is something of a toolkit to build up your system. Gentoo might be an analogy on the Linux side. On the other end of the spectrum, Ubuntu is a “just install it and it works, tweak later” sort of setup. Debian straddles the middle ground, offering both approaches in many cases.

There are pros and cons to each approach. Generally, I don’t think either one is better. They are just different.

The not-quite-there

I said that there are a lot of things in FreeBSD that are about 80% of where Linux is. Let me touch on them here.

Its laptop support leaves something to be desired. I installed it on a Thinkpad that is a few years old, basically the best possible platform for getting suspend to work in a Free OS; it has worked perfectly out of the box in Debian for years. In FreeBSD, suspend only works in text mode. If X is running, the video gets corrupted and the system hangs. I have not tried to debug it further, but I would also note that suspend on lid close is not automatic in FreeBSD; the somewhat obscure instructions tell you which policykit pkla file to edit to make suspend work in XFCE. (Incidentally, they also say which policykit file to edit to make the shutdown/restart options work.)

Its storage subsystem also has some surprising misses. Its rough version of LVM, LUKS, and md-raid is called GEOM. GEOM, however, supports only RAID0, RAID1, and RAID3. It does not support RAID5 or RAID6 in software RAID configurations! Linux’s md-raid, by comparison, supports RAID0, RAID1, RAID4, RAID5, RAID6, etc. There seems to be a highly experimental RAID5 patchset floating around for many years, but it is certainly not integrated into the latest release kernel. The current documentation makes no mention of RAID5, although it seems that a dated logical volume manager supported it. In any case, RAID5 does not seem to be well-supported in software like it is in Linux.

ZFS does have its raidz1 level, which is roughly the same as RAID5. However, that requires full use of ZFS. ZFS also does not support some common operations, like adding a single disk to an existing RAID5 group (which is possible with md-raid and many other implementations.) This is a ZFS limitation on all platforms.
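
To make the limitation concrete, a sketch with hypothetical disk names:

# zpool create tank raidz1 da0 da1 da2   # three-disk raidz1, roughly RAID5
# zpool attach tank da0 da3              # fails: you cannot attach to a raidz vdev
# zpool add tank raidz1 da3 da4 da5      # workaround: add a second, whole raidz vdev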

FreeBSD’s filesystem support is rather a miss. They once had support for Linux ext* filesystems using the actual Linux code, but ripped it out because it was GPL-licensed and rewrote it under a BSD license. The resulting driver really only works with ext2 filesystems, as it fails with ext3/ext4 in many situations. Frankly, I don’t see why they bothered; they now have something that is BSD-licensed but only works with a filesystem so old nobody uses it anymore. There are only two FreeBSD filesystems that are really usable: UFS2 and ZFS.

Virtualization under FreeBSD is also not all that mature. Although it does support the VirtualBox Open Source Edition, this is not really a full-featured or fast enough virtualization environment for a server. Its other option is bhyve, which looks to be something of a Xen clone. bhyve, however, does not support Windows guests, and requires some hoops to even boot Linux guest installers. It will be several years at least before it reaches feature-parity with where KVM is today, I suspect.

One can run FreeBSD as a guest under a number of different virtualization systems, but their instructions for making the mouse work best under VirtualBox did not work. There may have been some X.Org reshuffle in FreeBSD that wasn’t taken into account.

The installer can be nice and fast in some situations, but one wonders a little bit about QA. I had it lock up on me twice. It turns out this is a known bug, reported two months ago with no activity, in which the installer attempts to use a package manager that it hasn’t set up yet to install optional docs. I guess the devs aren’t installing the docs in testing.

There is nothing like Dropbox for FreeBSD. Apparently this is because FreeBSD has nothing like Linux’s inotify. The Linux Dropbox client does not work in FreeBSD’s Linux mode. There are sketchy reports of people getting an OwnCloud client to work, but in something more akin to an rsync mode than an instant-sync mode, if they get it working at all. Some run Dropbox under wine, apparently.

The desktop environments tend to need a lot more configuration work to get them going than on Linux. There’s a lot of editing of polkit, hal, dbus, etc. config files mentioned in various places. So, not only does FreeBSD use a lot of the same components that cause confusion in Linux, it doesn’t really configure them for you as much out of the box.

FreeBSD doesn’t support as many platforms as Linux. FreeBSD has only two platforms that are fully supported: i386 and amd64. You’ll see people refer to a list of other “supported” platforms, but those lack security support, official releases, and even built packages. They include arm, ia64, powerpc, and sparc64.

The bad: package management

Roughly 20 years ago, this was one of the things that pulled me to Debian. Perhaps I am spoiled from running the distribution that has been the gold standard for package management for so long, but I find FreeBSD’s package management — even “pkg-ng” in 10.1-RELEASE — to be lacking in a number of important ways.

To start with, FreeBSD actually has two different package management systems: one for the base system, and one for what they call the ports/packages collection (“ports” being the way to install from source, and “packages” being the way to install from binaries, but both related to the same tree). For the base system, there is freebsd-update, which can install patches and major upgrades. It also has a “cron” option to automate this. Sadly, it has no way of automatically indicating to a calling script whether a reboot is necessary.
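
A minimal sketch of the base-system side:

# freebsd-update fetch     # download base-system security patches
# freebsd-update install   # apply them

and, to automate fetching, a line like this in /etc/crontab (the schedule is arbitrary; freebsd-update cron mails root when patches arrive):

0 3 * * *   root   freebsd-update cron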

freebsd-update really manages fewer than a dozen packages, though. The rest are managed by pkg. And pkg, it turns out, has a number of issues.

The biggest: it can take a week to get security updates. The FreeBSD Handbook explains pkg audit -F, which will look at your installed packages (but NOT the ones in the base system) and alert you to packages that need to be updated, similar to a stripped-down version of Debian’s debsecan. I discovered the delay myself, when pkg audit -F showed a vulnerability in xorg, but pkg upgrade showed my system was up-to-date. The reason for the discrepancy is not documented in the Handbook, but people on the mailing list explained it to me. There are workarounds, but they can be laborious.
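
A minimal sketch of the mismatch described above (package name and output are illustrative):

$ pkg audit -F    # fetch the vulnerability database, check installed packages
xorg-server-1.14.7,1 is vulnerable: ...
$ pkg upgrade     # may still report nothing to do until fixed packages are built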

If that’s not bad enough, FreeBSD has no way to automatically install security patches for things in the packages collection. Debian has several (unattended-upgrades, cron-apt, etc.). There is “pkg upgrade”, but it upgrades everything on the system, which may be quite a bit more than you want to be upgraded. So: if you want to run Apache with PHP, and want it to just always apply security patches, FreeBSD packages are not up to the job like Debian’s are.
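
For comparison, a minimal sketch of the Debian side, using the unattended-upgrades package mentioned above:

# apt-get install unattended-upgrades
# dpkg-reconfigure -plow unattended-upgrades   # writes /etc/apt/apt.conf.d/20auto-upgrades

After that, security updates from the configured origins are applied automatically; nothing else is touched.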

The pkg tool doesn’t have very good error-handling. In fact, its error handling seems to be nonexistent at times. I noticed that some packages had failures during install time, but pkg ignored them and marked the package as correctly installed. I only noticed there was a problem because I happened to glance at the screen at the right moment during messages about hundreds of packages. In Debian, by contrast, if there are any failures, at the end of the run, you get a nice report of which packages failed, and an exit status to use in scripts.

It also has another issue that Debian resolved about a decade ago: package scripts displaying messages that are important for the administrator, but showing so many of them that they scroll off the screen and are never seen. I submitted a bug report for this one also.

Some of these things just make me question the design of pkg. If I can’t trust it to accurately report if the installation succeeded, or show me the important info I need to see, then to what extent can I trust it?

Then there is the question of testing of the ports/packages. It seems that, automated tests aside, basically everyone is running off the “master” branch of the ports/packages. That’s like running Debian unstable on your servers. I am distinctly uncomfortable with this notion, though it seems FreeBSD people report it mostly works well.

There are some other issues, too: FreeBSD ports make no distinction between development and runtime files like Debian’s packages do. So, just by virtue of wanting to run a graphical desktop, you get all of the static libraries, include files, build scripts, etc. for X.Org installed.

For a project as concerned about licensing as FreeBSD, the packages collection does not have separate sections like Debian’s main, contrib, and non-free. It’s all in one big pot: BSD-licensed, GPL-licensed, and proprietary-without-source alike. There is /usr/local/share/licenses where you can look up a license for each package, but there is no way with FreeBSD, like there is with Debian, to say “never even show me packages that aren’t DFSG-free.” This is useful, for instance, when running in a company to make sure you never install packages that are for personal use only or something.

The bad: ABI stability

I’m used to being able to run binaries I compiled years ago on a modern system. This is generally possible in Linux, assuming you have the correct shared libraries available. In FreeBSD, this is explicitly NOT possible. After every major version upgrade, you must reinstall or recompile every binary on your system.

This is not necessarily a showstopper for me, but it is a hassle for a lot of people.

Update 2015-02-17: Some people in the comments are pointing out compat packages in the ports that may help with this situation. My comment was based on advice in the FreeBSD Handbook stating “After a major version upgrade, all installed packages and ports need to be upgraded”. I have not directly tried this, so if the Handbook is overstating the need, then this point may be in error.
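
For the curious, a sketch of what I believe the commenters mean, assuming the misc/compat* ports (untested, per the caveat above):

# pkg install misc/compat9x   # FreeBSD 9.x shared libraries, for running old binaries on 10.x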

Conclusions

As I said above, I found little validation for the comments that the Debian ecosystem is noticeably worse than the FreeBSD one. Debian has its warts too — particularly with keeping software up-to-date. You can see that the two projects are designed around a different passion: FreeBSD’s around the base system, and Debian’s around an integrated whole system. It would be wrong to say that either of those is always better. FreeBSD’s approach clearly produces some leading features, especially jails and ZFS integration. Yet Debian’s approach also produces some leading features in the way of package management and security maintainability beyond the small base.

My criticism of excessive complexity in the polkit/cgmanager/dbus area still stands. But to those people commenting that FreeBSD hasn’t “lost its way” like Linux has, I would point out that FreeBSD mostly uses these same components also, and FreeBSD has excessive complexity in its ports/package system and system management tools. I think it’s a draw. You pick the best for your use case. If you’re looking for a platform to run a single custom app then perhaps all of the Debian package management benefits don’t apply to you (you may not even need FreeBSD’s packages, or just a few). The FreeBSD ZFS support or jails may well appeal. If you’re looking to run a desktop environment, or a server with some application that needs a ton of PHP, Python, Perl, or C libraries, then Debian’s package management and security handling may well be attractive.

I am disappointed that Debian GNU/kFreeBSD will not be a release architecture in jessie. That project had the promise to provide a best of both worlds for those that want jails or tight ZFS integration.

Categories: Elsewhere

Tag1 Consulting: How to Maintain Contrib Modules for Drupal and Backdrop at the Same Time - Part 2

Planet Drupal - Tue, 17/02/2015 - 17:00

This is the second in a series of blog posts about the relationship between Drupal and Backdrop CMS, a recently-released fork of Drupal. The goal of the series is to explain how a module (or theme) developer can take a Drupal project they currently maintain and support it for Backdrop as well, while keeping duplicate work to a minimum.

read more

Categories: Elsewhere

Clemens Tolboom: Delete and edit comments on closed node

Planet Drupal - Tue, 17/02/2015 - 16:29

If you run a forum, you need quick deletion of improper comments.

In Drupal 7 and Drupal 8 you have to visit admin/content/comments to do so. But then you lose the thread.

You could review and use this patch, or add this to your custom module. The first needs review and testing. The latter needs a Drupal coder.

Categories: Elsewhere

Drupal Commerce: Using OpenID Connect for Single Sign-On with Drupal

Planet Drupal - Tue, 17/02/2015 - 16:03

At Commerce Guys we provide a varied range of services, including our cloud PaaS Platform.sh, this Drupal Commerce community website, support, and the Commerce Marketplace.

Our users may need to log in to any of these services, and sometimes several at the same time. So we needed to have a shared authentication system, a way of synchronizing user accounts, and single sign-on (SSO) functionality.

After a lot of research on the existing methods, such as CAS, we found that there was no generic open-source solution which would cover all of our current needs and would also allow us to grow and scale in the future when adding new features or applications.

We decided to implement the OAuth 2.0 and OpenID Connect protocols, which were designed to be flexible, yet simple and standardized - exactly what we wanted.

Categories: Elsewhere

Enrico Zini: akonadi-install

Planet Debian - Tue, 17/02/2015 - 15:34
Setting up Akonadi

Now that I have a CalDAV server that syncs with my phone I would like to use it from my desktop.

It looks like akonadi is able to sync with CalDAV servers, so I'm giving it a try.

First things first: give a meaning to the arbitrary name of this thing. Wikipedia says it is the oracle goddess of justice in Ghana. That still does not hint at all at personal information servers, but it seems quite nice. Ok. I gave up on software having purpose-related names ages ago.

# apt-get install akonadi-server akonadi-backend-postgresql

Akonadi wants a SQL database as a backend. By default it uses MySQL, but I had enough of MySQL ages ago.

I tried SQLite but the performance with it is terrible. Terrible as in, it takes 2 minutes between adding a calendar entry and having it show up in the calendar. I'm fascinated by how Akonadi manages to use SQLite so badly, but since I currently just want to get a job done, next in line is PostgreSQL:

# su - postgres
$ createuser enrico
$ psql postgres
postgres=# alter user enrico createdb;

Then as enrico:

$ createdb akonadi-enrico
$ cat <<EOT > ~/.config/akonadi/akonadiserverrc
[%General]
Driver=QPSQL

[QPSQL]
Name=akonadi-enrico
StartServer=false
Host=
Options=
ServerPath=
InitDbPath=
EOT

I can now use kontact to connect Akonadi to my CalDAV server and it works nicely, both with calendar and with addressbook entries.

KDE has at least two clients for Akonadi: Kontact, which is a kitchen sink application similar to Evolution, and KOrganizer, which is just the calendar and scheduling component of Kontact.

Both work decently, and KOrganizer has a pretty decent startup time. I now have a usable desktop PIM application that is synced with my phone. W00T!

Next step is to port my swift little calendar display tool to use Akonadi as a back-end.

Categories: Elsewhere

Drupal @ Penn State: Autopost to Facebook

Planet Drupal - Tue, 17/02/2015 - 15:15

I ran into an issue with the Drupal for Facebook module, both for D6 and D7, where I wanted articles to be posted automatically to Facebook when they are submitted. There appeared to be no way to do this via the module, and I had played around with Rules to see if that would work, but no luck.

Categories: Elsewhere

Peter Eisentraut: Listing screen sessions on login

Planet Debian - Tue, 17/02/2015 - 02:00

There is a lot of helpful information about screen out there, but I haven’t found anything about this. I don’t want to “forget” any screen sessions, so I’d like to be notified when I log into a box and there are screens running for me. Obviously, there is screen -ls, but it needs to be wrapped in a bit of logic so that it doesn’t annoy when there is no screen running or even installed.

After perusing the screen man page a little, I came up with this for .bash_profile or .zprofile:

if which screen >/dev/null; then
    screen -q -ls
    if [ $? -ge 10 ]; then
        screen -ls
    fi
fi

The trick is that -q in conjunction with -ls gives you exit codes about the current status of screen.

Here is an example of how this looks in practice:

~$ ssh host
Last login: Fri Feb 13 11:30:10 2015 from 192.0.2.15
There is a screen on:
        31572.pts-0.foobar      (2015-02-15 13.03.21)   (Detached)
1 Socket in /var/run/screen/S-peter.
peter@host:~$
Categories: Elsewhere

Matthew Garrett: Intel Boot Guard, Coreboot and user freedom

Planet Debian - Mon, 16/02/2015 - 21:44
PC World wrote an article on how the use of Intel Boot Guard by PC manufacturers is making it impossible for end-users to install replacement firmware such as Coreboot on their hardware. It's easy to interpret this as Intel acting to restrict competition in the firmware market, but the reality is actually a little more subtle than that.

UEFI Secure Boot as a specification is still unbroken, which makes attacking the underlying firmware much more attractive. We've seen several presentations at security conferences lately that have demonstrated vulnerabilities that permit modification of the firmware itself. Once you can insert arbitrary code in the firmware, Secure Boot doesn't do a great deal to protect you - the firmware could be modified to boot unsigned code, or even to modify your signed bootloader such that it backdoors the kernel on the fly.

But that's not all. Someone with physical access to your system could reflash your system. Even if you're paranoid enough that you X-ray your machine after every border crossing and verify that no additional components have been inserted, modified firmware could still be grabbing your disk encryption passphrase and stashing it somewhere for later examination.

Intel Boot Guard is intended to protect against this scenario. When your CPU starts up, it reads some code out of flash and executes it. With Intel Boot Guard, the CPU verifies a signature on that code before executing it[1]. The hash of the public half of the signing key is flashed into fuses on the CPU. It is the system vendor that owns this key and chooses to flash it into the CPU, not Intel.

This has genuine security benefits. It's no longer possible for an attacker to simply modify or replace the firmware - they have to find some other way to trick it into executing arbitrary code, and over time these will be closed off. But in the process, the system vendor has prevented the user from being able to make an informed choice to replace their system firmware.

The usual argument here is that in an increasingly hostile environment, opt-in security isn't sufficient - it's the role of the vendor to ensure that users are as protected as possible by default, and in this case all that's sacrificed is the ability for a few hobbyists to replace their system firmware. But this is a false dichotomy - UEFI Secure Boot demonstrated that it was entirely possible to produce a security solution that provided security benefits and still gave the user ultimate control over the code that their machine would execute.

To an extent the market will provide solutions to this. Vendors such as Purism will sell modern hardware without enabling Boot Guard. However, many people will buy hardware without consideration of this feature and only later become aware of what they've given up. It should never be necessary for someone to spend more money to purchase new hardware in order to obtain the freedom to run their choice of software. A future where users are obliged to run proprietary code because they can't afford another laptop is a dystopian one.

Intel should be congratulated for taking steps to make it more difficult for attackers to compromise system firmware, but criticised for doing so in such a way that vendors are forced to choose between security and freedom. The ability to control the software that your system runs is fundamental to Free Software, and we must reject solutions that provide security at the expense of that ability. As an industry we should endeavour to identify solutions that provide both freedom and security and work with vendors to make those solutions available, and as a movement we should be doing a better job of articulating why this freedom is a fundamental part of users being able to place trust in their property.

[1] It's slightly more complicated than that in reality, but the specifics really aren't that interesting.

Categories: Elsewhere

Colan Schwartz: Integrating remote data into Drupal 7 and exposing it to Views

Planet Drupal - Mon, 16/02/2015 - 20:45
Topics: 

Drupal's strength as a content management framework is in its ability to effectively manage and display structured content through its Web user interface. However, the out-of-the-box system assumes all data is local (stored in the database). This can present challenges when attempting to integrate remote data stored in other systems. You cannot, by default, display non-local records as pages. While setting this up is in itself a challenge, it is an even bigger challenge to manipulate, aggregate and display this data through Views.

I've split this article into the following sections and subsections. Click on any of these to jump directly to that section.

  1. Introduction
  2. What's Changed
  3. Architecture
    1. Remote entity definition
    2. Access to remote properties
    3. Remote property definition
    4. Entity instances as Web pages
    5. Web services integration
    6. Temporary local storage
    7. Implementing the remote connection class
    8. Implementing the remote query class
  4. Views support
    1. Basic set-up
    2. Converting from an EntityFieldQuery
  5. Alternatives
  6. References
Introduction

This exposition is effectively a follow-up to some excellent articles from years past:

I'd recommend reading them for background information.

The first article (written in the Drupal 6 days) describes a "Wipe/rebuild import" method (Method 3) to bring remote data into Drupal. That's basically what we'll be discussing here, but there is now a standard method for doing so. What's interesting is that future plans mentioned there included per-field storage engines (with some being remote). The idea never made it very far. This is most likely because grabbing field data from multiple locations is far too inefficient (multiple Web-service calls) compared to fetching an entire record from a single location.

Taking a look at the second article, you can now see that Drupal 7 is dominant, and we have more tools at our disposal, but at the time this one was written, we still didn't have all of them in place. We did, however, have the following APIs for dealing with entities.

  1. The entity API in Drupal Core
  2. The Entity API contributed module
What's Changed

We now have another API, the Remote Entity API, which was inspired by Florian's article. As you can imagine, this API is dependent on the Entity API which is in turn dependent on the Drupal Core's entity functionality.

I recently added support for this new API to EntityFieldQuery Views Backend, the module allowing Views to work with data stored outside of the local SQL database. Previously, it supported non-SQL data, but still assumed that this data was local. Tying these two components together gives us what we need to achieve our goal.

Architecture

So we really need to take advantage of the three (3) entity APIs to load and display individual remote records.

  1. The entity API in Drupal Core
  2. The Entity API contributed module
  3. The Remote Entity API contributed module

The first provides basic entity functionality in Drupal. The second adds enhanced functionality for custom entities. The third and final API adds additional handling mechanisms for working with any remote data.

We'll need the following contributed modules to make all of this work.

In addition to the above, a new custom module is necessary. I recommend something like siteshortname_entities_remote for the machine name. You can have another one, siteshortname_entities_local, for local entities without all of the remote code if necessary. In the .info file, add remote_entity (the Remote Entity API) as a dependency.

You'll want to divide your module file into at least three (3) parts:

  1. Entity APIs: Code for defining remote entities through any of the above entity APIs. (Part I)
  2. Drupal Core: Code for implementing Drupal Core hooks. This is basically a hook_menu() implementation with some helper functions to get your entity instances to show up at specific paths based on the remote entity IDs. (Part II)
  3. Web Service Clients: Code for implementing what's necessary for the Web Service Clients module, a prerequisite for the Remote Entity API. It's essentially the external communications component for accessing your remote data. (Part III)

Most of the code will be in PHP class files you'll want in a classes subdirectory (autoloaded by defining these in your .info file), but you'll still need some code in your main module file.
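
As a sketch, the relevant .info lines might look like this (the file names are hypothetical, matching the class names used later in this article):

name = Siteshortname remote entities
core = 7.x
dependencies[] = remote_entity
files[] = classes/SiteshortnameEvent.class.php
files[] = classes/clients_connection_our_rest.class.php
files[] = classes/OurRestRemoteSelectQuery.class.php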

We'll be adding only one new entity in this exercise, but the code is extensible enough to allow for more. Once one of these is set up, adding more is (in most cases) trivial.

Remote entity definition

Your basic remote entity definitions will exist in the Entity APIs section of your module file, Part I. Within the hook_entity_info() implementation, you'll see that different properties within the definition will be used by different layers, the three APIs.

For the following examples, let's assume we have a remote event data type.

<?php
/****************************************************************************
 ** Entity APIs
 ****************************************************************************/

/**
 * Implements hook_entity_info().
 *
 * @todo Add 'bundles' for different types of remote content.
 * @todo Add 'entity keys' => 'needs remote save' if remote saving required.
 * @todo Remove 'static cache' and 'field cache' settings after development.
 */
function siteshortname_entities_remote_entity_info() {
  $entities['siteshortname_entities_remote_event'] = array(

    // Core properties.
    'label' => t('Event'),
    'controller class' => 'RemoteEntityAPIDefaultController',
    'base table' => 'siteshortname_entities_remote_events',
    'uri callback' => 'entity_class_uri',
    'label callback' => 'remote_entity_entity_label',
    'fieldable' => FALSE,
    'entity keys' => array(
      'id' => 'eid',
      'label' => 'event_name',
    ),
    'view modes' => array(
      'full' => array(
        'label' => t('Full content'),
        'custom settings' => FALSE,
      ),
    ),
    'static cache' => FALSE,
    'field cache' => FALSE,

    // Entity API properties.
    'entity class' => 'SiteshortnameEvent',
    'module' => 'siteshortname_entities_remote',
    'metadata controller class' => 'RemoteEntityAPIDefaultMetadataController',
    'views controller class' => 'EntityDefaultViewsController',

    // Remote Entity API properties.
    'remote base table' => 'siteshortname_entities_remote_events',
    'remote entity keys' => array(
      'remote id' => 'event_id',
      'label' => 'event_name',
    ),
    'expiry' => array(
      // Number of seconds before a locally cached instance must be refreshed
      // from the remote source.
      'expiry time' => 600,
      // A boolean indicating whether or not to delete expired local entities
      // on cron.
      'purge' => FALSE,
    ),
  );

  // Get the property map data.
  $remote_properties = siteshortname_entities_remote_get_remote_properties();

  // Assign each map to its corresponding entity.
  foreach ($entities as $key => $einfo) {
    $entities[$key]['property map'] =
      drupal_map_assoc(array_keys($remote_properties[$key]));
  }

  // Return all of the entity information.
  return $entities;
}
?>
Notes
  1. Just like the entity type node, which is subdivided into content types (generically referred to as bundles in Drupal-speak), we can subdivide remote entities into their own bundles. In this case, we could have a "High-school event" bundle and a "College event" bundle that vary slightly, but instances of both would still be members of the entity type Event. We won't be setting this up here though.
  2. In this article, we won't be covering remote saving (only remote loading), but it is possible through the remote API.
  3. Make sure to adjust the cache settings properly once development is complete.
  4. Detailed documentation on the APIs is available for the Core entity API, the Entity API, and the Remote Entity API.
Access to remote properties

As we're not using the Field API to attach information to our entities, we need to do it with properties. The code below exposes the data we'll define shortly.

<?php
/**
 * Implements hook_entity_property_info_alter().
 *
 * This is needed to use wrappers to access the remote entity
 * data in the entity_data property of remote entities.
 *
 * @see: Page 107 of the Programming Drupal 7 Entities book.  The code below is
 *   a variation on it.
 * @todo: Remove whenever this gets added to the remote_entity module.
 */
function siteshortname_entities_remote_entity_property_info_alter(&$info) {

  // Set the entity types and get their properties.
  $entity_types = array(
    'siteshortname_entities_remote_event',
  );

  $remote_properties = siteshortname_entities_remote_get_remote_properties();

  // Assign the property data to each entity.
  foreach ($entity_types as $entity_type) {
    $properties = &$info[$entity_type]['properties'];
    $entity_data = &$properties['entity_data'];
    $pp = &$remote_properties[$entity_type];
    $entity_data['type'] = 'remote_entity_' . $entity_type;

    // Set the default getter callback for each property.
    foreach ($pp as $key => $pinfo) {
      $pp[$key]['getter callback'] = 'entity_property_verbatim_get';
    }

    // Assign the updated property info to the entity info.
    $entity_data['property info'] = $pp;
  }
}
?>
Remote property definition

This is where we define the field (or in this case property) information, the data attached to each entity, that we exposed above.

<?php
/**
 * Get remote property information for remote entities.
 *
 * @return
 *   An array of property information keyed by entity type.
 */
function siteshortname_entities_remote_get_remote_properties() {

  // Initialize a list of entity properties.
  $properties = array();

  // Define properties for the entity type.
  $properties['siteshortname_entities_remote_event'] = array(

    // Event information.
    'event_id' => array(
      'label' => 'Remote Event ID',
      'type' => 'integer',
      'description' => 'The remote attribute "id".',
      'views' => array(
        'filter' => 'siteshortname_entities_remote_views_handler_filter_event_id',
      ),
    ),
    'event_date' => array(
      'label' => 'Date',
      'type' => 'date',
      'description' => 'The remote attribute "date".',
      'views' => array(
        'filter' => 'siteshortname_entities_remote_views_handler_filter_event_date',
      ),
    ),
    'event_details' => array(
      'label' => 'Details',
      'type' => 'text',
      'description' => 'The remote attribute "details".',
    ),
  );

  // Return all of the defined property info.
  return $properties;
}
?>
Notes
  1. Try to remember the distinction between local and remote entity IDs. At the moment, we're only interested in remote properties, so we don't need to worry about local IDs just yet.
  2. Don't worry too much about the Views filters. These are Views filter handler classes. They're only necessary if you'd like custom filters for the respective properties.
Entity instances as Web pages

This starts the Core Hooks section of the module file, Part II. In this section, we're providing each remote data instance as a Web page just like standard local content within Drupal via nodes.

The hook_menu() implementation responds to hits to the event/EVENT_ID path, loads the object, themes all of the data, and then returns it for display as a page. We're assuming all of your HTML output will be in a template in the includes/siteshortname_entities_remote.theme.inc file in your module's directory.

<?php
/****************************************************************************
 ** Drupal Core
 ****************************************************************************/

/**
 * Implements hook_menu().
 */
function siteshortname_entities_remote_menu() {
  $items = array();

  $items['event/%siteshortname_entities_remote_event'] = array(
    'title' => 'Remote Event',
    'page callback' => 'siteshortname_entities_remote_event_view',
    'page arguments' => array(1),
    'access arguments' => array('access content'),
  );

  return $items;
}

/**
 * Menu autoloader wildcard for path 'event/REMOTE_ID'.
 *
 * @see hook_menu() documentation.
 * @param $remote_id
 *   The remote ID of the record to load.
 * @return
 *   The loaded object, or FALSE on failure.
 */
function siteshortname_entities_remote_event_load($remote_id) {
  return remote_entity_load_by_remote_id('siteshortname_entities_remote_event', $remote_id);
}

/**
 * Page callback for path 'event/%remote_id'.
 *
 * @param $event
 *   The auto-loaded object.
 * @return
 *   The themed output for the event page.
 */
function siteshortname_entities_remote_event_view($event) {
  $fullname = $event->name;
  drupal_set_title($fullname);
  $event_output = theme('siteshortname_entities_remote_event', array(
    'event' => $event,
  ));
  return $event_output;
}

/**
 * Implements hook_theme().
 */
function siteshortname_entities_remote_theme() {
  return array(
    'siteshortname_entities_remote_event' => array(
      'variables' => array('event' => NULL),
      'file' => 'includes/siteshortname_entities_remote.theme.inc',
    ),
  );
}
?>

There's one more thing to do here. In our hook_entity_info() implementation, we stated the following:

<?php
    'entity class' => 'SiteshortnameEvent',
?>

We could have used Entity here instead of SiteshortnameEvent, but we want a custom class so that we can override the URL path for these entities. So add the following class:

<?php
class SiteshortnameEvent extends Entity {
  /**
   * Override defaultUri().
   */
  protected function defaultUri() {
    return array('path' => 'event/' . $this->remote_id);
  }
}
?>
Web services integration

We're now on to Part III, setting up Web-service endpoints and associating remote resources with entities. This is done by implementing a few Web Service Clients hooks.

<?php
/****************************************************************************
 ** Web Service Clients
 ****************************************************************************/

/**
 * Implements hook_clients_connection_type_info().
 */
function siteshortname_entities_remote_clients_connection_type_info() {
  return array(
    'our_rest' => array(
      'label'  => t('REST Data Services'),
      'description' => t('Connects to our data service using REST endpoints.'),
      'tests' => array(
        'event_retrieve_raw' => 'SiteshortnameEntitiesRemoteConnectionTestEventRetrieveRaw',
      ),
      'interfaces' => array(
        'ClientsRemoteEntityInterface',
      ),
    ),
  );
}

/**
 * Implements hook_clients_default_connections().
 */
function siteshortname_entities_remote_clients_default_connections() {

  $connections['my_rest_connection'] = new clients_connection_our_rest(array(
    'endpoint' => 'https://data.example.com',
    'configuration' => array(
      'username' => '',
      'password' => '',
    ),
    'label' => 'Our REST Service',
    'type' => 'our_rest',
  ), 'clients_connection');

  return $connections;
}

/**
 * Implements hook_clients_default_resources().
 */
function siteshortname_entities_remote_clients_default_resources() {
  $resources['siteshortname_entities_remote_event'] = new clients_resource_remote_entity(array(
    'component' => 'siteshortname_entities_remote_event',
    'connection' => 'my_rest_connection',
    'label' => 'Resource for remote events',
    'type' => 'remote_entity',
  ), 'clients_resource');

  return $resources;
}
?>

In the first function, we're adding metadata for the connection. In the second one, we're setting the endpoint and its credentials. The third function is what ties our remote entity, defined earlier, with the remote resource. There's some information on this documentation page, but there's more in the README file.

Temporary local storage

We'll need to store the remote data in a local table as a non-authoritative cache. The frequency with which it gets refreshed is up to you, as described earlier in this article. We'll need one table per entity. The good news is that we don't need to worry about the details; this is handled by the Remote Entity API. It provides a function returning the default schema. If you want to do anything different here, you are welcome to define your own.

The argument provided in the call is used for the table description as "The base table for [whatever you provide]". This will go in your siteshortname_entities_remote.install file.

<?php
/**
 * Implementation of hook_schema().
 */
function siteshortname_entities_remote_schema() {
  $schema = array(
    'siteshortname_entities_remote_events' => remote_entity_schema_table('our remote event entity type'),
  );

  return $schema;
}
?>

If you don't actually want to save one or more of your remote entities locally (say because you have private data you'd rather not have stored on your publicly-accessible Web servers), you can alter this default behaviour by defining your own controller which overrides the save() method.

<?php
/**
 * Entity controller extending RemoteEntityAPIDefaultController
 *
 * For most of our cases the default controller is fine, but we can use
 * this one for entities we don't want stored locally.  Override the save
 * behaviour and do not keep a local cached copy.
 */
class SiteshortnameEntitiesRemoteNoLocalAPIController extends RemoteEntityAPIDefaultController {

  /**
   * Don't actually save anything.
   */
  public function save($entity, DatabaseTransaction $transaction = NULL) {
    $entity->eid = uniqid();
  }
}
?>
Implementing the remote connection class

Create a file for the connection class.

<?php
/**
 * @file
 * Contains the clients_connection_our_rest class.
 */

/**
 * Set up a client connection to our REST services.
 *
 *  @todo Make private functions private once development is done.
 */
class clients_connection_our_rest extends clients_connection_base
  implements ClientsConnectionAdminUIInterface, ClientsRemoteEntityInterface {

}
?>

We'll now divide the contents of said file into three (3) sections: ClientsRemoteEntityInterface implementations, clients_connection_base overrides, and local methods.

ClientsRemoteEntityInterface implementations

As you can see below, we've got three (3) methods here.

  • remote_entity_load() will load a remote entity with the provided remote ID.
  • entity_property_type_map() is supposedly required to map remote properties to local ones, but it wasn't clear to me how this gets used.
  • getRemoteEntityQuery() returns a query object, either a "select", "insert" or "update" based on whichever one was requested.
<?php
  /**************************************************************************
   * ClientsRemoteEntityInterface implementations.
   **************************************************************************/

  /**
   * Load a remote entity.
   *
   * @param $entity_type
   *   The entity type to load.
   * @param $id
   *   The (remote) ID of the entity.
   *
   * @return
   *  An entity object.
   */
  function remote_entity_load($entity_type, $id) {
    $query = $this->getRemoteEntityQuery('select');
    $query->base($entity_type);
    $query->entityCondition('entity_id', $id);
    $result = $query->execute();

    // There's only one. Same pattern as entity_load_single().
    return reset($result);
  }

  /**
   * Provide a map of remote property types to Drupal types.
   *
   * Roughly analogous to _entity_metadata_convert_schema_type().
   *
   * @return
   *   An array whose keys are remote property types as used as types for fields
   *   in hook_remote_entity_query_table_info(), and whose values are types
   *   recognized by the Entity Metadata API (as listed in the documentation for
   *   hook_entity_property_info()).
   *   If a remote property type is not listed here, it will be mapped to 'text'
   *   by default.
   */
  function entity_property_type_map() {
    return array(
      'EntityCollection' => 'list<string>',
    );
  }

  /**
   * Get a new RemoteEntityQuery object appropriate for the connection.
   *
   * @param $query_type
   *  (optional) The type of the query. Defaults to 'select'.
   *
   * @return
   *  A remote query object of the type appropriate to the query type.
   */
  function getRemoteEntityQuery($query_type = 'select') {
    switch ($query_type) {
      case 'select':
        return new OurRestRemoteSelectQuery($this);
      case 'insert':
        return new OurRestRemoteInsertQuery($this);
      case 'update':
        return new OurRestRemoteUpdateQuery($this);
    }
  }
?>
Parent overrides

The only method we need to worry about here is callMethodArray(). Basically, it sets up the remote call.

<?php
  /**************************************************************************
   * clients_connection_base overrides
   **************************************************************************/

  /**
   * Call a remote method with an array of parameters.
   *
   * This is intended for internal use from callMethod() and
   * clients_connection_call().
   * If you need to call a method on given connection object, use callMethod
   * which has a nicer form.
   *
   * Subclasses do not necessarily have to override this method if their
   * connection type does not make sense with this.
   *
   * @param $method
   *  The name of the remote method to call.
   * @param $method_params
   *  An array of parameters to passed to the remote method.
   *
   * @return
   *  Whatever is returned from the remote site.
   *
   * @throws Exception on error from the remote site.
   *  It's up to subclasses to implement this, as the test for an error and
   *  the way to get information about it varies according to service type.
   */
  function callMethodArray($method, $method_params = array()) {

    // Initialize so that an unrecognized method returns NULL.
    $results = NULL;

    switch ($method) {
      case 'makeRequest':

        // Set the parameters.
        $resource_path = $method_params[0];
        $http_method = $method_params[1];
        $data = isset($method_params[2]) ? $method_params[2] : array();

        // Make the request.
        $results = $this->makeRequest($resource_path, $http_method, $data);
        break;
    }

    return $results;
  }
?>
Local methods

We're assuming REST here, but you can use any protocol.

We have a makeRequest() method, which actually performs the remote call, and handleRestError() which deals with any errors which are returned.

<?php
  /**************************************************************************
   * Local methods
   **************************************************************************/

  /**
   * Make a REST request.
   *
   * Originally from clients_connection_drupal_services_rest_7->makeRequest().
   * Examples:
   * Retrieve an event:
   *  makeRequest('event?eventId=ID', 'GET');
   * Update a node:
   *  makeRequest('node/NID', 'POST', $data);
   *
   * @param $resource_path
   *  The path of the resource. Eg, 'node', 'node/1', etc.
   * @param $http_method
   *  The HTTP method. One of 'GET', 'POST', 'PUT', 'DELETE'. For an explanation
   *  of how the HTTP method affects the resource request, see the Services
   *  documentation at http://drupal.org/node/783254.
   * @param $data = array()
   *  (Optional) An array of data to pass to the request.
   * @param boolean $data_as_headers
   *   Data will be sent in the headers if this is set to TRUE.
   *
   * @return
   *  The data from the request response.
   *
   *  @todo Update the first two test classes to not assume a SimpleXMLElement.
   */
  function makeRequest($resource_path, $http_method, $data = array(), $data_as_headers = FALSE) {

    // Tap into this function's cache if there is one.
    $request_cache_map = &drupal_static(__FUNCTION__);

    // Set the options.
    $options = array(
      'headers' => $this->getHeaders(),  // Define if you need it.
      'method'  => $http_method,
      'data'    => $data,
    );

    // Build the full request URL from the connection's endpoint.
    $request_path = $this->endpoint . '/' . $resource_path;

    // Either get the data from the cache or send a request for it.
    if (isset($request_cache_map[$request_path])) {
      // Use the cached copy.
      $response = $request_cache_map[$request_path];
    } else {
      // Not cached yet so fire off the request.
      $response = drupal_http_request($request_path, $options);

      // And then cache to avoid duplicate calls within the page request.
      $request_cache_map[$request_path] = $response;
    }

    // Handle any errors and then return the response.
    $this->handleRestError($request_path, $response);
    return $response;
  }

  /**
   * Common helper for reacting to an error from a REST call.
   *
   * Originally from clients_connection_drupal_services_rest_7->handleRestError().
   * Gets the error from the response, logs the error message,
   * and throws an exception, which should be caught by the module making use
   * of the Clients connection API.
   *
   * @param $request
   *  The request path, used in the error report.
   * @param $response
   *  The REST response data, decoded.
   *
   * @throws Exception
   */
  function handleRestError($request, $response) {

    // Report and throw an error if we get anything unexpected.
    if (!in_array($response->code, array(200, 201, 202, 204, 404))) {

      // Report error to the logs.
      watchdog('clients', 'Error with REST request (@req). Error was code @code with error "@error" and message "@message".', array(
        '@req'      => $request,
        '@code'     => $response->code,
        '@error'    => $response->error,
        '@message'  => isset($response->status_message) ? $response->status_message : '(no message)',
      ), WATCHDOG_ERROR);

      // Throw an error with which callers must deal.
      throw new Exception(t("Clients connection error, got message '@message'.", array(
        '@message' => isset($response->status_message) ? $response->status_message : $response->error,
      )), $response->code);
    }
  }
?>
Implementing the remote query class

This is where the magic happens. We need a new class file, OurRestRemoteSelectQuery.class.php, that will assemble the select query and execute it based on any set conditions.

Class variables and constructor

First, let's define the class, its variables and its constructor. It's a subclass of the RemoteEntityQuery class. Most of the standard conditions would be added to the $conditions array, but conditions handled in a special way (say those dealing with metadata) can be set up as variables themselves. In the example below, the constructor sets the active user as it can affect which data is returned. You can, however, set whatever you need to initialize your subclass, or leave it out entirely.

<?php
/**
 * @file
 * Contains the OurRestRemoteSelectQuery class.
 */

/**
 * Select query for our remote data.
 *
 * @todo Make vars protected once no longer developing.
 */
class OurRestRemoteSelectQuery extends RemoteEntityQuery {

  /**
   * Determines whether the query is RetrieveMultiple or Retrieve.
   *
   * The query is Multiple by default, until an ID condition causes it to be
   * single.
   */
  public $retrieve_multiple = TRUE;

  /**
   * An array of conditions on the query. These are grouped by the table they
   * are on.
   */
  public $conditions = array();

  /**
   * The from date filter for event searches
   */
  public $from_date = NULL;

  /**
   * The to date filter for event searches
   */
  public $to_date = NULL;

  /**
   * The user id.
   */
  public $user_id = NULL;

  /**
   * Constructor to generically set up the user id condition if
   * there is a current user.
   *
   * @param $connection
   */
  function __construct($connection) {
    parent::__construct($connection);
    if (user_is_logged_in()) {
      global $user;
      $this->useridCondition($user->name);
    }
  }
}
?>
Setting conditions

We have three (3) methods which set conditions within the query. entityCondition() sets conditions affecting entities in general. (The only entity condition supported here is the entity ID.) propertyCondition() sets conditions related to properties specific to the type of data. For example, this could be a location filter for one or more events. Finally, we have useridCondition() which sets the query to act on behalf of a specific user. Here we simply record the current Drupal user.

<?php
  /**
   * Add a condition to the query.
   *
   * Originally based on the entityCondition() method in EntityFieldQuery, but
   * largely from USDARemoteSelectQuery (Programming Drupal 7 Entities) and
   * MSDynamicsSoapSelectQuery.
   *
   * @param $name
   *  The name of the entity property.
   * @param $value
   *  The value to match against.
   * @param $operator
   *  (optional) The comparison operator. NULL or '=' marks the query as a
   *  single-item retrieval.
   */
  function entityCondition($name, $value, $operator = NULL) {

    // We only support the entity ID for now.
    if ($name == 'entity_id') {

      // Get the remote field name of the entity ID.
      $field = $this->entity_info['remote entity keys']['remote id'];

      // Set the remote ID field to the passed value.
      $this->conditions[$this->remote_base][] = array(
        'field' => $field,
        'value' => $value,
        'operator' => $operator,
      );

      // Record that we'll only be retrieving a single item.
      if (is_null($operator) || ($operator == '=')) {
        $this->retrieve_multiple = FALSE;
      }
    }
    else {

      // Report an invalid entity condition.
      $this->throwException(
        'OURRESTREMOTESELECTQUERY_INVALID_ENTITY_CONDITION',
        'The query object can only accept the \'entity_id\' condition.'
      );
    }
  }

  /**
   * Add a condition to the query, using local property keys.
   *
   * Based on MSDynamicsSoapSelectQuery::propertyCondition().
   *
   * @param $property_name
   *  A local property, i.e. a key in the $entity_info 'property map' array.
   */
  function propertyCondition($property_name, $value, $operator = NULL) {

    // Make sure the entity base has been set up.
    if (!isset($this->entity_info)) {
      $this->throwException(
        'OURRESTREMOTESELECTQUERY_ENTITY_BASE_NOT_SET',
        'The query object was not set with an entity type.'
      );
    }

    // Make sure that the provided property is valid.
    if (!isset($this->entity_info['property map'][$property_name])) {
      $this->throwException(
        'OURRESTREMOTESELECTQUERY_INVALID_PROPERTY',
        'The query object cannot set a non-existent property.'
      );
    }

    // Adding a field condition (probably) automatically makes this a multiple.
    // TODO: figure this out for sure!
    $this->retrieve_multiple = TRUE;

    // Use the property map to determine the remote field name.
    $remote_field_name = $this->entity_info['property map'][$property_name];

    // Set the condition for use during execution.
    $this->conditions[$this->remote_base][] = array(
      'field' => $remote_field_name,
      'value' => $value,
      'operator' => $operator,
    );
  }

  /**
   * Add a user id condition to the query.
   *
   * @param $user_id
   *   The user whose events should be retrieved.
   */
  function useridCondition($user_id) {
    $this->user_id = $user_id;
  }
?>
Executing the remote query

The execute() method marshals all of the conditions, passes the built request to the connection's makeRequest() that we saw earlier, calls parseEventResponse() (which we'll investigate below) and then returns the list of remote entities that can now be used by Drupal.

Feel free to ignore the authentication code if it's not required for your implementation. I left it in as an extended example of how this could be done.

<?php
  /**
   * Run the query and return a result.
   *
   * @return
   *  Remote entity objects as retrieved from the remote connection.
   */
  function execute() {

    // If there are any validation errors, don't perform a search. Calling
    // form_set_error() without arguments returns any errors set so far.
    if (form_set_error()) {
      return array();
    }

    $querystring = array();

    $path = variable_get($this->base_entity_type . '_resource_name', '');

    // Iterate through all of the conditions and add them to the query.
    if (isset($this->conditions[$this->remote_base])) {
      foreach ($this->conditions[$this->remote_base] as $condition) {
        switch ($condition['field']) {
          case 'event_id':
            $querystring['eventId'] = $condition['value'];
            break;
          case 'login_id':
            $querystring['userId'] = $condition['value'];
            break;
        }
      }
    }

    // "From date" parameter.
    if (isset($this->from_date)) {
      $querystring['startDate'] = $this->from_date;
    }

    // "To date" parameter.
    if (isset($this->to_date)) {
      $querystring['endDate'] = $this->to_date;
    }

    // Add user id based filter if present.
    if (isset($this->user_id)) {
      $querystring['userId'] = $this->user_id;
    }

    // Assemble all of the query parameters.
    if (count($querystring)) {
      $path .= '?' . drupal_http_build_query($querystring);
    }

    // Make the request.
    try {
      $response = $this->connection->makeRequest($path, 'GET');
    }
    catch (Exception $e) {
      if ($e->getCode() == OUR_REST_LOGIN_REQUIRED_NO_SESSION) {
        drupal_set_message($e->getMessage());
        drupal_goto('user/login', array('query' => drupal_get_destination()));
      }
      elseif ($e->getCode() == OUR_REST_LOGIN_REQUIRED_TOKEN_EXPIRED) {

        // Log the user out.
        global $user;
        module_invoke_all('user_logout', $user);
        session_destroy();

        // Redirect to the login page.
        drupal_set_message($e->getMessage());
        drupal_goto('user/login', array('query' => drupal_get_destination()));
      }
      else {
        // Let callers handle anything we don't recognize.
        throw $e;
      }
    }

    // Unmarshal the response; default to an empty list for unknown types.
    $entities = array();
    switch ($this->base_entity_type) {
      case 'siteshortname_entities_remote_event':
        $entities = $this->parseEventResponse($response);
        break;
    }

    // Return the list of results.
    return $entities;
  }
?>
Unmarshalling the response data and returning it

Here, in the parseEventResponse method, we decode the response data (if there is any), and do any additional work required to get each entity's data into an object. They're all returned as a single list (array) of entity objects. If the response provides information on the format (XML, JSON, etc.), you can unmarshal the data differently based on what the server returned.

<?php
  /**
   * Helper for execute() which parses the JSON response for event entities.
   *
   * May also set the $total_record_count property on the query, if applicable.
   *
   * @param $response
   *  The JSON/XML/whatever response from the REST server.
   *
   * @return
   *  A list of entity objects, keyed numerically.
   *  An empty array is returned if the response contains no entities.
   *
   * @throws
   *  Exception if a fault is received when the REST call was made.
   */
  function parseEventResponse($response) {

    // Fetch the list of events.
    if ($response->code == 404) {
      // No data was returned so let's provide an empty list.
      $events = array();
    }
    else /* we have response data */ {

      // Do any unmarshalling required to convert the response data into a
      // PHP array; here we assume JSON.
      $events = json_decode($response->data, TRUE);

      // json_decode() returns NULL on malformed input, so guard the
      // foreach below with an empty list.
      if (!is_array($events)) {
        $events = array();
      }
    }

    // Initialize an empty list of entities for returning.
    $entities = array();

    // Iterate through each event.
    foreach ($events as $event) {
      $entities[] = (object) array(

        // Set event information.
        'event_id' => isset($event['id']) ? $event['id'] : NULL,
        'event_name' => isset($event['name']) ? $event['name'] : NULL,
        'event_date' => isset($event['date']) ? $event['date'] : NULL,
      );
    }

    // Return the newly-created list of entities.
    return $entities;
  }
?>
Error handling

We provide a helper method dealing with errors raised in other methods. It records the specific error message in the log and throws an exception based on the message and the code.

<?php
  /**
   * Throw an exception when there's a problem.
   *
   * @param string $code
   *   The error code.
   *
   * @param string $message
   *   A user-friendly message describing the problem.
   *
   * @throws Exception
   */
  function throwException($code, $message) {

    // Report error to the logs.
    watchdog('siteshortname_entities_remote', 'ERROR: OurRestRemoteSelectQuery: "@code", "@message".', array(
      '@code' => $code,
      '@message' => $message,
    ), WATCHDOG_ERROR);

    // Throw an error with which callers must deal.
    throw new Exception(t("OurRestRemoteSelectQuery error, got message '@message'.", array(
      '@message' => $message,
    )), $code);
  }
?>

Everything we've covered so far gets our remote data into Drupal. Below, we'll expose it to Views.

Views support

Basic set-up

At the beginning of this article, I stated that we required the EntityFieldQuery Views Backend module. This allows us to replace the default Views query back-end, a local SQL database, with one that supports querying entities fetchable through the Remote Entity API. Make sure to add it, efq_views, to your custom remote entity module as a dependency, as in the sketch below.
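
This amounts to a couple of lines in the module's .info file. A minimal sketch (the remote_entity machine name and the file name are assumptions; adjust for your project):

; In siteshortname_entities_remote.info:
name = Site-specific remote entities
core = 7.x
dependencies[] = remote_entity
dependencies[] = efq_views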

For the curious, the changes I made to EFQ Views Backend to add this support can be found in the issue Add support for remote entities.

I added official documentation for all of this to the Remote Entity API README (via Explain how to integrate remote querying through Views). As it may not be obvious, when creating a new view of your remote entities, make sure that the base entity is the EntityFieldQuery version, not simply the entity itself. When selecting the entity type on which to base the view, you should see each entity twice: the standard one (via the default query back-end) and the EFQ version.

As stated in the documentation, you need to add a buildFromEFQ() method to your RemoteEntityQuery subclass (which we went over in the previous section). We'll review why this is necessary and give an example next.

Converting from an EntityFieldQuery

As EFQ Views only builds EntityFieldQuery objects, we need to convert that type of query to an instance of our RemoteEntityQuery subclass. If EFQ Views stumbles upon a remote query instead of a local one, it will run the execute() method on one of these objects instead.

So we need to tell our subclass how to generate an instance of itself when provided with an EntityFieldQuery object. The method below handles the conversion, which EFQ Views calls when necessary.

<?php
  /**
   * Build the query from an EntityFieldQuery object.
   *
   * To have our query work with Views using the EntityFieldQuery Views module,
   * which assumes EntityFieldQuery query objects, it's necessary to convert
   * from the EFQ so that we may execute this one instead.
   *
   * @param $efq
   *   The built-up EntityFieldQuery object.
   *
   * @return
   *   The current object.  Helpful for chaining methods.
   */
  function buildFromEFQ($efq) {

    // Copy all of the conditions.
    foreach ($efq->propertyConditions as $condition) {

      // Handle various conditions in different ways.
      switch ($condition['column']) {

        // Get the from date.
        case 'from_date':
          $from_date = $condition['value'];
          // Convert the date to the format expected by the REST service.
          // format() can return FALSE in some cases, so check the result.
          $result = $from_date->format('Y/m/d');
          if ($result) {
            $this->from_date = $result;
          }
          break;

        // Get the to date.
        case 'to_date':
          $to_date = $condition['value'];
          $result = $to_date->format('Y/m/d');
          if ($result) {
            $this->to_date = $result;
          }
          break;

        // Get the user ID.
        case 'user_id':
          $this->user_id = $condition['value'];
          break;

        default:
          $this->conditions[$this->remote_base][] = array(
            'field' => $condition['column'],
            'value' => $condition['value'],
            'operator' => isset($condition['operator']) ? $condition['operator'] : NULL,
          );
          break;
      }
    }

    return $this;
  }
?>

That should be it! You'll now need to spend some time (if you haven't already) getting everything connected as above to fit your specific situation. If you can get these details sorted, you'll then be ready to go.

Alternatives

At the time of this writing, there appears to be only one alternative to the Remote Entity API (not including custom architectures). It's the Web Service Data suite. The main difference between the modules is that Web Service Data doesn't store a local cache of remote data; the data is always passed through directly.

If this more closely matches what you'd like to do, be aware that there is currently no EntityFieldQuery support:

Support for EntityFieldQuery (coming soon) will allow developers to make entity field queries with web service data.

This is very clearly stated on the main project page, but I wasn't able to find an issue in the queue tracking progress. So if you choose this method, you may have to add EFQ support yourself, or you may not be able to use Views with your remote entities.

References

This article, Integrating remote data into Drupal 7 and exposing it to Views, appeared first on the Colan Schwartz Consulting Services blog.

Categories: Elsewhere

DrupalCon News: Making website magic with the DrupalCon site building track

Planet Drupal - Mon, 16/02/2015 - 19:45

In honor of this year’s DrupalCon in Tinseltown, we invite you to indulge in a bit of Drupal movie magic.

Imagine the scene…

NARRATOR
You are about to enter another dimension, a dimension not only of configuration and security but of UI. A journey into a wondrous land of complex sites without custom development. Next stop, the Drupal Zone!

THE SCENE
INT. ACME, INC. MEETING ROOM - DAY

FADE IN

Categories: Elsewhere

Chromatic: Atomic Drupal Development: Building Pieces Before Pages

Planet Drupal - Mon, 16/02/2015 - 18:12

Many designers are praising the benefits of Atomic Design. Rather than designing pages, Atomic Design focuses on designing systems of individual, reusable components. Designers aren’t – or at least shouldn’t be – the only ones thinking this way. From content strategy to QA, the entire team must be on the same atomic page.

Development is one area of a project that stands to benefit the most from this change in thought. Organizing a codebase by individual components keeps developers out of each other’s hair, reducing the code and effort overlap that often occurs when building by page or section. It also makes the codebase much easier to understand and maintain. Developers will know where to find code and how to fix, alter, or extend it, regardless of the original author. After enforcing coding standards, only git’s history will know who wrote what. This all saves time and money.

Because there are many ways to do anything in Drupal, building every component with the same approach is crucial. In the Drupal world, this approach is known as “the Drupal way”.

Building a component the Drupal way

Individual blocks, panel panes, or other UI elements are examples of a component in Drupal. They are placed into regions within layouts to build pages. Other pages may use the same component in the same or different regions. A given component may vary across pages, but the design and intended functionality are similar. A simple search form is a good example, but components can be much more complex.

Design deliverables often arrive as complete pages. If the designers haven’t already, identify the components that each page consists of. Break up the page’s layout into regions and those regions into components. Determine which components live on more than one page and if they vary between them. It also helps to identify different components that share design or functionality with others. It’s important to recognize early if they will be sharing code.

Before writing a line of code, determine where in the codebase the component will live. Organize custom modules by content types or sections and add relevant components to the same modules. A module exported with Features should be treated no differently than one created by hand; don’t be afraid to add custom code to them (please do). The end goal is to have all back-end and (most) front-end code for a given component living in the same module.

Warning: This article is about to move fast and cover more ground than it should. It will move from back-end to front-end. There are many wonderful resources about each topic covered below, so they will be linked to rather than recreated. This will instead provide a high level overview of how they fit together and will highlight the most important pieces.

Component containers and placement

The most common container for a custom component is a block, created with a series of hooks. Contributed modules like Context can help place them on the page. More complex projects may choose to build pages with the Panels module. For pages built with Panels, custom panel page plugins are a component’s container of choice.
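
As a rough sketch, a block container boils down to a pair of hooks (the module and component names here are hypothetical):

<?php
/**
 * Implements hook_block_info().
 */
function mymodule_block_info() {
  $blocks['example_search'] = array(
    'info' => t('Example search component'),
    'cache' => DRUPAL_CACHE_GLOBAL,
  );
  return $blocks;
}

/**
 * Implements hook_block_view().
 */
function mymodule_block_view($delta = '') {
  $block = array();
  if ($delta == 'example_search') {
    $block['subject'] = t('Search');
    // Any render array or markup can serve as the block content.
    $block['content'] = drupal_get_form('mymodule_example_search_form');
  }
  return $block;
}
?>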

The decision between blocks and Context, Panels, or another approach is important to make early in the project. It is also important to stick with the same approach for every component. This article will focus less on this decision and more on how to construct the markup within the container of choice.

View modes and entity_view()

If the component displays information from a node or another type of entity, render it with a view mode. View modes can render different information from the same entity in different ways. Among other benefits, this helps display content in similar ways among different components.

Create a view mode with hook_entity_info_alter() or with the Entity view modes contributed module. The latter also provides template suggestions for each entity type in each view mode. Render an individual entity in a view mode inside a component using entity_view() (you'll need the Entity API module) or node_view(). Alter the entity's information as needed using a preprocess function and adjust the markup in a template; those pieces will be discussed later.
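
A minimal sketch of the code approach (the view mode and module names are hypothetical):

<?php
/**
 * Implements hook_entity_info_alter().
 */
function mymodule_entity_info_alter(&$entity_info) {
  // Register a custom "teaser_card" view mode for nodes.
  $entity_info['node']['view modes']['teaser_card'] = array(
    'label' => t('Teaser card'),
    'custom settings' => TRUE,
  );
}

// Elsewhere in the component, render a loaded node in that view mode
// using the Entity API module's entity_view().
$node = node_load($nid);
$build = entity_view('node', array($node->nid => $node), 'teaser_card');
?>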

If a component lists more than one entity or node, build a view with the Views contributed module. It is best if the view renders content with view modes using the Format options. Create Views components with the Block (or Content pane for Panels) display(s). Views also provides template suggestions to further customize the markup of the component. The exported view should live in the same module as the code that customizes it. EntityFieldQuery might be worth considering as an alternative to using Views.

hook_theme() and render arrays

If the component does not display information from an entity, such as a UI element, build it with hook_theme(). Drupal core and contributed modules use hook_theme() to build elements like links and item lists. This allows other modules to override and alter the information used to render the element. Default theme functions and templates can also be overridden to alter their markup.

Choose a name for the element that will identify it throughout the codebase. Outline what information the element will need to build the desired output. Use these decisions to define it using hook_theme(). Again, keep this hook in the same custom module as the rest of the code for the component.

To render a hook_theme() implementation, construct a render array. This array should contain the name of the implementation to render and any data it needs as input. Build and return this array to render the element as markup. The theme() function is a common alternative to render arrays, but it has been deprecated in Drupal 8. There are advantages to using render arrays instead, as explained in Render Arrays in Drupal 7.
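
Sketched out, with a made-up element name and variables for illustration:

<?php
/**
 * Implements hook_theme().
 */
function mymodule_theme($existing, $type, $theme, $path) {
  return array(
    'mymodule_fancy_badge' => array(
      // The information the element needs to build its output.
      'variables' => array('label' => NULL, 'count' => 0),
      // Maps to mymodule-fancy-badge.tpl.php in the module's templates dir.
      'template' => 'mymodule-fancy-badge',
      'path' => drupal_get_path('module', 'mymodule') . '/templates',
    ),
  );
}

// Elsewhere, build and return a render array rather than calling theme().
$build = array(
  '#theme' => 'mymodule_fancy_badge',
  '#label' => t('Unread messages'),
  '#count' => 5,
);
?>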

Custom templates

Drupal renders all markup through templates and theme functions. Use templates to construct markup instead of theme functions. Doing so makes it easier for front-end developers to build and alter the markup they need.

Templates place variables provided by entity_view(), render arrays, and preprocess functions into the markup. They should live in the “templates” directory of the same module as the rest of the component’s code. The name of a template will come from theme hook suggestions. Underscores get replaced with dashes. Tell hook_theme() about the template for each element it defines.

Templates should contain almost no logic, and they should not have to dig deep into Drupal's objects or arrays. At most, use an if statement to check whether a variable has a value before printing it along with its markup, or a foreach to loop through an array of data. Further manipulation or function calls should happen in a preprocess function.
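
For instance, the template for the hypothetical badge element sketched above might contain nothing more than this:

<?php
// templates/mymodule-fancy-badge.tpl.php
?>
<span class="fancy-badge">
  <?php if ($label): ?>
    <span class="fancy-badge__label"><?php print $label; ?></span>
  <?php endif; ?>
  <span class="fancy-badge__count"><?php print $count; ?></span>
</span>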

Preprocess functions

Use preprocess functions to extract and manipulate data such as field values and prepare them for the template. They are the middleman between the input and the output.

Preprocess functions follow the naming convention of hook_theme() implementations. Common base themes often use Drupal core's preprocess functions, such as hook_preprocess_node(), in their template.php file. Keeping all preprocess functions in one file will create a mess in no time. Instead, place preprocess functions in the modules that define the parts they're working with. This might be the custom feature that contains the exported content type.
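
Continuing the hypothetical badge example, the preprocess function sits in the same module as its hook_theme() definition:

<?php
/**
 * Preprocesses variables for mymodule-fancy-badge.tpl.php.
 */
function mymodule_preprocess_mymodule_fancy_badge(&$variables) {
  // Sanitize here so the template can print the value directly.
  $variables['label'] = check_plain($variables['label']);
  // Any further manipulation lives here, not in the template.
  $variables['count'] = min((int) $variables['count'], 99);
}
?>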

jQuery/JavaScript files

Create a separate JavaScript file for each component that needs custom JavaScript. Place it in a “js” directory within the module and name the file after the component. Be sure to use the Drupal behavior system and name the behavior after the module and component.

Add the JavaScript file to each page the component will appear on. If the component appears on most pages, it might be best to just add it to every page; this will cause fewer HTTP requests with JavaScript aggregation enabled. The best way to do so is with hook_page_build(). JavaScript files can also be attached to entities rendered through view modes within hook_entity_view(). The best way to add JavaScript to a hook_theme() implementation is by attaching it to the render array.
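
A sketch of two of these approaches (file names are hypothetical):

<?php
/**
 * Implements hook_page_build().
 */
function mymodule_page_build(&$page) {
  // Added on every page so aggregation can bundle it into one request.
  drupal_add_js(drupal_get_path('module', 'mymodule') . '/js/fancy-badge.js');
}

// Or attach the file to a specific component's render array:
$build['#attached']['js'][] = drupal_get_path('module', 'mymodule') . '/js/fancy-badge.js';
?>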

Sass components

When using a CSS preprocessor like Sass, there isn’t much of a penalty to dividing the CSS into many files. Create a new Sass partial for each component and give the file the same name as the component. Keep them in a “components” directory within the Sass folder structure. Unlike all other code mentioned in this article, it is often best to keep all CSS for these components within the theme. Only keep CSS that supports the core behavior of the component in the module. Consider what styles should persist if it were a contributed module used with other themes.

In the component’s template, base the class names off of the component’s name as well. This makes it easy to find the component’s Sass after inspecting the element in the source. Follow the popular BEM / SMACSS / OOCSS methodologies from there.

Coming up for air

As mentioned, there are often endless ways to complete the same task in Drupal. This makes learning best practices difficult and “the Drupal way” will vary in the minds of different experts. The best way to grasp what works best is to start building something with other people and learn from mistakes. The approach outlined in this article aligns with common practice, but mileage will vary per project.

Regardless of approach, focusing on components before pages will only become more important. Drupal content is already displayed on everything from watches to car dashboards. The web is not made of pages anymore. Designers have begun to embrace this and Drupal developers should too; everyone will benefit!

Categories: Elsewhere

Makak Media: Taking BackDrop For A Test Drive

Planet Drupal - Mon, 16/02/2015 - 17:41

So the first BackDrop release is out there in the wild ready for a quick test drive! We're excited to see where this fork of Drupal 7 leads as we believe it to be a good complementary system to Drupal with a long term future.

First off, we checked under the hood to get things configured and found the settings.php file in the root folder, which makes for easier access. Also, all those txt files have been removed, including CHANGELOG.txt, which we remove by default anyway, as it supplies useful info to any hacker out there!

Naturally the installation process is very similar to Drupal but with a few less settings giving it a simpler feel.

Upon installation you're presented with a responsive admin menu with a slightly different structure to the standard Drupal menu. Responsiveness out of the box is great and the new menu again has a simpler look.


Categories: Elsewhere

DrupalDare: G-WAN as a static Drupal file server

Planet Drupal - Mon, 16/02/2015 - 17:26
So now that we have concluded that it's easy to set up distribution of files on a separate subdomain, what about using a completely different web server (or in this case an application server)? Will it blend?
Categories: Elsewhere

Acquia: Development based on Drupal's Fundamental Particles - Brad Czerniak

Planet Drupal - Mon, 16/02/2015 - 13:34

Presenter Brad Czerniak caught my eye with a blog post entitled "10 things I learned using Drupal at a hackathon," based on his experiences taking part in the #hackDPL (Detroit Public Library) competitive hackathon. In our podcast interview we talk about that – before moving on to Brad's session about the Drupal development best practices he and his team use at Commercial Progression in Michigan.

Categories: Elsewhere

Annertech: Enlightening - The Dark Art of Solr Search with Drupal

Planet Drupal - Mon, 16/02/2015 - 12:41
Why this blog post?

Often when I add a search function to a Drupal website using Apache Solr, I'm amazed at how complex some people think this is. Many developers/site builders are of the belief that this is some kind of very-hard-to-master black art. They could not be more wrong.

So what I want to contribute back to the Drupal community is an understanding of how Solr works, why/how it differs from Drupal Core Search module, and the benefits Solr has over core search.

Categories: Elsewhere

lakshminp.com: The Drupal 8 plugin system - part 2

Planet Drupal - Mon, 16/02/2015 - 11:38

We saw in part 1 how plugins help us write reusable functionality in Drupal 8. Plugins share a lot of concepts with services, such as:

  1. Limited scope. Do one thing and do it right.
  2. Swappable PHP classes.

Which raises the question: how exactly are plugins different from services?
If your interface expects implementations to yield the same behaviour, then go for services. Otherwise, you should write it as a plugin. This needs some explaining.
For instance, if you are creating an interface to store data in a persistent system, like MySQL or MongoDB, then it would be implemented as a service. The save() function in your interface will be implemented differently by each service, but the behaviour will be the same, i.e., it takes data as input parameters, stores it in the respective data store and returns a success message.

On the other hand, if you are creating an image effect, it needs to be a plugin. (It already is. Check image effects as plugins). The core concept of image plugins is to take in an image, apply an effect on it and return the modified image. Different image effects yield different behaviours. An image scaling effect might not produce the same behaviour as that of an image rotating effect. Hence, each of these effects need to be implemented as a plugin. If any module wants to create a new image effect, it needs to write a new plugin by extending the ImageEffectBase class.
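
As a rough sketch modeled on core's desaturate effect (the module name and plugin ID are hypothetical):

<?php

namespace Drupal\mymodule\Plugin\ImageEffect;

use Drupal\Core\Image\ImageInterface;
use Drupal\image\ImageEffectBase;

/**
 * Desaturates an image.
 *
 * @ImageEffect(
 *   id = "mymodule_desaturate",
 *   label = @Translation("Desaturate (example)")
 * )
 */
class ExampleDesaturateEffect extends ImageEffectBase {

  /**
   * {@inheritdoc}
   */
  public function applyEffect(ImageInterface $image) {
    return $image->desaturate();
  }

}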

Plugins used in core

Let's take a look at the major plugin types provided by Drupal 8 core. An example plugin of each plugin types will be the subjects of future blog posts.

  1. Blocks
    Drupal 8 finally got blocks right. Custom blocks can be created from the BlockBase class.

  2. Field Types, Field Widgets and Field Formatters
    Check part 1 for how this is done in Drupal 8.

  3. Actions
    Drupal 8 allows module developers to perform custom actions by extending the ActionBase class. Blocking a user, unpublishing a comment, making a node sticky, etc. are examples of actions.

  4. Image Effects
    Image effects are plugins which manipulate an image. You can create new image effects by extending ImageEffectBase. Examples of core image effects are CropImageEffect and ScaleImageEffect.

  5. Input filters
    User-submitted input is passed through a series of filters before it is persisted in the database or output in HTML. These filters are implemented as plugins by extending the FilterBase class.

  6. Entity Types
    In Drupal parlance, entities are objects that persist content or configuration in the database. Each entity is an instance of an entity type. New entity types can be defined using the annotation discovery mechanism.

  7. Views related plugins
    A large collection of different plugin types is employed by Views during the querying, building and rendering stages.

Plugin Discovery

Plugin discovery is the process by which Drupal finds plugins written in your module. Drupal 8 has the following plugin discovery mechanisms:

  1. Annotation based. Plugin classes are annotated and have a directory structure which follows the PSR-4 notation.

  2. Hooks. Plugin modules need to implement a hook to tell the manager about their plugins.

  3. YAML files. Plugins are listed in YAML files. Drupal Core uses this method for discovering local tasks and local actions.

  4. Static. Plugin classes are registered within the plugin manager class itself. This is useful if other modules should not create new plugins of this type.

Annotation-based discovery is the most popular plugin discovery method in use; a minimal sketch follows. We will look briefly at how to create a new plugin type using this method in the next part.
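
To make that concrete, here is a minimal block plugin sketch (the module name and plugin ID are hypothetical). The class lives at src/Plugin/Block/ per PSR-4, and the @Block annotation is what the plugin manager discovers:

<?php

namespace Drupal\mymodule\Plugin\Block;

use Drupal\Core\Block\BlockBase;

/**
 * A minimal example block.
 *
 * @Block(
 *   id = "mymodule_hello",
 *   admin_label = @Translation("Hello block (example)")
 * )
 */
class HelloBlock extends BlockBase {

  /**
   * {@inheritdoc}
   */
  public function build() {
    return array('#markup' => $this->t('Hello from a plugin.'));
  }

}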

Categories: Elsewhere

DrupalDare: CDN, Cookieless Requests and Subdomains

Planet Drupal - Mon, 16/02/2015 - 10:52
In this text I will go into the topic of using a separate domain for serving your static files to avoid the client sending unnecessary cookies in the headers, and why it may or may not be a solution to speed up your website.
Categories: Elsewhere

Drupal core announcements: Drupal core security release window on Wednesday, February 18

Planet Drupal - Mon, 16/02/2015 - 04:37
Start: 2015-02-18 (All day) America/New_York
Online meeting (e.g. IRC meeting)
Organizers: David_Rothstein

The monthly security release window for Drupal 6 and Drupal 7 core will take place on Wednesday, February 18.

This does not mean that a Drupal core security release will necessarily take place on that date for either the Drupal 6 or Drupal 7 branches, only that you should prepare to look out for one (and be ready to update your Drupal sites in the event that the Drupal security team decides to make a release).

There will be no bug fix release on this date; the next window for a Drupal core bug fix release is Wednesday, March 4.

For more information on Drupal core release windows, see the documentation on release timing and security releases, and the discussion that led to this policy being implemented.

Categories: Elsewhere

Vincent Sanders: To a child, often the box a toy came in is more appealing than the toy itself.

Planet Debian - Mon, 16/02/2015 - 01:31
I think Allen Klein might not have been referring to me when he said that, but I do seem to like creating boxes for my toys.

My Lenovo laptop has an Ultrabay; these are a way to easily swap optical and hard drives. They allow me to carry around additional storage and, providing I remembered to pack the drive, access optical media.
Over time I have acquired several additional hard drives housed in Ultrabay caddies. Generally I only need to access one at a time but increasingly I want to have more than one available.

Lenovo used to sell docking stations with multiple Ultrabays, but since Series 3 was introduced this is no longer the case, as the docks have been reduced to port replicators.

One solution is to buy a SATA to USB convertor which lets you use the drive externally. However, once you have more than one drive this becomes somewhat untidy, not to mention that all those unhoused drives on your desk become something of a hazard.

Recently, after another close call, I decided what I needed was a proper external enclosure to house all my drives. After some extensive googling I found nothing suitable ready to buy. Most normal people would give up at this point; I appear to be an abnormal person, so I got the CAD package out.

A few hours of design and a load of laser cutting later I came up with a four bay enclosure that now houses all my Ultrabay caddies.

The design was slightly evolved to accommodate the features of some older caddies and to allow a pencil to be used to eject the drives (I put a square hole in the back).

The completed unit uses about £10 of plastic and takes 30 minutes to laser cut.

The only issue with the enclosure as manufactured is that Makespace ran out of black plastic stock and I had to use transparent to finish, so it is not in classic black as Lenovo intended.

As usual all the design files are publicly available from my design repo.
Categories: Elsewhere
