Elsewhere

Wim Leers: Eaton & Urbina: structured, intelligent and adaptive content

Planet Drupal - Sun, 21/06/2015 - 20:08

While walking, I started listening to Jeff Eaton’s Insert Content Here podcast episode 25: Noz Urbina Explains Adaptive Content. People must’ve looked strangely at me because I was smiling and nodding — while walking :) Thanks Jeff & Noz!

Jeff Eaton explained how the web world looks at and defines the term WYSIWYG. Turns out that in the semi-structured, non-web world that Noz comes from, WYSIWYG has a totally different interpretation. And they ended up renaming it to what it really was: WYSIWOO.

Jeff also asked Noz what “adaptive content” is exactly. Adaptive content is a more specialized/advanced form of structured content, and in fact “structured content”, “intelligent content” and “adaptive content” form a hierarchy:

  • structured content
    • intelligent content
      • adaptive content

In other words, adaptive content is also intelligent and structured; intelligent content is also structured, but not all structured content is also intelligent or adaptive, nor is all intelligent content also adaptive.

Basically, intelligent content better captures the precise semantics (e.g. not a section, but a product description). Adaptive content is about using those semantics, plus additional metadata (“hints”) that content editors specify, to adapt the content to the context it is being viewed in. E.g. different messaging for authenticated versus anonymous users, or different nuances depending on how the visitor ended up on the current page (in other words: personalization).

Noz gave an excellent example of how adaptive content can be put to good use: he described how he had arrived in Utrecht in the Netherlands after a long flight, “checked in” to Utrecht on Facebook, and then Facebook suggested to him 3 open restaurants, including cuisine type and walking distance relative to his current position. He felt like thanking Facebook for these ads — which obviously is a rare thing, to be grateful for ads!

Finally, a wonderful quote from Noz Urbina that captures the essence of content modeling:

How descriptive do we make it without making it restrictive?

If it isn’t clear by now — go listen to that podcast! It’s well worth the 38 minutes of listening. I only captured a few of the interesting points, to get more people interested and excited.1

What about adaptive & intelligent content in Drupal 8?

First, see my closely related article Drupal 8: best authoring experience for structured content?.

Second, while listening, I thought of many ways that Drupal 8 is well-prepared for intelligent & adaptive content. (Drupal already does structured content by means of Field API and the HTML tag restrictions in the body field.) Implementing intelligent & adaptive content will surely require experimentation, and different sites/use cases will prefer different solutions, but:

  • An intelligent_content module for Drupal 8: allow site builders/content strategists to define custom HTML tags (e.g. <product_description>) to capture site-specific semantics. A CKEditor Widget could hugely simplify the authoring experience for creating intelligent content, by showing a specific HTML representation while editing (WYSIWOO!), thanks to HTML (Twig) templates associated with those custom HTML tags.
  • An adaptive_content module for Drupal 8: a text filter that allows any tag to be wrapped in an <adaptive_content> tag, which specifies the context in which the wrapped content should be shown/hidden.
  • The latter leads to cacheability problems, because the same content may be rendered in a multitude of different ways. But thanks to cache contexts in Drupal 8, and the fact that text filters can specify cache contexts, adaptive content that is still cacheable is perfectly possible. (This is in fact exactly what cache contexts were intended for!) A minimal sketch of such a filter follows this list.
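To make this concrete, here is a minimal, hypothetical sketch of what the text filter of such an adaptive_content module could look like in Drupal 8. The module name, plugin ID and the audience attribute are assumptions made for illustration, not an existing API; only the filter plugin base classes and the user.roles:authenticated cache context are real Drupal 8 pieces.

<?php

namespace Drupal\adaptive_content\Plugin\Filter;

use Drupal\filter\FilterProcessResult;
use Drupal\filter\Plugin\FilterBase;

/**
 * Shows or hides <adaptive_content> wrapped markup based on the visitor.
 *
 * @Filter(
 *   id = "filter_adaptive_content",
 *   title = @Translation("Adaptive content"),
 *   type = Drupal\filter\Plugin\FilterInterface::TYPE_TRANSFORM_IRREVERSIBLE
 * )
 */
class AdaptiveContentFilter extends FilterBase {

  /**
   * {@inheritdoc}
   */
  public function process($text, $langcode) {
    $is_authenticated = \Drupal::currentUser()->isAuthenticated();

    // Keep the wrapped markup only when its "audience" hint matches the
    // current visitor; otherwise drop it entirely.
    $text = preg_replace_callback(
      '@<adaptive_content audience="(authenticated|anonymous)">(.*?)</adaptive_content>@s',
      function ($matches) use ($is_authenticated) {
        $wanted = $is_authenticated ? 'authenticated' : 'anonymous';
        return $matches[1] === $wanted ? $matches[2] : '';
      },
      $text
    );

    $result = new FilterProcessResult($text);
    // Declare that the output varies by whether the visitor is logged in, so
    // the render system can keep one cached variant per audience instead of
    // disabling caching altogether.
    $result->addCacheContexts(['user.roles:authenticated']);
    return $result;
  }

}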

I think that those two modules would be very interesting, useful additions to the Drupal ecosystem. If you are working on this, please let me know — I would love to help!

  1. That’s right, this is basically voluntary marketing for Jeff Eaton — you’re welcome, Jeff! 

  • Drupal
  • WYSIWYG
  • structured content
Categories: Elsewhere

Enrico Zini: debtags-rewrite-python3

Planet Debian - Sun, 21/06/2015 - 18:04
debtags rewritten in python3

In my long quest towards closing #540218, I have uploaded a new libept to experimental. Then I tried to build debtags on a sid+experimental chroot and the result runs but has libc's free() print existential warnings about whatevers.

At a quick glance, there are now things around like a new libapt, gcc 5 with ABI changes, and who knows what else. I figured out how much time it'd take me to debug something like that, and used that time to rewrite debtags in python3 instead. It took 8 hours: 5 of pleasant programming and the usual tax of another 3 of utter frustration packaging the results. I guess I came out ahead, compared to the risk of spending an unspecified number of hours of pure frustration.

So from now on debtags is going to be a pure python3 package, with dependencies on only python3-apt and python3-debian. 700 lines of python instead of several C++ files built on 4 layers of libraries. Hopefully, this is the last of the big headaches I get from hacking on this package. Also, one less package using libept.

Categories: Elsewhere

Blue Drop Shop: Camp Record Beta Test Four: DrupalCamp STL 2015

Planet Drupal - Sun, 21/06/2015 - 17:58

Following a successful MidCamp and with some new ideas on how to improve the kit, I was eager to hit the road for more testing. Problem is, I'm a freelancer with a limited budget, and getting to camps comes out of my own pocket. On a lark, I tweeted the following:

Planning a #drupalcamp and need your sessions recorded? Sponsor me & I will record your sessions. Ping me! #drupal /cc @drupalstl @tcdrupal

— Kevin Thull (@kevinjthull) April 8, 2015

To my delight, both Twin Cities and St. Louis camps took me up on my offer. Of course, the stakes are even higher now, because it's no longer my own money on the line.

But I'm also feeling more confident about this solution and improving the process with each camp. Connecting to non-HDMI-capable laptops remains the biggest challenge overall. I've added in a couple of (full) DisplayPort to HDMI converters and even successfully tested a new VGA to HDMI converter that got my ancient Sony VAIO to display on my home flatscreen:

The new VGA to HDMI converter shows promise. My ancient Sony Vaio WinXP laptop just connected! #drupalcamp pic.twitter.com/PXb0kBvsCl

— Kevin Thull (@kevinjthull) June 16, 2015

And at DrupalCamp STL I finally got the 100% success rate that I've been shooting for! Three sessions needed fixing in post, but overall, this camp went very smoothly. A huge bonus was the fact that the two rooms were next to each other, minimizing the distance to cover when trying to coordinate laptop hookups and verify timely starts and stops of the records.

Twin Cities is next week, with a much more challenging schedule: five concurrent sessions across two buildings and multiple floors. My Fitbit will likely hit a new high. That, and I need to finally get down to some documentation and podium signage. It's time to share the knowledge I've gained and get more hands and minds involved.

And now for the learnings from DCSTL:

  • swapping thumb drives throughout the day means recordings can be posted during camp
  • well-timed presenter starts/stops means no trimming, which means more recordings can be posted during camp
  • one room had screen flicker and setting the PVR resolution to 1080 helped (typically, the resolution needs to come down to 720 for this, as well as fixing color shifts)
  • having extra SD cards means bad audio can be fixed during down times, which means more recordings can be posted during camp
  • power strips at the podium shouldn't be assumed, and the powered USB hub and voice recorder both have short plugs
  • never plug the powered USB hub into the laptop, because that can kill your recording if the resolution changes or the laptop goes to sleep
  • taping down individual components means less cord chaos throughout the day
  • access to ethernet port with a reasonably large pipe going up will get videos posted faster
Categories: Elsewhere

Steve Kemp: We're all about storing objects

Planet Debian - Sun, 21/06/2015 - 02:00

Recently I've been experimenting with camlistore, which is yet another object storage system.

Camlistore gains immediate points because it is written in Go, and is a project initiated by Brad Fitzpatrick, the creator of Perlbal, memcached, and Livejournal of course.

Camlistore is designed exactly how I'd like to see an object-storage system designed - each server allows you to:

  • Upload a chunk of data, getting an ID in return.
  • Download a chunk of data, by ID.
  • Iterate over all available IDs.

It should be noted that more is possible (there's a pretty web UI, for example), but I'm simplifying. Do your own homework :)

With those primitives you can allow a client-library to upload a file once, then in the background a bunch of dumb servers can decide amongst themselves "Hey I have data with ID:33333 - Do you?". If nobody else does they can upload a second copy.

In short this kind of system allows the replication to be decoupled from the storage. The obvious risk is obvious though: if you upload a file the chunks might live on a host that dies 20 minutes later, just before the content was replicated. That risk is minimal, but valid.

There is also the risk that sudden rashes of uploads leave the system consuming all the internal bandwidth constantly comparing chunk-IDs, trying to see if data that has already been copied numerous times in the past needs replicating again, or trying to play "catch-up" if the new content is larger than the replica bandwidth. I guess it should be possible to detect those conditions, but they're things to be concerned about.

Anyway, the biggest downside with camlistore is the lack of documentation about rebalancing, replication, or anything other than simple single-server setups. Some people have blogged about it, and I got it working between two nodes, but I didn't feel confident it was as robust as I wanted it to be.

I have a strong belief that Camlistore will become a project of joy and wonder, but it isn't quite there yet. I certainly don't want to stop watching it :)

On to the more personal .. I'm all about the object storage these days. Right now most of my objects are packed in a collection of boxes. On the 6th of next month a shipping container will come pick them up and take them to Finland.

For pretty much 20 days in a row we've been taking things to the skip, or the local charity-shops. I expect that by the time we've relocated the amount of possessions we'll maintain will be at least a fifth of our current levels.

We're working on the general rule of thumb: "If it is possible to replace an item we will not take it". That means chess-sets, mirrors, etc, will not be carried. DVDs, for example, have been slashed brutally such that we're only transferring 40 out of a starting collection of 500+.

Only personal, one-off, unique, or "significant" items will be transported. This includes things like personal photographs, family items, and similar. Clothes? Well I need to take one jacket, but more can be bought. The only place I put my foot down was books. Yes I'm a kindle-user these days, but I spent many years tracking down some rare volumes, and though it would be possible to repeat that effort I just don't want to.

I've also decided that I'm carrying my complete toolbox. Some of the tools I took with me when I left home at 18 have stayed with me for the past 20+ years. I don't need this specific crowbar, or axe, but I'm damned if I'm going to lose them now. So they stay. Object storage - some objects are more important than they should be!

Categories: Elsewhere

Joachim Breitner: Running circle-packing in the Browser, now using GHCJS

Planet Debian - Sat, 20/06/2015 - 22:50

Quite a while ago, I wrote a small Haskell library called circle-packing to pack circles in a tight arrangement. Back then, I used the Haskell to JavaScript compiler fay to create a pretty online demo of that library, and shortly after, I created the identical demo using haste (another Haskell to JavaScript compiler).

The main competitor of these two compilers, and the most promising one, is GHCJS. Back then, it was too annoying to install. But after two years, things have changed, and it only takes a few simple commands to get GHCJS running, so I finally created the circle packing demo in a GHCJS variant.

Quick summary: Cabal integration is very good (like haste, but unlike fay), interfacing JavaScript is nice and easy (like fay, but unlike haste), and a quick check seems to indicate that it is faster than either of these two. I should note that I did not update the other two demos, so they represent the state of fay and haste back then, respectively.

With GHCJS now available at my fingertips, maybe I will produce some more Haskell to be run in your browser. For example, I could port FrakView, a GUI program to render, explore and explain iterated function systems, from GTK to HTML.

Categories: Elsewhere

Russell Coker: BTRFS Status June 2015

Planet Debian - Sat, 20/06/2015 - 06:47

The version of btrfs-tools in Debian/Jessie is incapable of creating a filesystem that can be mounted by the kernel in Debian/Wheezy. If you want to use a BTRFS filesystem on Jessie and Wheezy (which isn’t uncommon with removable devices) the only options are to use the Wheezy version of mkfs.btrfs or to use a Jessie kernel on Wheezy. I recently got bitten by this issue when I created a BTRFS filesystem on a removable device with a lot of important data (which is why I wanted metadata duplication and checksums) and had to read it on a server running Wheezy. Fortunately KVM in Wheezy works really well so I created a virtual machine to read the disk. Setting up a new KVM isn’t that difficult, but it’s not something I want to do while a client is anxiously waiting for their data.

BTRFS has been working well for me apart from the Jessie/Wheezy compatibility issue (which was an annoyance but didn’t stop me doing what I wanted). I haven’t written a BTRFS status report for a while because everything has been OK and there has been nothing exciting to report.

I regularly get errors from the cron jobs that run a balance, supposedly due to running out of free space. I have the cron jobs due to past problems with BTRFS running out of metadata space. In spite of the jobs often failing, the systems keep working, so I’m not too worried at the moment. I think this is a bug, but there are many more important bugs.

Linux kernel version 3.19 was the first version to have working support for RAID-5 recovery. This means version 3.19 was the first version to have usable RAID-5 (I think there is no point even having RAID-5 without recovery). It wouldn’t be prudent to trust your important data to a new feature in a filesystem. So at this stage if I needed a very large scratch space then BTRFS RAID-5 might be a viable option, but for anything else I wouldn’t use it. BTRFS still has had little performance optimisation; while this doesn’t matter much for SSDs and single-disk filesystems, for a RAID-5 of hard drives it would probably hurt a lot. Maybe BTRFS RAID-5 would be good for a scratch array of SSDs. The reports of problems with RAID-5 don’t surprise me at all.

I have a BTRFS RAID-1 filesystem on 2*4TB disks which is giving poor performance on metadata; simple operations like “ls -l” on a directory with ~200 subdirectories take many seconds to run. I suspect that part of the problem is due to the filesystem being written by cron jobs with files accumulating over more than a year. The “btrfs filesystem” command (see btrfs-filesystem(8)) allows defragmenting files and directory trees, but unfortunately it doesn’t support recursively defragmenting directories but not files. I really wish there was a way to get BTRFS to put all metadata on SSD and all data on hard drives. Sander suggested the following command on the BTRFS mailing list to defragment directories:

find / -xdev -type d -execdir btrfs filesystem defrag -c {} +

Below is the output of “zfs list -t snapshot” on a server I run, it’s often handy to know how much space is used by snapshots, but unfortunately BTRFS has no support for this.

NAME                        USED  AVAIL  REFER  MOUNTPOINT
hetz0/be0-mail@2015-03-10  2.88G      -   387G  -
hetz0/be0-mail@2015-03-11  1.12G      -   388G  -
hetz0/be0-mail@2015-03-12  1.11G      -   388G  -
hetz0/be0-mail@2015-03-13  1.19G      -   388G  -

Hugo pointed out on the BTRFS mailing list that the following command will give the amount of space used for snapshots. $SNAPSHOT is the name of a snapshot and $LASTGEN is the generation number of the previous snapshot you want to compare with.

btrfs subvolume find-new $SNAPSHOT $LASTGEN | awk '{total = total + $7}END{print total}'

One upside of the BTRFS implementation in this regard is that the above btrfs command without being piped through awk shows you the names of files that are being written and the amounts of data written to them. Through casually examining this output I discovered that the most written files in my home directory were under the “.cache” directory (which wasn’t exactly a surprise).

Now I am configuring workstations with a separate subvolume for ~/.cache for the main user. This means that ~/.cache changes don’t get stored in the hourly snapshots and less disk space is used for snapshots.

Conclusion

My observation is that things are going quite well with BTRFS. It’s more than 6 months since I had a noteworthy problem which is pretty good for a filesystem that’s still under active development. But there are still many systems I run which could benefit from the data integrity features of ZFS and BTRFS that don’t have the resources to run ZFS and need more reliability than I can expect from an unattended BTRFS system.

At this time the only servers I run with BTRFS are located within a reasonable drive from my home (not the servers in Germany and the US) and are easily accessible (not the embedded systems). ZFS is working well for some of the servers in Germany. Eventually I’ll probably run ZFS on all the hosted servers in Germany and the US, I expect that will happen before I’m comfortable running BTRFS on such systems. For the embedded systems I will just take the risk of data loss/corruption for the next few years.

Related posts:

  1. BTRFS Status Dec 2014 My last problem with BTRFS was in August [1]. BTRFS...
  2. BTRFS Status March 2014 I’m currently using BTRFS on most systems that I can...
  3. BTRFS Status July 2014 My last BTRFS status report was in April [1], it...
Categories: Elsewhere

Drupal core announcements: Requiring hook_update_N() for Drupal 8 core patches beginning June 24

Planet Drupal - Sat, 20/06/2015 - 02:58

In [policy, no patch] Require hook_update_N() for Drupal 8 core patches beginning June 24, the Drupal 8 release managers outline a policy to begin requiring hook_update_N() implementations for core patches that introduce data model changes starting after the next beta release. The goal of this policy change is to start identifying common update use-cases, to uncover any limitations we have for providing update functions in core, and to prepare core developers for considering upgrade path issues as we create the last few betas and first release candidates of Drupal 8. We need your help reviewing and communicating about this proposed policy, as well as identifying core issues that will be affected. Read the issue for more details.
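The linked issue has the details. For orientation, here is a minimal, hypothetical example of the kind of update function the policy asks for, using made-up module, table and column names; data model changes to configuration or content entity schemas would need correspondingly richer updates.

<?php

/**
 * Adds the 'external' column to the {mymodule_links} table.
 */
function mymodule_update_8001() {
  $spec = [
    'type' => 'int',
    'size' => 'tiny',
    'not null' => TRUE,
    'default' => 0,
    'description' => 'Whether the link points to an external URL.',
  ];
  \Drupal::database()->schema()->addField('mymodule_links', 'external', $spec);
}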

Categories: Elsewhere

Norbert Preining: Localizing a WordPress Blog

Planet Debian - Sat, 20/06/2015 - 02:09

There are many translation plugins available for WordPress, and most of them deal with translations of articles. This might be of interest for others, but not for me. If you have a blog with visitors from various language backgrounds, because you are living abroad, or writing in several languages, you might feel tempted to provide visitors with a localized “environment”, meaning that as much as possible is translated into the native language of the visitor, without actually translating the content – while still allowing you to.

In my case I am writing mostly in English and Japanese, but sometimes (in former times) in Italian and now and then in my mother tongue, German. Visitors to my site come from all over the world, but at least for Japanese visitors I wanted to provide a localized environment. This post describes how to get as much of your blog translated as possible – and by that I mean not the actual articles, because that is the easy part and most translation plugins handle it fine, but the things around the articles (categories, tags, headers, …).

Starting point and aims

My starting point was a blog where I already had language added as an extra taxonomy, and had tagged all articles with a language. But I didn’t have any other translation plugin installed or used. Furthermore, I am using a child theme of the main theme in use (that is always a good idea anyway!). And of course, the theme you are using should be prepared for translation, that is, most literal strings in the theme source code are wrapped in __( ... ) or _e( ... ) calls. And by the way, if you don’t have the language taxonomy, don’t worry, that will come in automatically.
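As a quick reminder of what “prepared for translation” looks like in practice, here is a generic illustration (the strings are made up; 'mistylake' is the text domain that appears later in this post):

<?php
// Returns the translated string for the theme's text domain.
$label = __( 'Read more', 'mistylake' );

// Echoes the translated string directly.
_e( 'Leave a comment', 'mistylake' );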

One more thing: the following descriptions are not for the very beginner. I expect a certain fluency with WordPress (knowing, for example, where themes and plugins keep their files), and PHP programming experience is needed for some of the steps.

With this starting point my aims were quite clear:

  • allow for translation of articles
  • translate as much as possible of the surroundings
  • auto-selection of language either depending on article or on browser language of visitor
  • by default show all articles independent of selected language
  • if possible, keep database clean as far as possible
Translation plugins

There is a huge bunch of translation plugins, localization plugins, or internationalization plugins out there, and it is hard to select one. I don’t say that what I propose here is the optimal solution, just one that I was pointed at by a colleague, namely utilizing the xili-language plugin.

Installation and initial setup

Not much to say here, just follow the usual procedure (search, install, activate), followed by the initial setup of xili-language. If you haven’t had a language taxonomy by now, you can add languages from the preference page of xili-language, first tab. After having added some languages you should have something similar to the above screen shot. Having defined your languages, you can assign a language to your articles, but for now nothing has actually changed on the blog pages.

As I already mentioned, I assume that you are using a child theme. In this case you should consult the fourth tab of the xili-language settings page, called Managing language files, where on the right you should see / set up things in a way that translations in the child theme override the ones in the main theme, see screen shot on the right. I just mention here that there is another xili plugin, xili-dictionary, that can do a lot of things for you when it comes to translation – but I couldn’t figure out its operation mode, so I switched back (i.e., uninstalled that plugin) and used normal .po/.mo files as described in the next section.

Adding translations – po and mo files

Translations are handled in normal (at least for the Unix world) gettext format. Matthias wrote about this in this blog. In principle you have to:

  • create a directory languages in your child theme folder
  • create .po files there, named local-LL.po or local-LL_DD.po, where LL and LL_DD are the same as the values in the field ISO Names in the list of defined languages (see above)
  • convert the .po files to .mo files using msgfmt local-LL.po -o local-LL.mo

The contents of the po files are described in Matthias’ blog, and in the following, when I say add a translation, I mean adding a stanza

msgid "some string" msgstr "translation of some string"

to the po file, and not forgetting to recompile it to mo file.

So let us go through a list of changes I made to translate various pieces of the blog appearance:

Translation of categories

This is the easiest part: simply throw the names of your categories into the respective local-LL_DD.po file, and you are done. In my case I used local-ja.po, which besides other categories contains stanzas like:

msgid "Travel" msgstr "旅行" Translation of widget titles

In most cases the widget titles are already automatically translated, if the plugin/widget author cared for it, meaning that he called the widget_title filter on the title. If this does not happen, please report this to the widget/plugin author. I have done this for example for the simple links plugin, which I use for various elements of the side-bar. The author was extremely responsive and the fix is already in the latest release – big thanks!
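For widget authors wondering what “calling the widget_title filter” means in code, this is the standard pattern inside a widget's widget() method (a generic sketch, not the actual code of the simple links plugin):

<?php
// Inside WP_Widget::widget( $args, $instance ): pass the raw title through the
// 'widget_title' filter so translation plugins can act on it.
$title = apply_filters( 'widget_title', $instance['title'], $instance, $this->id_base );
if ( ! empty( $title ) ) {
    echo $args['before_title'] . $title . $args['after_title'];
}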

Translation of tags

This is a bit of a problem, as the tags appear in various places on my blog: next to the title line and in the footer of each post, as well as in the tag cloud in the side bar.

Furthermore, I want to translate tags instead of having related tag groups as provided by xili tidy tags plugin, so we have to deal with the various appearances of tags one by one:

Tags on the main page – shown by the theme

This is the easier part – in my case I already had a customized content.php and content-single.php in my child theme folder. If not, you need to copy them from the parent theme and change how they render tags. Since this is something that depends on the specific theme, I cannot give detailed advice, but if you see something like:

$tags_list = get_the_tag_list( '', __( ', ', 'mistylake' ) );

(here the get_the_tag_list is the important part), then you can replace this by the following code:

$posttags = get_the_tags();
$first = 1;
$tag_list = '';
if ($posttags) {
  foreach ($posttags as $tag) {
    if ($first == 1) {
      $first = 0;
    } else {
      $tag_list = $tag_list . __( ', ', 'mistylake' );
    }
    $tag_list = $tag_list . '<a href="' . esc_url( home_url( '/tag/' . $tag->slug ) ) . '">' . __( $tag->name, 'mistylake' ) . '</a>';
  }
}

(there are for sure simpler ways …) This code loops over the tags and translates them using the __ function. Note that the second parameter must be the text domain of the parent theme.

If you have done this right and the web site is still running (I recommend testing it on a test installation – I had white pages many times due to php programming errors), and of course you have actual translations available and are looking at a localized version of the web site, then the list of tags as shown by your theme should be translated.

Tag cloud widget

This one is a tricky one: the tag cloud widget comes by default with WordPress, but doesn’t translate the tags. I tried a few variants (e.g. creating a new widget as an extension of the original tag cloud widget, and only changing the respective functions), but that didn’t work out at all. I finally resorted to a trick: reading the code of the original widget, I saw that it applies the tag_cloud_sort filter to the array of tags. That allows us to hook into the tag cloud creation and translate the tags.

You have to add the following code to your child theme’s functions.php:

function translate_instead_of_sort($tags) {
  foreach ( (array) $tags as $tag ) {
    $tag->name = __( $tag->name, 'mistylake' );
  }
  return $tags;
}
add_action('tag_cloud_sort', 'translate_instead_of_sort');

(again, don’t forget to change the text domain in the __(.., ..) call!) There might be some more things one could do, like changing the priority so the filter runs after the sorting, or sorting directly, but I haven’t played around with that. Using the above code and translating several of the tags, the tag cloud now looks like the screenshot on the right – I know, it could use some tweaking. Also, the untranslated tags are now all sorted before the translated ones, something one could probably address with the priority of the filter.

Having done the above, my blog page when Japanese is selected is now mostly in Japanese, with the exception of course of the actual articles, which are in a variety of languages.

Open problems

There are a few things I haven’t managed till now to translate, and they are mostly related to the Jetpack plugin, but not only:

  • translation of the calendar – it is strange that although this is a standard widget of WordPress, the translation somehow does not work out there
  • translation of the meta text entries (Log in, RSS feed, …) – interestingly, even adding the translation of these strings did not help get them translated
  • translation of the simple links text fields – I haven’t invested any time in this yet
  • translation of the (Jetpack) subscribe to this blog widget

I have a few ideas how to tackle this problem: with Jetpack, the biggest problem seems to be that all the strings are translated in a different text domain. So one should be able to add some code to functions.php to override/add translations in the jetpack text domain. But somehow it didn’t work out in my case. The same goes for things that are in the WordPress core and use the translation functions without a text domain – so I guess the translation functions will use the main WordPress translation files/text domain.
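One way to experiment with the text-domain idea is the standard gettext filter, added from the child theme's functions.php. This is an untested sketch (the source strings have to be copied verbatim from Jetpack's code, and the mapping below is only a placeholder); as noted above, this route did not work out in my case:

<?php
// Attempt to override/add translations for strings in the 'jetpack' text domain.
function my_jetpack_translations( $translated, $original, $domain ) {
    if ( 'jetpack' === $domain ) {
        // Example mapping only; the keys must match Jetpack's source strings exactly.
        $overrides = array(
            'Subscribe to Blog via Email' => '購読する',
        );
        if ( isset( $overrides[ $original ] ) ) {
            return $overrides[ $original ];
        }
    }
    return $translated;
}
add_filter( 'gettext', 'my_jetpack_translations', 20, 3 );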

Conclusion

The good thing about the xili-language plugin is that it does not change the actual posts (some plugins save the translations in the post text), and it is otherwise not too intrusive IMHO. Still, it falls short of allowing you to translate various parts of the blog, including the widget areas.

I am not sure whether there are better plugins for this usage scenario, I would be surprised if not, but all the plugins I have seen were doing a bit too much on the article translation side and not enough on the translation of the surroundings side.

In any case, I would like to see more separation between the functionality of localization (translating the interface) and translation (translating the content). But at the moment I don’t have enough impetus to write my own plugin for this.

If you have any suggestions for improvement, please let me know!

Enjoy.

Categories: Elsewhere

Drupal Association News: Updates to our 2015 Financial Plan

Planet Drupal - Fri, 19/06/2015 - 20:41

I want to share today that the Association is implementing a new financial plan to address lower than anticipated revenues in 2015. To align our spending more closely with our revenue, we are implementing expense cuts that I’m very sorry to say include staffing. Regrettably, we are losing three staff people today from operations, engineering and our community teams. This was not a decision we came to lightly, and we’re committed to helping those staff through their transition as best we can. In this post I want to share some information about how we got here, and our revised plan.

A Brief history

This is a really hard post to write because we delivered a plan to the community at the beginning of 2015, and it’s clear that we are not going to be able to fully execute to that plan. I take responsibility for that.

I started at the Association two and a half years ago, at a very different time for the organization. At that point in early 2013, the Association was a handful of staff, mostly focused on the DrupalCons. The D7 upgrade of Drupal.org had been halted. Not without some good reason, community trust in the Association was low, and that’s among the people who even knew the Association existed.

When I joined, the message I heard from the board and from the many community members I talked to was that the Association had to learn to implement consistently and communicate more. In other words, we needed to build our credibility in the community by executing our work well and making sure the community knew what we were up to and how to get involved.

One thing that was clear from the outset was that Drupal.org was key to our success. If we could not begin to make visible improvements to Drupal.org with the community, we would fail. With support from the board, we decided to invest our healthy reserve in ourselves and build a team that could improve Drupal.org. As our CTO Josh Mitchel pointed out in his anniversary blog post, we’ve done a LOT on Drupal.org. We’ve also made great strides in DrupalCons, introducing more first-time attendee support, providing more resources to all the sprints, and adding the third Con in global communities that are so eager to have us there. Our marketing team has helped create some key content for Drupal 8, and we’ve even raised over $210,000 to help fund the completion of D8 release blockers. The revenue we generate to do this work has also increased, and diversified. We've grown Drupal Jobs and rolled out Try Drupal. You can see, even with our revised expectations for 2015, that things are still growing. One of our key programs, Supporting Partners, is up 26% over this same time period last year, for example. Growth of this program was only 4% in 2014.

 

So lots of amazing things are happening, but we have to address that we overestimated what was possible for revenue. We have to adjust our plan to meet reality.

Changing the Plan

Addressing our situation is not work we took lightly. We set several goals for the process that guided our thinking throughout:

  • Solve for short-term revenue shortfalls while retaining resources we need to succeed long-term
  • Minimize staff impact
  • Do this once - find the scenario we can truly sustain, and then grow out of
  • Retain credibility with staff and ensure we communicate how valuable they are for our future
  • Maintain community confidence

The strategy we used was two-fold. First, we strove to preserve our core services to the community and our ability to fund our own work. Second, we decided to take action as quickly as possible because the sooner we made changes to the plan, the greater the long-term benefit to the organization. We know that this second strategy makes some of this seem like it's out of the blue, but it means that we impact as few people as possible.

Our leadership team looked at three approaches to addressing our cash flow issues:

  • Incremental revenue: Our new forecast extends actuals from the beginning of 2015 out through the end of the year. We believe that it is possible for us to improve upon this forecast slightly because, although our primary mistake was overestimating revenue, we also had some staffing change-ups (a retirement, hiring new reps) on the team at the beginning of the year that adversely affected the numbers. There is some room to modestly improve our revenue from the forecast.
  • Non-labor expense: We looked at travel, consulting fees, hardware and software, among other places in the budget where we had built in buffers or non-essential expenses. Eliminating these now, and not carrying them into 2016 was a key part of our process.
  • Labor expense: This was the last option we looked at because at the end of the day, not only do all our staff give the community everything they’ve got, we really like each other here. I care deeply for the well-being of everyone at the Association. There is also a lot of discussion in the business community about the long-term negative impacts of layoffs on organizations. We looked at lots of ways to reduce labor expense, but were not able to find a solution that did not include some layoffs.

Using this process, we were able to identify $450,000 in non-labor expense savings, and increase revenue projections by $250,000 from July 1, 2015 through December 31, 2016. That was enough to solve our 2015 revenue shortfalls, but it did not address the issues long-term. We needed to reach deeper to ensure our long-term success. We had to consider labor reductions.

Prior to looking at any other staff, the leadership team at the Association decided that the first staff cut had to come from us. As a team, Megan, Joe, Josh, and Matt volunteered a 10% reduction, and I volunteered a 15% reduction. But we still weren’t there. Looking at the remaining labor cuts, we wanted to use our values as our guide. We know that our team believes in our teamwork value above all else, and would want to minimize layoffs as much as possible. With that in mind, we experimented with the model and determined that we could limit layoffs to three if we asked remaining staff to take a 5% pay cut across the board.

All told, here’s what the measures look like:

 

We believe this approach meets our goals and puts us in the best position possible to continue the great work we’ve been doing.

What Happens Next?

On the financial front, we’ll be managing to our cash flow for the next 18 months, as well as modernizing our budgeting and forecasting tools to reflect an Agile methodology. This will let us see further into the future more often, and give us more opportunities to update our plans based on what’s actually happening. And, if we find we are performing favorably to our plan, our first action will be to restore salaries for our staff.

Most importantly, we’re going to be focused on our team. They all got the news earlier today, and we’re taking this time to talk things through all together, in our teams, and one on one. I am here to answer questions and hear concerns for every one of them. We’ll also implement monthly internal review of our progress to the new plan with staff so that they have transparency on a monthly basis about what’s happening. These people are the best thing we have going for us, and I won’t ever be able to make this up to any of them, but I am committed to helping them find the best path forward they can.

Thank you

Sharing this is not easy. The only thing that makes it better is knowing that the Association, like Drupal itself, has so much potential. I want to thank our Supporters, partners, sponsors, members, and the general community for everything you’ve given us so far. The only way we will realize our potential and move forward is together, and we are so happy that we get to do that with you.

Categories: Elsewhere

LevelTen Interactive: Drupal 8: Marketer Friendly

Planet Drupal - Fri, 19/06/2015 - 19:12

The digital marketing world keeps changing, basically every day, or whenever Google decides it’s time to change their algorithm. As a person who practices digital marketing, I know the challenges of working with a CMS and the need for it to allow me to publish blog posts (like this one) easily and have it be mobile responsive, because who uses actual computers these days?... Read more

Categories: Elsewhere

Vasily Yaremchuk: Anchors Panels Navigation Module as an Excellent Alternative to Single Page Website Module

Planet Drupal - Fri, 19/06/2015 - 18:30
Background

Several years ago I was working on my personal Web site. Even at that time, One Page solutions were very popular for presentation, personal or CV pages.

The main idea of such an approach is to put all information on one long page, with several link anchors corresponding to separate sub-sections of that page.

In 2011 the Single Page Website module was created. Initially my home page was built on the basis of this module.

The Single Page Website module is a good out-of-the-box solution for Drupal beginners, but it has a lot of weak points connected with its architecture. You can find some more information about this module in my report from Kiev Drupal Camp 2011.

Frustration due to Single Page Website module

It was my fault to build the single page as a custom solution instead of using ready-made approaches (Views or Panels, for example) that can put several nodes or other content entities together on one page. Owing to this incorrect architecture, the Single Page Website module has a lot of restrictions. The most significant one is the theme restriction: the module works with a limited number of themes. Out of the box, the module also only supports a single page with anchor navigation from the menu.

So we can have only a single-language One Page Website. And the last frustrating limitation: the menu can contain anchor links only, without links to any internal or external pages.

New Approach based on Panels

I started working on another solution about a year ago; see my post Anchors Panels Navigation Module. And now I have a stable version of the Anchors Panels Navigation Module, with no theme restriction and with manual anchor name management.

Of course, the new approach is Panels-based and requires several modules to be installed. Setting up a one-page website driven by the Anchors Panels Navigation module also takes more time than with the Single Page Website module, but this solution is more flexible. You can use several menus, as well as links in blocks and content, for one-page navigation. You can also use this module to set up several Landing Pages on your site, and the number of such pages is not limited!

If you would like to set up a Landing Page solution based on the Anchors Panels Navigation module, you have to do a fair amount of manual work in the Drupal admin area.

  1. In addition to setting up this module, you should create a node of type Panel and put several pieces of content in the panes.
  2. Set a CSS ID on each pane that should have an #anchor. The #anchor names will be equal to the CSS IDs.
  3. Set links in the menu with #hashes. You can use absolute links to your site (as I do on my personal site) or use the Void Menu module (which I think is overkill).
  4. Make this menu fixed in the browser window. You can use the Code per Node module or the Floating Block module or, of course, put the required CSS code directly in your theme.

After these steps, the Anchors Panels Navigation module will take care of scrolling to your anchors when visitors click the links, and of changing the #hash in the browser address bar. In fact, this new approach is less complex than the Single Page Website module: it has less PHP and JS code, and I hope it causes fewer problems for site developers :-)

What will be the next step?

After a year of developing and using this module, I have found that "Anchors Panels Navigation" is not a good name from a marketing point of view. It reflects the architectural core of the module, but it says nothing about what the module can be used for. So I would like to ask the Drupal community for a better name for this module.

Other solutions

It is fair to mention some other solutions by other developers.

There is the Drupal distribution One Page CV, created by Ukrainian Drupal developer Artem Shymko.

There is also the Single Page Site module, developed by Belgian Drupal developer Robin Ingelbrecht. This module has no theme restriction like the Single Page Website module, but it cannot create more than one Landing Page per Drupal site, and there are no anchors in the address line, so you cannot send a link to a separate block of the One Page site. It does, however, include a beautiful Next Page submodule that works perfectly; see http://www.starfisk.com.

Please let me know if there are other Drupal-based solutions that I should mention here.

Blog tags: Planet Drupal
Categories: Elsewhere

Acquia: Drupal: Helping NGOs & Civil Society in Myanmar and beyond

Planet Drupal - Fri, 19/06/2015 - 16:11

When Tom Feichter told me he only gets to one Drupal event a year, I wanted to know why. When he told me it's because he runs a Drupal shop–mspiral creative media–in Yangon, Myanmar, I had to know more! We talked about Tom's history in Drupal, how Drupal's multilingual capabilities have helped him, how excited he is about Drupal 8's architecture, his history working with NGOs on the Thai/Burmese border and how that has flowed into ethical digital agency work, and more.

Categories: Elsewhere

Blink Reaction: Building Native Apps - Part 4

Planet Drupal - Fri, 19/06/2015 - 15:00
Building native mobile apps with Ionic Framework and Drupal back-end: request data from Drupal

Define app constants

Now we have a REST server from which we will get all required data for our application. First of all, let’s define an Angular constant and store some configuration variables in it - for example, this is where we’ll set the base URL for service requests. In the app.js file, add a new constant method with that value.

gist link

Ionic Framework comes with a couple of useful directives that can help in app building. I decided to make one small user experience improvement: when the categories list or article details page is loading, we should show a loading overlay to indicate progress. To do this, we will use the $ionicLoading service. To change its default options you must add another constant - $ionicLoadingConfig - to the app.js file.

gist link

Configure services

Previously, we had defined factories for categories and articles in the services.js file, but the endpoints were empty. Now we can set them. First of all, we have to pass the newly created config objects to the factories and prefix the url property value in the $http options object with config.serviceBaseUrl. We should also pass the page parameter to the Categories get and Articles all methods to handle pagination. And finally we set the endpoint variables. Here is the final services.js:

gist link

Complete templates

Now we should create templates for each tab using Ionic directives. Let’s look closer at the index.html file. Here we have a main Angular directive ng-app, which defines our app on a global scope; inside it we can see ion-nav-bar, the global dynamic navigation bar. Next to it there is the ion-nav-view directive; this helps to handle application routing according to the UI Router config in app.js. All template content should render inside this directive.

The first screen of our app is a tab with a list of all articles, using the tab-articles template. Here we use ion-view to define the tab controller scope and set the title of this page with the view-title directive. Inside this view we set the container for content with ion-content. Inside it we set ion-list with an ion-item child. Also, we set the ng-repeat directive on ion-item: Angular should walk through all the articles data and render each article with title and image; for the image, we use the ng-src directive instead of the src attribute. At the bottom of ion-content we add ion-infinite-scroll - it gives us the opportunity to load more articles in portions.

gist link

The template for the single category is very similar to the articles tab; the changes are in the link structure to the article details pages, and the view title, which in this case will be the name of the current category.

gist link

On the categories tab we should show the list of categories with the number of articles in each; the list item should be linked to a single category page.

gist link

The last template that we need is article-details.html. Here, we will show the article image, title and body text. We use the ng-bind-html directive to render the body with its HTML markup: for example, paragraphs, lists, links etc.

gist link

Controllers

Previously we created empty controllers for all templates, so we will add the code for them now. We should start with the simpler controllers: CategoriesCtrl and ArticleDetailCtrl. CategoriesCtrl is attached to the tab-categories template; we pass the $ionicLoading service to it to show data-loading progress to the user. Inside this controller we show a loading overlay by calling the show method on $ionicLoading, and load the categories list with the Categories factory. All of our factories return promises, so after calling a factory method we chain a then method, passing two functions: the first runs on success, the second on error. In this tutorial I route all error messages to the browser console.

gist link

ArticleDetailCtrl is the same, but here we get the article data by its id, which we get from the state parameter.

gist link

CategoryCtrl and ArticlesCtrl are similar: in each we define a loadMore function that tries to load more articles as the page is scrolled down and concatenates them with the articles that have already been loaded. It then broadcasts that the infinite scroll process has completed and there are no additional results.

gist link

gist link

You can clone and try all this code from my github repository; to get the code for this part, check out the part4 branch (just run “git checkout -f part4”).

Test, build and compile

Before compiling and testing an app on an emulator or real device, you may run it in the browser with the command “ionic serve” from your project directory.

If the application works fine in your browser, you can test it in emulators, but first let’s add a platform to our project with the command “ionic platform add android”; if you are using a Mac you can also add the iOS platform with the command “ionic platform add ios”. Before running the app in an emulator you must build it: run “ionic build android” (“ionic build ios” for the iOS app). Then you can try the application in an emulator by running “ionic emulate android” to emulate it in the native Android emulator that comes with the Android SDK, or by running “ionic run android” to use the Genymotion emulator (it is faster and has a lot of device settings), which you can get here.

To emulate iOS you must work on Mac OS and run “ionic emulate ios”.

To build apps for production you must run

“cordova build --release android”

then navigate to the project folder platforms/android/ant-build/ and generate a key to sign the app -

“keytool -genkey -v -keystore starter-release-key.keystore -alias alias_name -keyalg RSA -keysize 2048 -validity 10000”

and sign your application -

“jarsigner -verbose -sigalg SHA1withRSA -digestalg SHA1 -keystore starter-release-key.keystore CordovaApp-release-unsigned.apk alias_name”.

To optimize your apk you should run

“zipalign -v 4 CordovaApp-release-unsigned.apk TutorialApp.apk”

and you will be ready to publish the file TutorialApp.apk on Google Play. You can find more information about publishing here.

 

In the next part of this series I will show how to integrate user authentication in your app with Drupal session login.

Drupal, Best Practices, Drupal How to, Drupal Planet, Drupal Training, Learning Series. Post tags: Apps, Ionic
Categories: Elsewhere

Drupal core announcements: Recording from June 19th 2015 Drupal 8 critical issues discussion

Planet Drupal - Fri, 19/06/2015 - 12:37

This was our fourth critical issues discussion meeting to be publicly recorded in a row. (See all prior recordings.) This time, to make discussions easier to follow for all of us, we switched to #drupal-contribute in IRC for posting links, so those following in real time can follow the links and we can just paste the meeting log here as well. Here is the recording of the meeting from today, in the hope that it helps more than just those who were at the meeting:

Unfortunately not all people invited made it this time. If you also have significant time to work on critical issues in Drupal 8 and we did not include you, let me know as soon as possible.

The meeting log is as follows (all times are CEST real time at the meeting):


[11:07am] plach: https://www.drupal.org/node/2478459
[11:07am] Druplicon: https://www.drupal.org/node/2478459 => FieldItemInterface methods are only invoked for SQL storage and are inconsistent with hooks [#2478459] => 93 comments, 19 IRC mentions
[11:07am] dawehner: https://www.drupal.org/node/2500527
[11:07am] Druplicon: https://www.drupal.org/node/2500527 => Rewrite \Drupal\file\Controller\FileWidgetAjaxController::upload() to not rely on form cache [#2500527] => 34 comments, 6 IRC mentions
[11:08am] plach: https://www.drupal.org/node/2453153
[11:08am] Druplicon: https://www.drupal.org/node/2453153 => Node revisions cannot be reverted per translation [#2453153] => 107 comments, 31 IRC mentions
[11:09am] jibran: https://www.drupal.org/node/2263569#comment-10039344
[11:10am] Druplicon: https://www.drupal.org/node/2263569 => Bypass form caching by default for forms using #ajax. [#2263569] => 219 comments, 35 IRC mentions
[11:11am] Fabianx-screen: https://www.drupal.org/node/2354889
[11:11am] Druplicon: https://www.drupal.org/node/2354889 => Make block context faster by removing onBlock event and replace it with loading from a BlockContextManager [#2354889] => 66 comments, 13 IRC mentions
[11:11am] WimLeers: https://www.drupal.org/node/2375695
[11:11am] Druplicon: https://www.drupal.org/node/2375695 => Condition plugins should provide cache contexts AND cacheability metadata needs to be exposed [#2375695] => 75 comments, 25 IRC mentions
[11:13am] GaborHojtsy: Fabianx-screen is talking about https://www.drupal.org/node/2354889
[11:13am] Druplicon: https://www.drupal.org/node/2354889 => Make block context faster by removing onBlock event and replace it with loading from a BlockContextManager [#2354889] => 66 comments, 14 IRC mentions
[11:14am] WimLeers: No, he was talking about https://www.drupal.org/node/2501989
[11:14am] Druplicon: https://www.drupal.org/node/2501989 => [meta] Page Cache Performance [#2501989] => 24 comments, 5 IRC mentions
[11:14am] WimLeers: (i.e. the very first part of what he said)
[11:14am] GaborHojtsy: (I directly copied the link he posted in hangouts :D)
[11:14am] WimLeers: lol ok :P
[11:16am] WimLeers: https://www.drupal.org/node/2429287
[11:16am] Druplicon: https://www.drupal.org/node/2429287 => [meta] Finalize the cache contexts API & DX/usage, enable a leap forward in performance [#2429287] => 102 comments, 7 IRC mentions
[11:17am] WimLeers: https://www.drupal.org/node/2450993
[11:17am] Druplicon: https://www.drupal.org/node/2450993 => Rendered Cache Metadata created during the main controller request gets lost [#2450993] => 35 comments, 14 IRC mentions
[11:18am] larowlan: GaborHojtsy: still working sorry, sent apology to dawehne_r this morning with my update
[11:18am] GaborHojtsy: larowlan: yeah jibran relayed that :)
[11:19am] GaborHojtsy: https://www.drupal.org/node/2495179
[11:19am] Druplicon: https://www.drupal.org/node/2495179 => Twig placeholder filter should not map to raw filter [#2495179] => 53 comments, 7 IRC mentions
[11:20am] GaborHojtsy: https://www.drupal.org/node/2487972
[11:20am] Druplicon: https://www.drupal.org/node/2487972 => [META] Results of testing localize.drupal.org on Drupal 7 in June 2015 [#2487972] => 18 comments, 5 IRC mentions
[11:21am] jibran: https://www.drupal.org/node/2453153
[11:21am] Druplicon: https://www.drupal.org/node/2453153 => Node revisions cannot be reverted per translation [#2453153] => 107 comments, 32 IRC mentions
[11:31am] larowlan: jibran++
[11:31am] larowlan: GaborHojtsy++
[11:31am] GaborHojtsy: Fabianx-screen: what’s the issue link?
[11:33am] jibran: https://www.drupal.org/node/2489024
[11:33am] dawehner: https://www.drupal.org/node/2508591
[11:33am] Druplicon: https://www.drupal.org/node/2489024 => Arbitrary code execution via 'trans' extension for dynamic twig templates (when debug output is on) [#2489024] => 18 comments, 7 IRC mentions
[11:33am] Druplicon: https://www.drupal.org/node/2508591 => Move Drupal into subdirectory and get external dependencies/libraries out of the web-accessible path [#2508591] => 8 comments, 3 IRC mentions
[11:42am] dawehner: https://www.drupal.org/node/2508654#comment-10039315
[11:42am] Druplicon: https://www.drupal.org/node/2508654 => File inclusion in transliteration service [#2508654] => 17 comments, 2 IRC mentions
[11:43am] GaborHojtsy: dawehner: that one yeah
[11:43am] GaborHojtsy: https://www.drupal.org/drupal8-security-bounty running for 2 more months
[11:43am] jibran: https://www.drupal.org/node/1305882
[11:43am] Druplicon: https://www.drupal.org/node/1305882 => drupal_html_id() considered harmful; remove ajax_html_ids to use GET (not POST) AJAX requests [#1305882] => 153 comments, 22 IRC mentions
[11:48am] dawehner: https://www.drupal.org/node/2500523
[11:48am] Druplicon: https://www.drupal.org/node/2500523 => Rewrite views_ui_add_ajax_trigger() to not rely on /system/ajax. [#2500523] => 6 comments, 2 IRC mentions

Categories: Elsewhere

Tyler Frankenstein: Headless Drupal with Angular JS and Bootstrap - Hello World

Planet Drupal - Fri, 19/06/2015 - 08:50

This tutorial describes how to build a very simple de-coupled Drupal web application powered by Angular JS and Bootstrap. The inspiration for writing this tutorial came after completing my first Angular JS module (angular-drupal), which of course is for Drupal!

To keep things simple, and in the spirit of "Hello World", the application will let us login using credentials from the Drupal website.

The complete code for this example app is available here: https://github.com/signalpoint/headless-drupal-angular-bootstrap-hello-w...

Ready? Alright, let's go headless...

Categories: Elsewhere

Lullabot: Project Management

Planet Drupal - Thu, 18/06/2015 - 21:56

In this week's Drupalize.Me podcast, hostess Amber Matz chats about all things Project Management with Seth Brown (COO at Lullabot) and Lullabot Technical Project Managers Jessica Mokrzecki and Jerad Bitner. To continue the conversation, check out Drupalize.Me's series on Project Management featuring interviews and insights from these fine folks and others at Lullabot.

Categories: Elsewhere

DrupalCon News: Register for DrupalCon Barcelona

Planet Drupal - Thu, 18/06/2015 - 21:42

Registration is live! For those of you who have been waiting to purchase your ticket to DrupalCon Barcelona, the time has come!

Categories: Elsewhere

Open Source Training: Allow Users to Delete Their Drupal Accounts

Planet Drupal - Thu, 18/06/2015 - 20:40

It's good practice to allow users to leave your site completely.

That means users should be able to delete their account entirely, together with all the data associated with it.

In Drupal, you can allow users to delete their accounts. Here's how the feature works:
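The full walkthrough is in the original post. Purely as a rough, hypothetical illustration of the underlying Drupal 7 pieces (the 'cancel account' permission and the user_cancel() API), not the article's own example:

<?php
// Let authenticated users cancel their own accounts (user/%/cancel in the UI).
user_role_grant_permissions(DRUPAL_AUTHENTICATED_RID, array('cancel account'));

// Programmatically cancel an account and delete it together with its content.
$account = user_load(123); // Example uid.
user_cancel(array(), $account->uid, 'user_cancel_delete');

// user_cancel() queues a batch, so outside of a form submit handler it has to
// be processed explicitly.
batch_process();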

Categories: Elsewhere

Drupal Association News: My Week at DrupalCon, part 2

Planet Drupal - jeu, 18/06/2015 - 20:30

Part 1 of My Week at DrupalCon

Part 2:

As our community grows, so do our programs.  This year in addition to hosting trainings and both the Community Summit and Business Summit, we offered a Higher-Ed Summit at DrupalCon.  As soon as it was announced folks clamored to sign up, and the tickets sold out at a rapid pace.  We at the Drupal Association feel like this is a great example of how the growing variety of offerings at DrupalCon illustrates the increasing diversity of our community’s interests and skillsets.

The Higher-Ed Summit was a huge hit and that was due largely in part to the efforts of the Summit Leads, Christina and Shawn.  They worked hard to understand what the Higher-Ed community wanted and needed from the Summit and strategized to provide it down to the last detail.  Their planning and experience were integral to the popularity of the event, and we look forward to working with these awesome volunteers again in the future.    

Maybe I’m naive or a wide-eyed optimist, but meeting and speaking to people from all over the world is invigorating and exciting to me. Throughout the course of DrupalCon I had the opportunity to meet with community organizers from near and far. While it’s true that many attendees came from the United States and Canada, there were also organizers who came from as far away as Latin America, Europe, India, and Japan, and talked about how Drupal has affected their communities and their livelihoods.  It is always such a pleasure to see Drupal changing lives and bringing opportunities for personal growth and business everywhere.  

After an exhausting week of keynotes, and BOFs, and meetings, and dinners, I launched into the sprints on Friday with the purpose of understanding Drupal more.  I always enjoy discussing Drupal’s unique qualities with developers, site-builders, and themers, but this DrupalCon I really wanted to engage in more than just conversations.  I wanted to experience what it is like to directly develop and work with Drupal.  At the Friday sprints, my friend and new mentor Amy agreed to sit down with me and help me put together my own blog, run on a Drupal website.  During the process, I realized that there is no better way to start to understand the complexity of Drupal than to use the product myself.  

When learning to use Drupal in the sprint, I realized that we really are about fostering a friendly, inclusive, and diverse community. We talk the talk and we walk the walk.  Amy sat down with me and patiently showed me step-by-step how to start my site.  We picked a hosting site, domain name, downloaded Drupal, and began the process of organizing our modules and features. Finally, I started to really get it, which was incredibly exciting. Both personally and professionally, it meant a lot to me that someone would take the time to help me on my journey. It really brought home the fact that Drupalers genuinely care, are excited and willing to share knowledge, and have fun while doing it.  

DrupalCon Los Angeles was a spectacular event.  I feel like this blog wouldn’t be a proper message from LShey without some shout-outs and kudos, so please join me in celebrating others. I’d like to say a big thank you to our talented Events team at the Drupal Association for organizing a seamless and beautiful event.  Thank you to our sponsors who help us put on this event with their support.  Thank you to our dedicated volunteers: whether you were a sprint-mentor, room-monitor, or speaker, your time and expertise is appreciated and valued.  Our volunteers truly make DrupalCon a wonderful event.  I’d like to share a special shout-out to the team who keeps us all informed, too: thank you to Alex and Paul for running the @drupalconna twitter handle.  Thank you to Emma Jane, who was our MC this DrupalCon, and who engaged our keynote speakers with witty and thoughtful interviews.  Lastly, thank you to you all, our community.  DrupalCon would not be the same without you.  I’m looking forward to seeing you all at the next one!

Drupal on, 

Lauren Shey
Community Outreach Coordinator
Drupal Association
@lsheydrupal

Categories: Elsewhere
