Feed aggregator

Russell Coker: BTRFS Status June 2015

Planet Debian - Sat, 20/06/2015 - 06:47

The version of btrfs-tools in Debian/Jessie is incapable of creating a filesystem that can be mounted by the kernel in Debian/Wheezy. If you want to use a BTRFS filesystem on Jessie and Wheezy (which isn’t uncommon with removable devices) the only options are to use the Wheezy version of mkfs.btrfs or to use a Jessie kernel on Wheezy. I recently got bitten by this issue when I created a BTRFS filesystem on a removable device with a lot of important data (which is why I wanted metadata duplication and checksums) and had to read it on a server running Wheezy. Fortunately KVM in Wheezy works really well so I created a virtual machine to read the disk. Setting up a new KVM isn’t that difficult, but it’s not something I want to do while a client is anxiously waiting for their data.

BTRFS has been working well for me apart from the Jessie/Wheezy compatibility issue (which was an annoyance but didn’t stop me doing what I wanted). I haven’t written a BTRFS status report for a while because everything has been OK and there has been nothing exciting to report.

I regularly get errors from the cron jobs that run a balance, which claim to be running out of free space. I have the cron jobs because of past problems with BTRFS running out of metadata space. In spite of the jobs often failing, the systems keep working, so I’m not too worried at the moment. I think this is a bug, but there are many more important bugs.
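For illustration, such a cron job can be sketched as a crontab fragment like the following; the schedule and the usage filter thresholds are assumptions for the sketch, not values from the post:

```shell
# Hypothetical crontab entry: weekly balance that only rewrites chunks which
# are mostly empty, to keep space available for metadata
0 3 * * 0 /sbin/btrfs balance start -dusage=25 -musage=25 /
```

The -dusage/-musage filters limit the balance to data/metadata chunks below the given percentage of usage, which keeps the run short.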

Linux kernel version 3.19 was the first version to have working support for RAID-5 recovery, and therefore the first to have usable RAID-5 (I think there is no point in even having RAID-5 without recovery). It wouldn’t be prudent to trust your important data to a new feature in a filesystem. So at this stage BTRFS RAID-5 might be a viable option if I needed a very large scratch space, but for anything else I wouldn’t use it. BTRFS has still had little performance optimisation; while this doesn’t matter much for SSDs and single-disk filesystems, on a RAID-5 of hard drives it would probably hurt a lot. Maybe BTRFS RAID-5 would be good for a scratch array of SSDs. The reports of problems with RAID-5 don’t surprise me at all.

I have a BTRFS RAID-1 filesystem on 2*4TB disks which is giving poor performance on metadata: simple operations like “ls -l” on a directory with ~200 subdirectories take many seconds to run. I suspect that part of the problem is due to the filesystem being written by cron jobs, with files accumulating over more than a year. The “btrfs filesystem” command (see btrfs-filesystem(8)) allows defragmenting files and directory trees, but unfortunately it doesn’t support recursively defragmenting only the directories while skipping the files. I really wish there was a way to get BTRFS to put all metadata on SSD and all data on hard drives. Sander suggested the following command on the BTRFS mailing list to defragment directories:

find / -xdev -type d -execdir btrfs filesystem defrag -c {} +

Below is the output of “zfs list -t snapshot” on a server I run. It’s often handy to know how much space is used by snapshots, but unfortunately BTRFS has no support for this.

NAME                       USED  AVAIL  REFER  MOUNTPOINT
hetz0/be0-mail@2015-03-10  2.88G     –   387G  –
hetz0/be0-mail@2015-03-11  1.12G     –   388G  –
hetz0/be0-mail@2015-03-12  1.11G     –   388G  –
hetz0/be0-mail@2015-03-13  1.19G     –   388G  –

Hugo pointed out on the BTRFS mailing list that the following command will give the amount of space used for snapshots. $SNAPSHOT is the name of a snapshot and $LASTGEN is the generation number of the previous snapshot you want to compare with.

btrfs subvolume find-new $SNAPSHOT $LASTGEN | awk '{total = total + $7}END{print total}'

One upside of the BTRFS implementation in this regard is that the above btrfs command, without being piped through awk, shows you the names of the files that are being written and the amounts of data written to them. Casually examining this output, I discovered that the most-written files in my home directory were under the “.cache” directory (which wasn’t exactly a surprise).
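To illustrate the aggregation, here is the same awk program run over two fabricated lines in the find-new output format (the inode numbers, lengths and file names are made up for this sketch; the 7th field is the len value):

```shell
# Feed two fabricated find-new style lines through the awk aggregation;
# field 7 is the extent length in bytes
printf '%s\n' \
  'inode 257 file offset 0 len 4096 disk start 0 offset 0 gen 10 flags INLINE a.txt' \
  'inode 258 file offset 0 len 8192 disk start 0 offset 0 gen 11 flags NONE b.txt' |
awk '{total = total + $7} END {print total}'
# prints 12288
```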

Now I am configuring workstations with a separate subvolume for ~/.cache for the main user. This means that ~/.cache changes don’t get stored in the hourly snapshots and less disk space is used for snapshots.
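The setup can be sketched with the following commands; this assumes the home directory already lives on BTRFS and that nothing has files in ~/.cache open while it is replaced:

```shell
# Sketch: turn ~/.cache into its own subvolume so that snapshots of the
# parent subvolume no longer include it (assumes $HOME is on BTRFS)
mv ~/.cache ~/.cache.old
btrfs subvolume create ~/.cache
cp -a ~/.cache.old/. ~/.cache/   # optionally keep the old cache contents
rm -rf ~/.cache.old
```

Snapshots are per-subvolume in BTRFS, which is why moving a directory into its own subvolume excludes it from snapshots of the parent.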


My observation is that things are going quite well with BTRFS. It’s been more than 6 months since I had a noteworthy problem, which is pretty good for a filesystem that’s still under active development. But I still run many systems that could benefit from the data integrity features of ZFS and BTRFS, yet don’t have the resources to run ZFS and need more reliability than I can expect from an unattended BTRFS system.

At this time the only servers I run with BTRFS are located within a reasonable drive of my home (not the servers in Germany and the US) and are easily accessible (not the embedded systems). ZFS is working well on some of the servers in Germany. Eventually I’ll probably run ZFS on all the hosted servers in Germany and the US; I expect that will happen before I’m comfortable running BTRFS on such systems. For the embedded systems I will just accept the risk of data loss/corruption for the next few years.

Related posts:

  1. BTRFS Status Dec 2014 My last problem with BTRFS was in August [1]. BTRFS...
  2. BTRFS Status March 2014 I’m currently using BTRFS on most systems that I can...
  3. BTRFS Status July 2014 My last BTRFS status report was in April [1], it...
Categories: Elsewhere

Drupal core announcements: Requiring hook_update_N() for Drupal 8 core patches beginning June 24

Planet Drupal - Sat, 20/06/2015 - 02:58

In [policy, no patch] Require hook_update_N() for Drupal 8 core patches beginning June 24, the Drupal 8 release managers outline a policy to begin requiring hook_update_N() implementations for core patches that introduce data model changes starting after the next beta release. The goal of this policy change is to start identifying common update use-cases, to uncover any limitations we have for providing update functions in core, and to prepare core developers for considering upgrade path issues as we create the last few betas and first release candidates of Drupal 8. We need your help reviewing and communicating about this proposed policy, as well as identifying core issues that will be affected. Read the issue for more details.

Categories: Elsewhere

Norbert Preining: Localizing a WordPress Blog

Planet Debian - Sat, 20/06/2015 - 02:09

There are many translation plugins available for WordPress, and most of them deal with translations of articles. This might be of interest for others, but not for me. If you have a blog with visitors from various language backgrounds – because you are living abroad, or writing in several languages – you might feel tempted to provide visitors with a localized “environment”, meaning that as much as possible is translated into the native language of the visitor, without actually translating the content – while still allowing for it.

In my case I write mostly in English and Japanese, but sometimes (in former times) in Italian, and now and then in my mother tongue, German. Visitors to my site come from all over the world, but at least for Japanese visitors I wanted to provide a localized environment. This post describes how to get as much of your blog translated as possible – and here I mean not the actual articles, because that is the easy part and most translation plugins handle it fine, but the things around the articles (categories, tags, headers, …).

Starting point and aims

My starting point was a blog where I already had language added as extra taxonomy, and have tagged all articles with a language. But I didn’t have any other translation plugin installed or used. Furthermore, I am using a child theme of the main theme in use (that is always a good idea anyway!). And of course, the theme you are using should be prepared for translation, that is that most literal strings in the theme source code are wrapped in __( ... ) or _e( ... ) calls. And by the way, if you don’t have the language taxonomy, don’t worry, that will come in automatically.

One more thing: the following descriptions are not for the very beginner. I expect a certain fluency with WordPress (for example, where themes and plugins keep their files), and PHP programming experience is needed for some of the steps.

With this starting point my aims were quite clear:

  • allow for translation of articles
  • translate as much as possible of the surroundings
  • auto-selection of language either depending on article or on browser language of visitor
  • by default show all articles independent of selected language
  • keep the database as clean as possible
Translation plugins

There is a huge bunch of translation plugins, localization plugins, and internationalization plugins out there, and it is hard to select one. I don’t claim that what I propose here is the optimal solution, just one that a colleague pointed me to, namely utilizing the xili-language plugin.

Installation and initial setup

Not much to say here: just follow the usual procedure (search, install, activate), followed by the initial setup of xili-language. If you don’t have a language taxonomy yet, you can add languages from the preference page of xili-language, first tab. After having added some languages you should have something similar to the above screen shot. Having defined your languages, you can assign a language to your articles, but for now nothing has actually changed on the blog pages.

As I already mentioned, I assume that you are using a child theme. In this case you should consult the fourth tab of the xili-language settings page, called Managing language files, where on the right you should set things up so that translations in the child theme override the ones in the main theme; see the screen shot on the right. I will just mention that there is another xili plugin, xili-dictionary, that can do a lot of things for you when it comes to translation – but I couldn’t figure out its operation mode, so I uninstalled it and used normal .po/.mo files as described in the next section.

Adding translations – po and mo files

Translations are handled in the normal (at least for the Unix world) gettext format. Matthias wrote about this in his blog. In principle you have to:

  • create a directory languages in your child theme folder
  • create .po files there named local-LL.po or local-LL_DD.po, where LL and LL_DD match the values in the ISO Names field in the list of defined languages (see above)
  • convert the .po files to .mo files using msgfmt local-LL.po -o local-LL.mo

The contents of the po files are described in Matthias’ blog post. In the following, when I say add a translation, I mean adding a stanza

msgid "some string"
msgstr "translation of some string"

to the po file, not forgetting to recompile it to a mo file.

So let us go through a list of changes I made to translate various pieces of the blog appearance:

Translation of categories

This is the easiest part: simply throw the names of your categories into the respective local-LL_DD.po file, and you are done. In my case I used local-ja.po, which besides other categories contains stanzas like:

msgid "Travel"
msgstr "旅行"

Translation of widget titles

In most cases the widget titles are already automatically translated, if the plugin/widget author cared for it, meaning that he called the widget_title filter on the title. If this does not happen, please report it to the widget/plugin author. I have done this for example for the simple links plugin, which I use for various elements of the side bar. The author was extremely responsive and the fix is already in the latest release – big thanks!

Translation of tags

This is a bit of a problem, as tags appear in various places on my blog: next to the title line and in the footer of each post, as well as in the tag cloud in the side bar.

Furthermore, I want to translate tags instead of having related tag groups as provided by xili tidy tags plugin, so we have to deal with the various appearances of tags one by one:

Tags on the main page – shown by the theme

This is the easier part – in my case I already had customized content.php and content-single.php files in my child theme folder. If not, you need to copy them from the parent theme and change them to translate the tags. Since this depends on the specific theme, I cannot give detailed advice, but if you see something like:

$tags_list = get_the_tag_list( '', __( ', ', 'mistylake' ) );

(here get_the_tag_list is the important part), then you can replace it with the following code:

$posttags = get_the_tags();
$first = 1;
$tag_list = '';
if ( $posttags ) {
    foreach ( $posttags as $tag ) {
        if ( $first == 1 ) {
            $first = 0;
        } else {
            $tag_list = $tag_list . __( ', ', 'mistylake' );
        }
        $tag_list = $tag_list . '<a href="' . esc_url( home_url( '/tag/' . $tag->slug ) ) . '">' . __( $tag->name, 'mistylake' ) . '</a>';
    }
}

(there are for sure simpler ways …) This code loops over the tags and translates them using the __ function. Note that the second parameter must be the text domain of the parent theme.

If you have done this right and the web site is still running (I recommend testing on a test installation – I got white pages many times due to PHP programming errors), and of course if you have actual translations available and are looking at a localized version of the web site, then the list of tags shown by your theme should be translated.

Tag cloud widget

This one is tricky: the tag cloud widget comes with WordPress by default, but doesn’t translate the tags. I tried a few variants (e.g. creating a new widget as an extension of the original tag cloud widget, changing only the respective functions), but that didn’t work out at all. I finally resorted to a trick: reading the code of the original widget, I saw that it applies the tag_cloud_sort filter to the array of tags. That allows us to hook into the tag cloud creation and translate the tags.

You have to add the following code to your child theme’s functions.php:

function translate_instead_of_sort( $tags ) {
    foreach ( (array) $tags as $tag ) {
        $tag->name = __( $tag->name, 'mistylake' );
    }
    return $tags;
}
add_action( 'tag_cloud_sort', 'translate_instead_of_sort' );

(again, don’t forget to change the text domain in the __( ..., ... ) call!) There are some more things one could do, like changing the filter priority so it runs after the sorting, or sorting directly, but I haven’t played around with that. Using the above code and translating several of the tags, the tag cloud now looks like the screenshot on the right – I know, it could use some tweaking. Also, the untranslated tags are now all sorted before the translated ones, something that could probably be addressed with the priority of the filter.

Having done the above, my blog page when Japanese is selected is now mostly in Japanese, with of course the exception of the actual articles, which are in a variety of languages.

Open problems

There are a few things I haven’t managed to translate so far. They are mostly, but not only, related to the Jetpack plugin:

  • translation of the calendar – it is strange that although this is a standard WordPress widget, the translation somehow does not work there
  • translation of the meta text entries (Log in, RSS feed, …) – interestingly, even adding translations of these strings did not get them translated
  • translation of the simple links text fields – I haven’t investigated this yet
  • translation of the (Jetpack) subscribe-to-this-blog widget

I have a few ideas how to tackle these problems: with Jetpack the biggest problem seems to be that all the strings are translated in a different text domain. So one should be able to add some code to functions.php to override/add translations in the jetpack text domain, but somehow it didn’t work out in my case. The same goes for things that are in WordPress core and use the translation functions without a text domain – I guess the translation functions will then use the main WordPress translation files/text domain.


The good thing about the xili-language plugin is that it does not change the actual posts (some plugins save the translations in the post text itself), and it is otherwise not too intrusive IMHO. Still, it falls short of allowing you to translate various parts of the blog, including the widget areas.

I am not sure whether there are better plugins for this usage scenario – I would be surprised if there weren’t – but all the plugins I have seen did a bit too much on the article translation side and not enough on translating the surroundings.

In any case, I would like to see more separation between the functionality of localization (translating the interface) and translation (translating the content). But at the moment I don’t have enough impetus to write my own plugin for this.

If you have any suggestions for improvement, please let me know!


Categories: Elsewhere

Drupal Association News: Updates to our 2015 Financial Plan

Planet Drupal - Fri, 19/06/2015 - 20:41

I want to share today that the Association is implementing a new financial plan to address lower than anticipated revenues in 2015. To align our spending more closely with our revenue, we are implementing expense cuts that I’m very sorry to say include staffing. Regrettably, we are losing three staff people today from operations, engineering and our community teams. This was not a decision we came to lightly, and we’re committed to helping those staff through their transition as best we can. In this post I want to share some information about how we got here, and our revised plan.

A Brief history

This is a really hard post to write because we delivered a plan to the community at the beginning of 2015, and it’s clear that we are not going to be able to fully execute to that plan. I take responsibility for that.

I started at the Association two and a half years ago, at a very different time for the organization. At that point in early 2013, the Association was a handful of staff, mostly focused on the DrupalCons. The D7 upgrade of Drupal.org had been halted. Not without some good reason, community trust in the Association was low, and that’s among the people who even knew the Association existed.

When I joined, the message I heard from the board and from the many community members I talked to was that the Association had to learn to implement consistently and communicate more. In other words, we needed to build our credibility in the community by executing our work well and making sure the community knew what we were up to and how to get involved.

One thing that was clear from the outset was that Drupal.org was key to our success. If we could not begin to make visible improvements to Drupal.org with the community, we would fail. With support from the board, we decided to invest our healthy reserve in ourselves and build a team that could improve Drupal.org. As our CTO Josh Mitchell pointed out in his anniversary blog post, we’ve done a LOT on Drupal.org. We’ve also made great strides in DrupalCons, introducing more first-time attendee support, providing more resources to all the sprints, and adding a third Con in global communities that are so eager to have us there. Our marketing team has helped create some key content for Drupal 8, and we’ve even raised over $210,000 to help fund the completion of D8 release blockers. The revenue we generate to do this work has also increased, and diversified. We’ve grown Drupal Jobs and rolled out Try Drupal. You can see, even with our revised expectations for 2015, that things are still growing. One of our key programs, Supporting Partners, is up 26% over the same period last year, for example. Growth of this program was only 4% in 2014.


So lots of amazing things are happening, but we have to address that we overestimated what was possible for revenue. We have to adjust our plan to meet reality.

Changing the Plan

Addressing our situation is not work we took lightly. We set several goals for the process that guided our thinking throughout:

  • Solve for short-term revenue shortfalls while retaining resources we need to succeed long-term
  • Minimize staff impact
  • Do this once - find the scenario we can truly sustain, and then grow out of
  • Retain credibility with staff and ensure we communicate how valuable they are to our future
  • Maintain community confidence

The strategy we used was two-fold. First, we strove to preserve our core services to the community and our ability to fund our own work. Second, we decided to take action as quickly as possible because the sooner we made changes to the plan, the greater the long-term benefit to the organization. We know that this second strategy makes some of this seem like it's out of the blue, but it means that we impact as few people as possible.

Our leadership team looked at three approaches to addressing our cash flow issues:

  • Incremental revenue: Our new forecast extends actuals from the beginning of 2015 out through the end of the year. We believe that it is possible for us to improve upon this forecast slightly because, although our primary mistake was overestimating revenue, we also had some staffing change-ups (a retirement, hiring new reps) on the team at the beginning of the year that adversely affected the numbers. There is some room to modestly improve our revenue from the forecast.
  • Non-labor expense: We looked at travel, consulting fees, hardware and software, among other places in the budget where we had built in buffers or non-essential expenses. Eliminating these now, and not carrying them into 2016 was a key part of our process.
  • Labor expense: This was the last option we looked at because, at the end of the day, not only do all our staff give the community everything they’ve got, we really like each other here. I care deeply about the well-being of everyone at the Association. There is also a lot of discussion in the business community about the long-term negative impacts of layoffs on organizations. We looked at lots of ways to reduce labor expense, but were not able to find a solution that did not include some layoffs.

Using this process, we were able to identify $450,000 in non-labor expense savings and increase revenue projections by $250,000 from July 1, 2015 through December 31, 2016. That was enough to solve our 2015 revenue shortfalls, but it did not address the issues long-term. We needed to reach deeper to ensure our long-term success. We had to consider labor reductions.

Prior to looking at any other staff, the leadership team at the Association decided that the first staff cut had to come from us. As a team, Megan, Joe, Josh, and Matt volunteered a 10% reduction, and I volunteered a 15% reduction. But we still weren’t there. Looking at the remaining labor cuts, we wanted to use our values as our guide. We know that our team believes in our teamwork value above all else, and would want to minimize layoffs as much as possible. With that in mind, we experimented with the model and determined that we could limit layoffs to three if we asked remaining staff to take a 5% pay cut across the board.

All told, here’s what the measures look like:


We believe this approach meets our goals and puts us in the best position possible to continue the great work we’ve been doing.

What Happens Next?

On the financial front, we’ll be managing to our cash flow for the next 18 months, as well as modernizing our budgeting and forecasting tools to reflect an Agile methodology. This will let us see further into the future more often, and give us more opportunities to update our plans based on what’s actually happening. And, if we find we are performing favorably to our plan, our first action will be to restore salaries for our staff.

Most importantly, we’re going to be focused on our team. They all got the news earlier today, and we’re taking this time to talk things through all together, in our teams, and one on one. I am here to answer questions and hear concerns for every one of them. We’ll also implement monthly internal review of our progress to the new plan with staff so that they have transparency on a monthly basis about what’s happening. These people are the best thing we have going for us, and I won’t ever be able to make this up to any of them, but I am committed to helping them find the best path forward they can.

Thank you

Sharing this is not easy. The only thing that makes it better is knowing that the Association, like Drupal itself, has so much potential. I want to thank our Supporters, partners, sponsors, members, and the general community for everything you’ve given us so far. The only way we will realize our potential and move forward is together, and we are so happy that we get to do that with you.

Categories: Elsewhere

LevelTen Interactive: Drupal 8: Marketer Friendly

Planet Drupal - Fri, 19/06/2015 - 19:12

The digital marketing world keeps changing, basically every day, or whenever Google decides it’s time to change their algorithm. As a person who practices digital marketing, I know the challenges of working with a CMS and the need for it to let me publish blog posts (like this one) easily and have them be mobile responsive, because who uses actual computers these days?... Read more

Categories: Elsewhere

Vasily Yaremchuk: Anchors Panels Navigation Module as an Excellent Alternative to Single Page Website Module

Planet Drupal - Fri, 19/06/2015 - 18:30

Several years ago I was working on my personal web site. Even at that time, one-page solutions were very popular for presentation, personal, or CV pages.

The main idea of this approach is to put all the information on one long page, with several anchor links corresponding to separate sub-sections of the page.

In 2011 the Single Page Website module was created. Initially my home page was built on the base of this module.

The Single Page Website module is a good out-of-the-box solution for Drupal beginners, but it has a lot of weak points connected with its architecture. You can find more information about this module in my report from Kiev Drupal Camp 2011.

Frustration due to Single Page Website module

It was my mistake to build the single page as a custom solution instead of using prepared, ready-to-use approaches (Views or Panels, for example) that can put several nodes or other content entities together on one page. Due to this incorrect architecture, the Single Page Website module has a lot of restrictions. The most significant one is the theme restriction: the module works with a limited number of themes. Out of the box, the module also only allows one page with anchor navigation from the menu.

So you can have only a single-language one-page website. And the last frustrating limitation: the menu can contain only anchor links, without links to any internal or external pages.

New Approach based on Panels

I started working on another solution about a year ago; see my post Anchors Panels Navigation Module. Now I have a stable version of the Anchors Panels Navigation module with no theme restriction and with manual anchor name management.

Of course, the new approach is Panels based and requires several modules to be installed. Setting up a one-page website driven by the Anchors Panels Navigation module also takes more time than with the Single Page Website module, but this solution is more flexible. You can use several menus, and links in blocks and content, for one-page navigation. You can also use this module to set up several landing pages on your site, and the number of such pages is not limited!

If you would like to set up a landing page solution based on the Anchors Panels Navigation module, you have to do a fair amount of manual work in the Drupal admin area.

  1. In addition to setting up this module, create a node of type Panel and put several pieces of content in the panes.
  2. Set CSS IDs on each pane that should have an #anchor. The #anchor names will be equal to the CSS IDs.
  3. Set up menu links with #hashes. You can use absolute links to your site (as I do on my personal site) or use the Void Menu module (I think that is an overkill approach).
  4. Make the menu fixed in the browser window. You can use the Code per Node module or the Floating Block module, or of course put the required CSS directly in your theme.

After these steps the Anchors Panels Navigation module will take care of scrolling to your anchors when visitors click the links, and of the #hash changes in the browser address bar. In fact, this new approach is less complex than the Single Page Website module: it has less PHP and JS code, and I hope it causes fewer problems for site developers :-)

What will be the next step?

After a year of developing and using this module, I have realized that "Anchors Panels Navigation" is not a good name from a marketing point of view. It reflects the architectural core of the module, but gives no idea of what the module is for. So I would like to ask the Drupal community for a better name for this module.

Other solutions

It is fair to mention some other solutions by other developers.

There is the Drupal distribution One Page CV, created by Ukrainian Drupal developer Artem Shymko.

There is the Single Page Site module, developed by Belgian Drupal developer Robin Ingelbrecht. This module has no theme restriction like the Single Page Website module, but there is no ability to have more than one landing page per Drupal site, and there are no anchors in the address line, so you cannot send a link to a separate block of the one-page site. But this module has a beautiful Next Page submodule, and it works perfectly.

Please let me know if there are other Drupal-based solutions that I should mention here.

Blog tags: Planet Drupal
Categories: Elsewhere

Acquia: Drupal: Helping NGOs & Civil Society in Myanmar and beyond

Planet Drupal - Fri, 19/06/2015 - 16:11

When Tom Feichter told me he only gets to one Drupal event a year, I wanted to know why. When he told me it's because he runs a Drupal shop, mspiral creative media, in Yangon, Myanmar, I had to know more! We talked about Tom's history in Drupal, how Drupal's multilingual capabilities have helped him, how excited he is about Drupal 8's architecture, his history working with NGOs on the Thai/Burmese border and how that has flowed into ethical digital agency work, and more.

Categories: Elsewhere

Blink Reaction: Building Native Apps - Part 4

Planet Drupal - Fri, 19/06/2015 - 15:00
Building native mobile apps with Ionic Framework and Drupal back-end: request data from Drupal

Define app constants

Now we have a REST server from which we will get all the required data for our application. First of all, let’s define an Angular constant and store some configuration variables in it – for example, the base URL for service requests. In the app.js file, add a new constant method with that value.

gist link

Ionic Framework comes with a couple of useful directives that can help in app building. I decided to make one small user experience improvement: while the categories list or the article details page is loading, we should show a loading overlay to indicate progress. To do this, we will use the $ionicLoading service. To change its default options you must add another constant – $ionicLoadingConfig – to the app.js file.

gist link

Configure services

Previously, we had defined factories for categories and articles in the services.js file, but the endpoints were empty. Now we can set them. First of all, we have to pass the newly created config objects to the factories and prefix the url property value in the $http options object with config.serviceBaseUrl. We should also pass the page parameter to the Categories get and Articles all methods to handle pagination. And finally we set the endpoint variables. Here is the final services.js:

gist link
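The final services.js is in the gist above. To illustrate just the URL-building part described in the text (the endpoint path and parameter name here are assumptions), the prefixing and pagination might look like:

```javascript
// Hypothetical helper: builds a paginated request URL by prefixing an
// endpoint with config.serviceBaseUrl, as the factories do in their $http options.
function serviceUrl(config, endpoint, page) {
  return config.serviceBaseUrl + endpoint + '?page=' + page;
}

// e.g. a page-2 request from an assumed Articles factory "all" method:
var articlesPage2 = serviceUrl({ serviceBaseUrl: 'http://example.com/' }, 'articles', 2);
```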

Complete templates

Now we should create templates for each tab using Ionic directives. Let’s look closer at the index.html file. Here we have a main Angular directive ng-app, which defines our app on a global scope; inside it we can see ion-nav-bar, the global dynamic navigation bar. Next to it there is the ion-nav-view directive; this helps to handle application routing according to the UI Router config in app.js. All template content should render inside this directive.

The first screen of our app is a tab with the list of all articles, using the tab-articles template. Here we use ion-view to define the tab controller scope and set the title of the page with the view-title directive. Inside this view we set the container for content with ion-content, and inside that an ion-list with an ion-item child. We also set the ng-repeat directive on ion-item: Angular walks through all the articles data and renders each article with its title and image; for the image, we use the ng-src directive instead of the src attribute. At the bottom of ion-content we add ion-infinite-scroll, which lets us load more articles incrementally.

gist link

The template for the single category is very similar to the articles tab; the changes are in the link structure to the article details pages, and the view title, which in this case will be the name of the current category.

gist link

On the categories tab we should show the list of categories with the number of articles in each; the list item should be linked to a single category page.

gist link

The last template that we need is article-details.html. Here, we will show the article image, title and body text. We use the ng-bind-html directive to render the body with its HTML markup: paragraphs, lists, links, etc.

gist link


Previously we created empty controllers for all templates, so now we will add their code. We start with the simpler controllers: CategoriesCtrl and ArticleDetailCtrl. CategoriesCtrl is attached to the tab-categories template; we inject the $ionicLoading service into it to show data-loading progress to the user. Inside this controller we display a loading overlay by calling the show method on $ionicLoading, and load the categories list with the Categories factory. All of our factories return promises, so after the call we chain a then method and pass it two functions: the first runs on success, the second on error. In this tutorial I route all error messages to the browser console.

gist link
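The controller's actual code is in the gist; the flow described above (show the overlay, call the factory, pass success and error callbacks to then) can be sketched like this, with all names assumed rather than taken from the article:

```javascript
// Hypothetical sketch of the CategoriesCtrl flow.
function loadCategories(Categories, $ionicLoading, $scope) {
  $ionicLoading.show();                 // show the loading overlay
  Categories.all(1).then(
    function (result) {                 // first callback: runs on success
      $scope.categories = result.data;
      $ionicLoading.hide();
    },
    function (error) {                  // second callback: runs on error
      console.log(error);               // route errors to the browser console
      $ionicLoading.hide();
    }
  );
}
```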

ArticleDetailCtrl is much the same, but here we get the article data by its id, which we read from the state parameter.

gist link

CategoryCtrl and ArticlesCtrl are similar, so in both we define a loadMore function that loads more articles when the page is scrolled down and concatenates them with the articles already loaded. It then broadcasts that the infinite scroll load has completed, or that there are no additional results.

gist link

gist link
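The two gists above carry the real controllers; a sketch of a loadMore function as described (the names, page handling, and broadcast event are assumptions) could be:

```javascript
// Hypothetical loadMore: fetches the next page and concatenates the results
// with the articles already loaded; flags when no more results exist.
function makeLoadMore(Articles, $scope) {
  $scope.articles = [];
  $scope.page = 0;
  $scope.hasMore = true;
  return function loadMore() {
    Articles.all($scope.page + 1).then(function (result) {
      $scope.page += 1;
      $scope.articles = $scope.articles.concat(result.data);
      if (result.data.length === 0) { $scope.hasMore = false; }
      // In the real controller, tell ion-infinite-scroll the load finished:
      //   $scope.$broadcast('scroll.infiniteScrollComplete');
    });
  };
}
```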

You can clone and try all this code from my github repository; to get the code for this part, checkout the part4 branch (just run “git checkout -f part4”).

Test, build and compile

Before compiling and testing the app on an emulator or real device, you may run it in the browser with the command “ionic serve” from your project directory.

If the application works fine in your browser, you can test it in emulators. First, add a platform to the project with the command “ionic platform add android”; if you are using a Mac you can also add the iOS platform with “ionic platform add ios”. Before running the app in an emulator you must build it: run “ionic build android” (“ionic build ios” for the iOS app). Then you can try the application in an emulator by running “ionic emulate android” to use the native Android emulator that comes with the Android SDK, or “ionic run android” to use the Genymotion emulator (it is faster and has a lot of device settings), which you can get here.

To emulate iOS you must work on Mac OS and run “ionic emulate ios”.

To build apps for production you must run

“cordova build --release android”

then navigate to the project folder platforms/android/ant-build/ and generate a key to sign the app -

“keytool -genkey -v -keystore starter-release-key.keystore -alias alias_name -keyalg RSA -keysize 2048 -validity 10000”

and sign your application -

“jarsigner -verbose -sigalg SHA1withRSA -digestalg SHA1 -keystore starter-release-key.keystore CordovaApp-release-unsigned.apk alias_name”.

To optimize your apk you should run

“zipalign -v 4 CordovaApp-release-unsigned.apk TutorialApp.apk”

and the file TutorialApp.apk will be ready for publishing to Google Play. You can find more information about publishing here.


In the next part of this series I will show how to integrate user authentication in your app with Drupal session login.

Drupal, Best Practices, Drupal How to, Drupal Planet, Drupal Training, Learning Series. Post tags: Apps, Ionic
Catégories: Elsewhere

Drupal core announcements: Recording from June 19th 2015 Drupal 8 critical issues discussion

Planet Drupal - ven, 19/06/2015 - 12:37

This was our fourth critical issues discussion meeting in a row to be publicly recorded. (See all prior recordings.) This time, to make the discussion easier for all of us to follow, we switched to #drupal-contribute on IRC for posting links, so those following in real time can open the links, and we can simply paste the meeting log here as well. Here is the recording of today's meeting, in the hope that it helps more than just those who attended:

Unfortunately not all people invited made it this time. If you also have significant time to work on critical issues in Drupal 8 and we did not include you, let me know as soon as possible.

The meeting log is as follows (all times are CEST real time at the meeting):

[11:07am] plach:
[11:07am] Druplicon: => FieldItemInterface methods are only invoked for SQL storage and are inconsistent with hooks [#2478459] => 93 comments, 19 IRC mentions
[11:07am] dawehner:
[11:07am] Druplicon: => Rewrite \Drupal\file\Controller\FileWidgetAjaxController::upload() to not rely on form cache [#2500527] => 34 comments, 6 IRC mentions
[11:08am] plach:
[11:08am] Druplicon: => Node revisions cannot be reverted per translation [#2453153] => 107 comments, 31 IRC mentions
[11:09am] jibran:
[11:10am] Druplicon: => Bypass form caching by default for forms using #ajax. [#2263569] => 219 comments, 35 IRC mentions
[11:11am] Fabianx-screen:
[11:11am] Druplicon: => Make block context faster by removing onBlock event and replace it with loading from a BlockContextManager [#2354889] => 66 comments, 13 IRC mentions
[11:11am] WimLeers:
[11:11am] Druplicon: => Condition plugins should provide cache contexts AND cacheability metadata needs to be exposed [#2375695] => 75 comments, 25 IRC mentions
[11:13am] GaborHojtsy: Fabianx-screen is talking about
[11:13am] Druplicon: => Make block context faster by removing onBlock event and replace it with loading from a BlockContextManager [#2354889] => 66 comments, 14 IRC mentions
[11:14am] WimLeers: No, he was talking about
[11:14am] Druplicon: => [meta] Page Cache Performance [#2501989] => 24 comments, 5 IRC mentions
[11:14am] WimLeers: (i.e. the very first part of what he said)
[11:14am] GaborHojtsy: (I directly copied the link he posted in hangouts :D)
[11:14am] WimLeers: lol ok :P
[11:16am] WimLeers:
[11:16am] Druplicon: => [meta] Finalize the cache contexts API & DX/usage, enable a leap forward in performance [#2429287] => 102 comments, 7 IRC mentions
[11:17am] WimLeers:
[11:17am] Druplicon: => Rendered Cache Metadata created during the main controller request gets lost [#2450993] => 35 comments, 14 IRC mentions
[11:18am] larowlan: GaborHojtsy: still working sorry, sent apology to dawehne_r this morning with my update
[11:18am] GaborHojtsy: larowlan: yeah jibran relayed that :)
[11:19am] GaborHojtsy:
[11:19am] Druplicon: => Twig placeholder filter should not map to raw filter [#2495179] => 53 comments, 7 IRC mentions
[11:20am] GaborHojtsy:
[11:20am] Druplicon: => [META] Results of testing on Drupal 7 in June 2015 [#2487972] => 18 comments, 5 IRC mentions
[11:21am] jibran:
[11:21am] Druplicon: => Node revisions cannot be reverted per translation [#2453153] => 107 comments, 32 IRC mentions
[11:31am] larowlan: jibran++
[11:31am] larowlan: GaborHojtsy++
[11:31am] GaborHojtsy: Fabianx-screen: what’s the issue link?
[11:33am] jibran:
[11:33am] dawehner:
[11:33am] Druplicon: => Arbitrary code execution via 'trans' extension for dynamic twig templates (when debug output is on) [#2489024] => 18 comments, 7 IRC mentions
[11:33am] Druplicon: => Move Drupal into subdirectory and get external dependencies/libraries out of the web-accessible path [#2508591] => 8 comments, 3 IRC mentions
[11:42am] dawehner:
[11:42am] Druplicon: => File inclusion in transliteration service [#2508654] => 17 comments, 2 IRC mentions
[11:43am] GaborHojtsy: dawehner: that one yeah
[11:43am] GaborHojtsy: running for 2 more months
[11:43am] jibran:
[11:43am] Druplicon: => drupal_html_id() considered harmful; remove ajax_html_ids to use GET (not POST) AJAX requests [#1305882] => 153 comments, 22 IRC mentions
[11:48am] dawehner:
[11:48am] Druplicon: => Rewrite views_ui_add_ajax_trigger() to not rely on /system/ajax. [#2500523] => 6 comments, 2 IRC mentions

Catégories: Elsewhere

Tyler Frankenstein: Headless Drupal with Angular JS and Bootstrap - Hello World

Planet Drupal - ven, 19/06/2015 - 08:50

This tutorial describes how to build a very simple de-coupled Drupal web application powered by Angular JS and Bootstrap. The inspiration for writing this tutorial came after completing my first Angular JS module (angular-drupal), which of course is for Drupal!

To keep things simple, and in the spirit of "Hello World", the application will let us log in using credentials from the Drupal website.

The complete code for this example app is available here:

Ready? Alright, let's go headless...

Catégories: Elsewhere

Lullabot: Project Management

Planet Drupal - jeu, 18/06/2015 - 21:56

In this week's Drupalize.Me podcast, hostess Amber Matz chats about all things Project Management with Seth Brown (COO at Lullabot) and Lullabot Technical Project Managers Jessica Mokrzecki and Jerad Bitner. To continue the conversation, check out Drupalize.Me's series on Project Management featuring interviews and insights from these fine folks and others at Lullabot.

Catégories: Elsewhere

DrupalCon News: Register for DrupalCon Barcelona

Planet Drupal - jeu, 18/06/2015 - 21:42

Registration is live! For those of you who have been waiting to purchase your ticket to DrupalCon Barcelona, the time has come!

Catégories: Elsewhere

Open Source Training: Allow Users to Delete Their Drupal Accounts

Planet Drupal - jeu, 18/06/2015 - 20:40

It's good practice to allow users to leave your site completely.

That means users should be able to delete their account entirely, together with all the data associated with it.

In Drupal, you can allow users to delete their accounts. Here's how the feature works:

Catégories: Elsewhere

Drupal Association News: My Week at DrupalCon, part 2

Planet Drupal - jeu, 18/06/2015 - 20:30

Part 1 of My Week at DrupalCon

Part 2:

As our community grows, so do our programs.  This year in addition to hosting trainings and both the Community Summit and Business Summit, we offered a Higher-Ed Summit at DrupalCon.  As soon as it was announced folks clamored to sign up, and the tickets sold out at a rapid pace.  We at the Drupal Association feel like this is a great example of how the growing variety of offerings at DrupalCon illustrates the increasing diversity of our community’s interests and skillsets.

The Higher-Ed Summit was a huge hit and that was due largely in part to the efforts of the Summit Leads, Christina and Shawn.  They worked hard to understand what the Higher-Ed community wanted and needed from the Summit and strategized to provide it down to the last detail.  Their planning and experience were integral to the popularity of the event, and we look forward to working with these awesome volunteers again in the future.    

Maybe I’m naive or a wide-eyed optimist, but meeting and speaking to people from all over the world is invigorating and exciting to me. Throughout the course of DrupalCon I had the opportunity to meet with community organizers from near and far. While it’s true that many attendees came from the United States and Canada, there were also organizers who came from as far away as Latin America, Europe, India, and Japan, and talked about how Drupal has affected their communities and their livelihoods.  It is always such a pleasure to see Drupal changing lives and bringing opportunities for personal growth and business everywhere.  

After an exhausting week of keynotes, and BOFs, and meetings, and dinners, I launched into the sprints on Friday with the purpose of understanding Drupal more.  I always enjoy discussing Drupal’s unique qualities with developers, site-builders, and themers, but this DrupalCon I really wanted to engage in more than just conversations.  I wanted to experience what it is like to directly develop and work with Drupal.  At the Friday sprints, my friend and new mentor Amy agreed to sit down with me and help me put together my own blog, run on a Drupal website.  During the process, I realized that there is no better way to start to understand the complexity of Drupal than to use the product myself.  

When learning to use Drupal in the sprint, I realized that we really are about fostering a friendly, inclusive, and diverse community. We talk the talk and we walk the walk.  Amy sat down with me and patiently showed me step-by-step how to start my site.  We picked a hosting site, domain name, downloaded Drupal, and began the process of organizing our modules and features. Finally, I started to really get it, which was incredibly exciting. Both personally and professionally, it meant a lot to me that someone would take the time to help me on my journey. It really brought home the fact that Drupalers genuinely care, are excited and willing to share knowledge, and have fun while doing it.  

DrupalCon Los Angeles was a spectacular event. I feel like this blog wouldn’t be a proper message from LShey without some shout-outs and kudos, so please join me in celebrating others. I’d like to say a big thank you to our talented Events team at the Drupal Association for organizing a seamless and beautiful event. Thank you to our sponsors who help us put on this event with their support. Thank you to our dedicated volunteers: whether you were a sprint mentor, room monitor, or speaker, your time and expertise are appreciated and valued. Our volunteers truly make DrupalCon a wonderful event. I’d like to share a special shout-out to the team who keeps us all informed, too: thank you to Alex and Paul for running the @drupalconna Twitter handle. Thank you to Emma Jane, who was our MC this DrupalCon and who engaged our keynote speakers with witty and thoughtful interviews. Lastly, thank you to all of you, our community. DrupalCon would not be the same without you. I’m looking forward to seeing you all at the next one!

Drupal on, 

Lauren Shey
Community Outreach Coordinator
Drupal Association

Catégories: Elsewhere

Acquia: How Improved Their Page Load Times

Planet Drupal - jeu, 18/06/2015 - 19:07

In November 2014, the site launched on Drupal and became one of the highest-trafficked websites in the world running on an open-source content management system (CMS). Mediacurrent and Acquia are excited to announce a new three-part blog post series that will share insight into how the site was migrated to Drupal. Our team of experts will share best practices and the lessons we learned during the project.

There's an old saying: “Everyone talks about the weather, but nobody does anything about it.” While we are a long way from controlling the weather, the site has done a spectacular job of delivering accurate weather news, as rapidly as possible, to all kinds of devices.

This is a small miracle, especially when you consider the site served up a billion requests during its busiest week. Even slow weeks require delivering hundreds of dynamic maps and streaming video to at least 30 million unique users in over three million forecast locations. The site has to remain stable, with instantaneous page loads and 100 percent uptime, despite traffic bumps of up to 300 percent during bad weather.

Page load times are the key to their business and their growth. When The Weather Channel's legacy CMS showed signs of strain, they came to Drupal.

On their legacy platform, the site was stuck at 50 percent cache efficiency. Their app servers were taking on far too much of the work. The legacy platform ran on 144 origin servers across three data centers. It takes all that muscle to keep up with the number of changes that are constantly happening across the site.

Traditionally, when you have a highly trafficked site, you put a content delivery network (CDN) in front of it and call it a day. The very first time a page is requested, the CDN fetches it from the origin server and then caches it to serve to all future requestors.

Unfortunately, it doesn't work that way for a site like this one.

Consider this: If a user in Austin visits a forecast page, they see a certain version of that page. A visitor from Houston sees a slightly different version of that page. Not only are there two different versions of the page, one for each location, but much of the information on the page is only valid for about five minutes.

At the scale of three million locations, that's a lot of pages that have to rebuild on an ongoing basis only to be cached for 5 minutes each. Couple this with the fact that the number of served locations kept increasing as developers worked on the site, and you can see that things are rapidly getting out of control.

The first thing we did was break up the page into pieces that have longer or shorter life spans based on the time-sensitivity of the content. That allowed us to identify the parts of the pages that were able to live longest and that we could serve to the majority of users. The parts that varied, we no longer change on the origin servers, but instead delegate to systems closer to the user where they actually vary.

To accomplish that trick, we switched to a service-oriented architecture and client side rendering, using Angular.js, ESI (Edge Side Includes), and some Drupal magic. The combination of these three components boosted cache efficiency, page performance, and reduced the required number of servers to deliver it.

The result? After launch, we saw a 90 percent cache efficiency. In other words, going from 50 to 90 percent cache efficiency reduced the number of hits to the origin servers, which means fewer of them are needed. Post launch, we were able to increase cache efficiency even further.

This cache efficiency was also measured only at the edge. Varnish (a caching proxy) further reduced the amount of traffic, meaning that Drupal itself and the Varnish stack were serving less than 4 percent of their requested traffic. The benefits of the service-oriented architecture also mean that scaling is simpler, architectural changes are less painful, and the end user can experience a richer user experience.

Doing something about the weather is still way out on the horizon, but the site can certainly claim that it has improved the delivery of weather news.

Tags:  acquia drupal planet
Catégories: Elsewhere

Lullabot: Drupal 8 Theming Fundamentals, Part 2

Planet Drupal - jeu, 18/06/2015 - 19:00

In our last post on Drupal 8 theming fundamentals, we learned to set up a theme and add our CSS and JavaScript. This time around we’re talking about the Twig templating engine, how to add regions to our theme, and then finish with a look at the wonderful debugging available in Drupal 8.

Catégories: Elsewhere

Upcasting menu parameters in Drupal 8

Planet Drupal - jeu, 18/06/2015 - 18:06
Menu upcasting means converting a menu argument to anything - an object or an array, for example. In this article, we will look at how it used to be done in the Drupal 7 codebase and how to port it to Drupal 8.
Let's take an example of the following code in Drupal 7:

  function my_module_menu() {
    $items['node/%my_menu/mytab'] = array(
      // ...
      // ...
    );
  }
Catégories: Elsewhere

Drupal Watchdog: Small Sites, Big Drupal

Planet Drupal - jeu, 18/06/2015 - 17:58

In a much-analyzed 2013 interview with Computerworld, Drupal founder and “benevolent dictator” Dries Buytaert laid out a future path for the software focused squarely on enterprise clients (see also “Will the Revolution be Drupalized?”). While small sites had their place, Buytaert asserted, “I think we just need to say we’re more about big sites.” With Drupal 8, he concluded, “I really think we can say we’ve built the best CMS for enterprise systems.”[1]

Where does this bright future leave the smaller sites that up till now have formed the mainstay of Drupal adopters?

What’s in the Pipe

Drupal 8 is not all bad news for smaller sites; there are many new features and enhancements that should lower or eliminate some previous barriers.

  • More in core: Many areas of key functionality that previously required downloading, installing, and configuring modules and other dependencies now work out of the box. Case in point: WYSIWYG editing.
  • UI improvements: A lot of customization that previously required specialized modules or custom code is now exposed via the core admin interface.

That said, there are signs of trouble ahead:

Hosting Barriers

Drupal 7 performance already pushed the limits of the typical, inexpensive, shared hosting that most small sites rely on. And Drupal 8? Watch out. It has what Drupal 8 maintainer Nathaniel Catchpole frankly called “an embarrassingly high memory requirement.”[2] Yes, memory issues can be addressed through solutions like reverse proxy caching or pushing search indexing to Solr. But those options are precisely the ones that are missing from the vast majority of shared hosts.

DIYers Beware

Small Drupal sites have benefited from the ease of dabbling in Drupal development. Drupal 8, in contrast, has been rewritten from the ground up with professional programmers in mind. Dependency injection, anyone?

Catégories: Elsewhere

Phase2: Developer Soft Skills Part 1: Online Research

Planet Drupal - jeu, 18/06/2015 - 17:32
Developer Soft Skills

One of my earliest jobs was customer service for a call center. I worked for many clients that each had training specific to their service. No matter the type of training, whether technical or customer oriented, soft skills were always included. Margaret Rouse said, “Soft skills are personal attributes that enhance an individual’s interactions, career prospects and job performance. Unlike hard skills, which tend to be specific to a certain type of task or activity, soft skills are broadly applicable.”

In this blog series I will be discussing what I call “developer soft skills.” The hard skills in development are (among others) logic, languages, and structure. Developer soft skills are those that help a developer accomplish their tasks outside of that knowledge. I will be covering the following topics:

  • Online research
  • Troubleshooting
  • Enhancing/Customizing
  • Integrating
  • Architecting
Part 1: Online Research

One of the first skills a developer should master is online researching. This is an area with some controversy (which will be discussed later) but a necessary skill for learning about new technologies, expanding your knowledge, and solving problems.

One of the best reasons for research is continuous education. For many professions (such as the military, education and medical fields) continuing education is required to keep up on updated information, concepts, and procedures. As a developer, continuing to grow our skill set helps us develop better projects by using better code, better tools, and better methods.

Search engine queries

When researching a topic on the internet, you usually use a search engine, so it pays to understand how a search engine works and how to get the best results. There are two parts to how a search engine works: part one is data collection and indexing; part two is searching, or querying, that index. I will be focusing on how to write the best possible query; to learn more about how search engines collect and index data, see this link. In order to write good queries we should understand how search engines respond to what we type into the search box. Early search results were ranked by a simple (by today’s standards) comparison of search terms to indexed page word usage and boolean logic. Since then, search engines have started to support natural language queries.

We can get better results by using this to our advantage. Say I wanted to research how to make a calendar with the Java programming language. Instead of searching for keywords and distinct ideas by themselves (“java -script calendar”), use natural language to include phrasing and context in the query: “how can I make a calendar with java”. The first result from the keyword search returns a reference to the Java Calendar class; the first result from the second query returns example code for writing a calendar in Java. The better the query, the better the results.

Search result inspection

Once we have the right query, we can turn our attention to the results. One of the first things I do is limit the results to a date range. This prevents results from the previous decade (or earlier) from being displayed alongside more recent and applicable ones. Another way to focus a search is to limit the site the search takes place on: if we know we want to search for a jQuery function, for example, we can restrict the search to the jQuery documentation site.

Once we have filtered our results, it’s time for further inspection. When viewing a results page, the first thing I look for is the context of the article or post. Does the author and/or site have a lot of ads? This can sometimes mean that the site is more about making money than providing good answers. Does the page have links or other references to related topics or ideas? This can show whether the author is knowledgeable in the subject matter.

The controversy

Earlier I mentioned online researching can be a controversial topic. One of the points of controversy is discussed in Scott Hanselman’s blog post, Am I really a developer or just a good googler? While I agree with his major point, that researching bad code can be dangerous, I contend that using a search engine can produce good results and learning opportunities.

Almost any time you search for a programming topic, one site or group of sites dominates the results: Stack Overflow, or the Stack Exchange group of sites. Several articles have been written about reasons not to use Stack Overflow, the consequences of using it, and why some developers no longer use it. Using Stack Overflow will not solve all your problems or make you a better developer.

Again, these arguments make some good points. But I think that using Stack Overflow correctly, just like good use of search engines, can produce good results. Using a Stack Exchange site comes with the benefit of community. These sites have leveraged the Stack Exchange Q&A methodology for their specific topic or technology and can be a great resource on how to solve a problem within the bounds of that community. One of my development mentors told me that there are thousands of ways to solve a programming problem, and usually several wrong ones. The key is to avoid the wrong ones and try to find one of the best ones. Searching within a Stack Exchange site for answers can highlight the wrong ones but also surface the ones that work best in that system.

Here is an example of a Stack Overflow Drupal community response that came up when I searched for: “drupal create term programmatically.”

This response is correct, but if you look at the link provided, you will see it is for Drupal 6. If you were looking for how to do this in Drupal 7, for instance, the answer provided would not be correct. We could have improved our results by adding “Drupal 7” to our query. Most important is to keep in mind that sites like Stack Overflow and other community sites include a mix of user-generated responses, meaning anyone can respond without being vetted.

Keep going

The best piece of advice I can offer in response to the arguments against using online search results and Stack Overflow is: “This is not the end.” Keep going past the result and research the answer. Don’t just copy and paste the code. Don’t just believe the top-rated answer or blog post. Click the references cited, search for the functions or API calls that appear in the answer, and make the research a part of your knowledge. Then give back by writing your own article or posting your own answers. Answering questions can sometimes be just as powerful a learning tool as searching for them.

In the end, anything you find through search, blog, and code sites should be considered a suggestion as one way of solving a problem – not necessarily the solution to your concern.

In the next post I will discuss a good use case for Stack Exchange sites, Developer Soft Skills Part 2: Troubleshooting.

Subscribe to our newsletter to keep up with new projects and blogs from the Phase2 team!

Catégories: Elsewhere


Subscribe to jfhovinne agrégateur