Elsewhere

Raphael Geissert: Editing Debian online with sources.debian.net

Planet Debian - Tue, 16/12/2014 - 09:00
How cool would it be to fix that one bug you just found without having to download a source package? And without leaving your browser?

Inspired by GitHub's online code editing, during DebConf 14 I worked on integrating an online editor into debsources (the software behind sources.debian.net). Long story short: it is available today for users of Chromium (or anything else that supports Chrome extensions).

After installing the editor for sources.debian.net extension, go straight to sources.debian.net and enjoy!

Go from simple debsources:


To debsources on steroids:


All in all, it brings:
  • Online editing of all of Debian
  • In-browser patch generation, available for download
  • Downloading the modified file
  • Sending the patch to the BTS
  • Syntax highlighting for over 120 file formats!
  • More hidden gems from the Ace editor that can be integrated, thanks to patches from you

Clone it or fork it:
git clone https://github.com/rgeissert/ace-sourced.n.git

For example, head to apt's source code, find a typo and correct it online: open apt.cc, click on "edit", make the changes, click on "email patch". Yes! It can generate a mail template for sending the patch to the BTS: just add a nice message and your patch is ready to be sent.

Didn't find any typo to fix? How sad. Head to codesearch and search Debian for a spelling mistake, click on any result, edit, correct, email! You will have contributed to Debian in less than five minutes without leaving your browser.

The editor was meant to be integrated into debsources itself, without the need of a browser extension. This is expected to be done when the requirements imposed by debsources maintainers are sorted out.

Kudos to Harlan Lieberman, who helped debug some performance issues in the early implementations of the integration and who worked on the packaging of the Ace editor.
Categories: Elsewhere

Code Drop: Thoughts on taking the Acquia Drupal certification exam

Planet Drupal - Tue, 16/12/2014 - 02:52

Earlier this week I had the opportunity to be the first developer at Code Drop to sit an exam to become an "Acquia Certified Developer". I managed to clear the exam with the following results:

  • Section 1 - Fundamental Web Development Concepts: 87%
  • Section 2 - Site Building: 87%
  • Section 3 - Front end development (Theming): 92%
  • Section 4 - Back end development (Coding): 81%

Like many others who have done the exam, I will briefly run over my experience and thoughts on the whole process.

Categories: Elsewhere

Gustavo Noronha Silva: Web Engines Hackfest 2014

Planet Debian - Tue, 16/12/2014 - 00:20

For the 6th year in a row, Igalia has organized a hackfest focused on web engines. The 5 years before this one were actually focused on the GTK+ port of WebKit, but the number of web engines that matter to us as Free Software developers and consultancies has grown, and so has the scope of the hackfest.

It was a very productive and exciting event. It has already been covered by Manuel Rego, Philippe Normand, Sebastian Dröge and Andy Wingo! I am sure more blog posts will pop up. We had Martin Robinson telling us about the new Servo engine that Mozilla has been developing as a proof of concept for both Rust as a language for building big, complex products and for doing layout in parallel. Andy gave us a very good summary of where JS engines are in terms of performance and features. We had talks about CSS grid layouts, TyGL – a GL-powered implementation of the 2D painting backend in WebKit, the new Wayland port, announced by Zan Dobersek, and a lot more.

With help from my colleague ChangSeok OH, I presented a description of how a team at Collabora led by Marco Barisione made the combination of WebKitGTK+ and GNOME’s web browser a pretty good experience for the Raspberry Pi. It took a not so small amount of both pragmatic limitations and hacks to get to a multi-tab browser that can play YouTube videos and be quite responsive, but we were very happy with how well WebKitGTK+ worked as a base for that.

One of my main goals for the hackfest was to help drive features that were lingering in the bug tracker for WebKitGTK+. I picked up a patch that had gone through a number of iterations and rewrites: the HTML5 notifications support, and with help from Carlos Garcia, managed to finish it and land it on the last day of the hackfest! It provides new signals that can be used to authorize notifications and to show and close them.

To make notifications work in the best case scenario, the only thing that the API user needs to do is handle the permission request, since we provide a default implementation for the show and close signals that uses libnotify if it is available when building WebKitGTK+. Originally our intention was to use GNotification for the default implementation of those signals in WebKitGTK+, but it turned out to be a pain to use for our purposes.

GNotification is tied to GApplication. This allows for some interesting features, like notifications being persistent and able to reactivate the application, but those make no sense in our current use case, although that may change once service workers become a thing. It can also be a bit problematic given we are a library and thus have no GApplication of our own. That was easily overcome by using the default GApplication of the process for notifications, though.

The show stopper for us using GNotification was the way GNOME Shell currently deals with notifications sent using this mechanism. It will look for a .desktop file named after the application ID used to initialize the GApplication instance and reject the notification if it cannot find it. Besides making this a pain to test – our test browser would need a .desktop file to be installed – it would not work for our main API user! The application ID used for all Web instances is org.gnome.Epiphany at the moment, and that is not the same as any of the desktop files used either by the main browser or by the web apps created with it.

For the future we will probably move Epiphany towards this new era, and all users of the WebKitGTK+ API as well, but the strictness of GNOME Shell would hurt the usefulness of our default implementation right now, so we decided to stick to libnotify for the time being.

Other than that, I managed to review a bunch of patches during the hackfest, and took part in many interesting discussions regarding the next steps for GNOME Web and the GTK+ and Wayland ports of WebKit, such as the potential introduction of a threaded compositor, which is pretty exciting. We also tried to have Bastien Nocera as a guest participant for one of our sessions, but it turns out that requires more than a notebook on top of a bench hooked up to a TV to work well. We could think of something next time ;D.

I’d like to thank Igalia for organizing and sponsoring the event, Collabora for sponsoring and sending ChangSeok and myself over to Spain from far away Brazil and South Korea, and Adobe for also sponsoring the event! Hope to see you all next year!

Web Engines Hackfest 2014 sponsors: Adobe, Collabora and Igalia

Categories: Elsewhere

Gregor Herrmann: GDAC 2014/15

Planet Debian - Mon, 15/12/2014 - 22:58

nothing exciting today in my debian life. just yet another nice example of collaboration around an RC bug where the bug submitter, the maintainer & me investigated via the BTS, & the maintainer also got support on IRC from others. – now we just need someone to come up with an actual fix for the problem :)

this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.

Categories: Elsewhere

Tag1 Consulting: yumrepos Puppet Module

Planet Drupal - Mon, 15/12/2014 - 21:12

Earlier this year we undertook a project to upgrade a client's infrastructure to all new servers including a migration from old Puppet scripts which were starting to show their age after many years of server and service changes. During this process, we created a new set of Puppet scripts using Hiera to separate configuration data from modules.

read more

Categories: Elsewhere

Holger Levsen: 20121214-not-everybody-is-equal

Planet Debian - Mon, 15/12/2014 - 20:25
We ain't equal in Debian neither and wishful thinking won't help.

"White people think calling them white is racist." - "White people think calling them racist is racist."

(Thanks to and via 2damnfeisty and blackgirlsparadise!)

Posted here as food for thought. What else is invisible for whom? Or hardly visible or distorted or whatever shade of (in)visible... - and how can we know about things we cannot (yet) see...

Categories: Elsewhere

Drupal Association News: Meeting Personas: The Drupal Newcomer

Planet Drupal - Mon, 15/12/2014 - 20:01

 This post is part of an ongoing series detailing the new personas that have been drawn up as part of our Drupal.org user research.

Bronwen Buswell is a newcomer to Drupal. Based out of Colorado Springs, Colorado, Bronwen works as a Conference and Communications Coordinator at a nonprofit called PEAK Parent Center, which is dedicated to supporting the families of children with disabilities. While Bronwen’s role isn’t technical, she needs to use her company’s website as part of getting her work done.

“We’re federally designated by the US Department of Education, so we try to be a total one-stop shop information and referral center,” Bronwen said. “Families can call us about any situation related to their child, and we will either refer them to the right agency or provide what they need. We’re focused on helping families navigate the education and special education systems, and we serve families with children ages birth through 26, with all sorts of disabilities, including autism, Down syndrome, learning disabilities, and so on."

Keeping Up With Technology

In the past few years, PEAK Parent Center’s website became very outdated, and this was a problem. Bronwen’s clients were very dependent on being able to receive assistance over the phone, as many of the resources that the center provides are not readily available online. When updates needed to be made, Bronwen and her company were forced to rely on their tech vendors to make changes to the website, as they were working with a custom solution rather than a CMS.

“Our website was pre-cutting edge, made by local vendors, all in HTML code and SQL database. We had excellent tech vendors who helped us create what we needed, and this was before the CMS options came along so it was really good at first. However, in the past 5 to 6 years, it has gotten really archaic, and we’re super reliant upon our vendors for updating our website. What’s simple in a CMS is complex for us,” Bronwen said.

After doing lots of research and working with the federal government to find the best solution for PEAK Parent Center and other centers like it, Bronwen and her colleagues decided to explore using Drupal to create a site template that could be deployed for PEAK Parent Center and for other similar centers that it supports across the country.

“We're the technical assistance center for parent centers like ours in a 12 state region,” said Bronwen. “When [Drupal Association Executive Director] Holly Ross was at NTEN we started going to their conferences, which led us to launch a tech leadership initiative where we supported participating parent centers across the nation. As part of that, we got connected with great consultants and thinkers in tech, and we were asked by the US Department of Education to participate in the creation of website templates in 2 content management systems — Wordpress and Drupal — that could be used in other parent centers in the future."

Getting Experienced Assistance

With help from Aaron Pava and Nikki Pava at Alegria Partners, the staff at PEAK Parent Center has been learning to use their new Drupal website. Aaron has advised Bronwen and her colleagues every step of the way, from proposing solutions in the discovery process to walking Bronwen and her coworkers through specific tasks.

Occasionally, Bronwen encounters small problems due to updates or little glitches with distributions, which is why Aaron has encouraged her to get involved and do some training on Drupal. Unfortunately, most of Bronwen’s time is spent trying to get the website ready to launch, as she’s under pressure from the federal government and her board of directors to deploy the new site. Though Bronwen isn’t working on the technical side of the website, she’s busy populating it with content and making sure that it will be a useful tool for her clients.

“What I haven’t done is specific Drupal training,” said Bronwen. “I know about Lynda and Build A Module, but I’ve only had time to do sessions one-on-one with Aaron, for example, ‘Here’s how to upload content in this template.’

"I have learned a lot on Drupal.org, but it’s been primarily through Aaron sending me a link— for example, he’ll send me links about Red Hen since we’re exploring our CRM options— but I haven’t surfed around it much,” Bronwen added.

Areas For Improvement

Bronwen wishes there was a recommended Drupal 101 section on Drupal.org, something that would help content editors like herself learn to use the CMS better, but for now, she is limited to relying on more educated ambassadors for Drupal to point her in the right direction.

“It’s delicate to recommend vendors,” said Bronwen, "but it seems that the community is really powerful, and is certainly one of the most unique aspects that sets Drupal apart from other CMS options. Even a few vendors recommended by the community, or a recommended Drupal 101 lesson where you can go through it, go off and work in Drupal, and come back and get Drupal 201 would be really valuable for me.

“I know that there are local Drupal meet-ups that happen all over the country,” Bronwen added. “[One group we talked with] told us that nonprofits can go to these events and say “I need this or that,” and some hardcore Drupal techie will take the work on pro bono. That was another factor that helped draw us to using Drupal — the availability of the community. It would be useful if there was more information on how to tap into those meetups, perhaps, when they’re happening."

Bronwen knows that the Drupal community is really powerful, and considers it one of the most unique aspects that sets Drupal apart from other CMS options. She is excited by the availability of the Drupal community, and is looking forward to interacting and working with it as she continues to run and improve PEAK Parent Center’s website.

Personal blog tags: drupal.org user research, persona interviews
Categories: Elsewhere

Midwestern Mac, LLC: Highly-Available PHP infrastructure with Ansible

Planet Drupal - Mon, 15/12/2014 - 17:03

I just posted a large excerpt from Ansible for DevOps over on the Server Check.in blog: Highly-Available Infrastructure Provisioning and Configuration with Ansible. In it, I describe a simple set of playbooks that configures a highly-available infrastructure primarily for PHP-based websites and web applications, using Varnish, Apache, Memcached, and MySQL, each configured in a way optimal for high-traffic and highly-available sites.
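If you want to try a set of playbooks like that against your own servers, the invocation is the usual Ansible one. Here is a minimal sketch; the inventory and playbook file names are placeholders for illustration, not taken from the book excerpt:

# Run the site-wide playbook against your own inventory of Varnish, web,
# Memcached and MySQL hosts (file names are assumptions).
ansible-playbook -i inventory/hosts site.yml

# While iterating, limit a run to one tier, e.g. only the web servers.
ansible-playbook -i inventory/hosts site.yml --limit webservers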

Here's a diagram of the ultimate infrastructure being built:

Categories: Elsewhere

SitePoint PHP Drupal: AngularJS in Drupal Apps

Planet Drupal - Mon, 15/12/2014 - 17:00

Angular.js is the hot new thing right now for designing applications in the client. Well, it’s not so new anymore but is sure as hell still hot, especially now that it’s being used and backed by Google. It takes the idea of a JavaScript framework to a whole new level and provides a great basis for developing rich and dynamic apps that can run in the browser or as hybrid mobile apps.

In this article I am going to show you a neat little way of using some of its magic within a Drupal 7 site. A simple piece of functionality but one that is enough to demonstrate how powerful Angular.js is and the potential use cases even within heavy server-side PHP frameworks such as Drupal. So what are we doing?

We are going to create a block that lists some node titles. Big whoop. However, these node titles are going to be loaded asynchronously using Angular.js and there will be a textfield above them to filter/search for nodes (also done asynchronously). As a bonus, we will also use a small open source Angular.js module that will allow us to view some of the node info in a dialog when we click on the titles.

Continue reading AngularJS in Drupal Apps on SitePoint.

Categories: Elsewhere

Cheppers blog: Apache Solr and Drupal - Part I: Set up Apache Solr to enhance Drupal search

Planet Drupal - Mon, 15/12/2014 - 16:40

Today, most websites have search functionality. With the help of Apache Solr, the time spent waiting for a search result can be radically reduced. In this article we are going to set up a basic search infrastructure on a *nix-based system.

Categories: Elsewhere

Drupal Commerce: Major improvements in addressfield 7.x-1.0-rc1

Planet Drupal - Mon, 15/12/2014 - 16:39

Many people know that addressfield hasn’t been the easiest module to maintain. There are over 200 countries in the world, each with its own addressing requirements. Addressfield attempted to provide a sane default for all of them, along with a plugin architecture for handling per-country customizations. But with so many countries, the list of needed improvements became never-ending, and the customizations themselves started gathering in only one plugin (address.inc), which quickly became impossible to maintain.

A radical change was needed, so after a lot of research we introduced a new plan for Drupal 8, along with a brand new PHP library we can depend on from addressfield 8.x-2.x. The new plan revolves around two powerful ideas:

  • The introduction of address formats, which hold information on how a country’s address and its form need to be rendered and validated.
  • The use of Google’s addressing dataset, freely available and built for Chrome and Android, with address formats for 200 countries.

The introduced solutions were obviously superior to anything we had before that, but Drupal 8 is still far from production, and we needed improvements on our Drupal 7 sites today, so we decided to try and backport as many concepts as we could into the 7.x-1.x codebase. The result of that is addressfield 7.x-1.0-rc1:

Read more...

Categories: Elsewhere

Annertech: Best Modules for Media in Drupal: How to Install and Configure Scald

Planet Drupal - Mon, 15/12/2014 - 16:19

In the first part of this series, “Scalable & Sustainable Media Management for Drupal Websites”, I talked about media management solutions for Drupal. Specifically, I am interested in managing large amounts of files in a reusable manner. The solution I like best at the moment is Scald.

Just so we don't get confused with some phrasing, Scald stores all media items as custom entities called "atoms"; Scald "contexts" are very similar to view modes.

Categories: Elsewhere

Thomas Goirand: Supporting 3 init systems in OpenStack packages

Planet Debian - Mon, 15/12/2014 - 09:15

tl;dr: Providing support for all 3 init systems (sysv-rc, Upstart and systemd) isn’t hard, and generating the init scripts / Upstart jobs / systemd units using a template system is a lot easier than I previously thought.

As always when writing this kind of blog post, I do expect that others will not like what I did. But that’s the point: give me your opinion in a constructive way (please be polite even if you don’t like what you see… I have had to read harsh comments too many times), and I’ll implement your ideas if I find them nice.

History of the implementation: how we came to the idea

I had no plan to do this. I don’t believe what I wrote can be generalized to all of the Debian archive. It’s just that I started doing things, and it made sense when I did it. Let me explain how it happened.

Since it’s clear that many users, and especially the most advanced ones, have an opinion about which init system they prefer, and because I also support Ubuntu (at least Trusty), I thought it was a good idea to support all the “main” init systems: sysv-rc, Upstart and systemd. For the sake of being exact in this blog, I counted: OpenStack in Debian currently contains 64 init scripts to run daemons. That’s quite a lot, and way too many to write all by hand. Though that’s what I was doing for the last few years… until the end of this last summer!

So, doing it all by hand, I first started implementing Upstart. Its support was there only when building on Ubuntu (which isn’t the correct thing to do; this is now fixed, read further…). Then we thought about adding support for systemd. Gustavo Panizzo, one of the contributors to the OpenStack packages, started implementing it in Keystone (the auth server for OpenStack) for the Juno release, which came out this October. He did that last summer, early enough that we didn’t expect anyone to be using the Juno branch of Keystone yet. After some experiments, we had it kind of working. What he did was invoke “/etc/init.d/keystone start-systemd”, which was still using start-stop-daemon. Yes, that’s not perfect, and it’s better to use systemd’s foreground process handling, but at least we had a single place to write the startup logic, where we check /etc/default for the logging configuration, configure the log file, and so on.

Then, around October, I took a step back to look at the whole picture with the sysv-rc scripts, and saw the mess, with all the tiny differences between them. It became clear that I had to do something to make sure they were all the same, with support for the same things (like which log system to use, where to store the PID, creating /var/lib/<project>, /var/run/<project> and so on…).

Last, this December, I was able to fix the remaining issues for systemd support, thanks to the awesome contribution of Mikael Cluseau on the Alioth OpenStack packaging list. Now the systemd unit file still invokes the init script, but it no longer uses start-stop-daemon, no PID file is involved, and daemons run as systemd foreground processes. Finally, daemon service files are also enabled on installation (they were not previously).

Implementation

So I took the simplistic approach of always using the same template for the sysv-rc switch/case and the start and stop functions, appending it at the end of every debian/*.init.in script. I started trying to reduce the number of variables, and I was surprised by the result: only a very small part of each init script needs to change from daemon to daemon. For example, for nova-api, here’s the init script (LSB header stripped out):

DESC="OpenStack Compute API" PROJECT_NAME=nova NAME=${PROJECT_NAME}-api

That is it: only 3 lines, defining the name of the daemon, the name of the project it belongs to (e.g. nova, cinder, etc.), and a long description. There are of course much more complicated init scripts (see the one for neutron-server in the Debian archive, for example), but the vast majority only need the above.
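To make the assembly concrete, here is a minimal sketch of what the build machinery effectively does with such a snippet; the exact invocations live in the debian/rules override shown further down, and the file names below are just the nova-api example:

# Build the final sysv-rc script: the shared template is appended to the
# 3-line per-daemon snippet shown above.
cp debian/nova-api.init.in debian/nova-api.init
cat /usr/share/openstack-pkg-tools/init-script-template >> debian/nova-api.init

# The generators then derive the other init systems' files from the same snippet:
pkgos-gen-upstart-job debian/nova-api.init.in    # writes debian/nova-api.upstart
pkgos-gen-systemd-unit debian/nova-api.init.in   # writes debian/nova-api.service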

Here’s the sysv-rc init script template that I currently use:

#!/bin/sh
# The content after this line comes from openstack-pkg-tools
# and has been automatically added to a .init.in script, which
# contains only the descriptive part for the daemon. Everything
# else is standardized as a single unique script.
# Author: Thomas Goirand <zigo@debian.org>

# PATH should only include /usr/* if it runs after the mountnfs.sh script
PATH=/sbin:/usr/sbin:/bin:/usr/bin

if [ -z "${DAEMON}" ] ; then
    DAEMON=/usr/bin/${NAME}
fi
PIDFILE=/var/run/${PROJECT_NAME}/${NAME}.pid
if [ -z "${SCRIPTNAME}" ] ; then
    SCRIPTNAME=/etc/init.d/${NAME}
fi
if [ -z "${SYSTEM_USER}" ] ; then
    SYSTEM_USER=${PROJECT_NAME}
fi
if [ -z "${SYSTEM_GROUP}" ] ; then
    SYSTEM_GROUP=${PROJECT_NAME}
fi
if [ "${SYSTEM_USER}" != "root" ] ; then
    STARTDAEMON_CHUID="--chuid ${SYSTEM_USER}:${SYSTEM_GROUP}"
fi
if [ -z "${CONFIG_FILE}" ] ; then
    CONFIG_FILE=/etc/${PROJECT_NAME}/${PROJECT_NAME}.conf
fi
LOGFILE=/var/log/${PROJECT_NAME}/${NAME}.log
if [ -z "${NO_OPENSTACK_CONFIG_FILE_DAEMON_ARG}" ] ; then
    DAEMON_ARGS="${DAEMON_ARGS} --config-file=${CONFIG_FILE}"
fi

# Exit if the package is not installed
[ -x $DAEMON ] || exit 0

# If ran as root, create /var/lock/X, /var/run/X, /var/lib/X and /var/log/X as needed
if [ "x$USER" = "xroot" ] ; then
    for i in lock run log lib ; do
        mkdir -p /var/$i/${PROJECT_NAME}
        chown ${SYSTEM_USER} /var/$i/${PROJECT_NAME}
    done
fi

# This defines init_is_upstart which we use later on (+ more...)
. /lib/lsb/init-functions

# Manage log options: logfile and/or syslog, depending on user's choosing
[ -r /etc/default/openstack ] && . /etc/default/openstack
[ -r /etc/default/$NAME ] && . /etc/default/$NAME
[ "x$USE_SYSLOG" = "xyes" ] && DAEMON_ARGS="$DAEMON_ARGS --use-syslog"
[ "x$USE_LOGFILE" != "xno" ] && DAEMON_ARGS="$DAEMON_ARGS --log-file=$LOGFILE"

do_start() {
    start-stop-daemon --start --quiet --background ${STARTDAEMON_CHUID} --make-pidfile --pidfile ${PIDFILE} --chdir /var/lib/${PROJECT_NAME} --startas $DAEMON \
        --test > /dev/null || return 1
    start-stop-daemon --start --quiet --background ${STARTDAEMON_CHUID} --make-pidfile --pidfile ${PIDFILE} --chdir /var/lib/${PROJECT_NAME} --startas $DAEMON \
        -- $DAEMON_ARGS || return 2
}

do_stop() {
    start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 --pidfile $PIDFILE
    RETVAL=$?
    rm -f $PIDFILE
    return "$RETVAL"
}

do_systemd_start() {
    exec $DAEMON $DAEMON_ARGS
}

case "$1" in
start)
    init_is_upstart > /dev/null 2>&1 && exit 1
    log_daemon_msg "Starting $DESC" "$NAME"
    do_start
    case $? in
        0|1) log_end_msg 0 ;;
        2) log_end_msg 1 ;;
    esac
;;
stop)
    init_is_upstart > /dev/null 2>&1 && exit 0
    log_daemon_msg "Stopping $DESC" "$NAME"
    do_stop
    case $? in
        0|1) log_end_msg 0 ;;
        2) log_end_msg 1 ;;
    esac
;;
status)
    status_of_proc "$DAEMON" "$NAME" && exit 0 || exit $?
;;
systemd-start)
    do_systemd_start
;;
restart|force-reload)
    init_is_upstart > /dev/null 2>&1 && exit 1
    log_daemon_msg "Restarting $DESC" "$NAME"
    do_stop
    case $? in
        0|1)
            do_start
            case $? in
                0) log_end_msg 0 ;;
                1) log_end_msg 1 ;; # Old process is still running
                *) log_end_msg 1 ;; # Failed to start
            esac
        ;;
        *) log_end_msg 1 ;; # Failed to stop
    esac
;;
*)
    echo "Usage: $SCRIPTNAME {start|stop|status|restart|force-reload|systemd-start}" >&2
    exit 3
;;
esac

exit 0

Nothing particularly fancy here… You’ll notice that it’s really OpenStack-centric (see the LOGFILE and CONFIG_FILE things…). You may also have noticed the call to “init_is_upstart”, which is needed for Upstart support. I’m not sure it’s at the correct place in the init script. Should I put it at the top of the script? Was I right with the exit values for it? Please send me your comments…

Then I thought about generalizing all of this, because not only the sysv-rc scripts needed to be squared up, but also the Upstart jobs. The approach here was to source the debian/*.init.in snippet and then generate the Upstart job accordingly, using the above 3 variables (or more as needed). The fun here is that, instead of calculating everything at runtime as the sysv-rc script does, for Upstart jobs many things are calculated at build time. For each debian/*.init.in script that debian/rules finds, pkgos-gen-upstart-job is called. Here’s pkgos-gen-upstart-job:

#!/bin/sh

INIT_TEMPLATE=${1}
UPSTART_FILE=`echo ${INIT_TEMPLATE} | sed 's/.init.in/.upstart/'`

# Get the variables defined in the init template
. ${INIT_TEMPLATE}

## Find out what should go in After=
#SHOULD_START=`cat ${INIT_TEMPLATE} | grep "# Should-Start:" | sed 's/# Should-Start://'`
#
#if [ -n "${SHOULD_START}" ] ; then
#    AFTER="After="
#    for i in ${SHOULD_START} ; do
#        AFTER="${AFTER}${i}.service "
#    done
#fi

if [ -z "${DAEMON}" ] ; then
    DAEMON=/usr/bin/${NAME}
fi
PIDFILE=/var/run/${PROJECT_NAME}/${NAME}.pid
if [ -z "${SCRIPTNAME}" ] ; then
    SCRIPTNAME=/etc/init.d/${NAME}
fi
if [ -z "${SYSTEM_USER}" ] ; then
    SYSTEM_USER=${PROJECT_NAME}
fi
if [ -z "${SYSTEM_GROUP}" ] ; then
    SYSTEM_GROUP=${PROJECT_NAME}
fi
if [ "${SYSTEM_USER}" != "root" ] ; then
    STARTDAEMON_CHUID="--chuid ${SYSTEM_USER}:${SYSTEM_GROUP}"
fi
if [ -z "${CONFIG_FILE}" ] ; then
    CONFIG_FILE=/etc/${PROJECT_NAME}/${PROJECT_NAME}.conf
fi
LOGFILE=/var/log/${PROJECT_NAME}/${NAME}.log
DAEMON_ARGS="${DAEMON_ARGS} --config-file=${CONFIG_FILE}"

echo "description \"${DESC}\"
author \"Thomas Goirand <zigo@debian.org>\"

start on runlevel [2345]
stop on runlevel [!2345]

chdir /var/run

pre-start script
    for i in lock run log lib ; do
        mkdir -p /var/\$i/${PROJECT_NAME}
        chown ${SYSTEM_USER} /var/\$i/${PROJECT_NAME}
    done
end script

script
    [ -x \"${DAEMON}\" ] || exit 0
    DAEMON_ARGS=\"${DAEMON_ARGS}\"
    [ -r /etc/default/openstack ] && . /etc/default/openstack
    [ -r /etc/default/\$UPSTART_JOB ] && . /etc/default/\$UPSTART_JOB
    [ \"x\$USE_SYSLOG\" = \"xyes\" ] && DAEMON_ARGS=\"\$DAEMON_ARGS --use-syslog\"
    [ \"x\$USE_LOGFILE\" != \"xno\" ] && DAEMON_ARGS=\"\$DAEMON_ARGS --log-file=${LOGFILE}\"
    exec start-stop-daemon --start --chdir /var/lib/${PROJECT_NAME} \\
        ${STARTDAEMON_CHUID} --make-pidfile --pidfile ${PIDFILE} \\
        --exec ${DAEMON} -- --config-file=${CONFIG_FILE} \${DAEMON_ARGS}
end script
" >${UPSTART_FILE}

The only thing I don’t know how to do is implement Should-Start / Should-Stop in an Upstart job. Can anyone shoot me a mail and tell me the solution?

Then I wanted to add support for systemd. Here we cheated, since we just call the sysv-rc script from the systemd unit; however, the systemd-start target uses exec, so the process stays in the foreground. The generator is also much smaller than the Upstart one. Here, though, I could implement the “After” part, corresponding to Should-Start:

#!/bin/sh

INIT_TEMPLATE=${1}
SERVICE_FILE=`echo ${INIT_TEMPLATE} | sed 's/.init.in/.service/'`

# Get the variables defined in the init template
. ${INIT_TEMPLATE}

if [ -z "${SCRIPTNAME}" ] ; then
    SCRIPTNAME=/etc/init.d/${NAME}
fi
if [ -z "${SYSTEM_USER}" ] ; then
    SYSTEM_USER=${PROJECT_NAME}
fi
if [ -z "${SYSTEM_GROUP}" ] ; then
    SYSTEM_GROUP=${PROJECT_NAME}
fi

# Find out what should go in After=
SHOULD_START=`cat ${INIT_TEMPLATE} | grep "# Should-Start:" | sed 's/# Should-Start://'`

if [ -n "${SHOULD_START}" ] ; then
    AFTER="After="
    for i in ${SHOULD_START} ; do
        AFTER="${AFTER}${i}.service "
    done
fi

echo "[Unit]
Description=${DESC}
$AFTER

[Service]
User=${SYSTEM_USER}
Group=${SYSTEM_GROUP}
WorkingDirectory=/var/lib/${PROJECT_NAME}
PermissionsStartOnly=true
ExecStartPre=/bin/mkdir -p /var/lock/${PROJECT_NAME} /var/log/${PROJECT_NAME} /var/lib/${PROJECT_NAME}
ExecStartPre=/bin/chown ${SYSTEM_USER}:${SYSTEM_GROUP} /var/lock/${PROJECT_NAME} /var/log/${PROJECT_NAME} /var/lib/${PROJECT_NAME}
ExecStart=${SCRIPTNAME} systemd-start
Restart=on-failure

[Install]
WantedBy=multi-user.target
" >${SERVICE_FILE}

As you can see, it’s calling “${SCRIPTNAME} systemd-start” (that is, the sysv-rc init script), which isn’t great. I’d be happy to have comments from systemd users / maintainers on how to fix it and make it better.

Integrating in debian/rules

To integrate with the Debian package build system, we only had to write this:

override_dh_installinit:
    # Create the init scripts from the template
    for i in `ls -1 debian/*.init.in` ; do \
        MYINIT=`echo $$i | sed s/.init.in//` ; \
        cp $$i $$MYINIT.init ; \
        cat /usr/share/openstack-pkg-tools/init-script-template >>$$MYINIT.init ; \
        pkgos-gen-systemd-unit $$i ; \
    done
    # If there's an upstart.in file, use that one instead of the generated one
    for i in `ls -1 debian/*.upstart.in` ; do \
        MYPKG=`echo $$i | sed s/.upstart.in//` ; \
        cp $$MYPKG.upstart.in $$MYPKG.upstart ; \
    done
    # Generate the upstart job if there's no already existing .upstart.in
    for i in `ls debian/*.init.in` ; do \
        MYINIT=`echo $$i | sed s/.init.in/.upstart.in/` ; \
        if ! [ -e $$MYINIT ] ; then \
            pkgos-gen-upstart-job $$i ; \
        fi \
    done
    dh_installinit --error-handler=true
    # Generate the systemd unit file
    # Note: because dh_systemd_enable is called by the
    # dh sequencer *before* dh_installinit, we have
    # to process it manually.
    for i in `ls debian/*.init.in` ; do \
        pkgos-gen-systemd-unit $$i ; \
        MYSERVICE=`echo $$i | sed 's/debian\///'` ; \
        MYSERVICE=`echo $$MYSERVICE | sed 's/.init.in/.service/'` ; \
        dh_systemd_enable $$MYSERVICE ; \
    done

As you can see, it’s possible to use a debian/*.upstart.in and not use the templating system in more complicated cases (I needed that mostly for neutron-server and neutron-plugin-openvswitch-agent).

Conclusion

I do not pretend that what I wrote in openstack-pkg-tools is the ultimate solution, but I’m convinced that it answers our own needs as the OpenStack maintainers in Debian. There’s a lot of room for improvement (like implementing Should-Start in Upstart jobs, or no longer calling the sysv-rc script from the systemd units), but moving to templates and generated scripts was a very good move, as the init scripts are much easier to maintain now, in a much more unified way. Even if I’m not completely satisfied with the systemd and Upstart implementations, I’m sure this is already a huge improvement in the maintainability of the sysv-rc scripts.

Last, and again: please send your comments and help improve the above! :)

Categories: Elsewhere

Liran Tal's Enginx: Drupal Performance Tip – “I’m too young to die” – know your DB engines

Planet Drupal - Mon, 15/12/2014 - 08:16
This entry is part 4 of 4 in the series Drupal Performance Tips

In the spirit of the video game Doom and its skill levels, we’ll review a few ways you can improve your Drupal performance and optimize for better results and server response time. The tips we’ll cover may at times be specific to Drupal 6, although you can always take the best practices from these examples and apply them to your own code base.

Doom skill levels: (easiest first)

1. I’m too young to die

2. Hey, not too rough

3. Hurt me plenty

4. Ultra-violence

5. Nightmare!

  This post is rated “I’m too young to die” difficulty level.

 

Drupal 6 shipped with all tables using MyISAM; Drupal 7 changed that and ships with all of its tables using the InnoDB storage engine. Each has its own strengths and weaknesses, but it’s quite clear that InnoDB will probably perform better for your Drupal site (though it has quite a bit of fine-tuning configuration to be tweaked in my.cnf).

Some modules, whether on Drupal 6, or on Drupal 7 after a straight upgrade that didn’t quite review all of their code, might ship with queries like SELECT COUNT(), which will hurt database performance if you have migrated your tables to InnoDB (or are simply using Drupal 7). That’s mainly because InnoDB and MyISAM work differently: whereas such a query is very fast on a MyISAM table, which keeps the row count readily available in its main index, on InnoDB it results in a full table scan for the count. Obviously, running such queries on large InnoDB tables will result in very poor performance.

Note to ponder upon – what about the Views module, which uses similar COUNT() queries to build the pagination for its views?
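To check where your own site stands, here is a minimal sketch using the mysql client; the database name "drupal" and the credentials are placeholders, not from the post:

# Which storage engine does each table in your Drupal database use?
mysql -u root -p -e "SELECT table_name, engine
    FROM information_schema.tables
    WHERE table_schema = 'drupal';"

# See how MySQL executes an unfiltered COUNT() on one of your bigger tables;
# on InnoDB the plan will show a scan over the whole table or an index.
mysql -u root -p drupal -e "EXPLAIN SELECT COUNT(*) FROM node;"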


The post Drupal Performance Tip – “I’m too young to die” – know your DB engines appeared first on Liran Tal's Enginx.

Categories: Elsewhere

Drupal core announcements: Drupal core security release window on Wednesday, December 17

Planet Drupal - Mon, 15/12/2014 - 00:15
Start: 2014-12-17 (All day) America/New_York
Online meeting (eg. IRC meeting)
Organizers: David_Rothstein

The monthly security release window for Drupal 6 and Drupal 7 core will take place on Wednesday, December 17.

This does not mean that a Drupal core security release will necessarily take place on that date for either the Drupal 6 or Drupal 7 branches, only that you should prepare to look out for one (and be ready to update your Drupal sites in the event that the Drupal security team decides to make a release).

There will be no bug fix release on this date; the next window for a Drupal core bug fix release is Wednesday, January 7.

For more information on Drupal core release windows, see the documentation on release timing and security releases, and the discussion that led to this policy being implemented.

Categories: Elsewhere

Gregor Herrmann: GDAC 2014/14

Planet Debian - Sun, 14/12/2014 - 22:27

I just got a couple of mails from the BTS. like almost every day, several times per day. now it made me realize how much I like the BTS, & how happy I am that it works so well & even gets new features. – thanks to the BTS maintainers for their continuous work!

this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.

Categories: Elsewhere

Mario Lang: Data-binding MusicXML

Planet Debian - Sun, 14/12/2014 - 21:30

My long-term free software project (Braille Music Compiler) just produced some offspring! xsdcxx-musicxml is now available on GitHub.

I used CodeSynthesis XSD to generate a rather complete object model for MusicXML 3.0 documents. Some of the classes needed a bit of manual adjustment, to make the client API really nice and tidy.

During the process, I have learnt (as is almost always the case when programming) quite a lot. I have to say, once you get the hang of it, CodeSynthesis XSD is really a very powerful tool. I definitely prefer having these 100k lines of code auto-generated from an XML Schema, instead of having to implement small parts of it by hand.

If you are into MusicXML for any reason, and you like C++, give this library a whirl. At least to me, it is what I was always looking for: Rather type-safe, with a quite self-explanatory API.

For added ease of integration, xsdcxx-musicxml is sub-project friendly. In other words, if your project uses CMake and Git, adding xsdcxx-musicxml as a subproject is as easy as using git submodule add and putting add_subdirectory(xsdcxx-musicxml) into your CMakeLists.txt.
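For illustration, a minimal sketch of that workflow; the GitHub URL is inferred from the author's account and project name, and the CMake target name is an assumption, so check the project's README for the exact details:

# Add the library as a Git submodule of your own project (URL assumed)
git submodule add https://github.com/mlang/xsdcxx-musicxml.git xsdcxx-musicxml
git submodule update --init

# Then, in your CMakeLists.txt:
#   add_subdirectory(xsdcxx-musicxml)
#   target_link_libraries(your_target xsdcxx-musicxml)   # target name assumed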

Categories: Elsewhere

Drupal Association News: Introducing the Drupal.org User Personas

Planet Drupal - Sun, 14/12/2014 - 20:15


As part of our mission to reinvent Drupal.org, we’ve been digging deep to understand who uses the website and how. At DrupalCon Austin, we began the process of discovering the personas of users who visit Drupal.org: to do so, we interviewed numerous Drupal.org users and asked questions about how frequently they use Drupal.org, how they use the website, their frustrations with Drupal.org, the things they enjoy about the site, and how we can make it easier for people to learn, use, and connect on Drupal.org.

Once we had that data, we set about looking for patterns and common themes. We built categories where we grouped people's similar experiences and frustrations together, and at the end of the process we had come up with five distinct personas that can apply to everyone who visits Drupal.org. These personas detail our users’ familiarity with Drupal software and Drupal community, how they use Drupal.org, how they contribute (or don’t), and more.

The five personas that we drew up are based on proficiency in Drupal and the Drupal ecosystem. They are:

  • Newcomer: This person has heard of Drupal, but has never built a Drupal site and doesn’t know where to start.
  • Learner: This person knows a bit about Drupal and the general Drupal ecosystem. He or she may have built a Drupal website, but likely has used only a few contrib modules and hasn’t made any customizations.
  • Skilled: This person understands and is fluent in Drupal-specific terminology, can build a Drupal website themselves using contributed modules, themes or distributions, or with the help of Drupal service providers. She or he has spent a decent amount of time working with Drupal, and is lightly engaged with the community, often not directly, via some sort of liaison.
  • Expert: This person has a deep understanding of Drupal and the Drupal ecosystem, knows how to build advanced websites with Drupal. Expert typically has been working with Drupal for at least a couple of years, is actively engaged with the community online and via local/national events, and actively contributes back in a variety of ways.
  • Master: This person has pervasive knowledge of Drupal and the Drupal ecosystem. He or she knows how to build Drupal websites of great complexity, is deeply engaged in the Drupal community, knows and has access to other Masters. Usually this person has been using Drupal and been around the Drupal community for a long time.

Proficiency-based personas are a new facet through which we can look at our community. It’s important to note that these personas are NOT only about developers. All kinds of roles can be on different levels of this ladder — UX designers, project managers, and business owners can be Experts and Masters, just like developers and themers. Simultaneously, people can have different backgrounds and be experts in other areas, but when it comes to fluency in Drupal and Drupal ecosystem, they would be represented as Newcomers, or Learners, or any of the other personas.

How will we use personas?

User personas will guide feature prioritization and feature development for Drupal.org, as we improve the site to make it easier for our users to progress from Newcomers to Masters. There are a variety of different ways we can go about it, but since our resources are limited, we will focus on just a few critical areas that will have the biggest impact on the overall user experience. So, to start our work, we’ll be focused on removing barriers and helping our users move more easily from Learners to Skilled. We found that our users have great success moving from Newcomer to Learner today, whereas moving from Learner to Skilled is much more difficult, since so much of the project is focused on doing things “the Drupal way” and learning the processes. Our secondary focus will be on moving users from Skilled to Expert.

Growing our pool of Skilled users is crucial, because by doing so we grow the number of people who own and/or build websites using Drupal, thus grow Drupal adoption. On the path from Skilled to Expert is when our users begin to give back by contributing patches, writing documentation, building and sharing modules and themes, helping others in the issue queues, and bringing in their friends. By growing the number of Skilled and Expert users on Drupal.org, we’ll directly grow our community. It’s a win-win.

By growing Drupal adoption and growing our community, we directly support our mission and goals as an organization (you can read more about those in our 2015 Leadership plan and budget), and that’s why improving Drupal.org is one of our organizational imperatives in the coming year. The 2015 Drupal.org roadmap outlines the numerous ways we’re planning to do it.

As we use personas in our work, you may hear us refer to our “Primary” (Learner and Skilled), “Secondary” (Expert), and “Tertiary” (Master and Newcomer) personas — these distinctions correspond to the order of conversions we look to make easier, not to the users’ importance. Every Drupal.org user is important to us!

As we modify Drupal.org, we’ll be using the personas to help us make the experience for the whole community better. After all, that’s what these personas are — a representation of the entire Drupal community. To help bring our personas to life, we talked to five different community members, each representing one user persona. Over the next few days we’ll share the stories of each person’s unique Drupal journey so that we can see how they got to where they are now. We’d like to say a big thank you to each of our volunteers for sharing their personal stories — as always, they’ve reminded us how fantastic our community really is.

At the end of the series, we’ll close it all off with interviews with several prominent community members who will share their views on how personas can be used outside of Drupal.org development.

We enjoyed working on the user research project and are excited to share user personas with the Drupal community. As a reminder, you can view and download the full report. Take them, use them, go out and make great things!

Personal blog tags: drupal.org user research
Categories: Elsewhere

Friendly Machine: Drupal 8 and Backdrop CMS - A Brief Comparison

Planet Drupal - Sun, 14/12/2014 - 18:28

I recently had the opportunity to see Nate Haug deliver a presentation about the Backdrop CMS project and its upcoming 1.0.0 release (Jan. 15). It had been a while since I had taken a look at Backdrop and I came away quite impressed with both its progress and direction.

Many of you reading this will be familiar with Backdrop, but for those of you who haven't heard of the project, it is the first fork of the Drupal project, and the source of a great deal of controversy and angst in the Drupal community.

Backdrop has been perceived as a threat by many Drupalists, but I think as we step through the features and approaches of the two projects, those fears will be at least somewhat allayed. My own take is that the two systems seem complementary instead of competitive.

As a bit of background for the origin of Backdrop CMS, Nate told the story of his reaction to the massive changes in Drupal 8. He realized that his own business, Webform.com, was going to have major issues with the upgrade path.

It was going to take a huge effort to upgrade his site - we're talking many, many months - to simply replicate the work he had already done in Drupal 7. He didn't want to throw away the huge investment he had already made in his business and start over. His solution to the problem was forking Drupal to create Backdrop CMS.

And then...all hell broke loose.

Feature Comparison

I'll set the controversy behind Backdrop aside and get straight into a comparison of the features. Keep in mind, however, I'm using the term "features" here a bit loosely. That's because I also want to talk about how Backdrop is managed as well as other differences between the two projects. This list is not exhaustive. It just has some of the things that seem to me the most significant or interesting.

Target Market

I know many will squirm uncomfortably when I say this, but the target market for Drupal 8 is large enterprises. By contrast, the target for Backdrop is small to medium size businesses and non-profits - really the original market of the Drupal project. As we go through this list, you'll see how this targeting plays out in some of the decisions the two projects have made.

Configuration Management

This has been widely touted as the killer feature of Drupal 8. If you've dreamed of having all the cool configuration management features in D8 available for Drupal 7, then Backdrop may be tempting because that is essentially what it offers. Instead of using YAML files to store configuration data, however, Backdrop uses JSON. Otherwise, it's pretty much the same.

Theming

Another one of the major additions to Drupal 8 is the Twig template engine. This is a big plus for many front-end folks and it's something that is not available in Backdrop at this time - and I'm not sure I would look for it in the near future. Backdrop currently uses the Drupal 7 PHPTemplate theme engine.

Responsive Images

As a front-end developer, I have a particular interest in this one. Drupal 8 includes the Responsive Image module, which is essentially a reworking of the Picture module in D7.

At this writing, Backdrop doesn't have a responsive image solution. I asked Nate about this and he's not a fan of the Picture module approach (he favors using srcset, something that may possibly be added in versions 1.1 or 1.2 of Backdrop), so if that is something you require, it will need to be added as either a custom or contributed module.

Contributed Modules

Speaking of contrib, most of you reading this will be familiar with Drupal's massive collection of contributed modules. The contributed modules for Backdrop CMS will be hosted on GitHub and managed similarly to how the jQuery project organizes its plugin registry. I don't think there have been any ports as of yet (all the energy is going to the 1.0.0 release), so this is pending.

Some of you may have heard that Drupal 7 modules will be compatible with Backdrop. This isn't true, primarily due to modules needing to be rewritten to support configuration management. Porting a Drupal 7 module should be fairly straightforward, however. Instead of storing config in the variables table, it needs to be in JSON files. Here's a video that will help get you started.

As a quick aside, having Backdrop (and eventually the contrib modules) hosted on GitHub seems like it will be a more familiar and friendly environment for potential project contributors.

Project Organization

The "do-ocracy" that is the Drupal project has been much discussed lately. Nate has organized the Backdrop CMS project along the same lines as the Project Management Committee of the Apache project. That was very wise in my opinion. It bodes well for the project.

WYSIWYG

Another really nice thing in Drupal 8 is the inclusion of a default WYSIWYG editor. Love them or hate them, virtually every client wants one, so now with D8 you won't have to add one yourself for every project. As of version 1.0.0, Backdrop doesn't have this functionality, but look for it in version 1.1 or 1.2.

I remember Nate saying something about it being ironic that Backdrop was launching without both Twig and a WYSIWYG editor, since he and Backdrop co-founder Jen Lampton had been instrumental in bringing both to Drupal 8.

I suppose I should mention that Backdrop minor versions - from 1.0 to 1.1, for example - will occur regularly at an interval of about three or four months. So for the features mentioned that may be in version 1.1 or 1.2, it means they can be expected in either late spring or late summer.

Panels and Views

How about Panels and Views in core? Yeah, I like it! And that's what you get with Backdrop. Drupal 8 provides Views in core, but not Panels. It may be a while before Panels is ready for D8, but it may also be a while before D8 is ready, so I guess that's not a problem.

System Requirements and Backwards Compatibility

It may seem odd to group these two, but this is one point where the intended audiences (enterprise vs small organizations) are put into stark contrast. For example, Backdrop is intentionally friendly to cheap hosting. Drupal 8, by contrast, is almost certainly going to use more server resources than Drupal 7, potentially causing issues for those on shared hosting plans. 

For large organizations, the cost of hosting is not a big deal, but for some small organizations, it can be. So a solution architected to work well with limited resources may be attractive and also serves to highlight the different approaches between the two projects.

With backwards compatibility, we see the same philosophical divergence. Drupal has never focused much on backwards compatibility, making it a pain in the ass (and often expensive) to upgrade across major versions. The benefit of that approach is that Drupal has been able to innovate without being constrained by past decisions.

Backdrop, however, places a lot of value on carefully managing change so that existing sites can be upgraded affordably. I would recommend looking at Backdrop's philosophy, because it's there where you really find the motivations for the project and how it differs (and will differ more in the future) from the Drupal project. From system requirements, to upgrade path, to reaching out to hear voices not found in the issue queue, Backdrop CMS is consistently friendly to the needs of the little guy.

Wrap Up

Again, this isn't a comprehensive list of all the features or differences between the two systems. There is an issue on GitHub that might be of some help in learning more as well as this Drupal 8 feature list.

To me, these two projects don't compete with one another. Sure, some enterprises may use Backdrop and many small organizations may use Drupal 8. But really, the changes in Drupal 8 are a move toward the enterprise and the talk around Drupal 8 has reinforced that message. Having an alternative for small organizations on a budget and with a need to preserve software investments isn't a bad thing.

You may politely leave any comments below.

Drupal
Categories: Elsewhere

Gregor Herrmann: RC bugs 2014/49-50

Planet Debian - Sun, 14/12/2014 - 17:01

it's getting harder to find "nice" RC bugs, due to the efforts of various bug hunters & the awesome auto-removal-from-testing feature. – anyway, here's the list of bugs I worked on in the last 2 weeks:

  • #766740 – gamera: "gamera FTBFS on arm64, testsuite failure."
    sponsor maintainer upload
  • #766773 – irssi-plugin-xmpp: "irssi-plugin-xmpp: /query <JID> fails with "Irssi: critical query_init: assertion 'query->name != NULL' failed""
    add some speculation to the bug report, request binNMU after submitter's confirmation, close this bug afterwards
  • #768127 – dhelp: "Fails to build the index when invalid UTF-8 is met"
    apply patch from Daniel Getz, upload to DELAYED/5
  • #770672 – src:gnome-packagekit: "gnome-packagekit: FTBFS without docbook: reference to entity "REFENTRY" for which no system identifier could be generated"
    provide information, ask for clarification, severity lowered by maintainer
  • #771496 – dpkg-cross: "overwrites user changes to configuration file /etc/dpkg-cross/cross-compile on upgrade (violates 10.7.3)"
    tag confirmed and add information, later downgraded by maintainer, then set back to RC by submitter …
  • #771500 – darcsweb: "darcsweb: postinst uses /usr/share/doc content (Policy 12.3): /usr/share/doc/darcsweb/examples/darcsweb.conf"
    install config sample into /usr/share/<package>, upload to DELAYED/5
  • #771501 – pygopherd: "pygopherd: postinst uses /usr/share/doc content (Policy 12.3): /usr/share/doc/pygopherd/examples/gophermap"
    sponsor NMU from Cameron Norman, upload to DELAYED/5
  • #771727 – fex: "fex: postinst uses /usr/share/doc content (Policy 12.3)"
    propose patch, installing config templates under /usr/share/<package>, upload to DELAYED/5 later
  • #772005 – libdevice-cdio-perl: "libdevice-cdio-perl: Debian patch causes Perl crashes in Device::Cdio::ISO9660::IFS's readdir: "Error in `/usr/bin/perl': realloc(): invalid next size: 0x0000000001f05850""
    reproduce the bug (pkg-perl)
  • #772159 – ruby-moneta: "ruby-moneta: leaves mysqld running after build"
    apply patch from Colin Watson, upload to DELAYED/2
Categories: Elsewhere
