Elsewhere

DrupalCon Amsterdam: Training spotlight: Professional Agile Project Management For Drupal Projects

Planet Drupal - Thu, 21/08/2014 - 12:51

Over 30 people attended this wildly successful training at DrupalCon Austin. Now is your chance to attend this training at DrupalCon Amsterdam!

In this course, we cut past the evangelism that exists around Agile, and instead focus on real-world practical training that you can put into action.

The course is delivered using the Agile Scrum techniques it teaches. At the start, delegates see the backlog of requirements that the product owner (a trainer) has developed for the course, and the prioritization of those requirements. The course then progresses in one-hour periods of work called sprints, working through training modules from the top of the backlog.

Part way through the morning, delegates are ready to take over as the product owners. They will take responsibility for specifying the requirements for the course, based on the needs and interests of the delegates in the room, re-prioritise them, and even add completely new requirements. Our trainers will respond to these changes by creating new training modules on the fly based on real project experience, to provide the highest possible value to the delegates.

Through this approach we demonstrate and explain the processes of Agile many times over, and we also demonstrate their value. Delegates leave with real insight into how they could apply Agile and handle some of the challenges they may have faced.

The trainers are highly experienced Agile coaches, who mentor teams at Wunderkraut (known as WunderRoot in the UK), as well as consulting with large clients about ensuring successful delivery of their projects.

Meet the Trainers from Wunderkraut

Steve Parks (steveparks), UK Managing Director
Vesa Palmu (wesku), CEO
Roel De Meester (demeester_roel), CTO Benelux
Florian Huber (fuber), Project Manager, Scrum Master

Attend this Drupal Training

This training will be held on Monday, 29 September from 09:00-17:00 at the Amsterdam RAI during DrupalCon Amsterdam. The cost of attending this training is €400 and includes training materials, meals and coffee breaks. A DrupalCon ticket is not required to register to attend this event.

Our training courses are designed to be small enough to provide attendees plenty of one-on-one time with the instructor, but large enough that they are a good use of the instructor's time. Each training course must meet its minimum sign-up number by 5 September in order for the course to take place. You can help to ensure your training course takes place by registering before this date and asking friends and colleagues to attend.

Register today


Steve Kemp: Updating Debian Administration

Planet Debian - Thu, 21/08/2014 - 10:50

Recently I've been getting annoyed with the Debian Administration website; too often it would be slower than it should be considering the resources behind it.

As a brief recap I have six nodes:

  • 1 x MySQL Database - The only MySQL database I personally manage these days.
  • 4 x Web Nodes.
  • 1 x Misc server.

The misc server is designed to display events. There is a node.js listener which receives UDP messages and stores them in a rotating buffer. The messages might contain things like "User bob logged in", "Slaughter ran", etc. It's a neat hack which gives a good feeling of what is going on cluster-wide.

I need to rationalize that code - but there's a very simple predecessor posted on github for the curious.

Anyway, enough diversions: the database is tuned, and "small". The misc server is almost entirely irrelevant, non-public, and not explicitly advertised.

So what do the web nodes run? Well they run a lot. Potentially.

Each web node has four services configured:

  • Apache 2.x - All nodes.
  • uCarp - All nodes.
  • Pound - Master node.
  • Varnish - Master node.

Apache runs the main site, listening on *:8080.

One of the nodes will be special and will claim a virtual IP provided via ucarp. The virtual IP is actually the end-point visitors hit, meaning we have:

The master host is running:

  • Apache.
  • Pound.
  • Varnish.

The other hosts are running:

  • Apache.

Pound is configured to listen on the virtual IP and perform SSL termination. That means that incoming requests get proxied from "vip:443 -> vip:80". Varnish listens on "vip:80" and proxies to the back-end apache instances.
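
A quick way to sanity-check that chain from a shell on the master node might look like this (a sketch; the VIP address below is made up):

$ ip addr show | grep -q 192.0.2.10 && echo "this node currently holds the ucarp VIP"
$ curl -skI https://192.0.2.10/      # SSL terminated by Pound on vip:443
$ curl -sI http://192.0.2.10/        # answered by Varnish on vip:80
$ curl -sI http://localhost:8080/    # the local Apache back-end on *:8080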

The end result should be high availability. In the typical case all four servers are alive, and all is well.

If one server dies, and it is not the master, then it will simply be dropped as a valid back-end. If a single server dies and it is the master then a new one will appear, thanks to the magic of ucarp, and the remaining three will be used as expected.

I'm sure there is a pathological case when all four hosts die, and at that point the site will be down, but that's something that should be atypical.

Yes, I am prone to over-engineering. The site doesn't have any availability requirements that justify this setup, but it is good to experiment and learn things.

So, with this setup in mind, with incoming requests (on average) being divided at random onto one of four hosts, why is the damn thing so slow?

We'll come back to that in the next post.

(Good news though; I fixed it ;)


Wouter Verhelst: Multiarchified eID libraries, now public

Planet Debian - Thu, 21/08/2014 - 10:30

Yesterday, I spent most of the day finishing up the multiarch work I'd been doing on introducing multiarch to the eID middleware, and did another release of the Linux builds. As such, it's now possible to install 32-bit versions of the eID middleware on a 64-bit Linux distribution. For more details, please see the announcement.

Learning how to do multiarch (or biarch, as the case may be) for three different distribution families has been a, well, learning experience. Being a Debian Developer, figuring out the technical details for doing this on Debian and its derivatives wasn't all that hard. You just make sure the libraries are installed to the multiarch-safe directories (i.e., /usr/lib/<gnu arch triplet>), you add some Multi-Arch: foreign or Multi-Arch: same headers where appropriate, and you're done. Of course the devil is in the details (define "where appropriate"), but all in all it's not that difficult and fairly deterministic.
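
In practice, once a library package carries the right Multi-Arch headers, pulling in the foreign-architecture build on a Debian-family system is just a matter of the following (the package name is a placeholder, not necessarily the actual eID package):

$ sudo dpkg --add-architecture i386   # enable the foreign architecture
$ sudo apt-get update
$ sudo apt-get install libfoo0:i386   # placeholder package name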

The Fedora (and derivatives, like RHEL) approach to biarch is that 64-bit distributions install into /usr/lib64 and 32-bit distributions install into /usr/lib. This goes for any architecture family, not just the x86 family; the same method works on ppc and ppc64. However, since Fedora doesn't do powerpc anymore, that part is a detail of little relevance.

Once that's done, yum has some heuristics whereby it will prefer native-architecture versions of binaries when asked, and may install both the native-architecture and foreign-architecture version of a particular library package at the same time. Since RPM already has support for installing multiple versions of the same package on the same system (a feature that was originally created, AIUI, to support the installation of multiple kernel versions), that's really all there is to it. It feels a bit fiddly and somewhat fragile, since there isn't really a spec and some parts seem fairly undefined, but all in all it seems to work well enough in practice.
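
For comparison, a hypothetical example on the Fedora/RHEL side, where both arches of a library can simply sit next to each other in /usr/lib64 and /usr/lib:

$ sudo yum install foo-libs.x86_64 foo-libs.i686   # hypothetical package name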

The openSUSE approach is vastly different to the other two. Rather than installing the foreign-architecture packages natively, as in the Debian and Fedora approaches, openSUSE wants you to take the native foo.ix86.rpm package and convert that to a foo-32bit.x86_64.rpm package. The conversion process filters out non-unique files (only allows files to remain in the package if they are in library directories, IIUC), and copes with the lack of license files in /usr/share/doc by adding a dependency header on the native package. While the approach works, it feels like unnecessary extra work and bandwidth to me, and obviously also wouldn't scale beyond biarch.

It also isn't documented very well; when I went to openSUSE IRC channels and started asking questions, the reply was something along the lines of "hand this configuration file to your OBS instance". When I told them I wasn't actually using OBS and had no plans of migrating to it (because my current setup is complex enough as it is, and replacing it would be far too much work for too little gain), it suddenly got eerily quiet.

Eventually I found out that the part of OBS which does the actual build is a separate codebase, and integrating just that part into my existing build system was not that hard to do, even though it doesn't come with a specfile or RPM package and wants to install files into /usr/bin and /usr/lib. With all that and some more weirdness I've found in the past few months that I've been building packages for openSUSE I now have... Ideas(TM) about how openSUSE does things. That's for another time, though.

(disclaimer: there's a reason why I'm posting this on my personal blog and not on an official website... don't take this as an official statement of any sort!)


Gunnar Wolf: Walking without crutches

Planet Debian - Thu, 21/08/2014 - 07:34

I still consider myself a newbie teacher. I'm just starting my fourth semester. And yes, I really enjoy it.

Now, how did I come to teaching? Well, my training has mostly been on stage at different conferences. More technical, more social, whatever — I have been giving ~10 talks a year for ~15 years, and I must have learnt something from that.

Some good things, some bad habits.

When giving presentations, the most usual technique is to prepare a set of slides to follow/support the ideas. And yes, that's what I did for my classes: since my first semester, I prepared a nice set of slides, thematically split into 17 files, with ~30 to ~110 pages each (yes, huge variation). Given that the course spans 32 classes (72 hours, 2¼ hours per class), each file lasts for about two classes.

But, yes, this tends to make the class much less dynamic, much more scripted, rigid, and... Boring. From my feedback, I understand the students don't think I am a bad teacher, but still, I want to improve!

So, today I was to give the introduction to memory management. It is an easy topic, with few diagrams and numbers, mostly talking about the intuitive parts of a set of functions. I started scribbling and shortening the main points on a piece of paper (yes, the one in the picture). I am sure I can reduce it further — but this does feel like an improvement!

The class was quite successful. I didn't present 100% of the material (which is one of the reasons I cling to my presentations — I don't want to skip important material), and at some point I do feel I was going in circles a bit. However, Operating Systems is a very intuitive subject, and getting the students to sketch by themselves the answers that describe the workings of real operating systems was a very pleasant experience!

Of course, when I use my slides I do try to make the class as interactive and collaborative as possible, but that is often unfeasible when I'm following a script. Today I was able to follow the group's questions wherever they led and still find my way back to the outline I had prepared.

I don't think I'll completely abandon my slides, especially for subjects which include many diagrams or pictures. But I'll try to keep this alternative closer to my mind.


Ian Donnelly: Technical Demo

Planet Debian - Thu, 21/08/2014 - 01:46

Hi Everybody,

Today I wanted to talk a bit about our technical demo. We patched a version of Samba to use our elektra-merge script to handle its configuration file, smb.conf. Using the steps from my previous tutorial, we patched Samba to use this new technique of config merging. This patched version of Samba mounts its configuration to system/samba/smb in the Elektra Key Database. Then, during package upgrades, it uses the new --threeway-merge-command option, with elektra-merge as the specified command. The result is automatic handling of smb.conf that is conffile-like (thanks ucf!) and the ability to have a powerful, automatic, three-way merge solution on package upgrades.

The main thing I would like to discuss is how this project improves upon the current implementation of three-way merges in ucf. Before this project, ucf could attempt three-way merges on files it controlled using the diff3 tool. The main limitation of tools like diff3 is that they are line-based and don't inherently understand the files they are dealing with. Elektra, on the other hand, provides a powerful system of backends which use plug-ins to understand configuration files. Elektra doesn't store configuration data on a line-by-line basis, but in a more abstract way that is tailored to each configuration file using backends.

smb.conf is a great example of this because it uses ini-file syntax, so Elektra can mount it in a way that is natural for an ini file. Since data is stored as key=value pairs within ini files, Elektra stores this data in a similar way: for each key in smb.conf, Elektra stores a Key whose value is kept as a string. Then, during a merge, we can compare the Keys in each version of smb.conf and easily see which ones changed and how they need to be merged into the result. diff3, by contrast, has no concept of ini files or keys; it just compares the different versions line by line, which results in many more conflicts than using elektra-merge.

Moreover, a traditional weakness of diff is moving lines or data around. While diff3 does a very good job of handling this, it's not perfect. In Elektra, Keys are named in an intelligent way based on their backend, so for smb.conf the line workgroup = HOME would always be saved under system/samba/smb/workgroup. It doesn't matter if the lines move around between versions, because Elektra just has to check for the Key and its value.

My favorite example is what I consider a shortcoming of the diff3 algorithm: if something is changed to the same value in ours and theirs, but both differ from base, diff3 reports a conflict. elektra-merge, on the other hand, can easily handle this case. A simple example would be changing the max log size value in Samba. Here is that line in each version of smb.conf:
Base:
max log size = 1000
Ours:
max log size = 2000
Theirs:
max log size = 2000

Obviously, in the merged version, result, one would expect this line to be:
max log size = 2000

Let’s check the result from elektra-merge:
max log size = 2000

Great! How about diff3:
<<<<<<< smb.conf.base
max log size = 1000
=======
max log size = 2000
>>>>>>> smb.conf.theirs

Whoops! As I mentioned, the diff3 algorithm can't handle this type of change; it just results in a conflict. Note that smb.conf.base is just representative of the file used as base, and smb.conf.theirs is representative of the file used as theirs; the file names were changed for the sake of clarity.
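
For the curious, this is easy to reproduce with three throwaway files; per the post, the diff3 call below is what produces the bracketed conflict shown above (the exact marker labels depend on diff3's options):

$ printf 'max log size = 1000\n' > smb.conf.base
$ printf 'max log size = 2000\n' > smb.conf.ours
$ printf 'max log size = 2000\n' > smb.conf.theirs
$ diff3 -m smb.conf.ours smb.conf.base smb.conf.theirs   # argument order: mine, older, yours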

There are many other examples of the benefits of storing configuration data in a database of Keys that can better conform to the actual data, as opposed to storing configuration parameters in files where they can only be compared on a line-by-line basis. With the help of storage plug-ins, Elektra can 'understand' the configurations stored in its Key Database. Since we store the data in a way that makes sense for configuration data, we can easily merge actual configuration data as opposed to just lines in a file. A good example of this is in-line comments. Many of our storage plug-ins understand the difference between comments and actual configuration data. So if a configuration file has an inline comment like so:
max log size = 10000 ; Controls the size of the log file (in KiB)
we can compare the actual key/value pairs between versions (max log size = 10000) and deal with the comments separately.

As a result, if we have a base:
max log size = 1000 ; Size in KiB

Ours:
max log size = 10000 ; Size in KiB

Theirs:
max log size = 1000 ; Controls the size of the log file (in KiB)

The result using elektra-merge would be:
max log size = 10000 ; Controls the size of the log file (in KiB)

Obviously, this line would cause a conflict on any line-based merge algorithm such as diff3 or git. It is worth noting that the ability of elektra-merge is directly related to the quality of the storage plug-ins that Elektra relies on. elektra-merge only looks at the name, value, and any metadata affiliated with each key. As a result, using the line plug-in would result in a merge only as powerful as any other line-based merge. Yet by using the ini plug-in on an ini file we get a much more advanced merge like the one described above.

As you can tell, this new method offers clear advantages over the traditional method of using diff3 to handle configuration merges. I also hope this demo shows how easy it is to bring these features into your Debian packages; the community can only benefit if maintainers take advantage of them. I am glad to say that my Google Summer of Code project has been a success, even if we had to make a small change of plans. The ucf integration ended up working great and is really easy for maintainers to implement. I hope you enjoyed this demo and now better understand the power of using Elektra.

Sincerely,
Ian S. Donnelly


Four Kitchens: DrupalCamp Twin Cities: Frontend Wrap-up

Planet Drupal - Wed, 20/08/2014 - 22:58

This year's Twin Cities DrupalCamp had no shortage of new faces, quality sessions, trainings, and after parties. Most of my time was spent in frontend sessions and talking with folks. Since I live in Minneapolis, this camp is especially rewarding from a hometown "Drupal represent" kind of perspective. Below are some of my favorite sessions and camp highlights.

Community Drupal

Aurelien Jarno: MIPS Creator CI20

Planet Debian - Wed, 20/08/2014 - 22:52

I have received two MIPS Creator CI20 boards, thanks to Imagination Technologies. It’s a small MIPS32 development board:

As you can see, it comes in nice packaging with a world-compatible power adapter. It uses an Ingenic JZ4780 SoC with a dual-core MIPS32 CPU running at 1.2GHz and a PowerVR SGX540 GPU. The board is fitted with 1GB of RAM, 8GB of NAND flash, HDMI output, USB 2.0 ports, Ethernet + Wi-Fi + Bluetooth, an SD card slot, an IR receiver, expansion headers and more. The schematics are available. The Linux kernel and the U-Boot bootloader sources are also available.

Powering up this board with a USB keyboard, a USB mouse and an HDMI display attached, it boots off the internal flash into a Debian Wheezy system, up to the XFCE environment. Besides the kernel, the Wi-Fi + Bluetooth firmware, and very few configuration changes, it runs a vanilla Debian. Unfortunately I haven't found time to play with it more yet, but it already looks quite promising.

The board has not been formally announced yet, so I do not know when it will become available, nor the price, but if you are interested I’ll bring it to DebConf14. Don’t hesitate to ask me if you want to look at it or play with it.


Mediacurrent: UX - Above the Fold & Scrolling

Planet Drupal - Wed, 20/08/2014 - 22:51

More and more often, when putting together a Drupal design for a website, I am asked how important it is to design above the fold, and whether or not today's users will scroll to read content.


Drupal Association News: Drupal Association Values

Planet Drupal - Wed, 20/08/2014 - 19:54

In my experience as an organization leader, one of the most important tools in my toolbox has always been my personal values. It's been my experience that even when the data points in one direction, and best practices say you should approach the problem in a certain way, it's always been my values that help me make the best decisions as an individual. And the truth is, there are very rarely any decisions that are 100% clear cut. In almost every decision, some amount of judgment is required.

That's why I believe so strongly in defining values for the organizations I work for. When everyone is working from a shared sense of values, we're making decisions - even big giant judgment calls - from the same perspective. To that end, we spent some time this year working with the board and staff to develop a values statement for the Drupal Association.

We started in a board retreat, brainstorming the implicit (though not documented) values of the Association and the larger Drupal Community. The board ranked their favorites, and then we created a committee of board and staff to draft some language. Those initial values statements were vetted by both the entire Association staff and the full board, and then additional edits were made. Here's the result of that process:

The Drupal Association shares the values of our community, our staff, and open source projects:

  • TEAMWORK: We add value to the Drupal community by helping each other solve problems to create quality human and digital experiences.
  • COMMUNICATION: We value communication. We seek community participation. We are open and transparent.
  • ACTION: We act decisively and proactively, embracing what we learn from both our successes and our mistakes.
  • RESPECT: We respect and value inclusivity in our global community and strive to recognize, understand, and respond to its needs.
  • FUN: We create environments that embrace humor resulting in fun, positive, supportive and safe interactions.

To be clear, these are the values we're defining for our staff. We're not trying to impose these values on the larger community. However, we do hope they reflect the values you feel are important in the larger Drupal community as well. We also want to recognize that writing down the words is one thing, and living up to them is something else. We intend to live these values in all our work. 

Now it's your turn. The values are set by the board and staff, but we want to make sure we know what you think.

Flickr photo: Howard Lake


Nuvole: Git workflow for managing Drupal 8 configuration

Planet Drupal - Wed, 20/08/2014 - 19:30
The D8 way to replace "features-update" and "features-revert-all".

This is a preview of Nuvole's training at DrupalCon Amsterdam: An Effective Development Workflow in Drupal 8.

One of the new key features of Drupal 8 is the possibility to deal with configuration in code. Since configuration is now in text files, we want to put it under version control in Git to enjoy the many advantages this brings: comparing configuration states, keeping a history of configuration changes and even moving configuration between sites.

Setup

We will assume that you have a development version of Drupal 8, git and drush available on your system. You can set up your Drupal git repository in several ways. One of them is outlined in Building a Drupal site with Git on drupal.org. The document is written for Drupal 7, but can easily be adapted for Drupal 8.
Another, probably simpler method is to simply download a Drupal 8 (alpha) release and initialise a new repository with it.

In either case you should copy example.gitignore to .gitignore and adapt it to your needs to prevent settings.php and the files directory from being versioned.

The next step is to make sure that a configuration directory is versionable. By default Drupal 8 will place the staging directory under sites/default/files and it is considered a good practice to not version that location, but an alternative location can easily be specified in settings.php:

<?php
$config_directories['staging'] = 'config/staging';
?>

It is of course also possible, and even advisable, to specify a directory outside of the web root. In that case you would put the parent directory of your web root (the one containing Drupal) under version control and use ../config/staging. We will see later that it is also possible to add more directories and keys to the $config_directories variable.

Because the configuration management of Drupal 8 only works between different instances of the same site, the different instances of the site need to be cloned. Cloning a Drupal 8 site is done the same way as cloning a Drupal 7 site. Just dump the database of the site to clone and import it in the other environment.

Development

After cloning your site you can go ahead and start configuring it.
Once the part of the configuration you were working on is done, the whole configuration of the site needs to be exported.

local$ drush config-export staging
The current contents of your export directory (config/staging) will be deleted. (y/n): y
Configuration successfully exported to config/staging.

Next, you need to merge the work of other developers. In some cases it may be enough to simply use git pull, otherwise the configuration has to be merged after it has been committed:

  • Add all configuration to git and commit it.

  • Use git pull (or git fetch and git merge) and resolve any conflicts if necessary.

Git can merge changes in text files quite well, but git does not know about Drupal and its YAML format for configuration. It is, therefore, important to verify that the merged configuration makes sense and is valid. In most cases it will probably not be an issue and just work, but it is always better to be vigilant and stay on the safe side. So, after merging, you should always run:

local$ drush config-import staging

If the import went smoothly you can push the configuration to the remote repository. Otherwise the configuration needs to be fixed first.
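
Putting those steps together, one development cycle on a local machine might look like this (a sketch, using the staging directory configured earlier):

local$ drush config-export staging -y
local$ git add config/staging && git commit -m "Export configuration"
local$ git pull                       # or: git fetch && git merge
local$ drush config-import staging    # verify the merged configuration is valid
local$ git push                       # only push once the import succeeded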

Deployment

The simplest case is when the configuration on the production site has not been changed. There is an interesting Configuration Read-only mode module that can enforce this.

If the configuration did not change, deploying the new configuration is simply:

remote$ git pull
remote$ drush config-import staging

If the configuration changes on the production site, it is best to frequently export the live configuration into a dedicated directory.
Add a new config directory in settings.php:

<?php
$config_directories['to_dev'] = 'config/to_dev';
?>

remote$ drush config-export to_dev -y

Add, commit and push it to the production branch so that the developers can deal with it and integrate the changes into the configuration which will be deployed next. Exporting the configuration into a dedicated directory rather than the staging directory avoids the danger that merge conflicts happen on the production site. The deployment to the production site should be kept hassle free, so it should always be safe to pull from git and import the configuration without the risk of a conflict.

Important notes

It is important to first export the configuration changes and only then pull changes from collaborators, because the export action wipes the directory and re-populates it with the active configuration. Since everything is in git, you can recover from such a mistake without much difficulty, but why make your life complicated?

Import the configuration before pushing it to the remote repository. Broken configuration breaks the site, be a nice co-worker.

Git doesn't solve everything! Imagine Alice and Bob start with the same site, it has one content type and among others an "attachment" field. Alice deletes the attachment field, exports the configuration and pushes it to git. In the meantime, Bob creates a new content type and adds the attachment field to it. Bob exports his configuration, merges Alice's configuration changes without a problem (the changes are separate files) and imports the merged configuration. The attentive reader sees where this leads. The commit of Alice deletes the field storage for the attachment field, but Bob added a field instance which depends on the field storage. The exported configuration now contains a field instance that can't be imported.
At the time of writing, drush will signal a successful import but doesn't actually import it while the UI is more helpful and complains that the attachment field instance was not imported due to the missing field storage.

Tags: Drupal Planet, Drupal 8, Code Driven Development

Olivier Berger: Building a lab VM based on Debian for a MOOC, using Vagrant + VirtualBox

Planet Debian - Wed, 20/08/2014 - 15:59

We’ve been busy setting up a Virtual Machine (VM) image to be used by participants of a MOOC that’s opening in early september on Relational Databases at Telecom SudParis.

We've chosen Vagrant and VirtualBox to build, distribute and run the box, providing scriptability (reproducibility) and making it portable across most operating systems.

The VM itself contains a Debian (jessie) minimal system which runs (in the background) PostgreSQL, Apache + mod_php, phpPgAdmin, and a few applications of our own to play with example databases already populated in PostgreSQL.

As the MOOC's language will be French, we expect the box to be used mostly on machines with AZERTY keyboards. This and other context elements led us to add some customizations (locale, APT mirror) in provisioning scripts run during the box creation.

At the moment, we generate two variants of the box, one for 32-bit kernels (i686) and one for 64-bit kernels (amd64), which (once compressed) weigh between 300 and 350 MB.

The resulting boxes are uploaded to a self-hosted site and distributed through vagrantcloud. Once the VMs are created in VirtualBox, the typical VMDK drive file is around 1.3 GB.

We use our own Debian base boxes containing a minimal Debian jessie/testing, instead of relying on someone else's, and recreate them using (the development branch version of) bootstrap-vz. This ensures we can put more trust in the content, as it's a native Debian package installation without MITM intervention.

The VMs are meant to be run headless for the moment, keeping their size to a minimum, even though we also provide a script to install and configure a desktop environment based on XFCE4.

The applications are used either through vagrant ssh, for instance for the SQL command line in psql, or in the Web browser, for our own Web-based SQL exerciser or phpPgAdmin (see a demo screencast, in French, with English subtitles). These can then be used even off-line by the participants, which also means no server availability is required from our IT staff.
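
For a participant, day-to-day use should boil down to a few Vagrant commands; a sketch with a placeholder box name, since the real box URL is deliberately not given here:

$ vagrant init example/rdb-mooc   # placeholder box name
$ vagrant up                      # fetches the box on first use, then boots the VM headless
$ vagrant ssh                     # log in, e.g. to run psql against the example databases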

The MOOC includes a section on PHP + SQL programming, whose exercises can be performed using a shared sub-folder of /vagrant/ which allows editing on the host with the favourite native editor/IDE, while running PHP inside the VM’s Apache + mod_php.

The sources of our environment are available as free software, if you’re interested to replicate a similar environment for another project.

As we're still polishing the environment before the MOOC opens (on September 10th), I'm not mentioning the box URLs, but they shouldn't be too hard to find if you're investigating (referring to the fusionforge project's web site).

We don't know yet how suitable this environment will be for learning SQL and database design and programming, and whether Vagrant will bring more difficulties than benefits. Still, we hope that the participants will find it practical, allowing them to work on the labs / exercises whenever and wherever they choose, removing the pain of installing and configuring an RDBMS on their machines, or the need to be connected to a cloud or to our overloaded servers. Of course, one limitation will be the requirements on the host machines, which will need to be reasonably modern in order to run a virtualized Linux system. Another is access to high bandwidth for downloading the boxes, but this is already more or less a requirement for downloading/watching the videos of the MOOC classes.

Big thanks go to our intern Stéphane Germain, who joined us this summer to work on this virtualized environment.


Acquia: Drupal 8 & Empowerment through Drupal

Planet Drupal - Wed, 20/08/2014 - 09:35

Part 1 of a 2-part conversation with Angie Byron in front of the cameras at NYC Camp 2014. In this part of our conversation we go over some of the inspiring and thought-provoking ideas we encountered there, and then jump to some of the benefits to users of the technical improvements built into Drupal 8.


Kristian Polso: How to create Drupal Commerce products programmatically

Planet Drupal - Wed, 20/08/2014 - 08:00
Sometimes you just need to get your hands dirty and start adding Drupal Commerce products programmatically. Luckily, that is not that hard a thing to do.

Modules Unraveled: 115 Drupal Core Gittip Team with Jennifer Hodgdon, Bojhan Somers, Alex Pott and Cathy Theys - Modules Unraveled Podcast

Planet Drupal - Wed, 20/08/2014 - 07:00
Published: Wed, 08/20/14
Download this episode

GitTip
  • What is GitTip? How does it work?

  • What is a GitTip team?

Drupal Core GitTip Team
  • How did the Drupal Core team come about? What prompted its genesis?

  • Who is the organizer of the Drupal Core team, and who is benefiting from it?
    19 members, Alex and Cathy are administering the group, a couple are on vacation.
    16 others are taking money.

  • On the GitTip page it says your goal is $5,000 US/week. What would that cover?
    Cathy: This week is the first week that we will not be able to fund the modest goal of giving people $64/week. The past few weeks we have been paying out $700. We have now eaten all our balance and have only $350 coming in this week.
    The $5k goal is a guess at what it would take to fund 6 people at about ¼ time.

  • What have you all been working on lately as a result of this funding?
    Cathy: tips are for work already done, so… I'm not sure. Maybe it motivates future work, or planning to be able to do future work? Jen? Bojhan?
    What has this funding enabled you to do?

Episode Links:

  • GitTip Team Page
  • GitTip Members
  • Drupal Core Gittip Team FAQ
  • Bojhan on drupal.org
  • Bojhan on Twitter
  • Jennifer on drupal.org
  • Jennifer on Twitter
  • Jennifer's company site
  • Cathy on drupal.org
  • Cathy on Twitter
  • Alex on drupal.org
  • Alex on Twitter
  • Core Conversation at Amsterdam
  • Drupal.org Working Groups
  • Dries call for many separate companies to hire core developers
  • Jennifer's case for core developers on staff at small shops
  • Alex Pott hired by Chapter Three
  • Cathy hired by BlackMesh
  • Gittip issue to change how team funds are split

Tags: Drupal 8, Drupal Core, planet-drupal

Gunnar Wolf: Bigger than the cloud

Planet Debian - Wed, 20/08/2014 - 06:10

Summer is cool in Mexico City.

It is cool because, unlike Spring, this is our rainy season — And rains are very predictable. Almost every day we wake up with a gorgeous, clean, blue sky.

Cool, nice temperature, around 15°C. The sun slowly evaporates the rain throughout the morning; when I go out for lunch, the sky is no longer so blue, giving way to a seemingly dirty white/grayish tint. No, it's not our world-famous pollution: It's just yesterday's rain.

Rain starts falling usually between 4 and 7 PM. Sometimes it starts as a light rain, sometimes it starts with all of its thunder, all of its might. But anyway, almost every night, there is a moment of awe, of not believing how much rain we are getting today.

It slowly fades away during the late night. And when I wake up, early next morning, everything is wet and still smells fresh.

Yes, I love our summer, even though it makes me shy away from my much-enjoyed cycling to work and school. And I love taking some minutes off work, looking through the window of my office (located ~70m above the level of our mostly flat city) and watching how different parts of the city have sun or rain; learning to estimate the distance to the clouds, adding it to the direction, and guessing which of my friends have which weather.

But I didn't realize our city had so clearly defined micro-climates... (would they really be *micro*-climates?) In fact, it even goes against my knowledge of Mexico City's logic — I always thought Coyoacán, towards the South of the city, got more rain than the Center and North because we are near the mountains, and the dominant air currents go Southwards, "clumping" the clouds by us.

But no, or at least not this year. Regina (still in the far South — far because she's too far away from me and I'm too egocentric; she returns home after DebConf) often asks me about the weather, as do our friends working nearer the center of the city. According to the photos they post on their $social_media_of_the_day accounts, the rains are really heavier there.

Today I heard on the radio accounts of yesterday's chaos after the rain. This evening, at ESIME-Culhuacán, I saw one of the reported fallen trees (of course, I am not sure if it's from yesterday's rain). And the media pushes galleries of images of a city covered in hail... While in Copilco we only had a regular rain, I'd even say a mild one.

This city is bigger than any cloud you can throw at it.

Attachment: IMG_20140819_155052.jpg (929.69 KB)

PreviousNext: Drupal 8 Now: PHPUnit tests in Drupal 7

Planet Drupal - Wed, 20/08/2014 - 06:10

Drupal 8 comes with built-in support for PHPUnit, the industry standard for unit testing.

But that doesn't mean you can't use PHPUnit for your testing and CI in Drupal 7, if you structure your code well.

Read on to find out what you need to do to use PHPUnit in Drupal 7.


Holger Levsen: 20140819-lts-july-2014

Planet Debian - Wed, 20/08/2014 - 04:34
Debian LTS - impressions and thoughts from my first month of involvement

About LTS - we want feedback and more companies supporting it financially

Squeeze LTS, and even more so Debian LTS, is a pretty young project, started in May 2014, so it's still a bit unclear where exactly we'll be going. One purpose of this post is to spread some information about the initiative and invite you to tell us what you think about it or what your needs are.

LTS stands for "Long Term Support" and the goal of the project is to extend the security support for Squeeze (aka the current oldstable distribution) by two years. If it weren't for Squeeze LTS, security support for it would have stopped in May 2014 (= one year after the release of the current stable distribution), which for many is too short a timespan after its release in February 2011. It's an experiment; we hope that there will be a similar Wheezy LTS initiative in the future, but the future is unwritten and things will change based on our experiences and your needs.

If you have feedback on the direction LTS should take (or anything else LTS related), please comment on the lts mailing list. For immediate feedback there is also the #debian-lts IRC channel.

Another quite pragmatic way to express your needs is to read more about how to financially contribute to LTS and then do exactly that - and unsurprisingly, we are prioritizing the updates based on the needs expressed by our paying customers.

My LTS work in July 2014

So, "somehow" I started working for money on Debian LTS in July, which means there were 10h I got paid, and probably another 10h where I did LTS related work unpaid. I used those to release four updates for squeeze-lts (linux-2.6, file, munin and reportbug) fixing 22 CVEs in total.

The unpaid work was mostly spent on unsuccessfully working on security updates and on adding support for LTS to the security team tracker, which I improved but couldn't fully address and which I haven't properly shared / committed yet... but at least I have a local instance of the tracker now, which - for LTS - is more useful than the .debian.org one. Hopefully during DebConf14 we'll manage to fix the tracker for good.

Without success I've looked at libtasn1-3 (where the first fixes applied easily, but then the code had changed too much from what was in squeeze compared to the available patches, so I gave up) and libxstream-java (which is at version 1.3, while patches exist for the upstream 1.4 and 2.x branches, but those need a newer Java to build; maybe if I spend two more hours I can get it to build, and then I'll have to find a useful test case, which looked quite difficult at a brief glance... so maybe I'll give up on libxstream-java too). OTOH, if you use it and can do some testing, please do tell me.

Working on all these updates turned out to involve more teamwork than expected and a lot of work involving code (which I did expect), and often code which I'd normally not look at... Similarly with tools: one has to deal with tools one doesn't like; e.g., I had to install cdbs... And as I usually like challenges, this has actually been a lot of fun! Though it's also pretty cool to use common best practices and easy, understandable workflows. I love README.Source (or better yet, when it's not needed). And while git is of course really, really great, it's still very desirable if your package builds twice (= many times) in a row, without resetting it via git.

Some more observations

The first 16 updates (until July 19th) didn't have a DLA ID, until I suggested to introduce them and insisted until we agreed. So now we agreed to put the DLA ID in the subject of the announcement mails and there is also some tool support for generating the templates/mails, but enforcing proper subjects is not done, as silent bounces are useless (and non silent ones can be abused too easily). I'm not really happy about this, but who is happy with the way email works today? And I agree, it's not the end of the world if an LTS announcement is done without a proper ID, but it looks unprofessional and could be avoided, but we have more important work to do. But one day we should automate this better.

Another detail I'm not entirely happy about is the policy/current decision that "almost everything is fine to upload if deemed sensible by the uploader" (which is everyone in the Debian upload keyring(s)). In discussions before we actually had the archive, some people expressed the desire to upload new upstream versions too (e.g. newer kernels, iceweasel or other software needed to keep running a squeeze desktop in the modern world). I sincerely hope that for the most intrusive new upstream versions squeeze-(sloppy-)backports is used instead, and squeeze-lts rather not. Luckily, so far all uploads were (IMHO) sensible, and so, right now, I will just say that I hope it stays this way. And it's true, one also has to install these upgrades in the first place. But people do that blindly all the time...

So by design/desire there is currently no gatekeeping mechanism whatsoever (no NEW, no proposed-updates), except that only some selected "few" can upload. What is uploaded (and signed correctly) gets pushed to the archive, the buildds and the mirrors, and hopefully maybe someone will send an announcement. So far this has worked remarkably well - and it's also the way the Debian Security team works, as I'm told. Looking at this from a process quality / automation perspective, all this manual and error-prone labour seems very strange to me.

And then there is another thing: as already mentioned, the people working paid hours for this are prioritizing their work based on customer requests. So we did two updates (python-scipy and file) which are not fixed in wheezy yet. I think this is unfortunate, and while I could probably prepare the wheezy DSA for file, I don't really want to join the Security Team... or maybe I want/should join the Security Team and be a seldom-active member (e.g. fixing file in wheezy now...).

A note related to this: out of those 37 uploads done until today, 16 were done by those two people being paid, while the other 21 uploads were done by 10 volunteers (or at least not paid by Debian LTS). It will be interesting to see how this statistics evolves over time.

Last, but not least, there is also this can of worms (aka: the discussion) about paying people to work on Debian... I do agree it's work we didn't find volunteers for, and I also see how the (financial side of the) setup is done outside of Debian (and well too, btw!), but... then we also use Debian resources like buildds, the archive itself and official mailing lists. Also, I'm currently biased in this discussion, as I directly (and happily) profit from it. I'm mentioning this here because I believe it's important that we discuss this and come to both good and practical conclusions. FWIW, we have also discussed this on the list; feel free to search the archives for it.

To bring this post to an end: for those attending DebConf14 (directly or thanks to some ninjas), there will be an event about LTS in Portland, though I'm not sure yet what I will have to talk about that hasn't already been covered here. But this probably means it will be a good opportunity for you to do lots of the talking instead! I'm curious what you will have to say!

Thanks for reading this far. I think I can promise that my next LTS report will be shorter


Dirk Eddelbuettel: RcppArmadillo 0.4.400.0

Planet Debian - Wed, 20/08/2014 - 04:00
After two pre-releases in the last few days, Conrad finalised a new Armadillo version 4.400 today. I had kept up with the pre-releases, tested twice against all eighty (!!) CRAN dependents of RcppArmadillo and have hence uploaded RcppArmadillo 0.4.400.0 to CRAN and into Debian.

This release brings a number of new upstream features which are detailed below. Also included is a bugfix for sparse matrix creation at the RcppArmadillo end which was found by the ASAN tests at CRAN --- which are similar to the sanitizer tests I recently blogged about. I was able to develop and test the fix in the very docker r-devel-san images I had written about, which was nice. Special thanks also to Ryan Curtin for help with the fix.

Changes in RcppArmadillo version 0.4.400.0 (2014-08-19)
  • Upgraded to Armadillo release Version 4.400 (Winter Shark Alley)

    • added gmm_diag class for statistical modelling using Gaussian Mixture Models; includes multi-threaded implementation of k-means and Expectation-Maximisation for parameter estimation

    • added clamp() for clamping values to be between lower and upper limits

    • expanded batch insertion constructors for sparse matrices to add values at repeated locations

    • faster handling of subvectors by dot()

    • faster handling of aliasing by submatrix views

  • Corrected a bug (found by the g++ Address Sanitizer) in sparse matrix initialization where space for a sentinel was allocated, but the sentinel was not set; with extra thanks to Ryan Curtin for help

  • Added a few unit tests for sparse matrices

Courtesy of CRANberries, there is also a diffstat report for the most recent release. As always, more detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Joey Hess: using a debian package as the remote for a local config repo

Planet Debian - Wed, 20/08/2014 - 00:55

Today I did something interesting with the Debian packaging for propellor, which seems like it could be a useful technique for other Debian packages as well.

Propellor is configured by a directory, which is maintained as a local git repository. In propellor's case, it's ~/.propellor/. This contains a lot of Haskell files; in fact, the entire source code of propellor! That's really unusual, but I think this can be generalized to any package whose configuration is maintained in its own git repository on the user's system. From now on, I'll refer to this as the config repo.

The config repo is set up the first time a user runs propellor. But, until now, I didn't provide an easy way to update the config repo when the propellor package was updated. Nothing would break, but the old version would be used until the user updated it themselves somehow (probably by pulling from a git repository over the network, bypassing apt's signature validation).

So, what I wanted was a way to update the config repo, merging in any changes from the new version of the Debian package, while preserving the user's local modifications. Ideally, the user could just run git merge upstream/master, where the upstream repo was included in the Debian package.

But, that can't work! The Debian package can't reasonably include the full git repository of propellor with all its history. So, any git repository included in the Debian binary package would need to be a synthetic one, that only contains probably one commit that is not connected to anything else. Which means that if the config repo was cloned from that repo in version 1.0, then when version 1.1 came around, git would see no common parent when merging 1.1 into the config repo, and the merge would fail horribly.

To solve this, let's assume that the config repo's master branch has a parent commit that can be identified, somehow, as coming from a past version of the Debian package. It doesn't matter which version, although the last one merged with will be best. (The easy way to do this is to set refs/heads/upstream/master to point to it when creating the config repo.)

Once we have that parent commit, we have three things:

  1. The current content of the config repo.
  2. The content from some old version of the Debian package.
  3. The new content of the Debian package.

Now git can be used to merge #3 onto #2, with -Xtheirs, so the result is a git commit with parents of #3 and #2, and content of #3. (This can be done using a temporary clone of the config repo to avoid touching its contents.)

Such a git commit can be merged into the config repo, without any conflicts other than those the user might have caused with their own edits.
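
A rough sketch of how that package-side merge could be scripted is below. The paths, the location of the shipped repository under /usr, and the push back into the user's repo are my assumptions for illustration, not necessarily how propellor actually does it:

# work in a temporary clone so the user's config repo is never touched directly
git clone --quiet ~/.propellor /tmp/propellor-merge && cd /tmp/propellor-merge
git checkout --quiet origin/upstream/master          # (2) the old packaged version
git fetch /usr/share/propellor/repo.git master       # (3) synthetic commit shipped by the new package (assumed path)
# newer git may also need --allow-unrelated-histories here, since (2) and (3) share no history
git merge -Xtheirs --no-edit FETCH_HEAD              # merge commit with parents (2) and (3), content of (3)
git push --quiet origin HEAD:upstream/master         # a fast-forward, since (2) is a parent of the new commit
# the user then only has to run, inside ~/.propellor:
git merge upstream/master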

So, propellor will tell the user when updates are available, and they can simply run git merge upstream/master to get them. The resulting history looks like this:

* Merge remote-tracking branch 'upstream/master'
|\
| * merging upstream version
| |\
| | * upstream version
* | user change
|/
* upstream version

So, generalizing this, if a package has a lot of config files, and creates a git repository containing them when the user uses it (or automatically when it's installed), this method can be used to provide an easily mergable branch that tracks the files as distributed with the package.

It would perhaps not be hard to get from here to a full git-backed version of ucf. Note that the Debian binary package doesn't have to ship a git repository; it can just as easily ship the current version of the config files somewhere in /usr, and check them into a new empty repository as part of the generation of the upstream/master branch.


Phase2: Profiling Drupal Performance with PHPStorm and Xdebug

Planet Drupal - Tue, 19/08/2014 - 22:34

Profiling is about measuring the performance of PHP code, at least when we are talking about Drupal and Xdebug. You might need to profile your site or app if you work at a firm where performance is highly scrutinized, or if you are having problems getting a migration to complete. Whatever the reason, if you have been tasked with analyzing the performance of your Drupal codebase, profiling is one great way of doing so. Note that Xdebug’s profiler does not track memory usage. If you want to know more about memory performance tracking you should check out Xdebug’s execution trace features.

Alright then, let's get started!

Whoa there, cowboy! First you need to know that the act of profiling your code itself takes resources to accomplish. The more work your code does, the more information the profiler stores; file sizes for these logs can get very big very quickly. You have been warned. To get going with profiling Drupal in PHPStorm and Xdebug you need:

To set up your environment, edit your php.ini file and add the following lines:

xdebug.profiler_output_dir=/tmp/profiler/
xdebug.profiler_enable=on
xdebug.profiler_trigger=on
xdebug.profiler_append=on

Depending on what you are testing and how, you may want to adjust the settings for your site. For instance, if you are using Drush to run a migration, you can’t start the profiler on-demand, and that affects the profiler_trigger setting. For my dev site I used the php.ini config you see above and simply added a URL parameter “XDEBUG_PROFILE=on” to my site’s url; this starts Xdebug profiling from the browser.

To give you an idea of what is possible, let's profile the work required to view a simple Drupal node. To profile the node view I visited http://profiler.loc/node/48581?XDEBUG_PROFILE=on in my browser. I didn't see any flashing lights or hear bells and whistles, but I should now have a binary file that PHPStorm can inspect, located in the path I set up in my php.ini profiler_output_dir directive.
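
The same trigger works from the command line, which can be handy for scripted runs; a small sketch using the dev URL above:

$ curl -s -o /dev/null "http://profiler.loc/node/48581?XDEBUG_PROFILE=on"
$ ls /tmp/profiler/               # the output directory configured in php.ini above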

Finally, let's look at all of our hard work! In PHPStorm navigate to Tools->Analyze Xdebug Profile Snapshot. Browse to your profiler output directory and you should see at least one cachegrind.out.%p file (%p refers to the process id the script used). Open the file with the largest process id appended to the end of the filename.

We are then greeted with a new tab showing the results of the profiler.

The output shows us the functions called, how many times they were called, and the amount of execution time each function took. Additionally, you can see the hierarchy of all function calls and follow potential bottlenecks down to their roots.

There you have it! Go wild and profile all the things! Just kidding, don’t do that.

