Elsewhere

Phase2: Profiling Drupal Performance with PHPStorm and Xdebug

Planet Drupal - Tue, 19/08/2014 - 22:00

Profiling is about measuring the performance of PHP code, at least when we are talking about Drupal and Xdebug. You might need to profile your site or app if you work at a firm where performance is highly scrutinized, or if you are having problems getting a migration to complete. Whatever the reason, if you have been tasked with analyzing the performance of your Drupal codebase, profiling is one great way of doing so. Note that Xdebug’s profiler does not track memory usage. If you want to know more about memory performance tracking you should check out Xdebug’s execution trace features.

Alright then, let's get started!

Whoa there, cowboy! First you need to know that the act of profiling your code itself consumes resources. The more work your code does, the more information the profiler stores; these log files can get very big very quickly. You have been warned. To get going with profiling Drupal in PHPStorm and Xdebug you need:

  • PHPStorm
  • PHP with the Xdebug extension
  • A website running on Drupal.

To setup your environment, edit your php.ini file and add the following lines:

xdebug.profiler_output_dir=/tmp/profiler/
xdebug.profiler_enable=on
xdebug.profiler_enable_trigger=on
xdebug.profiler_append=on

Depending on what you are testing and how, you may want to adjust the settings for your site. For instance, if you are using Drush to run a migration, you can't start the profiler on-demand via a URL, and that affects the profiler_enable_trigger setting. For my dev site I used the php.ini config you see above and simply added the URL parameter "XDEBUG_PROFILE=on" to my site's URL; this starts Xdebug profiling from the browser.
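
For a command-line run such as that Drush migration, one option (a sketch; the migration name is a placeholder, and the flags assume Xdebug 2.x) is to force the profiler on for a single invocation with PHP's -d switch:

php -d xdebug.profiler_enable=1 -d xdebug.profiler_output_dir=/tmp/profiler/ `which drush` migrate-import my_migration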

To give you an idea of what is possible, let's profile the work required to view a simple Drupal node. To profile the node view I visited http://profiler.loc/node/48581?XDEBUG_PROFILE=on in my browser. I didn't see any flashing lights or hear bells and whistles, but I should have a binary file that PHPStorm can inspect, located in the path I set up in my php.ini xdebug.profiler_output_dir directive.

Finally, let's look at all of our hard work! In PHPStorm, navigate to Tools -> Analyze Xdebug Profiler Snapshot. Browse to your profiler output directory and you should see at least one cachegrind.out.%p file (%p refers to the process id the script used). Open the file with the largest process id appended to the end of the filename.

We are then greeted with a new tab showing the results of the profiler.


The output shows us the functions called, how many times they were called, and the amount of execution time each function took. Additionally, you can see the hierarchy of all function calls and follow potential bottlenecks down to their roots.

There you have it! Go wild and profile all the things! Just kidding, don’t do that.

Categories: Elsewhere

DrupalCon Amsterdam: Schedules, BOFS, & Training in Amsterdam

Planet Drupal - Tue, 19/08/2014 - 20:20

The schedule for DrupalCon Amsterdam is live, which means that you can start planning out every detail of your Amsterdam experience. You can start the hard work of choosing the sessions, BOFs, and social events you want to attend, and build your own schedule right on the DrupalCon Amsterdam site.

BOF scheduling is live

Speaking of BOFs, you don't have to wait until DrupalCon Amsterdam to claim yours: you can start using the online booking feature today to schedule your BOFs. Be sure you do it soon, though: BOF rooms go fast!

Register for training before 5 September

Lastly, we need more people to register to attend training at DrupalCon Amsterdam. Show us you're interested in these topics! Book before 5 September to make sure the course you want to attend runs. We know it's difficult to make a decision this far out, but classes which do not meet their minimum enrolment will be cancelled!

The training options are all fantastic and a great opportunity to learn more about Drupal, so register today.

See you in Amsterdam!

Categories: Elsewhere

Julien Danjou: Tracking OpenStack contributions in GitHub

Planet Debian - Tue, 19/08/2014 - 19:00

I've switched my Git repositories to GitHub recently, and started to watch my contributions statistics, which were very low considering I spend my days hacking on open source software, especially OpenStack.

OpenStack hosts its Git repositories on its own infrastructure at git.openstack.org, but also mirrors them on GitHub. Logically, I was expecting GitHub to track my commits there too, as I'm using the same email address everywhere.

It turns out that this was not the case, and the GitHub help page about it describes the rules in place for computing statistics. Indeed, according to GitHub, I had no relation to the OpenStack repositories, as I had never forked them nor opened a pull request on them (OpenStack uses Gerrit).

Starring a repository is enough to build a relationship between a user and a repository, so this was the only thing needed to inform GitHub that I had contributed to those repositories. Considering OpenStack has hundreds of repositories, I decided to star them all by using a small Python script.
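
A minimal sketch of such a script (not the actual script; the GitHub endpoints are real, but the credentials and pagination handling here are illustrative):

import requests

session = requests.Session()
# Use your own GitHub username and a personal access token here.
session.auth = ('your-username', 'your-token')

page = 1
while True:
    # List the OpenStack organization's repositories, 100 at a time.
    repos = session.get('https://api.github.com/orgs/openstack/repos',
                        params={'page': page, 'per_page': 100}).json()
    if not repos:
        break
    for repo in repos:
        # PUT /user/starred/:owner/:repo stars a repository.
        session.put('https://api.github.com/user/starred/' + repo['full_name'],
                    headers={'Content-Length': '0'})
    page += 1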

And voilà, my statistics now include all my contributions to OpenStack!

Categories: Elsewhere

Drupalize.Me: Upgrading Drush to work with Drupal 8

Planet Drupal - Tue, 19/08/2014 - 16:00

When I first started learning Drupal, I remember enabling and disabling modules on the Modules page, and it took for-ev-er. My laptop was in serious danger of getting hurled across the room due to my frustration. Then I discovered Drush, and downloading and enabling modules was suddenly performed with ease instead of pain and suffering. Of course, there's a lot more you can do with Drush than just download and enable modules; this is just one example.

I've been using Drush 6.x on my local machine for quite some time now. Poking around Drupal 8's UI and seeing what's new, I haven't missed drush too much...until it was time to test drive a new contrib module for Drupal 8. When I typed into my Terminal window drush dl page_manager, I got quite the error message:

Drush 6.x only works with Drupal 6 or 7. If I wanted to use Drush on my Drupal 8 site, I would need to upgrade to Drush 7.x.
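
For the record, one way to get Drush 7.x at this stage is through Composer (a sketch, assuming Composer is installed globally; the exact version constraint may change while Drush 7 is still in development):

composer global require drush/drush:dev-master
drush --version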

Categories: Elsewhere

ComputerMinds.co.uk: Language lessons: What are you translating?

Planet Drupal - Tue, 19/08/2014 - 14:00
Content (node-level) translation or entity (field-level) translation?

It seems an obvious question to ask, but what are you translating?

The tools exist to translate just about anything in Drupal 7*, but in many different ways, so you need to know exactly what you're translating. Language is 'a first-class citizen', in the sense that any piece of text is inherently written by someone in some language, which Drupal 7 is built to recognise. Sometimes you want to translate each and every individual piece of text (e.g. at the sentence or paragraph level). Other times you want to translate a whole page or section that is made up of multiple pieces of text.

Categories: Elsewhere

Joachim's blog: Getting Module Builder ready for Drupal 8

Planet Drupal - Tue, 19/08/2014 - 10:12

I've just made a commit to Module Builder that adds unit tests. This is a big deal, because having these frees me up to start making the big changes that are needed for supporting Drupal 8's new structures: routes, plugins, forms, and so on.

The biggest challenge is going to be the interface. Currently, you give Module Builder just a module name and a list of hook names, and it does the necessary. On the command line it's nice and simple:

drush mb mymodule install schema node_insert form_alter views_data_alter

The first parameter is the module name, and everything that follows is a hook name. Now we add to the mix requests such as a form called MyModuleCakeToppingForm, or an entity type plugin, or a route bake_my_cake and its page controller. How to elegantly specify all that over the command line, without making it horribly unwieldy and impossible to remember how to use?

It's also going to be an interesting exercise in reading my own documentation and seeing how much sense it makes after something like 7 months away from the code.

From what I recall, Module Builder uses a hierarchy of component generators to build your module. Taking our example above, the first thing that happens is that the Module generator class kicks in. 'So, you want a module, do you?' it asks, 'You'll need some of these.' And it begins to assemble a list of further generators, for the components it needs: an info file, and the hooks generator. The hooks generator does the actual job of examining your list of requested hooks, and decides based on that that you need three code files: a .module, a .install, and a .views.inc. So by now we have a tree of generators like this:

- Module
-- Info file
-- Hooks
--- Code file: .module
--- Code file: .install
--- Code file: .views.inc

This is not a class hierarchy; this is a tree of objects where each generator has a list of the generators beneath it, and is responsible for collecting data from them. Once we have the tree, we iteratively have each generator assemble the data it wants to contribute, starting with the Module generator at the top.

The original plan when I wrote this system was to make the smallest granularity be a file. The leaves of the generator tree would assemble the text for their file's contents, and the Module generator would collect the files up and return them to the caller for output (either in the UI, or to write them directly).

However, while the original intention of this system was that it could be generalised to base components other than modules (so profiles and themes, which are both supported to some extent but lack the UI, see above!), it's also proven to be extendable downwards to smaller components, and to be worthwhile to do so.

Enter the Form generator. Once we have a generic Function generator (and its child class the HookImplementation), we can create a Form generator. Given a form machine name, 'foo_form', it simply knows to add three copies of the Function generator: 'foo_form', 'foo_form_validate', 'foo_form_submit', along with the correct parameters and some boilerplate code.

And we can specialize this further: the AdminSettingsForm simply extends the Form generator, and adds a menu item component, which itself ensures hook_menu() is requested.
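
To make the pattern concrete, here is a minimal sketch of the generator-tree idea (all class and method names below are hypothetical illustrations, not Module Builder's actual API):

<?php

// Hypothetical sketch of the generator-tree pattern; not Module Builder's API.
abstract class Generator {

  // Child generators this component has requested.
  protected $components = array();

  // Declares further components this generator needs.
  public function requiredComponents() {
    return array();
  }

  // Builds the subtree, then collects assembled data from self and children.
  public function assemble() {
    $data = $this->collectSelf();
    foreach ($this->requiredComponents() as $component) {
      $this->components[] = $component;
      $data = array_merge($data, $component->assemble());
    }
    return $data;
  }

  abstract protected function collectSelf();
}

// Generates a single function body, keyed by its name.
class FunctionGenerator extends Generator {

  protected $name;

  public function __construct($name) {
    $this->name = $name;
  }

  protected function collectSelf() {
    return array($this->name => "function {$this->name}() {\n  // Boilerplate.\n}\n");
  }
}

// A form component simply requests three function components.
class FormGenerator extends Generator {

  protected $name;

  public function __construct($name) {
    $this->name = $name;
  }

  public function requiredComponents() {
    return array(
      new FunctionGenerator($this->name),
      new FunctionGenerator($this->name . '_validate'),
      new FunctionGenerator($this->name . '_submit'),
    );
  }

  protected function collectSelf() {
    return array();
  }
}

$form = new FormGenerator('mymodule_cake_form');
print_r(array_keys($form->assemble()));
// mymodule_cake_form, mymodule_cake_form_validate, mymodule_cake_form_submit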

At this point it starts to get a bit complicated, as we have components that request other components that are in totally different parts of the component tree. That's roughly the point I had reached when I realized I needed tests, so that I can refactor and clean up the messy bits of this, and enhance and extend it, without breaking what's already there.

So that's the current state of Module Builder: not yet ready for Drupal 8, but has lots of potential. At this point, I'd really welcome input on the Drush interface, as that's the big quandary. And any input on new Drupal 8 component generators would be great too; there are a few open issues in the queue. And finally, Module Builder is a complex beast; should anyone looking at the code find it baffling and impenetrable, do please file a documentation issue to highlight the problem and request clarification.

Categories: Elsewhere

Jelmer Vernooij: Using Propellor for configuration management

Planet Debian - Mon, 18/08/2014 - 23:15

For a while, I've been wanting to set up configuration management for my home network. With half a dozen servers, a VPS and a workstation it is not big, but large enough to make it annoying to manually log into each machine for network-wide changes.

Most of the servers I have are low-end ARM machines, each responsible for a couple of tasks. Most of my machines run Debian or something derived from Debian. Oh, and I'm a member of the declarative school of configuration management.

Propellor

Propellor caught my eye earlier this year. Unlike some other configuration management tools, it doesn't come with its own custom language but it is written in Haskell, which I am already familiar with. It's also fairly simple, declarative, and seems to do most of the handful of things that I need.

Propellor is essentially a Haskell application that you customize for your site. It works very similarly to e.g. xmonad: you write a bit of Haskell configuration code which uses the upstream library code, and when you run the application it builds a binary from your code and the upstream libraries.

Each host on which Propellor is used keeps a clone of the site-local Propellor git repository in /usr/local/propellor. Every time propellor runs (either because of a manual "spin", or from a cronjob it can set up for you), it fetches updates from the main site-local git repository, compiles the Haskell application and runs it.

Setup

Propellor was surprisingly easy to set up. Running propellor creates a clone of the upstream repository under ~/.propellor with a README file and some example configuration. I copied config-simple.hs to config.hs, updated it to reflect one of my hosts and within a few minutes I had a basic working propellor setup.

You can use ./propellor <host> to trigger a run on a remote host.

At the moment I have propellor working for some basic things - having certain Debian packages installed, a specific network configuration, mail setup, basic Kerberos configuration and certain SSH options set. This took surprisingly little time to set up, and it's been great being able to take full advantage of Haskell.
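
For a flavour of what this looks like, here is a minimal config.hs in the style of the shipped example (the hostname and property list are illustrative, and property names can vary between Propellor versions):

-- Illustrative config.hs; hostname and properties are examples only.
import Propellor
import qualified Propellor.Property.Apt as Apt
import qualified Propellor.Property.Cron as Cron

main :: IO ()
main = defaultMain hosts

hosts :: [Host]
hosts =
        [ host "pi1.example.com"
                & Apt.installed ["openssh-server", "ntp"]
                & Cron.runPropellor "30 * * * *"
        ]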

Propellor comes with convenience functions for dealing with some commonly used packages, such as Apt, SSH and Postfix. For a lot of the other packages, you'll have to roll your own for now. I've written some extra code to make Propellor deal with Kerberos keytabs and Dovecot, which I hope to submit upstream.

I don't have a lot of experience with other Free Software configuration management tools such as Puppet and Chef, but for my use case Propellor works very well.

The main disadvantage of Propellor for me so far is that it needs to build itself on each machine it runs on. This is fine for my workstation and high-end servers, but it is somewhat more problematic on e.g. my Raspberry Pis. Compilation takes a while, and the Haskell compiler and libraries it needs amount to 500MB worth of disk space on the tiny root partition.

In order to work with Propellor, some Haskell knowledge is required. The Haskell in the configuration file is reasonably easy to understand if you keep it simple, but once the compiler spits out error messages then I suspect you'll have a hard time without any Haskell knowledge.

Propellor relies on having a central repository with the configuration that it can pull from as root. Unlike Joey, I am wary of publishing the configuration of my home network and I don't have a highly available local git server setup.

Categories: Elsewhere

Daniel Pocock: Is WebRTC private?

Planet Drupal - Mon, 18/08/2014 - 21:55

With the exciting developments at rtc.debian.org, many people are starting to look more closely at browser-based real-time communications.

Some have dared to ask: does it solve the privacy problems of existing solutions?

Privacy is a relative term

Perfect privacy and its technical manifestations are hard to define. I had a go at it in a blog on the Gold Standard for free communications technology on 5 June 2013. By pure coincidence, a few hours later, the first Snowden leaks appeared and this particular human right was suddenly thrust into the spotlight.

WebRTC and ICE privacy risk

WebRTC does not give you perfect privacy.

At least one astute observer at my session at Paris mini-DebConf 2014 questioned the privacy of Interactive Connectivity Establishment (ICE, RFC 5245).

In its most basic form, ICE scans all the local IP addresses on your machine and NAT gateway and sends them to the person calling you so that their phone can find the optimal path to contact you. This clearly has privacy implications as a caller can work out which ISP you are connected to and some rough details of your network topology at any given moment in time.

What WebRTC does bring to the table

Some of this can be mitigated though: an ICE implementation can be tuned so that it only advertises the IP address of a dedicated relay host. If you can afford a little latency, your privacy is safe again. This privacy-protecting measure could be taken by a browser vendor such as Mozilla, or it can be done in JavaScript by a softphone such as JSCommunicator.
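
As a rough illustration, a WebRTC application can ask the browser to advertise only relay candidates (a sketch using the standard RTCPeerConnection configuration; the TURN server and credentials are placeholders):

// Only relay (TURN) candidates are gathered, so local IPs stay hidden.
var pc = new RTCPeerConnection({
  iceServers: [{
    urls: 'turn:relay.example.org:3478',
    username: 'user',
    credential: 'secret'
  }],
  iceTransportPolicy: 'relay'
});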

Many individuals are now using a proprietary softphone to talk to family and friends around the world. The softphone in question has properties like a virus, siphoning away your private information. This proprietary softphone is also an insidious threat to open source and free operating systems on the desktop. WebRTC is a positive step back from the brink. It gives people a choice.

WebRTC is a particularly relevant choice for business. Can you imagine going to a business and asking them to run all their email communication through Hotmail? When a business starts using a particular proprietary softphone, how is that any different? WebRTC offers a solution that is actually easier for the user and can be secured back to the business network using TLS.

WebRTC is based on open standards, particularly HTML5. Leading implementations, such as the SIP over WebSocket support in reSIProcate, JSCommunicator and the DruCall module for Drupal are fully open source. Not only is it great to be free, it is possible to extend and customize any of these components.

What is missing

There are some things that are not quite there yet and require a serious effort from the browser vendors. At the top of the list for privacy:

  • ZRTP support - browsers currently support DTLS-SRTP, which is based on X.509. ZRTP is more like PGP, a democratic and distributed peer-to-peer privacy solution without needing to trust some central certificate authority.
  • TLS with PGP - the TLS protocol used to secure the WebSocket signalling channel is also based on X.509 with the risk of a central certificate authority. There is increasing chatter about the need for TLS to use PGP instead of X.509 and WebRTC would be a big winner if this were to eventuate and be combined with ZRTP.

You may think "I'll believe it when I see it". Each of these features, including WebRTC itself, is a piece of the puzzle and even solving one piece at a time brings people further out of danger from the proprietary mess the world lives with today.

Categories: Elsewhere

Drupal.org Featured Case Studies: Viraland

Planet Drupal - Mon, 18/08/2014 - 20:27
Completed Drupal site or project URL: http://www.viraland.gr/

Viraland is a Greek online community where you can find the latest viral messages from around the world. Users participate in the community by posting new content and sharing it through social channels. Strange, meaningful and popular messages that go viral are posted daily in Viraland, reflecting the interests of its users.

Key modules/theme/distribution used: Views, Panels, Zen, Media, Janrain Social Login, Rules, Internationalization, Field Permissions. Team members: highvrahos
Categories: Elsewhere

SitePoint PHP Drupal: Fine Tuning Drupal Themes with Patterns, Arg and Types

Planet Drupal - Mon, 18/08/2014 - 18:00

In this article, we’ll discuss how you can leverage various Drupal API functions to achieve more fine grained theming. We’ll cover template preprocessing and alter hooks using path patterns, types and args(). We’ll use the arg() function which returns parts of a current Drupal URL path and some pattern matching for instances when you want […]

Continue reading %Fine Tuning Drupal Themes with Patterns, Arg and Types%

Categories: Elsewhere

Julien Danjou: OpenStack Ceilometer and the Gnocchi experiment

Planet Debian - Mon, 18/08/2014 - 17:00

A little more than 2 years ago, the Ceilometer project was launched inside the OpenStack ecosystem. Its main objective was to measure OpenStack cloud platforms in order to provide data and mechanisms for functionalities such as billing, alarming or capacity planning.

In this article, I would like to relate what I've been doing with other Ceilometer developers over the last 5 months. I've lowered my direct involvement in Ceilometer itself to concentrate on solving one of its biggest issues at the source, and I think it's high time to take a break and talk about it.

Ceilometer early design

Over the last years, Ceilometer's core architecture didn't change. Without diving too deeply into all its parts, one of the early design decisions was to build the metering around a data structure we called samples. A sample is generated each time Ceilometer measures something. It is composed of a few fields, such as the resource id being metered, the user and project ids owning that resource, the meter name, the measured value, a timestamp and some free-form metadata. Each time Ceilometer measures something, one of its components (an agent, a pollster…) constructs and emits a sample headed for the storage component that we call the collector.

This collector is responsible for storing the samples into a database. The Ceilometer collector uses a pluggable storage system, meaning that you can pick any database system you prefer. Our original implementation has been based on MongoDB from the beginning, but we then added a SQL driver, and people contributed things such as HBase or DB2 support.

The REST API exposed by Ceilometer allows you to execute various read requests on this data store. It can return the list of resources that have been measured for a particular project, or compute statistics on metrics. Such a large panel of possibilities combined with such a flexible data structure lets you do a lot of different things with Ceilometer, as you can query the data in almost any way you want.
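
For instance, a statistics request against the v2 API looks roughly like this (a sketch; the host, meter name and query are illustrative):

curl -H "X-Auth-Token: $TOKEN" \
  "http://ceilometer.example.com:8777/v2/meters/cpu_util/statistics?q.field=resource_id&q.op=eq&q.value=<instance-uuid>"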

The scalability issue

We soon started to encounter scalability issues in many of the read requests made via the REST API. A lot of the requests require the data storage to do full scans of all the stored samples. Indeed, the fact that the API allows you to filter on any field, and also on the free-form metadata (meaning non-indexed key/value tuples), has a terrible cost in terms of performance (as pointed out before, the metadata is attached to each sample generated by Ceilometer and stored as is). That basically means that the sample data structure is stored in most drivers in just one table or collection, in order to be able to scan them at once, and there's no good "perfect" sharding solution, making data storage scalability painful.

It turns out that the Ceilometer REST API is unable to handle most of the requests in a timely manner as most operations are O(n) where n is the number of samples recorded (see big O notation if you're unfamiliar with it). That number of samples can grow very rapidly in an environment of thousands of metered nodes with a data retention of several weeks. Fortunately, there are a few optimizations that make things smoother in the general case, but as soon as you run specific queries, the API becomes barely usable.

During this last year, as the Ceilometer PTL, I discovered these issues first hand, since a lot of people were giving me exactly this kind of feedback. We engaged several blueprints to improve the situation, but it was soon clear to me that this was not going to be enough anyway.

Thinking outside the box

Unfortunately, the PTL job didn't leave me enough time to work on the actual code nor to play with anything new. I was coping with most of the project bureaucracy and I wasn't able to work on any good solution to tackle the issue at its root. Still, I had a few ideas that I wanted to try, and as soon as I stepped down from the PTL role, I stopped working on Ceilometer itself to try something new and think a bit outside the box.

When one takes a look at what has been brought into Ceilometer recently, one can see the idea that Ceilometer actually needs to handle 2 types of data: events and metrics.

Events are data generated when something happens: an instance starts, a volume is attached, or an HTTP request is sent to a REST API server. These are events that Ceilometer needs to collect and store. Most OpenStack components are able to send such events using the notification system built into oslo.messaging.

Metrics are what Ceilometer needs to store that is not necessarily tied to an event. Think about an instance's CPU usage, a router's network bandwidth usage, the number of images that Glance is storing for you, etc… These are not events, since nothing is happening. These are facts, states we need to meter.

Computing statistics for billing or capacity planning requires both of these data sources, but they should be distinct. Based on that assumption, and the fact that Ceilometer was getting support for storing events, I started to focus on getting the metric part right.

I had been a system administrator for a decade before jumping into OpenStack development, so I know a thing or two on how monitoring is done in this area, and what kind of technology operators rely on. I also know that there's still no silver bullet – this made it a good challenge.

The first thing that came to my mind was to use some kind of time-series database, and export its access via a REST API – as we do in all OpenStack services. This should cover the metric storage pretty well.

Cooking Gnocchi

A cloud of gnocchis!

At the end of April 2014, this led me to start a new project code-named Gnocchi. For the record, the name was picked after confusing the OpenStack Marconi project so many times, reading OpenStack Macaroni instead. At least one OpenStack project should have a "pasta" name, right?

The point of starting a new project rather than sending patches to Ceilometer was that, first, I had no clue whether it was going to produce anything better, and second, I wanted to be able to iterate more rapidly without being strongly coupled to the release process.

The first prototype started around the following idea: what you want is to meter things. That means storing a list of (timestamp, value) tuples for them. I've named these things "entities", as no assumptions are made about what they are. An entity can represent the temperature in a room or the CPU usage of an instance. The service shouldn't care and should be agnostic in this regard.

One feature that we had discussed over several OpenStack summits in the Ceilometer sessions was the idea of doing aggregation: aggregating samples over a period of time to only store a smaller amount of them. This is something that time-series formats such as RRDtool's have been doing on the fly for a long time, and I decided it was a good trail to follow.

I assumed that this was going to be a requirement when storing metrics into Gnocchi. The user would need to provide what kind of archiving it would need: 1 second precision over a day, 1 hour precision over a year, or even both.

The first driver written to achieve that and store those metrics inside Gnocchi was based on whisper. Whisper is the file format used to store metrics for the Graphite project. For the actual storage, the driver uses Swift, which has the advantage of being part of OpenStack and scalable.

Storing the metrics for each entity in a different whisper file and putting them in Swift turned out to have a fantastic algorithmic complexity: it was O(1). Indeed, the complexity needed to store and retrieve metrics depends neither on the number of metrics you have nor on the number of things you are metering. Which is already a huge win compared to the current Ceilometer collector design.

However, it turned out that whisper has a few limitations that I was unable to circumvent in any manner. I needed to patch it to remove a lot of its assumptions about manipulating files, or that everything is relative to now (time.time()). I started to hack on that in my own fork, but… then everything broke. The whisper project code base is, well, not the state of the art, and has zero unit tests. I was staring at a huge effort to transform whisper into the time-series format I wanted, without being sure I wasn't going to break everything (remember, no test coverage).

I decided to take a break and look into alternatives, and stumbled upon Pandas, a data manipulation and statistics library for Python. It turns out that Pandas supports time series natively, and that it could do a lot of the smart computation needed in Gnocchi. I built a new file format leveraging Pandas for computing the time series and named it carbonara (a wink to both the Carbon project and pasta, how clever!). The code is quite small (a third of whisper's, 200 SLOC vs 600 SLOC), does not have many of the whisper limitations and… it has test coverage. These Carbonara files are then, in the same fashion, stored into Swift containers.
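
A tiny sketch of the kind of computation Pandas makes easy (illustrative only, not Carbonara's actual code; resample() is shown with the Pandas 0.x keyword style current at the time):

import pandas as pd

# Measures for one entity, as a series of (timestamp, value) points.
measures = pd.Series(
    [0.5, 0.8, 0.3, 0.9],
    index=pd.to_datetime(['2014-08-18 10:00:12', '2014-08-18 10:20:45',
                          '2014-08-18 11:05:03', '2014-08-18 11:40:59']))

# Aggregate down to 1-hour precision, keeping the mean of each period.
print(measures.resample('1H', how='mean'))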

Anyway, the Gnocchi storage driver system is designed in the same spirit as the rest of the OpenStack and Ceilometer storage driver systems. It's a plug-in system with an API, so anyone can write their own driver. Eoghan Glynn has already started to write an InfluxDB driver, working closely with the upstream developer of that database. Dina Belova started to write an OpenTSDB driver. This helps to make sure the API is designed in the right way.

Handling resources

Measuring individual entities is great and needed, but you also need to link them with resources. When measuring the temperature and the number of people in a room, it is useful to link these 2 separate entities to a resource, in that case the room, and to give a name to these relations, so one is able to identify what attribute of the resource is actually measured. It is also important to provide the possibility to store attributes on these resources, such as their owners, the time they started and ended their existence, etc.

Relationship of entities and resources

Once this list of resources is collected, the next step is to list and filter them, based on any criteria. One might want to retrieve the list of resources created last week or the list of instances hosted on a particular node right now.

Resources also need to be specialized. Some resources have attributes that must be stored in order for filtering to be useful. Think about an instance name or a router network.

All of these requirements led to the design of what's called the indexer. The indexer is responsible for indexing entities and resources, and linking them together. The initial implementation is based on SQLAlchemy and should be pretty efficient. It's easy enough to index the most requested attributes (columns), and they are also correctly typed.

We plan to establish a model for all known OpenStack resources (instances, volumes, networks, …) to store and index them into the Gnocchi indexer in order to request them in an efficient way from one place. The generic resource class can be used to handle generic resources that are not tied to OpenStack. It'd be up to the users to store extra attributes.

Dropping the free form metadata we used to have in Ceilometer makes sure that querying the indexer is going to be efficient and scalable.

The indexer classes and their relations

REST API

All of this is exported via a REST API that was partially designed and documented in the Gnocchi specification in the Ceilometer repository; though the spec is not up-to-date yet. We plan to auto-generate the documentation from the code as we are currently doing in Ceilometer.

The REST API is pretty easy to use, and you can use it to manipulate entities and resources, and request the information back.

Macroscopic view of the Gnocchi architecture

Roadmap & Ceilometer integration

All of this plan has been exposed and discussed with the Ceilometer team during the last OpenStack summit in Atlanta in May 2014, for the Juno release. I led a session about this entire concept, and convinced the team that using Gnocchi for our metric storage would be a good approach to solve the Ceilometer collector scalability issue.

It was decided to conduct this project experiment in parallel of the current Ceilometer collector for the time being, and see where that would lead the project to.

Early benchmarks

Some engineers from Mirantis did a few benchmarks around Ceilometer and also against an early version of Gnocchi, and Dina Belova presented them to us during the mid-cycle sprint we organized in Paris in early July.

The following graph sums up the current Ceilometer performance issue pretty well: the more metrics you feed it, the slower it becomes.

For Gnocchi, while the numbers themselves are not fantastic, what is interesting is that all the graphs below show that the performance is stable, with no correlation to the number of resources, entities or measures. This proves that, indeed, most of the code is built around a complexity of O(1), and not O(n) anymore.

Next steps

Clément drawing the logo

While the Juno cycle is being wrapped up for most projects, including Ceilometer, Gnocchi development is still ongoing. Fortunately, the composite architecture of Ceilometer allows a lot of its features to be replaced by some other code dynamically. That, for example, enables Gnocchi to provide a Ceilometer dispatcher plugin for its collector, without having to ship the actual code in Ceilometer itself. That should help the development of Gnocchi not be slowed down by the release process for now.

The Ceilometer team aims to provide Gnocchi as a sort of technology preview with the Juno release, allowing it to be deployed along and plugged with Ceilometer. We'll discuss how to integrate it in the project in a more permanent and strong manner probably during the OpenStack Summit for Kilo that will take place next November in Paris.

Categories: Elsewhere

Zivtech: Creating Parallax Scrolling with CSS

Planet Drupal - Mon, 18/08/2014 - 16:58

Here at Zivtech, we are obsessed with creating immersive experiences for mobile and the web using cutting-edge design and Open Source Software like Drupal and Angular.js. One of the web design techniques that we've had on our radar is Parallax Scrolling, which gives depth to a page by scrolling two dimensions of the site at different rates (for example, text in the front would scroll faster than the image behind it). Parallax Scrolling is most often associated with 2D video game development, but has been becoming more and more prevalent on the web (for some live examples see Creative Bloq's post "46 Great Examples of Parallax Scrolling"). 

While we find this technique engaging, we never adopted it for our designs because it relied heavily on JavaScript tools and techniques that we found caused performance issues, and especially because of problems making it work within a responsive web design. However, that may be about to change. In a recent post on his blog, Keith Clark wrote about an exciting new way to create Parallax Scrolling through CSS rather than JavaScript, making for more mobile-friendly and responsive Parallax Scrolling effects. Clark writes:

Deferring the parallax effect to CSS removes all these issues and allows the browser to leverage hardware acceleration resulting in almost all the heavy lifting being handled directly by the compositor.

This technique, which takes the bulk of the work off the browser's main thread, creates the illusion of 3D without bogging pages down. Now, with CSS, we can maintain the same effect without creating a disjointed experience across multiple platforms. Check out Keith's post on pure CSS parallax scrolling websites for code snippets and samples.
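
The core of Keith's technique looks something like the following sketch (paraphrased; see his post for the full details):

/* The scrolling element establishes a 3D perspective. */
.parallax {
  perspective: 1px;
  height: 100vh;
  overflow-x: hidden;
  overflow-y: auto;
}

/* Layers pushed back in Z-space scroll more slowly; scale()
   compensates for the apparent shrinking caused by the depth. */
.parallax__layer--back {
  transform: translateZ(-1px) scale(2);
}

.parallax__layer--base {
  transform: translateZ(0);
}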

Terms: Drupal Planet, Parallax Scrolling, CSS, JavaScript, Drupal, Angular.js, Design, Web Design, responsive web design
Categories: Elsewhere

Appnovation Technologies: Different Point of Views

Planet Drupal - Mon, 18/08/2014 - 16:25

The Drupal Views module is an amazing tool. It certainly has contributed significantly to the widespread adoption of Drupal.

Categories: Elsewhere

Gábor Hojtsy: Moving Drupal forward at Europe's biggest warm water lake

Planet Drupal - Mon, 18/08/2014 - 16:08

Drupalaton 2014 was amazing. I got involved pretty late in the organization when we added sprinting capacity on all four days, but I must say doing that was well worth it. While the pre-planned schedule of the event focused on longer full-day and half-day workshops on business English, automation, Rules, Commerce, multilingual, etc., the sprint was thriving with backend developer luminaries such as Wim Leers, dawehner, fago, swentel, pfrennsen and dasjo, as well as a sizable frontend crew including mortendk, lewisnyman, rteijeiro, emmamaria and others. This setup allowed us to work on a very wide range of issues.

The list of 70+ issues we worked on shows our work on the drupal.org infrastructure, numerous frontend issues to clean up Drupal's markup, important performance problems, several release critical issues and significant work on all three non-postponed beta blockers at the time.


Drupalers "shipped" from port to port; Photo by TCPhoto

Our coordinated timing with the TCDrupal sprints really helped in working on some of the same issues together. We successfully closed one of the beta blockers shortly after the sprint thanks to coordinated efforts between the two events.

Our list of issues also shows the success of the Rules training on the first day in bringing new people in to porting Rules components, as well as work on other important contributed modules: fixing issues with the Git deploy module's Drupal 8 port and work on the Drupal 8 version of CAPTCHA.

Thanks to the organizers, the sponsors of the event including the Drupal Association Community Cultivation Grants program for enabling us to have some of the most important Drupal developers work together on pressing issues, eat healthy and have fun on the way.

Ps. There is never a lack of opportunity to work with these amazing people. Several days of sprints are coming up around DrupalCon Amsterdam in a little over a month! The weekend sprint locations before/after the DrupalCon days are also really cool! See you there!

Categories: Elsewhere

Acquia: Drupal Stories Kick Off: My Own Drupal Story

Planet Drupal - Mon, 18/08/2014 - 15:55

It’s no secret that Drupalists are in high demand. I’ve blogged about the need for training more Drupalers and getting to them earlier in their careers previously, but that’s just one aspect of the greater topic which merits a closer inspection as a cohesive whole.

Categories: Elsewhere

godel.com.au: Use Behat to track down PHP notices before they take over your Drupal site forever

Planet Drupal - Mon, 18/08/2014 - 15:15

Behat is one of the more popular testing frameworks in the Drupal community at the moment, for various reasons. One of these reasons is the useful Behat Drupal Extension that provides a DrupalContext class that can be extended to get a lot of Drupal specific functionality in your FeatureContext right off the bat.

In this post, I'm going to show you how to make Behat aware of any PHP errors that are logged to the watchdog table during each scenario that it runs. In Behat's default setup, a notice or warning level PHP error will not usually break site functionality and so won't fail any tests. Generally though, we want to squash every bug we know about during our QA phase so it would be great to fail any tests that incidentally throw errors along the way.

The main benefits of this technique are:

  • No need to write extra step definitions or modify existing steps, but you'll get some small degree of coverage for all functionality that just happens to be on the same page as whatever you are writing tests for
  • Very simple to implement once you have a working Behat setup with the DrupalContext class and Drupal API driver
  • PHP errors are usually very easy to clean up if you notice them immediately after introducing them, but not necessarily 6 months later. This is probably the easiest way I've found to nip them in the bud, especially when upgrading contrib modules between minor versions (where it's quite common to find new PHP notices being introduced).
The setup

Once you've configured the Drupal extension for Behat, and set the api_driver to drupal in your behat.yml file, you can use Drupal API functions directly inside your FeatureContext.php file (inside your step definitions).

Conceptually, what we're trying to achieve is pretty straightforward. We want to flush the watchdog table before we run any tests and then fail any scenario that has resulted in one or more PHP messages being logged by the end of it. It's also important that we give ourselves enough debugging information to track down the errors that we detect. Luckily, watchdog already stores PHP error debug information serialized by default, so we can unserialize what we need and print it straight to the console as required.

You will need to write a custom FeatureContext class extending DrupalContext with hooks for @BeforeSuite and @AfterScenario.

Your @BeforeSuite should look something like this:

<?php

/**
 * @BeforeSuite
 */
public static function prepare(SuiteEvent $event) {
  // Clear out anything that might be in the watchdog table from god knows
  // where.
  db_truncate('watchdog')->execute();
}

And your corresponding @AfterScenario would look like this:

<?php

/**
 * Run after every scenario.
 */
public function afterScenario($event) {
  $log = db_select('watchdog', 'w')
    ->fields('w')
    ->condition('w.type', 'php', '=')
    ->execute()
    ->fetchAll();
  if (!empty($log)) {
    foreach ($log as $error) {
      // Make the substitutions easier to read in the log.
      $error->variables = unserialize($error->variables);
      print_r($error);
    }
    throw new \Exception('PHP errors logged to watchdog in this scenario.');
  }
}

My apologies, I know this code is a little rough: I'm just using print_r() to spit out the data I'm interested in, without even bothering to process the Drupal variable substitutions through format_string(). But hey, it's legible enough for the average PHP developer and it totally works! Maybe someone else will see this, be inspired, and share a nicer version back here...
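
For instance, a slightly friendlier loop body might run the substitutions through format_string() before printing (a quick sketch, untested):

<?php
foreach ($log as $error) {
  $variables = unserialize($error->variables);
  // Render the message the way the dblog UI would.
  print format_string($error->message, is_array($variables) ? $variables : array()) . "\n";
}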

Categories: Elsewhere

Junichi Uekawa: sigaction bit me.

Planet Debian - Mon, 18/08/2014 - 13:34
sigaction bit me. There's a system call and a libc function with similar names (sigaction vs rt_sigaction) but they behave differently.

Categories: Elsewhere

Deeson Online: Using Grunt, bootstrap, Compass and SASS in a Drupal sub theme

Planet Drupal - Mon, 18/08/2014 - 08:37

If you have a separate front end design team from your Drupal developers, you will know that after static pages are moved into a Drupal theme there can be a huge gap in structure between the original files and the final Drupal site.

We wanted to bridge the gap between our theme developers, UX designers and front end coders, and create an all-encompassing boilerplate that could be used as a starting point for any project and then easily ported into Drupal.

After thinking about this task for a few weeks it was clear that the best way forward was to use Grunt to automate all of our tasks and create a scalable, well structured sub theme that all of our coders can use to start any project.

What is Grunt?

Grunt is a JavaScript task runner that allows you to automate repetitive tasks such as minifying files, linting JavaScript, preprocessing CSS, and even reloading your browser.

Just like bootstrap, there are many resources and a vast amount of plugins available for Grunt that can automate any task you could think of, plus it is very easy to write your own, so setting Grunt as a standard for our boilerplate was an easy decision.

The purpose of this post

We use bootstrap in most projects and recently switched to using SASS for CSS preprocessing bundled with Compass, so for the purpose of this tutorial we will create a simple bootstrap sub theme that utilises Grunt & Compass to compile SASS files and automatically reload our browser every time a file is changed.

You can then take this approach and use the best Grunt plugins that suit your project.

Step 1. Prerequisites

To use Grunt you will need node.js and ruby installed on your system. Open up terminal, and type:

node -v
ruby -v

If you don't see a version number, head to the links below to download and install them.

Don’t have node? Download it here

Don’t have ruby? Follow this great tutorial

Step 2. Installing Grunt

Open up terminal, and type:

sudo npm install -g grunt-cli

This will install the command line interface for Grunt. Be patient whilst it is downloading as sometimes it can take a minute or two.

Step 3. Installing Compass and Grunt plugins

Because we want to use the fantastic set of mixins and features bundled with Compass, let's install the Compass and SASS ruby gems.

Open up terminal, and type:

sudo gem install sass
sudo gem install compass

For our boilerplate we only wanted to install plugins that we would need in every project, so we kept it simple and limited it to Watch, Compass and SASS to compile all of our files. Our team members can then add extra plugins later in the project as and when needed.

So let's get started and use the node package manager to install our Grunt plugins.

Switch back to Terminal and run the following commands:

sudo npm install grunt-contrib-watch --save-dev
sudo npm install grunt-contrib-compass --save-dev
sudo npm install grunt-contrib-sass --save-dev

Step 4. Creating the boilerplate

Note: For the purposes of this tutorial we are going to use the bootstrap sub theme for our Grunt setup, but the same Grunt setup described below can be used with any Drupal sub theme.

  • Create a new Drupal site
  • Download the bootstrap theme into your sites/all/themes directory
    drush dl bootstrap
  • Copy the bootstrap starter kit (sites/all/themes/bootstrap/bootstrap_subtheme) into your theme directory
  • Rename bootstrap_subtheme.info.starterkit to bootstrap_subtheme.info
  • Navigate to admin/appearance and click “Enable, and set default" for your sub-theme.

Your Drupal site should now be set up with Bootstrap, and your folder structure should now look like this:

For more information on creating a bootstrap sub theme check out the community documentation.

Step 5. Switching from LESS to SASS

Our developers liked Less, our designers liked SASS, but after a team tech talk explaining the benefits of using SASS with Compass (a collection of mixins with some clever sprite-creation features), everyone agreed that SASS was the way forward.

Bootstrap is now officially packaged with SASS, so let's replace the .less files in our bootstrap_subtheme with .scss files so we can utilise all of the mixin goodness that comes with SASS & Compass.

  • Head over to bootstrap and download the SASS version
  • Copy the stylesheets folder from bootstrap-sass/assets/ and paste it into your bootstrap_subtheme
  • Rename the stylesheets folder to bootstrap-sass
  • Create a new folder called custom-sass in bootstrap_subtheme
  • Create a new file called style.scss in the custom-sass folder
  • Import bootstrap-sass/bootstrap.scss into style.scss

You should now have the following setup in your sub theme:

We are all set!

Step 6. Setting up Grunt - The package.json & Gruntfile.js

Now let's configure Grunt to run our tasks. Grunt only needs two files to be set up: a package.json file that defines our dependencies and a Gruntfile.js to configure our plugins.

Within bootstrap_subtheme, create a package.json and add the following code:

{ "name": "bootstrap_subtheme", "version": "1.0.0", "author": “Your Name", "homepage": "http://homepage.com", "engines": { "node": ">= 0.8.0" }, "devDependencies": { "grunt-contrib-compass": "v0.9.0", "grunt-contrib-sass": "v0.7.3", "grunt-contrib-watch": "v0.6.1" } }

In this file you can add whichever plugins are best suited for your project, check out the full list of plugins at the official Grunt site.

Install Grunt dependencies

Next, open up terminal, cd into sites/all/themes/bootstrap_subtheme, and run the following task:

sudo npm install

This command looks through your package.json file and installs the plugins listed. You only have to run this command once when you set up a new Grunt project, or when you add a new plugin to package.json.

Once you run this you will notice a new folder in your bootstrap_subtheme called node_modules which stores all of your plugins. If you are using git or SVN in your project, make sure to ignore this folder.

Now let's configure Grunt to use our plugins and automate some tasks. Within bootstrap_subtheme, create a Gruntfile.js file and add the following code:

module.exports = function (grunt) {
  grunt.initConfig({
    watch: {
      src: {
        files: ['**/*.scss', '**/*.php'],
        tasks: ['compass:dev']
      },
      options: {
        livereload: true,
      },
    },
    compass: {
      dev: {
        options: {
          sassDir: 'custom-sass/scss',
          cssDir: 'css',
          imagesPath: 'assets/img',
          noLineComments: false,
          outputStyle: 'compressed'
        }
      }
    }
  });
  grunt.loadNpmTasks('grunt-contrib-compass');
  grunt.loadNpmTasks('grunt-contrib-sass');
  grunt.loadNpmTasks('grunt-contrib-watch');
};

This file is pretty straightforward: we configure our watch task to look for certain files and reload our browser, and then we define our scss and css directories so that Compass knows where to look.

I won’t go into full detail with the options available, but visit the links below to see the documentation:

Watch documentation

SASS documentation


Step 7. Enabling live reload

Download and enable the livereload module into your new Drupal site. By default, you will have to be logged in as admin for live reload to take effect, but you can change this under Drupal permissions.

Once you enable livereload, refresh your browser window to load the livereload.js library.

Step 8. Running Grunt

We are all set! Head back over to Terminal and check you are in the bootstrap_subtheme directory, then type:

grunt watch

Now every time you edit a scss file, Grunt will compile your SASS into a compressed style.css file and automatically reload your browser.

Give it a go by importing compass at the top of your style.scss file and changing the body background to use a Compass mixin.

@import 'compass';
@import '../bootstrap-sass/bootstrap.scss';

/*
 * Custom overrides
 */
body {
  @include background(linear-gradient(#eee, #fff));
}

To stop Grunt from watching your files, press Ctrl and C simultaneously on your keyboard.

Step 9. Debugging

One common problem you may encounter when using Grunt alongside live reload is the following error message:

Fatal error: Port 35729 is already in use by another process.

This means that the port being used by live reload is currently in use by another process, either by a different grunt project, or an application such as Chrome.

If you experience this problem, run the following command to find out which application is using the port.

lsof | grep 35729

Simply close the application and run "grunt watch" again. If the error persists and all else fails, restart your machine, and remember to stop Grunt from watching files before moving on to another project.

Next steps…

This is just a starting point on what you can achieve using Grunt to automate your tasks, and gives you a quick insight into how we go about starting a project.

Other things to consider:

  • Duplicating the _variables.scss bootstrap file to override the default settings.
  • Adding linted, minified javascript files using the uglify plugin
  • Configure Grunt to automatically validate your markup using the W3C Markup Validator
  • Write your own Grunt plugins to suit your own projects

Let me know your thoughts - you can share your ideas and views in the comments below.


Categories: Elsewhere
