Chapter Three: Principles of Configuration Management - Part One

Planet Drupal - Tue, 09/12/2014 - 20:16

This is the first in a series of posts about Drupal 8's configuration management system. This system is one of its most eagerly anticipated features, according to a recent survey. The Configuration Management Initiative (CMI) was the first Drupal 8 initiative to be announced in 2011, and we've learned a lot during thousands of hours of work on the initiative since then. These posts will share what we've learned and provide background on the why and how.




Joey Hess: podcasts that don't suck, 2014 edition

Planet Debian - Tue, 09/12/2014 - 20:05
  • The Memory Palace: This is the way history should be taught, but rarely is. Nate DiMeo takes past events and puts you in the middle of them, in a way that makes you empathise so much with people from the past. Each episode is a little short story, and they're often only a few minutes long. A great example is this description of when Niagara Falls stopped. I have listened to the entire back archive, and want more. The only downside is that it's a looong time between new episodes.

  • The Haskell Cast: Panel discussion with a guest; there is a lot of expertise among them, and I'm often scrambling to keep up with the barrage of ideas. If this seems too tame, check out The Type Theory Podcast instead.

  • Benjamen Walker's Theory of Everything: Only caught 2 episodes so far, but they've both been great. Short, punchy, quirky, geeky. Astoundingly good production values.

  • Lightspeed magazine and Escape Pod blur together for me. Both feature 20-50 minute science fiction short stories, and occasionally other genre fiction. They seem to get all the award-winning short stories. I sometimes fall asleep to these, which can make for strange dreams. Two strongly contrasting examples: "Observations About Eggs from the Man Sitting Next to Me on a Flight from Chicago, Illinois to Cedar Rapids, Iowa" and "Pay Phobetor"

  • Serial: You probably already know about this high profile TAL spinoff. If you didn't before: You're welcome. :) Nuff said.

  • Redecentralize: Interviews with creators of decentralized internet tools like Tahoe-LAFS, Ethereum, Media Goblin, TeleHash. I just wish it went into more depth on protocols and how they work.

  • Love and Radio: This American Life squared and on acid.

  • Debian & Stuff: My friend Asheesh and that guy I ate Thai food with once in Portland in a marvelously unfocused podcast that somehow connects everything up in the end. Only one episode so far; what are you guys waiting on? :P

  • Hacker Public Radio: Anyone can upload an episode, and multiple episodes are published each week, which makes this a grab bag to pick and choose from occasionally. While mostly about Linux and Free Software, the best episodes are those that veer far afield, such as the 40 minute river swim recording featured in Wildswimming in France.

Also, out of the podcasts I listed previously, I still listen to and enjoy Free As In Freedom, Off the Hook, and the Long Now Seminars.

PS: A nice podcatcher for the technically inclined is git-annex importfeed. Featuring a list of feeds in a text file, and distributed podcatching!
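
A minimal sketch of that workflow (the repository and file names are just examples):

# one-time setup: a dedicated git-annex repository for podcasts
git init podcasts && cd podcasts && git annex init
# feeds.txt holds one feed URL per line; import them all
xargs git annex importfeed < feeds.txt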


Wouter Verhelst: Playing with ExtreMon

Planet Debian - Tue, 09/12/2014 - 19:43

Munin is a great tool. If you can script it, you can monitor it with munin. Unfortunately, however, munin is slow; that is, it will take snapshots once every five minutes, and not look at systems in between. If you have a short load spike that takes just a few seconds, chances are pretty high that munin missed it. It does come with a great webinterfacefrontendthing that allows you to dig deep into the history of what you've been monitoring, though.

By the time munin tells you that your Kerberos KDCs are all down, you've probably had each of your users call you several times to tell you that they can't log in. You could use nagios or one of its brethren, but it takes about a minute before such tools will notice these things, too.

Maybe use CollectD then? Rather than check once every several minutes, CollectD will collect information every few seconds. Unfortunately, however, due to the performance requirements to accomplish that (without causing undue server load), writing scripts for CollectD is not as easy as it is for Munin. In addition, webinterfacefrontendthings aren't really part of the CollectD code (there are several, but most that I've looked at are lacking in some respect), so usually if you're using CollectD, you're missing out some.

And collectd doesn't do the nagios thing of actually telling you when things go down.

So what if you could see it when things go bad?

At one customer, I came in contact with Frank, who wrote ExtreMon, an amazing tool that allows you to visualize the CollectD output as things are happening, in a full-screen, fully customizable visualization of the data. The problem is that ExtreMon is rather... complex to set up. When I tried to talk Frank into helping me get things set up for myself so I could play with it, I got a reply along the lines of...

well, extremon requires a lot of work right now... I really want to fix foo and bar and quux before I start documenting things. Oh, and there's also that part which is a dead end, really. Ask me in a few months?

which is fair enough (I can't argue with some things being suboptimal), but the code exists, and (as I can see every day at $CUSTOMER) actually works. So I decided to just figure it out by myself. After all, it's free software, so if it doesn't work I can just read the censored code.

As the manual explains, ExtreMon is a plugin-based system; plugins can add information to the "coven", read information from it, or both. A typical setup will run several of them; e.g., you'd have the from_collectd plugin (which parses the binary network protocol used by collectd) to get raw data into the coven; you'd run several aggregator plugins (which take that raw data and interpret it, allowing you to express things along the lines of "if the system's load gets above X, set load.status to warning"); and you'd run at least one output plugin so that you can actually see the damn data somewhere.

While setting up ExtreMon as is isn't as easy as one would like, I did manage to get it to work. Here's what I had to do.

You will need:

  • A monitor with a FullHD (or better) resolution. Currently, the display frontend of ExtreMon assumes it has a FullHD display at all times. Even if you have a lower resolution. Or a higher one.
  • Python3
  • OpenJDK 6 (or better)

First, we clone the ExtreMon git repository:

git clone https://github.com/m4rienf/ExtreMon.git extremon
cd extremon

There's a README there which explains the bare necessities on getting the coven to work. Read it. Do what it says. It's not wrong. It's not entirely complete, though; it fails to mention that you need to

  • install CollectD (which is required for its types.db)
  • configure CollectD to have a line like Hostname "com.example.myhost" rather than the (usual) FQDNLookup true. This is because ExtreMon uses the java-style reverse hostname, rather than the internet-style FQDN. (See the sketch below.)
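
A hedged collectd.conf excerpt (the hostnames, and the use of the network plugin to ship values to the coven host, are assumptions about a typical setup):

# /etc/collectd/collectd.conf (excerpt)
Hostname "com.example.myhost"   # java-style reverse name, as ExtreMon expects
#FQDNLookup true                # leave disabled; it would yield the internet-style FQDN
LoadPlugin network
<Plugin network>
    Server "monitor.example.com" "25826"   # wherever from_collectd listens
</Plugin>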

Make sure the dump.py script outputs something from collectd. You'll know when it shows something not containing "plugin" or "plugins" in the name. If it doesn't, fiddle with the #x3. lines at the top of the from_collectd file until it does. Note that ExtreMon uses inotify to detect whether a plugin has been added to or modified in its plugins directory, so you don't need to do anything special when updating things.

Next, we build the java libraries (which we'll need for the display thing later on):

cd java/extremon
mvn install
cd ../client/
mvn install

This will download half the Internet, build some java sources, and drop the precompiled .jar files in your $HOME/.m2/repository.

We'll now build the display frontend. This is maintained in a separate repository:

cd ../..
git clone https://github.com/m4rienf/ExtreMon-Display.git display
cd display
mvn install

This will download the other half of the Internet, and then fail, because Frank forgot to add a few repositories. A patch (and pull request) is on github.

With that patch, it will build, but things will still fail when trying to sign a .jar file. I know of four ways to fix that particular problem:

  1. Add your passphrase for your java keystore, in cleartext, to the pom.xml file. This is a terrible idea.
  2. Pass your passphrase to maven, in cleartext, by using some command line flags. This is not much better.
  3. Ensure you use the maven-jarsigner-plugin 1.3.something or above, and figure out how the maven encrypted passphrase store thing works. I failed at that.
  4. Give up on trying to have maven sign your jar file, and do it manually. It's not that hard, after all.

If you're going with 1 through 3, you're on your own. For the last option, however, here's what you do. First, you need a key:

keytool -genkeypair -alias extremontest

After you enter all the information that keytool asks for, it will generate a self-signed code signing certificate, valid for six months, under the alias extremontest. Producing a code signing certificate with longer validity and/or one which is signed by an actual CA is left as an exercise to the reader.
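
That said, if you do want a longer-lived key, something along these lines should work (the alias, key algorithm, and five-year validity are just example values):

keytool -genkeypair -alias extremon -keyalg RSA -keysize 2048 -validity 1825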

Now, we will sign the .jar file:

jarsigner target/extremon-console-1.0-SNAPSHOT.jar extremontest
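You can let jarsigner double-check its own work:

jarsigner -verify -verbose target/extremon-console-1.0-SNAPSHOT.jar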

There. Who needs help from the internet to sign a .jar file? Well, apart from this blog post, of course.

You will now want to copy your freshly-signed .jar file to a location served by HTTPS. Yes, HTTPS, not HTTP; ExtreMon-Display will fail on plain HTTP sites.

Download this SVG file, and open it in an editor. Find all references to be.grep as well as those to barbershop and replace them with your own prefix and hostname. Store it along with the .jar file in a useful directory.

Download this JNLP file, and store it in the same location (or you might want to actually open it with "javaws" to see the very basic animated idleness of my system). Open it in an editor, and replace any references to barbershop.grep.be with the location where you've stored your signed .jar file.

Add the chalice_in_http plugin from the plugins directory. Make sure to configure it correctly (by way of its first few comment lines) so that its input and output filters are set up right.

Add the configuration snippet in section 2.1.3 of the manual (or something functionally equivalent) to your webserver's configuration. Make sure to have authentication—chalice_in_http is an input mechanism.

Add the chalice_out_http plugin from the plugins directory. Make sure to configure it correctly (by way of its first few comment lines) so that its input and output filters are set up right.

Add the configuration snippet in section 2.2.1 of the manual (or something functionally equivalent) to your webserver's configuration. Authentication isn't strictly required for the output plugin, but you might wish for it anyway if you care whether the whole internet can see your monitoring.

Now run javaws https://url/x3console.jnlp to start ExtreMon-Display.

At this point, I got stuck for several hours. Whenever I tried to run x3mon, this java webstart thing would simply tell me that things failed. When clicking on the "Details" button, I would find an error message along the lines of "Could not connect (name must not be null)". It would appear that the Java people believe this to be a proper error message for a fairly large number of constraints, all of which are slightly related to TLS connectivity. No, it's not the keystore. No, it's not an API issue, either. Or any of the loads of other rabbit holes that I dug myself into.

Instead, you should simply make sure you have Server Name Indication enabled. If you don't, the defaults in Java will cause it to refuse to even try to talk to your webserver.
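
A quick way to check what your webserver serves to an SNI-capable client (the hostname is an example, of course):

openssl s_client -connect monitor.example.com:443 -servername monitor.example.com </dev/null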

The ExtreMon github repository comes with a bunch of extra plugins; some are special-case for the place where I first learned about it (and should therefore probably be considered "examples"), others are general-purpose plugins which implement things like "is the system load within reasonable limits". Be sure to check them out.

Note also that while you'll probably be getting most of your data from CollectD, you don't actually need to do that; you can write your own plugins, completely bypassing collectd. Indeed, the from_collectd thing we talked about earlier is, simply, also a plugin. At $CUSTOMER, for instance, we have one plugin which simply downloads a file every so often and checks it against a checksum, to verify that a particular piece of nonlinear software hasn't gone astray yet again. That doesn't need collectd.
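
Purely as an illustration, such a checksum plugin might look roughly like the shell sketch below; the URL, the checksum, the label prefix, and especially the label=value output convention are assumptions on my part, so check the ExtreMon manual for the actual plugin protocol:

#!/bin/sh
# hypothetical stand-alone ExtreMon plugin: verify a download by checksum
URL="https://example.com/important.bin"   # example URL
EXPECTED="0123456789abcdef"               # placeholder for the known-good sha256
while sleep 60; do
    ACTUAL=$(curl -sf "$URL" | sha256sum | cut -d' ' -f1)
    if [ "$ACTUAL" = "$EXPECTED" ]; then
        echo "com.example.myhost.filecheck.status=0"
    else
        echo "com.example.myhost.filecheck.status=1"
    fi
done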

The SVG/JNLP example above will get you a small white bar, the width of which is defined by the cpu "idle" statistic, as reported by CollectD. You probably want more. The manual (chapter 4, specifically) explains how to do that.

Unfortunately, in order for things to work right, you need to pretty much manually create an SVG file with a fairly strict structure. This is the one thing which Frank tells me is a dead end and needs to be pretty much rewritten. If you don't feel like spending several days manually drawing a schematic representation of your network, you probably want to wait until Frank's finished. If you don't mind, or if you're like me and you're impatient, you'll be happy to know that you can use inkscape to make the SVG file. You'll just have to use the dialog behind Ctrl+Shift+X (Inkscape's XML editor). A lot.

Once you've done that though, you can see when your server is down. Like, now. Before your customers call you.


Open Source Training: Filter Drupal Content Based on File Type

Planet Drupal - Tue, 09/12/2014 - 19:11

One of our members asked an interesting question about Views.

They had a file field on their user profiles. In that field, the user could upload an image, an audio file, or link to a YouTube video. So far, so good. However, in Views, they only wanted to show that field if it contained a video.

Here's the solution to that problem. We're going to show you how to filter Drupal content based on the type of file that's attached to it.


Drupal Watchdog: Test Now! - Travis Integration for your Drupal Modules

Planet Drupal - Tue, 09/12/2014 - 18:46

Travis-CI is a free-for-OSS continuous integration server which has become very popular in the PHP world. Drush, Symfony, and dreditor all use it to frequently test their code bases and pull requests for regressions, and to ensure new functionality has the needed test coverage.

Compared to the current Drupal testbot, Travis-CI allows testing not only of simpletest on PHP 5.3 (for Drupal 7 projects), but of almost anything you can install on a Debian system: e.g. QUnit for JavaScript, Behat, PHPUnit, but also Ruby-based projects, Bash projects, Go projects, etc.

You can also test various scenarios in a matrix-like setup, e.g. different PHP versions to ensure your code runs on both PHP 5.3 and 5.4, or different versions of a dependent library.
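
For instance, a .travis.yml fragment like this (LIBRARY_VERSION is a made-up variable) expands into four builds, one per combination:

language: php
php:
  - 5.3
  - 5.4
env:
  - LIBRARY_VERSION=1.0
  - LIBRARY_VERSION=2.0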

This flexibility comes at a price, however: you need to set up the whole environment yourself. The selected PHP version (with xdebug) and composer are pre-installed, but that's it. The Drupal base installation, the running of the tests, the parsing of the test output, and ensuring dependencies are there are all your own responsibility.

And because of that, there are many different .travis.yml files floating around the net for various scenarios of setting up this or that, but in the end everyone re-invents the wheel. Until now…

As Easy as it Gets

I am proud to announce the drupal_ti project, which allows any module on drupal.org to easily leverage travis-ci.org for testing:

  • PHPUnit
  • SimpleTest
  • Behat

The process (which I will show in more detail below) is as simple as copying a generic .travis.yml.dist file to your module's root as .travis.yml, pushing your repository to GitHub, and activating the repository at travis-ci.org. That's it.

Oh, and while you are at it, if you add a .coveralls.yml file, then code coverage is automatically reported to coveralls.io, too (for PHPUnit).

All the hard work of installing drupal, running a web server, setting up Selenium, etc. is done by drupal_ti.

So you don't have to copy some .travis.yml you found on the net and spend hours debugging little edge cases (HHVM and sendmail, how to parse the simpletest output, etc.), but can depend on a proven and self-tested code base.

Features
  • Drupal 8 ready: drupal_ti supports both Drupal 7 and 8 modules. Use DRUPAL_TI_ENVIRONMENT="drupal-8" for your Drupal 8 modules.
  • Tested: drupal_ti tests its own code base for both Drupal 7 and Drupal 8 modules.
  • Modular architecture: drupal_ti has so-called 'runners'; you can either combine them, e.g. "phpunit simpletest", or run them as separate workers by specifying a matrix.
  • Environment aware: drupal_ti has a file for each environment, which makes the code generic for both Drupal 7 and 8.
  • Examples provided: drupal_ti provides easy examples of the needed files in tests/drupal-{7,8}/drupal_ti_test. So you can get started easily!
  • Extensible: By specifying DRUPAL_TI_SCRIPT_DIR_BEFORE or DRUPAL_TI_SCRIPT_DIR_AFTER you can easily create your own runners and environment includes that run before or after the main runners. This could even come from composer.
  • Usable for non-Travis CI: Because drupal-ti is just a command and .travis.yml just has some environment vars, you can copy the main declarations to some environment.sh file, set TRAVIS_BUILD_DIR, and use it locally, too; a rough sketch follows below.
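
A rough sketch of that local setup (the file name, the values, and the assumption that the drupal-ti command accepts the Travis lifecycle phases as arguments are all mine; check the drupal_ti README for the real invocation):

# environment.sh (hypothetical)
export TRAVIS_BUILD_DIR="$HOME/src/registry_autoload"
export DRUPAL_TI_ENVIRONMENT="drupal-7"
export DRUPAL_TI_MODULE_NAME="registry_autoload"
export DRUPAL_TI_SIMPLETEST_GROUP="Registry"

# then, locally:
. ./environment.sh
drupal-ti before_install install before_script script   # assumed phase names
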
An Example Conversion

My module registry_autoload uses simpletest on drupal.org to test its features. Now I want to test some advanced trait support, which needs PHP 5.4, so travis-ci.org is an option to do so.

Step 1 - Create the GitHub Repository and Push Your Code
  1. Sign in to github.com
  2. Click: + > New repository, enter: registry_autoload
  3. Click: Create repository

Copy the commands displayed by Github to push your code to GitHub. I like to use drupal.org as my upstream and GitHub as my origin remote:

$ git clone --branch 7.x-1.x Fabianx@git.drupal.org:project/registry_autoload.git
$ cd registry_autoload
$ git remote rename origin upstream
$ git remote add origin git@github.com:LionsAd/registry_autoload.git
$ git push -u origin 7.x-1.x

Step 2 - Activate Travis-ci.org

Now head over to travis-ci.org:

  1. Choose "Sign in with GitHub" and follow instructions
  2. Click on your name at the top right, "Fabian Franz" for me
  3. Click: "Sync now" if you don't see the repository yet
  4. Simply switch the toggle to "ON" for the project
  5. Click on the repository settings icon (the "tools icon")
  6. Toggle "Build only if .travis.yml is present"
  7. Click on "Build history"
  8. Leave the browser window open
Step 3 - Add drupal_ti .travis.yml

Now checkout a new branch, and add the .travis.yml file:

$ git checkout -b travis-integration
$ curl https://raw.githubusercontent.com/LionsAd/drupal_ti/master/.travis.yml.dist -O
$ mv .travis.yml.dist .travis.yml

Then, customize the following parts of the file:

# Configuration vars.
- DRUPAL_TI_MODULE_NAME="registry_autoload"
- DRUPAL_TI_SIMPLETEST_GROUP="Registry"

And:

matrix:
  # [[[ SELECT ANY OR MORE OPTIONS ]]]
  - DRUPAL_TI_RUNNERS="simpletest"

The simpletest group is returned from getInfo() in Drupal 7, but comes from an @group x annotation in Drupal 8. Despite the name of the variable, you could also put in a class like RegistryAutoloadTestCase: basically anything that SimpleTest accepts as the last argument on the command line. The key point is that this variable accepts spaces, e.g. "DrupalTi Test", which is otherwise very difficult to achieve when passing variables around.
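
For example, any of these values would be accepted (one at a time):

- DRUPAL_TI_SIMPLETEST_GROUP="Registry"
- DRUPAL_TI_SIMPLETEST_GROUP="DrupalTi Test"
- DRUPAL_TI_SIMPLETEST_GROUP="RegistryAutoloadTestCase"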

Now add the file and push to GitHub:

$ git add .travis.yml
$ git commit -m "Added travis integration"
$ git push origin travis-integration

Step 4 - Watch the Test Run

Now head back over to your browser window, and magically there will be a new build; click on it and you will see a matrix-like structure, here shown for build #2:

Click on PHP 5.4, then click the little "follow" button on the far right to follow the output.

After a while the build is finished and all tests passed:

Congratulations, your project is now tested on travis-ci.org!

Now merge the branch into your mainline, and whenever you want to test a change on travis-ci.org, just push a branch or make a pull request:

$ git checkout 7.x-1.x
$ git merge travis-integration
$ git push origin 7.x-1.x
# Also push the changes back to drupal.org
$ git push upstream 7.x-1.x

The easiest way to work with this kind of integration is to push all patches to origin first and once satisfied, push to upstream. That way GitHub and drupal.org are always in sync.

To be Continued…

In the next part of this series, I will explore how you can get started with unit testing locally and on travis-ci.org (using drupal_ti), and afterwards we will take a look at an easy Behat setup.

If you are curious and want to start now, take a look at the run-* scripts in:

Enjoy and please leave me feedback either in the Drupal issue queue or on the GitHub project page.

About the Author

Fabian Franz is a Senior Performance Engineer and Technical Lead at Tag1 Consulting. He is the author of the registry_autoload, service_container, and render_cache modules for Drupal 7, a contributor to Drupal 8 core through reviews and patches, and co-leader of the Twig initiative.

Tags: Testing, Contributed modules, Third-party tools

C.J. Adams-Collier: MySQL Meet-up 20141208

Planet Debian - Tue, 09/12/2014 - 18:31

I had an enjoyable time last night at Twitter with local MySQL DBAs and developers. We had an attendee who has no experience with SQL or programming at all. She is interested in organizing her collection of recipes and had heard a rumor that MySQL was a good tool to use for this task. She indicated that her desktop runs Windows 7. I think I'm going to encourage her to turn her concept into a community project, as she is not the first person I've met who wants to organize recipes!

We were hosted by Rob at Twitter, who used to work with Lisa back before she retired. He’s a member of the site reliability team and keeps the fail whale from rearing its blubbery head.

Pizza was provided by my dear friend and long-time open source buddy Gerry Narvaja with the assistance of the folks in the kitchen at Zeek’s.

We discussed new techniques in the areas of load balancing and high availability. Five nines is no longer the thing people talk about; instead, it's six nines. It's a brave new world out there!

I was not the only person who was excited about one of the latest features in MariaDB / MySQL to come out of HP, the high resolution time data types.

One of the attendees is an old hand at COBOL and was asking if anyone knows where one can get a COBOL runtime environment. I’ve never thought about that before… Let me ask the googs… Looks like there’s an active project called GNU COBOL which is officially part of the GNU project:

