Feed aggregator

Cyril Brulebois: How to serve Perl source files

Planet Debian - Mon, 11/08/2014 - 19:45

I noticed a while ago that a Perl script file included on my blog wasn’t served properly: since the charset wasn’t announced, web browsers didn’t display it correctly. The received file was still valid UTF-8 (hello, little © character), at least!

First, wrong intuition

Reading Apache’s /etc/apache2/conf.d/charset, it looks like the following directive might help:

AddDefaultCharset UTF-8

but comments there suggest reading the documentation! And indeed that alone isn’t sufficient since this would only affect text/plain and text/html. The above directive would have to be combined with something like this in /etc/apache2/mods-enabled/mime.conf:

AddType text/plain .pl

Real solution

To avoid any side effects on other file types, the easiest way forward seems to avoid setting AddDefaultCharset and to associate the UTF-8 charset with .pl files instead, keeping the text/x-perl MIME type, with this single directive (again in /etc/apache2/mods-enabled/mime.conf):

AddCharset UTF-8 .pl

Looking at response headers (wget -d) we’re moving from:

Content-Type: text/x-perl

to:

Content-Type: text/x-perl; charset=utf-8

Conclusion

Nothing really interesting, or new. Just a small reminder that tweaking options too hastily is sometimes a bad idea. In other news, another Perl script is coming up soon. :)

Categories: Elsewhere

Acquia: Displaying a Resultset from a Custom SQL Query in Drupal 7

Planet Drupal - Mon, 11/08/2014 - 17:20

Originally posted on Yellow Pencil's blog. Follow @kimbeaudin on Twitter!

In my last Drupal blog post I talked about how you can alter an existing view query with hook_views_query_alter, but what if you want to display a result set (your own ‘view’) from a custom SQL query?

Well, here's how.

Categories: Elsewhere

Juliana Louback: JSCommunicator - Setup and Architecture

Planet Debian - Mon, 11/08/2014 - 17:06

Preface

During Google Summer of Code 2014, I got to work on the Debian WebRTC portal under the mentorship of Daniel Pocock. Now I had every intention of meticulously documenting my progress at each step of development in this blog, but I was a bit late in getting the blog up and running. I’ll now be publishing a series of posts to recap all I’ve done during GSoC. Better late than never.

Intro

JSCommunicator is a SIP communication tool developed in HTML and JavaScript. The code was designed to make integrating JSCommunicator with a website or web app as simple as possible. It’s quite easy, really. However, I do think a more detailed explanation on how to set things up and a guide to the JSCommunicator architecture could be of use, particularly for those wanting to modify the code in any way.

Setup

To begin, please fork the JSCommunicator repo.

If you are new to git, feel free to follow the steps in the “Setup” and “Clone” sections in this post.

If you read the README file (which you always should), you’ll see that JSCommunicator needs a SIP proxy that supports SIP over Websockets transport. Some options are:

I didn’t find a tutorial for Kamailio setup, but I did find one for repro setup. And as a bonus, here you have a great tutorial on how to set up and configure your SIP proxy AND your TURN server.

In your project’s root directory, you’ll see a file named config-sample.js. Make a copy of that file named config.js. The config-sample.js file has comments that are very instructive. In sum, the only things you have to modify are the turn_servers property and the websocket property. In my case, debrtc.org was the domain registered for testing my project, so my config file has:

turn_servers: [ { server:"turn:debrtc.org" } ],

Note that unlike the sample, I didn’t set a username and password so SIP credentials will be used.

Now fill in the websocket property – here we use sip5060.net.

websocket: {
  servers: 'wss://ws.sip5060.net',
  connection_recovery_min_interval: 2,
  connection_recovery_max_interval: 30,
},

I’m pretty sure you can set the connection_recovery properties to whatever you like. Everything else is optional. If you set the user property, specifically display_name and uri, that will be used to fill in the Login form and takes preference over any ‘Remember me’ data. If you also set sip_auth_password, JSCommunicator will automatically log in.

All the other properties are for other optional functionalities and are well explained.

You’ll need some third-party JavaScript that is not included in the JSCommunicator git repo, namely jQuery version 1.4 or higher and ArbiterJS version 1.0. Download jQuery here and ArbiterJS here and place the .js files in your project’s root directory. Do make sure that you are including the correct filename in your HTML file. For example, in phone-dev.shtml, a file named jquery.js is included. The file you downloaded will likely have version numbers in its file name, so rename the downloaded file or change the content of src in your includes. This is uber trivial, but I’ve made the mistake several times.

You’ll also need JsSIP.js, which can be downloaded here. The same naming care must be taken as for jQuery and ArbiterJS. The recently added Instant Messaging component and some of the new features need jQuery-UI – so far version 1.11.* is known to work. From the downloaded .zip all you need is the jquery-ui-...js file and the jquery-ui-...css file, also to be placed in the project’s root directory. If you’ll be using the internationalization support, you’ll also need jquery.i18n.properties.js.

To try out JSCommunicator, deploy the website locally by copying your project directory to the Apache document root directory (provided you are using Apache, which is a good idea). You’ll likely have to restart your server before this works. Now, the demo .shtml pages only have a header with all the necessary includes, then a Server Side Include for the body of the page, with all the JSCommunicator HTML. The HTML content is in the file jscommunicator.inc. You can enable SSI on Apache, OR you can simply copy and paste the content of jscommunicator.inc into phone-dev.shtml. Start up Apache, open a browser and navigate to localhost/jscommunicator/phone-dev.shtml and you should see:

Actually, pending updates to JSCommunicator, you should see a brand new UI! But all the core stuff will be the same.
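For reference, the SSI route mentioned above usually comes down to a few standard mod_include directives — where exactly they go depends on your vhost, so treat this as a sketch:

```apache
# Enable the include filter for .shtml pages, e.g. inside the
# <Directory> block serving the JSCommunicator files:
Options +Includes
AddType text/html .shtml
AddOutputFilter INCLUDES .shtml
```

On Debian-based systems you’d also enable the module with a2enmod include and reload Apache.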

Architecture

Disclaimer: I’m explaining my view of the JSCommunicator architecture, which currently may not be 100% correct. But so far it’s been enough for me to make my modifications and additions to the code, so it could be of use. Once I get a JSCommunicator Founding Father’s stamp of approval, I’ll be sure to confirm the accuracy.

To explain how the JSCommunicator code interacts, I’ll describe the use of each code ‘item’, ignoring all the HTML and CSS, which will vary according to how you choose to use JSCommunicator. I’m also not going to explain jQuery, which is a dependency but not specific to WebRTC. The core JSCommunicator code is the following:

  • config.js
  • jssip-helper.js
  • parseuri.js
  • webrtc-check.js
  • JSCommUI.js
  • JSCommManager.js
  • make-release
  • init.js
  • Arbiter.js
  • JsSIP.js

Each of these files will be presented in what I hope is an intuitive order.

  • config.js - As expected, this file contains your custom configuration specifications, i.e. the servers being used to direct traffic, authentication credentials, and a series of properties to enable/disable optional functionalities in JSCommunicator.

The next three files could be considered ‘utils’:

  • jssip-helper.js - This will load the data from config.js necessary to run JSCommunicator, such as configurations relating to servers, websockets, connection specifications (timeout, recovery intervals), and user credentials. Properties for optional features are ignored of course.

  • parseuri.js - Contains a function to extract the different components of a URI.

  • webrtc-check.js - Verifies whether the browser is WebRTC-compatible by checking if it’s possible to enable camera and microphone access.
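As an illustration of what these ‘utils’ do, here is a hedged sketch of the kind of URI parsing parseuri.js performs — the function name parseSipUri and the returned fields are my own invention, not the actual parseuri.js API:

```javascript
// Hypothetical example of URI parsing, NOT the real parseuri.js API:
// split a SIP URI into scheme, user, host and port.
function parseSipUri(uri) {
  var m = /^(sips?):(?:([^@]+)@)?([^:;?]+)(?::(\d+))?/.exec(uri);
  if (!m) {
    return null; // not a SIP URI
  }
  return {
    scheme: m[1],
    user: m[2] || null,   // user part is optional
    host: m[3],
    port: m[4] ? parseInt(m[4], 10) : null
  };
}
```

So parseSipUri('sip:alice@example.org:5060') yields { scheme: 'sip', user: 'alice', host: 'example.org', port: 5060 }, and the rest of the code can work with the pieces instead of the raw string.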

These two are where the magic happens:

  • JSCommUI.js - Responsible for the UI component, controlling what becomes visible to the user, audio effects, client-side error handling, and gathering the data that will be fed to JSCommManager.js.

  • JSCommManager.js - Initializes the SIP User Agent to manage the session including (but not limited to) beginning/terminating the session, sending/receiving SIP messages and calls, and signaling the state of the session and other important notifications such as incoming calls, messages received, etc.

Now for some extras:

  • make-release - Combines the main .js files into one big file. Gets jssip-helper.js, parseuri.js, webrtc-check.js, JSCommUI.js and JSCommManager.js and spits out a new file JSComm.js with all that content. Now you understand why phone-dev.shtml includes each of the 5 files mentioned above whereas phone.shtml includes only JSComm.js, which doesn’t exist until you run make-release. That confused me a bit.

  • init.js - On page load, calls JSCommManager.js’ init function, which eventually calls JSCommUI.js’ init function. In other words, starts up the JSCommunicator app. I guess it’s only used to show how to start up the app. This could be done directly on the page you’re embedding JSCommunicator from. So maybe not entirely needed.
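Conceptually, the make-release step above is just concatenation; a rough, runnable sketch (the real script may do more, such as minification) is:

```shell
#!/bin/sh
set -e
# Stand-in files so this sketch runs anywhere; in the real project these
# are the actual JSCommunicator sources.
for f in jssip-helper.js parseuri.js webrtc-check.js JSCommUI.js JSCommManager.js; do
  printf '/* %s */\n' "$f" > "$f"
done

# The essence of make-release: one big JSComm.js out of the five files.
cat jssip-helper.js parseuri.js webrtc-check.js JSCommUI.js JSCommManager.js > JSComm.js
```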

Third party code:

  • Arbiter.js - JavaScript implementation of the publish/subscribe pattern, written by Matt Kruse. In JSCommunicator it’s used in JSCommManager.js to publish signals that direct the app’s behavior. For example, JSCommManager will publish a message indicating that the user received a call from a certain URI. In event-demo.js we subscribe to this kind of message, and when said message is received, an action can be performed, such as adding an entry to the app’s call history. Very cool.
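To make the pattern concrete, here is a minimal sketch of publish/subscribe — this is not the real Arbiter.js API, and the message name and payload are invented, but the idea is the same: publishers and subscribers never reference each other directly, only a message name.

```javascript
// Minimal publish/subscribe broker (illustrative, not Arbiter.js itself).
var broker = {
  handlers: {},
  subscribe: function (msg, fn) {
    // Register a handler for a message name.
    (this.handlers[msg] = this.handlers[msg] || []).push(fn);
  },
  publish: function (msg, data) {
    // Call every handler registered for this message name.
    (this.handlers[msg] || []).forEach(function (fn) { fn(data); });
  }
};

// e.g. a call-history component subscribes to a (hypothetical)
// incoming-call message...
var callLog = [];
broker.subscribe('call.incoming', function (data) { callLog.push(data.uri); });

// ...and the manager publishes it when a call arrives.
broker.publish('call.incoming', { uri: 'sip:bob@example.org' });
```

The publisher never needs to know who is listening, which is exactly why JSCommManager.js can stay decoupled from event-demo.js.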

  • JsSIP.js - Implements the SIP WebSocket transport in JavaScript. This ensures data is transported in adherence to the WebSocket protocol. In JSCommManager.js we initialize a SIP User Agent based on the implementation in JsSIP.js. The User Agent will ‘translate’ all of the JSCommunicator actions into SIP WebSocket format. For example, when sending an IM, the JSCommunicator app will collect the essential data, such as the origin URI, destination URI and an IM message, while the User Agent is in charge of formatting the data so that it can be transported in a SIP message unit. A SIP message contains a lot more information than just the sender, receiver and message. Of course, a lot of the info in a SIP message is irrelevant to the user, and in JSCommUI.js we filter through all that data and only display what the user needs to see.

Here’s a diagram of sorts to help you visualize how the code interacts:

In sum: 1 - JSCommUI.js handles what is displayed in the UI and feeds data to JSCommManager.js; 2 - JSCommManager.js actually does stuff, feeding data to be displayed to JSCommUI.js; 3 - JSCommManager.js calls functions from the three ‘utils’ – parseuri.js, webrtc-check.js and jssip-helper.js, the last of which organizes the data from config.js; 4 - JSCommManager.js initializes a SIP User Agent based on the implementation in JsSIP.js.

When making any changes to JSCommunicator, you will likely only be working with JSCommUI.js and JSCommManager.js.

Categories: Elsewhere

Drupal.org Featured Case Studies: Chatham House

Planet Drupal - Mon, 11/08/2014 - 16:51
Completed Drupal site or project URL: http://www.chathamhouse.org

Chatham House, home of the Royal Institute of International Affairs, is an independent policy institute and world-leading source of independent analysis, informed debate and influential ideas on how to build a prosperous and secure world for all.

They decided to rebuild their website to better promote their independent analysis and new research on international affairs, engaging with their audiences and disseminating their output as widely as possible in the interests of ultimately influencing all international decision-makers and -shapers.

They wanted a modern, responsive website with a clean design that would provide a better user experience and increased traffic. Additionally, one of the requirements was to integrate the website with their events and membership management software.

Chatham House have used Drupal as their CMS of choice since summer 2011. Their previous Drupal 6 based website was suffering from intermittent performance issues and a dated, non-responsive design, and it did not integrate with Chatham House’s internal membership management software.

Chatham House worked with Torchbox to produce a new, modern and responsive design on a new Drupal 7 instance. A lot of previous custom code was replaced with tried and tested combinations of contrib modules and a large, rolling content migration (more than 10,000 items) was required to move content from the old Drupal 6 site to the new Drupal 7 site.

Chatham House’s membership management software was also integrated with their new Drupal 7 website for user authentication and events content.

Key modules/theme/distribution used: Display Suite, Features, Feeds, Election, Linkit, Media, Menu block, Menu Trail By Path, Nagios monitoring, Custom Breadcrumbs, Context, Rabbit Hole, Secure Pages, oEmbed, Workbench, Views, Wysiwyg, mothership
Categories: Elsewhere

Appnovation Technologies: Bad Standards Irritate Me: Drupal Edition

Planet Drupal - Mon, 11/08/2014 - 16:27

As usual, for my blog posts on the Appnovation website, I’d like to kick this article off with a disclaimer:

Categories: Elsewhere

Cheppers blog: Drupalaton 2014 - how we saw it

Planet Drupal - Mon, 11/08/2014 - 15:06

‘This is the nicest Drupal camp I’ve ever been to.’ These words are quoted from Steve Purkiss, who said this to me on the last Drupalaton night during the cruise party. I think this describes the Drupalaton experience best: Drupal, summer and fun at the same time.

Categories: Elsewhere

Dries Buytaert: Help me write my DrupalCon Amsterdam keynote

Planet Drupal - Mon, 11/08/2014 - 14:56
Topic: Drupal, DrupalCon

For my DrupalCon Amsterdam keynote, I want to try something slightly different. Instead of coming up with the talk track myself, I want to "crowdsource" it. In other words, I want the wider Drupal community to have direct input on the content of the keynote. I feel this will provide a great opportunity to surface questions and ideas from the people who make Drupal what it is.

In the past, I've done traditional surveys to get input for my keynote and I've also done keynotes that were Q&A from beginning to end. This time, I'd like to try something in between.

I'd love your help to identify the topics of interest (e.g. scaling our community, the future of the web, information about Drupal's competitors, "headless" Drupal, the Drupal Association, the business of Open Source, sustaining core development, etc.). You can make your suggestions in the comments of this blog post or on Twitter (tag them with @Dries and #driesnote). I'll handpick some topics from all the suggestions, largely based on popularity but also based on how important and meaty I think the topic is.

Then, in the lead-up to the event, I'll create discussion opportunities on some or all of the topics so we can dive deeper on them together, and surface various opinions and ideas. The result of those deeper conversations will form the basis of my DrupalCon Amsterdam keynote.

So what would you like me to talk about? Suggest your topics in the comments of this blog post or on Twitter by tagging your suggestions with #driesnote and/or @Dries. Thank you!

Categories: Elsewhere

drunken monkey: Updating the Search API to D8 – Part 5: Using plugin derivatives

Planet Drupal - Mon, 11/08/2014 - 14:42

The greatest thing about all the refactoring in Drupal 8 is that, in general, a lot of those special Drupalisms used nowhere else were thrown out and replaced by sound design patterns, industry best practices and concepts that newcomers from other branches of programming will have an easy time of recognizing and using. While I can understand that this is an annoyance for some who have got used to the Drupalisms (and who haven't got a formal education in programming), as someone with a CS degree and a background in Java I was overjoyed at almost anything new I learned about Drupal 8, which, in my opinion, just made Drupal so much cleaner.
But, of course, this has already been discussed in a lot of other blog posts, podcasts, sessions, etc., by a lot of other people.

What I want to discuss today is one of the few instances where it seems this principle was violated and a new Drupalism, not known anywhere else (as far as I can tell, at least – if I'm mistaken I'd be grateful to be educated in the comments), introduced: plugin derivatives.
Probably some of you have already seen it there somewhere, especially if you were foolish enough to try to understand the new block system (if you succeeded, I salute you!), but I bet (or, hope) most of you had the same reaction as me: a very puzzled look and an involuntary “What the …?” In my case, this question was all the more pressing because I first stumbled upon plugin derivatives in my own module. Frédéric Hennequin had done a lot of the initial work of porting the module, and since there was a place where they fit perfectly, he used them. Luckily, I came across this in Szeged, where Bram Goffings was close by and could explain this to me slowly until it sank in. (Looking at the handbook documentation now, it actually looks quite good, but I remember that, back then, I had no idea what they were talking about.)
So, without (even) further ado, let me now share this arcane knowledge with you!

What, and why, are plugin derivatives? The problem

Plugin derivatives, even though very Drupalistic (?), are actually a rather elegant solution for an interesting (and pressing) problem: dynamically defining plugins.
For example, take Search API's "datasource" plugins. These provide item types that can be indexed by the Search API, a further abstraction from the "entity" concept to be able to handle non-entities (or, indeed, even non-Drupal content). We of course want to provide an item type for each entity type, but we don't know beforehand which entity types there will be on a site – also, since entities can be accessed with a common API we can use the same code for all entity types and don't want a new class for each.
In Drupal 7, this was trivial to do:

<?php
/**
 * Implements hook_search_api_item_type_info().
 */
function search_api_search_api_item_type_info() {
  $types = array();
  foreach (entity_get_property_info() as $type => $property_info) {
    if ($info = entity_get_info($type)) {
      $types[$type] = array(
        'name' => $info['label'],
        'datasource controller' => 'SearchApiEntityDataSourceController',
        'entity_type' => $type,
      );
    }
  }
  return $types;
}
?>

Since plugin definition happens in a hook, we can just loop over all entity types, set the same controller class for each, and put an additional entity_type key into the definition so the controller knows which entity type it should use.

Now, in Drupal 8, there's a problem: as discussed in the previous part of this series, plugins now generally use annotations on the plugin class for the definition. That, in turn, would mean that a single class can only represent a single plugin, and since you can't (or at least really, really shouldn't) dynamically define classes there's also not really any way to dynamically define plugins.
One possible workaround would be to just use the alter hook which comes with nearly any plugin type and dynamically add the desired plugins there – however, that’s not really ideal as a general solution for the problem, especially since it also occurs in core in several places. (The clearest example here is probably menu blocks – for each menu, you want one block plugin defined.)

The solution

So, as you might have guessed, the solution to this problem was the introduction of the concept of derivatives. Basically, every time you define a new plugin of any type (as long as the manager inherits from DefaultPluginManager), you can add a deriver key to its definition, referencing a class. This deriver class will then automatically be called when the plugin system looks for plugins of that type and allows the deriver to multiply the plugin’s definition, adding or altering any definition keys as appropriate. It is, essentially, another layer of altering that is specific to one plugin, serves a specific purpose (i.e., multiplying that plugin’s definition) and occurs before the general alter hook is invoked.

Hopefully, an example will make this clearer. Let's see how we used this system in the Search API to solve the above problem with datasources.

How to use derivatives

So, how do we define several datasource plugins with a single class? Once you understand how it works (or what it’s supposed to do), it’s thankfully pretty easy. We first create our plugin as usual (or just copy it from Drupal 7 and fix the class name and namespace), but add the deriver key and internally assume that the plugin definition has an additional entity_type key which will tell us which entity type this specific datasource plugin should work with.

So, we put the following into src/Plugin/SearchApi/Datasource/ContentEntityDatasource.php:

<?php
namespace Drupal\search_api\Plugin\SearchApi\Datasource;

/**
 * @SearchApiDatasource(
 *   id = "entity",
 *   deriver = "Drupal\search_api\Plugin\SearchApi\Datasource\ContentEntityDatasourceDeriver"
 * )
 */
class ContentEntityDatasource extends DatasourcePluginBase {

  public function loadMultiple(array $ids) {
    // In the real code, this of course uses dependency injection, not a global function.
    return entity_load_multiple($this->pluginDefinition['entity_type'], $ids);
  }

  // Plus a lot of other methods …

}
?>

Note that, even though we can skip required keys in the definition (like label here), we still have to set an id. This is called the "plugin base ID" and will be used as a prefix to all IDs of the derivative plugin definitions, as we'll see in a bit.
The deriver key is of course the main thing here. The namespace and name are arbitrary (the standard is to use the same namespace as the plugin itself, but append "Deriver" to the class name), the class just needs to implement the DeriverInterface – nothing else is needed. There is also ContainerDeriverInterface, a sub-interface for when you want dependency injection for creating the deriver, and an abstract base class, DeriverBase, which isn't very useful though, since the interface only has two methods. Concretely, the two methods are: getDerivativeDefinitions(), for getting all derivative definitions, and getDerivativeDefinition() for getting a single one – the latter usually simply a two-liner using the former.

Therefore, this is what src/Plugin/SearchApi/Datasource/ContentEntityDatasourceDeriver.php looks like:

<?php
namespace Drupal\search_api\Plugin\SearchApi\Datasource;

use Drupal\Component\Plugin\Derivative\DeriverInterface;
use Drupal\Component\Plugin\PluginBase;
use Drupal\Core\Entity\ContentEntityType;
use Drupal\Core\StringTranslation\StringTranslationTrait;

class ContentEntityDatasourceDeriver implements DeriverInterface {

  use StringTranslationTrait;

  public function getDerivativeDefinition($derivative_id, $base_plugin_definition) {
    $derivatives = $this->getDerivativeDefinitions($base_plugin_definition);
    return isset($derivatives[$derivative_id]) ? $derivatives[$derivative_id] : NULL;
  }

  public function getDerivativeDefinitions($base_plugin_definition) {
    $base_plugin_id = $base_plugin_definition['id'];
    $plugin_derivatives = array();
    foreach (\Drupal::entityManager()->getDefinitions() as $entity_type_id => $entity_type_definition) {
      if ($entity_type_definition instanceof ContentEntityType) {
        $label = $entity_type_definition->getLabel();
        $plugin_derivatives[$entity_type_id] = array(
          'id' => $base_plugin_id . PluginBase::DERIVATIVE_SEPARATOR . $entity_type_id,
          'label' => $label,
          'description' => $this->t('Provides %entity_type entities for indexing and searching.', array('%entity_type' => $label)),
          'entity_type' => $entity_type_id,
        ) + $base_plugin_definition;
      }
    }
    return $plugin_derivatives;
  }

}
?>

As you see, getDerivativeDefinitions() just returns an array with derivative plugin definitions – keyed by what's called their "derivative ID" and their id key set to a combination of base ID and derivative ID, separated by PluginBase::DERIVATIVE_SEPARATOR (which is simply a colon (":")). We additionally set the entity_type key for all definitions (as we used in the plugin) and also set the other definition keys (as defined in the annotation) accordingly.

And that's it! If your plugin type implements DerivativeInspectionInterface (which the normal PluginBase class does), you also have handy methods for finding out a plugin's base ID and derivative ID (if any). But usually the code using the plugins doesn't need to be aware of derivatives and can simply handle them like any other plugin. Just be aware that this leads to plugin IDs now all potentially containing colons, and not only the usual "alphanumerics plus underscores" ID characters.

A side note about nomenclature

This is a bit confusing, actually, especially as older documentation hasn't been updated: the new individual plugins that were derived from the base definition are referred to as "derivative plugin definitions", "plugin derivatives" or just "derivatives". Confusingly, though, the class creating the derivatives was also called a "derivative class" (and the key in the plugin definition was, consequently, derivative).
In #1875996: Reconsider naming conventions for derivative classes, this discrepancy was discussed and eventually resolved by renaming the classes creating derivative definitions (along with their interfaces, etc.) to "derivers".
If you are reading documentation that is more than a few months old, hopefully this will prevent you from some confusion.

Image credit: DonkeyHotey

Categories: Elsewhere

Paragon-Blog: Performing DRD actions from Drush: Drupal power tools, part 2 of 4

Planet Drupal - Mon, 11/08/2014 - 10:33

Drupal Remote Dashboard (DRD) fully supports Drush, and it does this in two ways: DRD provides all its actions as Drush commands, and DRD can trigger the execution of Drush commands on remote domains. This blog post is part of a series (see part 1 of 4) that describes all the possibilities around these two powerful tools. This is part 2, which describes how to trigger any of DRD's actions from the command line by utilizing Drush.

Categories: Elsewhere

Sylvestre Ledru: clang 3.4, 3.5 and 3.6 are now coinstallable in Debian

Planet Debian - Mon, 11/08/2014 - 07:47

Clang is finally co-installable on Debian: 3.4, 3.5 and the current trunk (snapshot) can be installed together.

So, just like gcc, the different versions can be called with clang-3.4, clang-3.5 or clang-3.6.

/usr/bin/clang, /usr/bin/clang++, /usr/bin/scan-build and /usr/bin/scan-view are now handled through the llvm-defaults package.

llvm-defaults is also now managing clang-check, clang-tblgen, c-index-test, clang-apply-replacements, clang-tidy, pp-trace and clang-query.

Changes are also available on llvm.org/apt/.
The next step will be to also manage llvm-defaults on llvm.org/apt to simplify the transition for people using these packages.

So, with:

# /etc/apt/sources.list
deb http://llvm.org/apt/unstable/ llvm-toolchain main
deb http://llvm.org/apt/unstable/ llvm-toolchain-3.4 main
deb http://llvm.org/apt/unstable/ llvm-toolchain-3.5 main

$ apt-get install clang-3.4 clang-3.5 clang-3.6
$ clang-3.4 --version
Debian clang version 3.4.2 (branches/release_34) (based on LLVM 3.4.2)
Target: x86_64-pc-linux-gnu
Thread model: posix
$ clang-3.5 --version
Debian clang version 3.5.0-+rc2-1~exp1 (tags/RELEASE_350/rc2) (based on LLVM 3.5.0)
Target: x86_64-pc-linux-gnu
Thread model: posix
$ clang-3.6 --version
Debian clang version 3.6.0-svn214990-1~exp1 (trunk) (based on LLVM 3.6.0)
Target: x86_64-pc-linux-gnu
Thread model: posix

Original post blogged on b2evolution.

Catégories: Elsewhere

Paul Tagliamonte: DebConf 14

Planet Debian - lun, 11/08/2014 - 03:06

I’ll be giving a short talk on Debian and Docker!

I’ll prepare some slides to give a brief talk about Debian and Docker, then open it up to have a normal session to talk over what Docker is and isn’t, and how we can use it in Debian better.

Hope to see y’all in Portland!

Catégories: Elsewhere

Ian Donnelly: dpkg Woes

Planet Debian - lun, 11/08/2014 - 02:01

Hi Everybody,

The original plan for my Google Summer of Code project involved creating a merge tool for Elektra, including a kdb merge command; that part is finished and already integrated into the newest release of Elektra, 0.8.7. The next step was to patch dpkg to add an option for conffile merging. The basic overview of how this would work is:

  • Original versions of conffiles would be saved somewhere on the system to serve as the base files
  • The user's current version of the conffile would serve as ours
  • The package maintainer's version would be used as theirs
  • The result of a three-way merge would overwrite the user's current version and be saved as the new base
  • Dpkg would need an option to perform a three-way merge with a hook that allowed any tool to be used for the task, not just Elektra.

Obviously, all of these things require patching dpkg, although not too invasively. Luckily, we found a previous attempt to patch dpkg with a three-way merge hook; however, it is a few years old and was never included in dpkg. Felix decided to update this patch to work with the new versions of dpkg and cleaned up a bit of redundancy, then resubmitted it to see if the dpkg maintainers would be interested in such a patch and to start a dialogue about including all the patches we need. Unfortunately there has been no response from the dpkg team, and many bugs on their tracker have been sitting unanswered for years. Additionally, dpkg's source can be quite complex, and it would require a ton of effort for us to make all these patches with no guarantee of integration. As a result, we have decided to go another route (which I will be posting about soon).

I just wanted to update anybody interested in the progress of my Google Summer of Code project by sharing our experience with dpkg. We have come up with a new solution that is already looking great, and you will hear a lot more about it in this final week of Google Summer of Code.

Stay Tuned!
- Ian S. Donnelly

Catégories: Elsewhere

Matthew Garrett: Birthplace

Planet Debian - lun, 11/08/2014 - 01:44
For tedious reasons, I will at this stage point out that I was born in Galway, Ireland.

Catégories: Elsewhere

Ian Wienand: Finding out if you're a Rackspace instance

Planet Debian - lun, 11/08/2014 - 01:30

Different hosting providers do things slightly differently, so it's sometimes handy to be able to figure out where you are. Rackspace is based on Xen, and their provided images should include the xenstore-ls command. xenstore-ls vm-data will give you handy provider and even region fields to let you know where you are.

function is_rackspace {
    if [ ! -f /usr/bin/xenstore-ls ]; then
        return 1
    fi
    /usr/bin/xenstore-ls vm-data | grep -q "Rackspace"
}

if is_rackspace; then
    echo "I am on Rackspace"
fi
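A related check that doesn't depend on xenstore-ls being installed is to look at the hypervisor type the kernel exposes under /sys on Xen guests. Note this only tells you the machine runs under Xen, not that it is a Rackspace instance, so it's a weaker signal than the vm-data fields:

```shell
# Alternative check that doesn't need xenstore-ls: the kernel exposes the
# hypervisor type under /sys on Xen guests. This only says "Xen", not
# "Rackspace", so it's a weaker signal than the vm-data fields.
if [ -r /sys/hypervisor/type ] && grep -q xen /sys/hypervisor/type; then
    echo "running under Xen"
else
    echo "not Xen (or /sys/hypervisor not available)"
fi
```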

Other reading about how this works:

Catégories: Elsewhere

Pronovix: Drupal uncoded

Planet Drupal - lun, 11/08/2014 - 00:09

In open source, something magical happens when like-minded people meet. You find out somebody else is dealing with a similar problem, you combine ideas, and before you know it an ad hoc working group has formed to fix the problem. It's an open secret that at DrupalCon the good stuff happens in the BOF sessions.

As the Drupal community has grown, we've seen new community events spring up that cater to whole groups of people who previously weren't able to meet and talk:

Catégories: Elsewhere

Simon Josefsson: Wifi on S3 with Replicant

Planet Debian - dim, 10/08/2014 - 20:02

I’m using Replicant on my main phone. As I’ve written before, I didn’t get Wifi to work. The other day leth in #replicant pointed me towards a CyanogenMod discussion about a similar issue. The fix does indeed work, and allowed me to connect to wifi networks and to set up my phone for Internet sharing. Digging deeper, I found a CM Jira issue about it, and ultimately a code commit. It seems the issue is that more recent S3s come with a Murata Wifi chipset that uses MAC addresses not known back in the Android 4.2 (CM-10.1.3 and Replicant-4.2) days. Pulling in the latest fixes for macloader.cpp solves this problem for me, although I still need to load the non-free firmware images that I get from CM-10.1.3. I’ve created a pull request fixing macloader.cpp for Replicant 4.2 if someone else is curious about the details. You have to rebuild your OS with the patch for things to work (if you don’t want to, the workaround using /data/.cid.info works fine), and install some firmware blobs as below.

adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_apsta.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_apsta.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_mfg.bin_b0 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_mfg.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_mfg.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_p2p.bin_b0 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_p2p.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_p2p.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_sta.bin_b0 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_sta.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_sta.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt_murata /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt_murata_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt_semcosh /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt_murata /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt_murata_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt_semcosh /system/vendor/firmware/
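The long list of pushes above can also be generated with a loop. This sketch creates a couple of stand-in files so it runs anywhere, and only prints the adb commands rather than executing them; point src at the real cm-10.1.3-i9300/system/etc/wifi directory and pipe the output to sh once it looks right.

```shell
# Sketch: generate the adb push commands with a loop instead of typing each
# one. The two touched files are stand-ins so the demo runs anywhere; the
# printed commands are not executed here.
src=$(mktemp -d)
touch "$src/bcmdhd_sta.bin_b0" "$src/nvram_net.txt_murata"

for f in "$src"/*; do
    printf 'adb push %s /system/vendor/firmware/\n' "$f"
done
```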

Catégories: Elsewhere

Cyril Brulebois: Why is my package blocked?

Planet Debian - dim, 10/08/2014 - 19:45

A bit of history: a while ago, udeb-producing packages were getting frozen on a regular basis when a d-i release was about to be cut. While I wasn’t looking at the time, I can easily understand the reasons behind that: d-i is built upon many components, it takes some time to make sure it’s basically in shape for a release, and it’s very annoying when a regression sneaks in right before the installation images get built.

I took over d-i release maintenance in May 2012 and only a few uploads happened before the wheezy freeze. I was only discovering the job at the time, and I basically released whatever was in testing then. The freeze began right after that (end of June), so I started double checking things affecting d-i (in addition to or instead of the review performed by other release team members), and unblocking packages when changes seemed safe, or once they were tested.

A few uploads happened after the wheezy release and there’s already a Jessie Alpha 1 release. I was about to release Jessie Beta 1 after some fair bits of testing, a debian-installer upload, and the only remaining bits were: building installation images (hello Steve), and of course communication (mail announce and website update).

Unfortunately a new upstream release reached testing in the meanwhile, breaking the installer in several ways. I’ll give details below, of course not because I want to point fingers at the maintainer, but to illustrate the ramifications that a single package’s migration to testing can induce.

  • parted 3.2-1 was uploaded on 2014-07-30 and migrated on 2014-08-05.

  • parted 3.2-2 fixed a regression reported in Ubuntu only (LP#1352252) which I also hit with images built locally after that migration.

  • I then built some images locally using fixed parted packages but then discovered that auto-lvm was still broken, which I reported in #757417.

  • After some investigation Colin confirmed some behavioral changes in this new parted release, which imply the need for an update of several other partman-* components: #757661, #757662, #757663, #757664, #757665, #757666.

  • Thankfully fixes have been added for all of those (except partman-zfs at the time of writing), but more testing is needed, before possibly urgenting those packages so that they get into testing as soon as possible.

Since I’d like to avoid such an experience in the future, I’ll probably reintroduce the old method and freeze all udeb-producing packages during the next d-i releases.

So you know why it might happen. Your next question might be: “What to do when your package is getting caught in that net?”. In that case, please get in touch with both debian-release@ and debian-boot@ asking for an unblock. I’ll then review your particular package, and either let it migrate to testing, or delay it until after the release.

Catégories: Elsewhere

VM(doh): OPCache Module for Drupal

Planet Drupal - dim, 10/08/2014 - 13:08

Last Friday, we published the start of our OPCache module for Drupal. While it's still lacking a few features to be ready for a tagged release, we thought it'd be a good idea to get some working code out there.

The goal for this module is to allow Drupal site administrators to clear their opcode cache if they are using the PHP OPcache extension (also known as Zend OPcache or Zend Optimizer+) as well as provide an interface similar to the Memcache Admin module.

As of right now, the only implemented feature is cache flushing. We understand that you might be running on multiple webservers (we build a lot of sites that run on multiple webservers), so we included the ability to flush caches on all of your webservers at once.
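As a sketch of what flushing across several webservers amounts to, the loop below prints one HTTP request per host; the hostnames and the /opcache-flush.php endpoint are invented for illustration (the module itself handles this through Drupal, not curl).

```shell
# Sketch of a multi-webserver flush: one HTTP request per host. The hostnames
# and the /opcache-flush.php endpoint are invented; this prints the requests
# rather than sending them.
for host in web1.example.com web2.example.com; do
    echo curl -fsS "http://$host/opcache-flush.php"
done
```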

Go try the module and feel free to submit patches!

Catégories: Elsewhere

Russ Allbery: Orphaning some software packages

Planet Debian - dim, 10/08/2014 - 06:23

As mentioned in my previous message, I'm orphaning various software packages that I was maintaining as part of my job at Stanford, or that for some other reason I'm no longer using directly. My goal is to free up some time and space for new work projects at my new employer, for other hobbies, and to take better care of other projects that I'm not orphaning.

The following software packages are now orphaned, and marked as such on their web pages:

I'm also stepping down from Debian package maintenance for the OpenAFS and Shibboleth packages, and have already notified the relevant communities. For the Debian packages, and for the above software packages, I will continue to provide security support until someone else can take them over.

WebAuth is going to be in a state of transition as noted on its page:

My successor at Stanford will be continuing maintenance and development, but that person hasn't been hired yet, and it will take some time for them to ramp up and start making new releases (although there may be at least one interim release with work that I'm finishing now). It's therefore not strictly orphaned, but it's noted that way on my software pages until someone else at Stanford picks it up.

Development of the other packages that I maintain should continue as normal, with a small handful of exceptions. The following packages are currently in limbo, since I'm not sure if I'll have continued use for them:

I'm not very happy with the current design of either kadmin-remctl or wallet, so if I do continue to maintain them (and have time to work on them), I am likely to redesign them considerably.

For all of my packages, I've been adding clones of the repository to GitHub as an additional option besides my personal Git repository server. I'm of two minds about using (and locking myself into) more of the GitHub infrastructure, but repository copies on GitHub felt like they might be useful for anyone who wanted to fork or take over maintenance. I will be adding links to the GitHub repositories to the software pages for things that are in Git.

If you want to take over any of the orphaned software packages, feel free. When you're ready for the current software pages to redirect to their new homes, let me know.

Catégories: Elsewhere
