Elsewhere

drunken monkey: Updating the Search API to D8 – Part 5: Using plugin derivatives

Planet Drupal - Mon, 11/08/2014 - 14:42

The greatest thing about all the refactoring in Drupal 8 is that, in general, a lot of those special Drupalisms used nowhere else were thrown out and replaced by sound design patterns, industry best practices, and concepts that newcomers from other branches of programming will have an easy time recognizing and using. While I can understand that this is an annoyance for some who have got used to the Drupalisms (and who haven't got a formal education in programming), as someone with a CS degree and a background in Java I was overjoyed at almost anything new I learned about Drupal 8, which, in my opinion, just made Drupal so much cleaner.
But, of course, this has already been discussed in a lot of other blog posts, podcasts, sessions, etc., by a lot of other people.

What I want to discuss today is one of the few instances where it seems this principle was violated and a new Drupalism was introduced, one not known anywhere else (as far as I can tell, at least – if I'm mistaken, I'd be grateful to be educated in the comments): plugin derivatives.
Probably some of you have already seen them used somewhere, especially if you were foolish enough to try to understand the new block system (if you succeeded, I salute you!), but I bet (or, hope) most of you had the same reaction as me: a very puzzled look and an involuntary “What the …?” In my case, this question was all the more pressing because I first stumbled upon plugin derivatives in my own module: Frédéric Hennequin had done a lot of the initial work of porting the module, and since there was a place where they fit perfectly, he used them. Luckily, I came across this in Szeged, where Bram Goffings was close by and could explain it to me slowly until it sank in. (Looking at the handbook documentation now, it actually looks quite good, but I remember that, back then, I had no idea what they were talking about.)
So, without (even) further ado, let me now share this arcane knowledge with you!

What, and why, are plugin derivatives?

The problem

Plugin derivatives, even though very Drupalistic (?), are actually a rather elegant solution for an interesting (and pressing) problem: dynamically defining plugins.
For example, take Search API's "datasource" plugins. These provide item types that can be indexed by the Search API, a further abstraction from the "entity" concept to be able to handle non-entities (or, indeed, even non-Drupal content). We of course want to provide an item type for each entity type, but we don't know beforehand which entity types there will be on a site – also, since entities can be accessed with a common API we can use the same code for all entity types and don't want a new class for each.
In Drupal 7, this was trivial to do:

<?php
/**
 * Implements hook_search_api_item_type_info().
 */
function search_api_search_api_item_type_info() {
  $types = array();
  // Define one item type per entity type, all using the same controller
  // class, and remember the entity type in the definition itself.
  foreach (entity_get_property_info() as $type => $property_info) {
    if ($info = entity_get_info($type)) {
      $types[$type] = array(
        'name' => $info['label'],
        'datasource controller' => 'SearchApiEntityDataSourceController',
        'entity_type' => $type,
      );
    }
  }
  return $types;
}
?>

Since plugin definition happens in a hook, we can just loop over all entity types, set the same controller class for each, and put an additional entity_type key into the definition so the controller knows which entity type it should use.

Now, in Drupal 8, there's a problem: as discussed in the previous part of this series, plugins now generally use annotations on the plugin class for the definition. That, in turn, would mean that a single class can only represent a single plugin, and since you can't (or at least really, really shouldn't) dynamically define classes there's also not really any way to dynamically define plugins.
One possible workaround would be to just use the alter hook which comes with nearly every plugin type and dynamically add the desired plugins there – however, that's not really ideal as a general solution for the problem, especially since it also occurs in core in several places. (The clearest example here is probably menu blocks – for each menu, you want one block plugin defined.)

The solution

So, as you might have guessed, the solution to this problem was the introduction of the concept of derivatives. Basically, when you define a new plugin of any type (as long as its manager inherits from DefaultPluginManager), you can add a deriver key to its definition, referencing a class. This deriver class will then automatically be called when the plugin system looks for plugins of that type, and allows the deriver to multiply the plugin's definition, adding or altering any definition keys as appropriate. It is, essentially, another layer of altering that is specific to one plugin, serves a specific purpose (i.e., multiplying that plugin's definition) and occurs before the general alter hook is invoked.

Hopefully, an example will make this clearer. Let's see how we used this system in the Search API to solve the above problem with datasources.

How to use derivatives

So, how do we define several datasource plugins with a single class? Once you understand how it works (or what it's supposed to do), it's thankfully pretty easy. We first create our plugin as normal (or just copy it from Drupal 7 and fix the class name and namespace), but add the deriver key and internally assume that the plugin definition contains an additional entity_type key which tells us which entity type this specific datasource plugin should work with.

So, we put the following into src/Plugin/SearchApi/Datasource/ContentEntityDatasource.php:

<?php
namespace Drupal\search_api\Plugin\SearchApi\Datasource;

/**
 * @SearchApiDatasource(
 *   id = "entity",
 *   deriver = "Drupal\search_api\Plugin\SearchApi\Datasource\ContentEntityDatasourceDeriver"
 * )
 */
class ContentEntityDatasource extends DatasourcePluginBase {

  public function loadMultiple(array $ids) {
    // In the real code, this of course uses dependency injection, not a global function.
    return entity_load_multiple($this->pluginDefinition['entity_type'], $ids);
  }

  // Plus a lot of other methods …

}
?>

Note that, even though we can skip even required keys in the definition (like label here), we still have to set an id. This is called the "plugin base ID" and will be used as a prefix to all IDs of the derivative plugin definitions, as we'll see in a bit.
The deriver key is of course the main thing here. The namespace and name are arbitrary (the convention is to use the same namespace as the plugin itself, but append "Deriver" to the class name); the class just needs to implement the DeriverInterface – nothing else is needed. There is also ContainerDeriverInterface, a sub-interface for when you want dependency injection for creating the deriver, and an abstract base class, DeriverBase, which isn't very useful though, since the interface only has two methods. Concretely, the two methods are: getDerivativeDefinitions(), for getting all derivative definitions, and getDerivativeDefinition(), for getting a single one – the latter is usually just a two-liner using the former.

Therefore, this is what src/Plugin/SearchApi/Datasource/ContentEntityDatasourceDeriver.php looks like:

<?php
namespace Drupal\search_api\Plugin\SearchApi\Datasource;

use Drupal\Component\Plugin\Derivative\DeriverInterface;
use Drupal\Component\Plugin\PluginBase;
use Drupal\Core\Entity\ContentEntityType;
use Drupal\Core\StringTranslation\StringTranslationTrait;

class ContentEntityDatasourceDeriver implements DeriverInterface {

  // Provides $this->t() for the translatable description below.
  use StringTranslationTrait;

  public function getDerivativeDefinition($derivative_id, $base_plugin_definition) {
    $derivatives = $this->getDerivativeDefinitions($base_plugin_definition);
    return isset($derivatives[$derivative_id]) ? $derivatives[$derivative_id] : NULL;
  }

  public function getDerivativeDefinitions($base_plugin_definition) {
    $base_plugin_id = $base_plugin_definition['id'];
    $plugin_derivatives = array();
    foreach (\Drupal::entityManager()->getDefinitions() as $entity_type_id => $entity_type_definition) {
      // Only derive datasources for content entities, not config entities.
      if ($entity_type_definition instanceof ContentEntityType) {
        $label = $entity_type_definition->getLabel();
        $plugin_derivatives[$entity_type_id] = array(
          'id' => $base_plugin_id . PluginBase::DERIVATIVE_SEPARATOR . $entity_type_id,
          'label' => $label,
          'description' => $this->t('Provides %entity_type entities for indexing and searching.', array('%entity_type' => $label)),
          'entity_type' => $entity_type_id,
        ) + $base_plugin_definition;
      }
    }
    return $plugin_derivatives;
  }

}
?>

As you can see, getDerivativeDefinitions() just returns an array of derivative plugin definitions, keyed by what's called their "derivative ID", with their id key set to a combination of base ID and derivative ID, separated by PluginBase::DERIVATIVE_SEPARATOR (which is simply a colon, ":"). We additionally set the entity_type key for all definitions (which we use in the plugin class) and also set the other definition keys (as defined in the annotation) accordingly.

And that's it! If your plugin class implements DerivativeInspectionInterface (which the normal PluginBase class does), you also have handy methods for finding out a plugin's base ID and derivative ID (if any). But usually the code using the plugins doesn't need to be aware of derivatives and can simply handle them like any other plugin. Just be aware that this means plugin IDs can now potentially contain colons, not only the usual "alphanumerics plus underscores" ID characters.
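Concretely, using one of the derived plugins could look like the following hypothetical sketch – note that the plugin manager's service name here is an assumption for illustration, not taken from the actual Search API code:

<?php
// Hypothetical usage sketch: the service name of the datasource plugin
// manager is an assumption, not the actual Search API service name.
$manager = \Drupal::service('plugin.manager.search_api.datasource');

// Derivative plugin IDs have the form "base_id:derivative_id", so the
// datasource derived for nodes is addressed as "entity:node".
$datasource = $manager->createInstance('entity:node');

// From here on, calling code treats it like any other plugin.
$items = $datasource->loadMultiple(array(1, 2, 3));
?>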

A side note about nomenclature

This is actually a bit confusing, especially as older documentation hasn't been updated: the new individual plugins derived from the base definition are referred to as "derivative plugin definitions", "plugin derivatives" or just "derivatives". Confusingly, though, the class creating the derivatives was also called a "derivative class" (and the key in the plugin definition was, consequently, derivative).
In #1875996: Reconsider naming conventions for derivative classes, this discrepancy was discussed and eventually resolved by renaming the classes creating derivative definitions (along with their interfaces, etc.) to "derivers".
If you are reading documentation that is more than a few months old, hopefully this will spare you some confusion.

Image credit: DonkeyHotey

Categories: Elsewhere

Paragon-Blog: Performing DRD actions from Drush: Drupal power tools, part 2 of 4

Planet Drupal - Mon, 11/08/2014 - 10:33

Drupal Remote Dashboard (DRD) fully supports Drush, and it does this in two ways: DRD provides all its actions as Drush commands, and DRD can trigger the execution of Drush commands on remote domains. This blog post is part of a series (see part 1 of 4) that describes all the possibilities around these two powerful tools. This is part 2, which describes how to trigger any of DRD's actions from the command line by utilizing Drush.

Categories: Elsewhere

Sylvestre Ledru: clang 3.4, 3.5 and 3.6 are now coinstallable in Debian

Planet Debian - Mon, 11/08/2014 - 07:47

Clang is finally co-installable on Debian: 3.4, 3.5 and the current trunk (snapshot) can be installed together.

So, just like gcc, the different versions can be called with clang-3.4, clang-3.5 or clang-3.6.

/usr/bin/clang, /usr/bin/clang++, /usr/bin/scan-build and /usr/bin/scan-view are now handled through the llvm-defaults package.

llvm-defaults is also now managing clang-check, clang-tblgen, c-index-test, clang-apply-replacements, clang-tidy, pp-trace and clang-query.

Changes are also available on llvm.org/apt/.
The next step will be to also manage llvm-defaults on llvm.org/apt to simplify the transition for people using these packages.

So, with:

# /etc/apt/sources.list
deb http://llvm.org/apt/unstable/ llvm-toolchain main
deb http://llvm.org/apt/unstable/ llvm-toolchain-3.4 main
deb http://llvm.org/apt/unstable/ llvm-toolchain-3.5 main

$ apt-get install clang-3.4 clang-3.5 clang-3.6

$ clang-3.4 --version
Debian clang version 3.4.2 (branches/release_34) (based on LLVM 3.4.2)
Target: x86_64-pc-linux-gnu
Thread model: posix

$ clang-3.5 --version
Debian clang version 3.5.0-+rc2-1~exp1 (tags/RELEASE_350/rc2) (based on LLVM 3.5.0)
Target: x86_64-pc-linux-gnu
Thread model: posix

$ clang-3.6 --version
Debian clang version 3.6.0-svn214990-1~exp1 (trunk) (based on LLVM 3.6.0)
Target: x86_64-pc-linux-gnu
Thread model: posix

Original post blogged on b2evolution.

Categories: Elsewhere

Paul Tagliamonte: DebConf 14

Planet Debian - Mon, 11/08/2014 - 03:06

I’ll be giving a short talk on Debian and Docker!

I’ll prepare some slides to give a brief talk about Debian and Docker, then open it up to have a normal session to talk over what Docker is and isn’t, and how we can use it in Debian better.

Hope to see y’all in Portland!

Categories: Elsewhere

Ian Donnelly: dpkg Woes

Planet Debian - Mon, 11/08/2014 - 02:01

Hi Everybody,

The original plan for my Google Summer of Code project involved creating a merge tool for Elektra, including a kdb merge command; that part is all finished and already integrated into the newest release of Elektra, 0.8.7. The next step was to patch dpkg to add an option for conffile merging. The basic overview of how this would work is:

  • Original versions of conffiles would be saved somewhere on the system to serve as a base file
  • The user’s current version of the conffile would serve as ours
  • The package maintainer’s version would be used as theirs
  • The result of a three-way merge would overwrite the user’s current version and be saved as the new base
  • dpkg would need an option to perform a three-way merge with a hook that allowed any tool to be used for the task, not just Elektra.

Obviously, all of these things require patching dpkg, although not in too large a way. Luckily, we found a previous attempt to patch dpkg for a three-way merge hook; however, it is a few years old and was never included in dpkg. Felix decided to update this patch to work with the new versions of dpkg and cleaned up a bit of redundancy; he then resubmitted it to see if the dpkg maintainers would be interested in such a patch, and to start a dialogue about including all the patches we need. Unfortunately there has been no response from the dpkg team, and many bugs on their tracker have been sitting unanswered for years. Additionally, dpkg’s source can be quite complex and it would require a ton of effort for us to make all these patches with no guarantee of integration. As a result, we have decided to go another route (which I will be posting about soon).

I just wanted to update anybody interested on the progress of my Google Summer of Code Project by letting them know the experience with dpkg. We have come up with a new solution that is already looking great and you will hear a lot more about it this final week of Google Summer of Code.

Stay Tuned!
- Ian S. Donnelly

Categories: Elsewhere

Matthew Garrett: Birthplace

Planet Debian - Mon, 11/08/2014 - 01:44
For tedious reasons, I will at this stage point out that I was born in Galway, Ireland.

Categories: Elsewhere

Ian Wienand: Finding out if you're a Rackspace instance

Planet Debian - Mon, 11/08/2014 - 01:30

Different hosting providers do things slightly differently, so it's sometimes handy to be able to figure out where you are. Rackspace is based on Xen, and their provided images should include the xenstore-ls command. Running xenstore-ls vm-data will give you handy provider and even region fields to let you know where you are.

function is_rackspace {
  if [ ! -f /usr/bin/xenstore-ls ]; then
    return 1
  fi
  /usr/bin/xenstore-ls vm-data | grep -q "Rackspace"
}

if is_rackspace; then
  echo "I am on Rackspace"
fi

Other reading about how this works:

Categories: Elsewhere

Pronovix: Drupal uncoded

Planet Drupal - Mon, 11/08/2014 - 00:09

In open source, something magical happens when like-minded people meet. You find out somebody else is dealing with a similar problem, you combine ideas and before you know it an ad hoc working group has formed to fix the problem. It's a public secret that at Drupalcon the good stuff happens in the BOF sessions.

As the Drupal community has grown we've seen new community events spring up that catered to whole groups of people that before weren't able to meet and talk:

Categories: Elsewhere

Simon Josefsson: Wifi on S3 with Replicant

Planet Debian - Sun, 10/08/2014 - 20:02

I’m using Replicant on my main phone. As I’ve written before, I didn’t get Wifi to work. The other day leth in #replicant pointed me towards a CyanogenMod discussion about a similar issue. The fix does indeed work, and allowed me to connect to wifi networks and to set up my phone for Internet sharing. Digging deeper, I found a CM Jira issue about it, and ultimately a code commit. It seems the issue is that more recent S3s come with a Murata Wifi chipset that uses MAC addresses not known back in the Android 4.2 (CM-10.1.3 and Replicant-4.2) days. Pulling in the latest fixes for macloader.cpp solves this problem for me, although I still need to load the non-free firmware images that I get from CM-10.1.3. I’ve created a pull request fixing macloader.cpp for Replicant 4.2 if someone else is curious about the details. You have to rebuild your OS with the patch for things to work (if you don’t want to, the workaround using /data/.cid.info works fine), and install some firmware blobs as below.

adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_apsta.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_apsta.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_mfg.bin_b0 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_mfg.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_mfg.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_p2p.bin_b0 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_p2p.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_p2p.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_sta.bin_b0 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_sta.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_sta.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt_murata /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt_murata_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt_semcosh /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt_murata /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt_murata_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt_semcosh /system/vendor/firmware/

Categories: Elsewhere

Cyril Brulebois: Why is my package blocked?

Planet Debian - Sun, 10/08/2014 - 19:45

A bit of history: a while ago, udeb-producing packages were getting frozen on a regular basis when a d-i release was about to be cut. While I wasn’t looking at the time, I can easily understand the reasons behind that: d-i is built upon many components, it takes some time to make sure it’s basically in shape for a release, and it’s very annoying when a regression sneaks in right before the installation images get built.

I took over d-i release maintenance in May 2012 and only a few uploads happened before the wheezy freeze. I was only discovering the job at the time, and I basically released whatever was in testing then. The freeze began right after that (end of June), so I started double checking things affecting d-i (in addition to or instead of the review performed by other release team members), and unblocking packages when changes seemed safe, or once they were tested.

A few uploads happened after the wheezy release, and there’s already been a Jessie Alpha 1 release. I was about to release Jessie Beta 1 after a fair bit of testing and a debian-installer upload; the only remaining bits were building installation images (hello Steve) and, of course, communication (mail announcement and website update).

Unfortunately a new upstream release reached testing in the meantime, breaking the installer in several ways. I’ll give details below, of course not because I want to point fingers at the maintainer, but to illustrate the ramifications that a single package’s migration to testing can induce.

  • parted 3.2-1 was uploaded on 2014-07-30 and migrated on 2014-08-05.

  • parted 3.2-2 fixed a regression reported in Ubuntu only (LP#1352252) which I also hit with images built locally after that migration.

  • I then built some images locally using fixed parted packages but then discovered that auto-lvm was still broken, which I reported in #757417.

  • After some investigation Colin confirmed some behavioral changes in this new parted release, which imply the need for an update of several other partman-* components: #757661, #757662, #757663, #757664, #757665, #757666.

  • Thankfully fixes have been added for all of those (except partman-zfs at the time of writing), but more testing is needed, before possibly urgenting those packages so that they get into testing as soon as possible.

Since I’d like to avoid such an experience in the future, I’ll probably reintroduce the old method and freeze all udeb-producing packages during the next d-i releases.

So you know why it might happen. Your next question might be: “What to do when your package is getting caught in that net?”. In that case, please get in touch with both debian-release@ and debian-boot@ asking for an unblock. I’ll then review your particular package, and either let it migrate to testing, or delay it until after the release.

Categories: Elsewhere

VM(doh): OPCache Module for Drupal

Planet Drupal - Sun, 10/08/2014 - 13:08

Last Friday, we published the start to our OPCache module for Drupal. While it's still lacking a few features to be ready for a tagged release, we thought it'd be a good idea to get some working code out there.

The goal for this module is to allow Drupal site administrators to clear their opcode cache if they are using the PHP OPcache extension (also known as Zend OPcache or Zend Optimizer+) as well as provide an interface similar to the Memcache Admin module.

As of right now, the only implemented feature is cache flushing. We understand that you might be running on multiple webservers (we build a lot of sites that run on multiple webservers), so we included the ability to flush caches on all of your webservers at once.
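The flushing itself has to happen inside each web server's PHP process, since OPcache is not shared between hosts (or even between the CLI and the web server). Here is a minimal sketch of the underlying PHP call – not the module's actual code:

<?php
// Minimal sketch, not the module's actual code: opcache_reset() wipes the
// opcode cache of the PHP process it runs in, so each webserver has to
// execute it itself, e.g. via a small endpoint requested once per host.
if (function_exists('opcache_reset')) {
  opcache_reset();
}
?>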

Go try the module and feel free to submit patches!

Categories: Elsewhere

Russ Allbery: Orphaning some software packages

Planet Debian - Sun, 10/08/2014 - 06:23

As mentioned in my previous message, I'm orphaning various software packages that I was maintaining as part of my job at Stanford, or that for some other reason I'm no longer using directly. My goal is to free up some time and space for new work projects at my new employer, for other hobbies, and to take better care of other projects that I'm not orphaning.

The following software packages are now orphaned, and marked as such on their web pages:

I'm also stepping down from Debian package maintenance for the OpenAFS and Shibboleth packages, and have already notified the relevant communities. For the Debian packages, and for the above software packages, I will continue to provide security support until someone else can take them over.

WebAuth is going to be in a state of transition as noted on its page:

My successor at Stanford will be continuing maintenance and development, but that person hasn't been hired yet, and it will take some time for them to ramp up and start making new releases (although there may be at least one interim release with work that I'm finishing now). It's therefore not strictly orphaned, but it's noted that way on my software pages until someone else at Stanford picks it up.

Development of the other packages that I maintain should continue as normal, with a small handful of exceptions. The following packages are currently in limbo, since I'm not sure if I'll have continued use for them:

I'm not very happy with the current design of either kadmin-remctl or wallet, so if I do continue to maintain them (and have time to work on them), I am likely to redesign them considerably.

For all of my packages, I've been adding clones of the repository to GitHub as an additional option besides my personal Git repository server. I'm of two minds about using (and locking myself into) more of the GitHub infrastructure, but repository copies on GitHub felt like they might be useful for anyone who wanted to fork or take over maintenance. I will be adding links to the GitHub repositories to the software pages for things that are in Git.

If you want to take over any of the orphaned software packages, feel free. When you're ready for the current software pages to redirect to their new home, let me know.

Categories: Elsewhere

Ian Donnelly: How-To: kdb merge

Planet Debian - Sun, 10/08/2014 - 04:30

Hi Everybody,

As you may know, part of my Google Summer of Code project involved the creation of merge tools for Elektra. The one I am going to focus on today is kdb merge. The kdb tool allows users to access and perform functions on the Elektra Key Database from the command line. We added a new command to this very useful tool, the merge command. This command allows a user to perform a three-way merge of KeySets from the kdb tool.
The command to use this tool is:
kdb merge [options] ourpath theirpath basepath resultpath

The standard naming scheme for a three-way merge consists of ours, theirs, and base. Ours refers to the local copy of a file, theirs refers to a remote copy, and base refers to their common ancestor. This works very similarly for KeySets, especially ones that consist of mounted conffiles. For mounted conffiles, ours should be the user’s copy, theirs would be the maintainer’s copy, and base would be the conffile as it was during the last package upgrade or during the package install. If you are just trying to merge any two KeySets that derive from the same base, ours and theirs can be interchanged. In kdb merge, ourpath, theirpath, and basepath work just like ours, theirs, and base, except each one represents the root of a KeySet. Resultpath is pretty self-explanatory: it is just where you want the result of the merge to be saved.

As for the options, there are a few basic ones and one option, strategy, that is very important. The basic options are:
-H --help which prints the help text
-i --interactive which attempts the merge in an interactive way
-t --test which tests the proposed merge and informs you about possible conflicts
-v --verbose which runs the merge in verbose mode
-V --version which prints info about the version

The other option, strategy, is:
-s --strategy which is used to specify a strategy to use in case of a conflict

The current list of strategies is:
preserve the merge will fail if a conflict is detected
ours the merge will use our version during a conflict
theirs the merge will use their version during a conflict
base the merge will use the base version during a conflict

If no strategy is specified, the merge will default to the preserve strategy so as not to risk making the wrong decision. If any of the other strategies is specified, then when a conflict is detected, merge will use the Key specified by the strategy (ours, theirs, or base) for the resulting Key.

An example of using the kdb merge command:
kdb merge -s ours system/hosts/ours system/hosts/theirs system/hosts/base system/hosts/result

-Ian S. Donnelly

Categories: Elsewhere

Russell Coker: Being Obviously Wrong About Autism

Planet Debian - Sat, 09/08/2014 - 18:01

I’m watching a Louis Theroux documentary about Autism (here’s the link to the BBC web site [1]). The main thing that strikes me so far (after watching 7.5 minutes of it) is the bad design of the DLC-Warren school for Autistic kids in New Jersey [2].

A significant portion of people on the Autism Spectrum have problems with noisy environments; whether most Autistic people have problems with noise depends on what degree of discomfort is considered a problem. But I think it’s reasonable to assume that the majority of kids on the Autism Spectrum will behave better in a quiet environment. So any environment that is noisy will cause more difficult behavior in most Autistic kids, and the kids who don’t have problems with the noise will have problems with the way the other kids act. Any environment that is more prone to noise pollution than is strictly necessary is hostile to most people on the Autism Spectrum and all groups of Autistic people.

The school that is featured in the start of the documentary is obviously wrong in this regard. For starters I haven’t seen any carpet anywhere. Carpeted floors are slightly more expensive than lino but the cost isn’t significant in terms of the cost of running a special school (such schools are expensive by private-school standards). But carpet makes a significant difference to ambient noise.

Most of the footage from that school included obvious echoes, even though they had an opportunity to film when there was the least disruption – presumably noise pollution would be a lot worse when a class finished.

It’s not difficult to install carpet in all indoor areas in a school. It’s also not difficult to install rubber floors in all outdoor areas in a school (it seems that most schools are doing this already in play areas for safety reasons). For a small amount of money spent on installing and maintaining noise absorbing floor surfaces the school could achieve better educational results. The next step would be to install noise absorbing ceiling tiles and wallpaper, that might be a little more expensive to install but it would be cheap to maintain.

I think that the hallways in a school for Autistic kids should be as quiet as the lobby of a 5 star hotel. I don’t believe that there is any technical difficulty in achieving that goal; making a school look as good as an expensive hotel would be expensive, but giving it the same acoustic properties wouldn’t be difficult or costly.

How do people even manage to be so wrong about such things? Do they never seek any advice from any adult on the Autism Spectrum about how to run their school? Do they avoid doing any of the most basic Google searches for how to create a good environment for Autistic people? Do they just not care at all and create an environment that looks good to NTs? If they are just trying to impress NTs then why don’t they have enough pride to care that people like me will know how bad they are? These aren’t just rhetorical questions, I’d like to know what’s wrong with those people that makes them do their jobs in such an amazingly bad way.

Categories: Elsewhere

Joachim's blog: Using Human Queue Worker to process comments

Planet Drupal - Sat, 09/08/2014 - 11:44

Some time ago, I released Human Queue Worker, a module that takes the concept of the Drupal Queue system, but where the processing of the items is done by human users rather than an automated process. I say 'takes the concept'; it in fact uses the Drupal Queue to create and claim queue items, but instead of declaring your queue with hook_cron_queue_info(), you declare it to Human Queue Worker as a queue that humans will be working on.
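For contrast, here is roughly what the automated declaration looks like in Drupal 7 core – the declaration Human Queue Worker swaps out for its own (module, queue, and callback names below are made up for illustration):

<?php
/**
 * Implements hook_cron_queue_info().
 *
 * Declares a queue whose items are processed automatically during cron –
 * exactly the part Human Queue Worker replaces with human processing.
 * Queue and callback names here are hypothetical.
 */
function mymodule_cron_queue_info() {
  $queues['mymodule_tasks'] = array(
    // Called once per claimed queue item.
    'worker callback' => 'mymodule_process_task',
    // Maximum time (in seconds) to spend on this queue per cron run.
    'time' => 60,
  );
  return $queues;
}
?>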

Human Queue Worker was written for my current project and for a fairly specific need, and I didn't imagine many sites would be using it. However, it has an obvious and popular application: approving comments. I always figured it would be nice if someone wrote a little module to define a comment processing human queue.

Well, that someone is me, and the time is now. You see, I'm an idiot: when I set up this new blog site of mine, I totally forgot to set up a CAPTCHA, and then when I added Mollom, I didn't set it up properly. So this site has a few hundred spammy comments that I need to delete.

The problem is that comment management takes time. Unless there's some magical area of the core UI I've completely missed, I can either visit each node and delete the comments one by one, or use the comment admin form. There, I can mass-delete the ones with obvious spammy titles, but all the others will still need individual inspection.

The Human Queue UI simplifies this hugely. There's just one page for the queue. When you go to that page, you're presented with an item to process. In the case of comment approval, that's the comment itself, plus the parent node and parent comment to give you some context. To process the comment, click one of two buttons: 'Publish' or 'Delete'. The comment is dealt with, and the form reloads, with a brand new comment for you to process. Which means that the only clicking you do is the action buttons: Publish; Delete; Publish; Delete. (Though with the amount of spam on my site, it's probably Delete; Delete; Delete, like the Cybermen.)

I've not timed it, but I reckon I can probably go at quite a rate. And that's with just one of me: the core Queue system guarantees that only one worker can claim an item at any one time, and that applies to human workers too. So if another user were to work the queue too, by going to the same page, they would be getting shown different comments to work on, and we'd work through the comments at twice the rate.

Now I just need to find a compliant friend and make them into my worker drone. If you're interested, please don't post a comment!

Categories: Elsewhere

Steve Kemp: Rebooting the CMS

Planet Debian - Sat, 09/08/2014 - 10:59

I run a cluster for the Debian Administration website, and the code is starting to show its age. Unfortunately the code is not so modern, and has accumulated a lot of baggage.

Given the relatively clean separation between the logical components I'm interested in trying something new. In brief the current codebase allows:

  • Posting of articles, blog-entries, and polls.
  • The manipulation of the same.
  • User-account management.

It crossed my mind the other night that it might make sense to break this code down into a number of mini-servers - a server to handle all article-related things, a server to handle all poll-related things, etc.

If we have a JSON endpoint that will allow:

  • GET /article/32
  • POST /article/ [create]
  • GET /articles/offset/number [get the most recent]

Then we could have a very thin shim/server on top of that which would present the public API. Of course the internal HTTP overhead might make this unworkable, but it is an interesting approach to the problem, and would allow the backend storage to be migrated in the future without too much difficulty.
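As a rough sketch of how thin that shim could be (illustrative only, and in PHP purely for the sake of an example – it isn't the site's actual language or code, and the backend URL is hypothetical):

<?php
// Sketch of a thin presentation shim: fetch an article from a hypothetical
// backend mini-server's JSON endpoint and render the public page from it.
$id = (int) $_GET['id'];
$json = file_get_contents('http://127.0.0.1:8081/article/' . $id);
$article = json_decode($json, TRUE);

// The shim owns only presentation; all storage stays behind the endpoint.
echo '<h1>' . htmlspecialchars($article['title']) . '</h1>';
echo $article['body'];
?>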

At the moment I've coded up two trivial servers, one for getting user-data (to allow login requests to succeed), and one for getting article data.

There is a tiny presentation server written to use those back-end servers, and it seems like an approach that might work. Of course deployment might be a pain...

It is still an experiment rather than a plan, but it could work out: http://github.com/skx/snooze/.

Categories: Elsewhere

Russ Allbery: A new challenge

Planet Debian - Sat, 09/08/2014 - 06:01

Those reading this journal may have noticed that my rate of posting has dropped a bit in the past few years, and quite a lot in the past year. One of the major reasons for this was work, which had been getting more bureaucratic, more stressful, less trusting, and more fearful. After this got drastically worse in the past six months, I finally decided enough was enough and took advantage of a good opportunity to do something different.

I will be joining Dropbox's site reliability engineering team in a week and a half (which means that I'll be working on their servers, not on the product itself). It will take a few months to settle in, but hopefully this will mean a significant improvement to my stress levels and a lot of interesting projects to work on.

I'm taking advantage of this change to inventory the various things I'm currently committed to and let go of some projects to make more space in my life. There are also a variety of software projects that I was maintaining as part of my job at Stanford, and I will be orphaning many of those packages. I'll make another journal post about that a bit later.

For Debian folks, I am going to be at Debconf, and hope to meet many of you there. (It's going to sort of be my break between jobs.) In the long run, I'm hoping this move will let me increase my Debian involvement.

In the long run, I expect most of my free software work, my reviews, and the various services I run to continue as before, or even improve as my stress drops. But I've been at Stanford for a very long time, so this is quite the leap into the unknown, and it's going to take a while before I'm sure what new pattern my life will fall into.

Categories: Elsewhere

X-Team: ContributeX: Drupal 8 needs you

Planet Drupal - Sat, 09/08/2014 - 02:38
“Contributing to Drupal is life-changing.” Dries, you couldn’t be more right. At DrupalCamp Singapore 2014, developers from all over Asia had the opportunity to spend a day focused on learning, collaborating and preparing for Drupal 8. One takeaway was being reminded of the fact that Drupal 8 still needs significant help from its community to be...
Categories: Elsewhere

Clint Adams: The politically-correct term is a juvenile cricket

Planet Debian - Fri, 08/08/2014 - 22:16

Normally I'm disgusted by fangirling of jwz, but it seems that he finally wrote something I like.

Categories: Elsewhere

Daniel Pocock: Help needed reviewing Ganglia GSoC changes

Planet Debian - Fri, 08/08/2014 - 22:14

The Ganglia project has been delighted to have Google's support for 5 students in Google Summer of Code 2014. The program officially finishes in ten more days, on 18 August.

If you are a user of Ganglia, Nagios, RRDtool or R or just an enthusiastic C or Python developer, you may be able to use and provide feedback for the students while benefitting from the cool new features they have been working on.

  • Chandrika Parimoo (Python, Nagios and some Syslog): Chandrika generalized some of my ganglia-nagios-bridge code into the PyNag library. I then used it as the basis for syslog-nagios-bridge. Chandrika has also done some work on improving the ganglia-nagios-bridge configuration file format.
  • Oliver Hamm (C): Oliver has been working on metrics about Ganglia infrastructure. If you have a large and dynamic Ganglia cloud, this is for you.
  • Plamen Dimitrov (R, RRDtool): Plamen has been building an R plugin for inspecting RRD files from Ganglia or any other type of RRD.
  • Rana (NVIDIA, C): Rana has been working on improvements to Ganglia monitoring of NVIDIA GPUs, especially in HPC clusters.
  • Zhi An (Java, JMX): Zhi An has been extending the JMXetric and gmetric4j projects to provide more convenient monitoring of Java server processes.

If you have any feedback or questions, please feel free to discuss on the Ganglia-general mailing list and CC the student and their mentor.

Categories: Elsewhere
