Feed aggregator

Drupal core announcements: Drupal 8 beta 5 on Wednesday, January 28, 2015

Planet Drupal - Tue, 20/01/2015 - 15:36

The next beta for Drupal 8 will be beta 5! Here is the schedule for the beta release.

Tuesday, January 27, 2015: Only critical and major patches committed.
Wednesday, January 28, 2015: Drupal 8.0.0-beta5 released. Emergency commits only.
Categories: Elsewhere

Raphael Geissert: Edit Debian, with iceweasel

Planet Debian - Tue, 20/01/2015 - 08:00
Soon after publishing the chromium/chrome extension that allows you to edit Debian online, Moez Bouhlel sent a pull request to the extension's git repository: all the changes needed to make a firefox extension!

After another session of browser extensions discovery, I merged the commits and generated the xpi. So now you can go download the Debian online editing firefox extension and hack the world, the Debian world.

Install it and start contributing to Debian from your browser. There's no excuse now.

Categories: Elsewhere

VM(doh): Be Careful with Large Select Lists on Drupal Commerce Line Item Type Configuration

Planet Drupal - Mon, 19/01/2015 - 23:26

Recently, we were debugging some performance issues with a client's Drupal Commerce website. After doing the standard optimizations, we hooked up New Relic so we could see exactly what else could be trimmed.

The site is using different line item types to differentiate between products that should be taxed in different ways. Each line item type has a field where administrators can select the tax code to use for that line item type. The options for the select list are populated via an API call to another service provider. The call for the list was using the static cache because it was thought that the list would only be populated when needed on the line item type configuration page. In reality, that's not the case.

When an Add to Cart form is displayed in Drupal Commerce, it also loads the line item type and the line item type's fields. When loading the fields, it loads all of the options even if the "Include this field on Add to Cart forms for line items of this type" option is not enabled for that field. In this case, it resulted in 90 HTTP calls to populate the list of tax codes every time someone viewed a page with an Add to Cart form.

The solution was to actually cache those results using Drupal's Cache API, which produced a visible improvement in New Relic.
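A minimal sketch of that kind of caching in Drupal 7, assuming a hypothetical mymodule_remote_api_get_tax_codes() wrapper around the remote service (the function and cache names are illustrative, not the client's actual code):

<?php
/**
 * Returns the tax code options, caching the remote API result.
 *
 * mymodule_remote_api_get_tax_codes() is a stand-in for the real HTTP
 * call to the tax service provider.
 */
function mymodule_tax_code_options() {
  // Static cache avoids repeating the work within a single request.
  $options = &drupal_static(__FUNCTION__);
  if (!isset($options)) {
    // Persistent cache avoids repeating the HTTP calls across requests.
    if ($cache = cache_get('mymodule_tax_codes')) {
      $options = $cache->data;
    }
    else {
      $options = mymodule_remote_api_get_tax_codes();
      // Keep the list for six hours instead of refetching on every page view.
      cache_set('mymodule_tax_codes', $options, 'cache', REQUEST_TIME + 21600);
    }
  }
  return $options;
}

Whatever the exact implementation, the point is the same: the expensive lookup happens once per cache lifetime rather than on every Add to Cart form render.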

Categories: Elsewhere

Daniel Pocock: jSMPP project update, 2.1.1 and 2.2.1 releases

Planet Debian - Mon, 19/01/2015 - 22:29

The jSMPP project on GitHub stopped processing pull requests over a year ago and appeared to need some help.

I've recently started hosting it under https://github.com/opentelecoms-org/jsmpp and tried to merge some of the backlog of pull requests myself.

There have been new releases:

  • 2.1.1 works in any project already using 2.1.0. It introduces bug fixes only.
  • 2.2.1 introduces some new features, API changes, and bigger bug fixes.

The new versions are easily accessible for Maven users through the central repository service.

Apache Camel has already updated to use 2.1.1.

Thanks to all those people who have contributed to this project throughout its history.

Categories: Elsewhere

Daniel Pocock: Storage controllers for small Linux NFS networks

Planet Debian - Mon, 19/01/2015 - 14:59

While contemplating the disk capacity upgrade for my Microserver at home, I've also been thinking about adding a proper storage controller.

Currently I just use the Microserver's built-in AMD SB820M SATA controller. It is a bottleneck for the SSD IOPS.

On the disks, I prefer to use software RAID (such as md or BtrFs) and not become dependent on the metadata format of any specific RAID controller. The RAID controllers don't offer the checksumming feature that is available in BtrFs and ZFS.

The use case is NFS for a small number of workstations. NFS synchronous writes block the client while the server ensures data really goes onto the disk. This creates a performance bottleneck. It is actually slower than if clients are writing directly to their local disks through the local OS caches.

SSDs on an NFS server offer some benefit because they can complete write operations more quickly and the NFS server can then tell the client the operation is complete. The more performant solution (albeit with a slight risk of data corruption) is to use a storage controller with a non-volatile (battery-backed or flash-protected) write cache.

Many RAID controllers have non-volatile write caches. Some online discussions of BtrFs and ZFS have suggested staying away from full RAID controllers though, amongst other things, to avoid the complexities of RAID controllers adding their metadata to the drives.

This brings me to the first challenge though: are there suitable storage controllers that have a non-volatile write cache but without having other RAID features?

Or a second possibility: out of the various RAID controllers that are available, do any provide first-class JBOD support?

Observations

I looked at specs and documentation for various RAID controllers and identified some of the following challenges:

Next steps

Are there other options to look at, for example, alternatives to NFS?

If I just add in a non-RAID HBA to enable faster IO to the SSDs will this be enough to make a noticeable difference on the small number of NFS clients I'm using?

Or is it inevitable that I will have to go with one of the solutions that involves putting a vendor's volume metadata onto JBOD volumes? If I do go that way, which of the vendors' metadata formats are most likely to be recognized by free software utilities in the future if I ever connect the disk to a generic non-RAID HBA?

Thanks to all those people who provided comments about choosing drives for this type of NAS usage.

Related reading
Categories: Elsewhere

InternetDevels: The module for changing login/registration form view

Planet Drupal - Mon, 19/01/2015 - 11:16

While developing sites, we have often been faced with the task of changing the way the login form (authorization block) is displayed. Previously, a CSS file was used in such cases. The InternetDevels team has simplified this task by creating the “Customize login form” module. This tool lets you change the appearance of the site's authorization, registration and “Forgot your password?” forms through the administration interface.

Categories: Elsewhere

Web Omelette: Creating a custom Views field in Drupal 8

Planet Drupal - Mon, 19/01/2015 - 09:10

In this article I am going to show you how to create a custom Views field in Drupal 8. At the end of this tutorial, you will be able to add a new field to any node based View which will flag (by displaying a specific message) the nodes of a particular type (configurable in the field configuration). Although I will use nodes, you can use this example to create custom fields for other entities as well.

So let's get started by creating a small module called node_type_flagger (which you can also find in this repository):

node_type_flagger.info.yml:

name: Node Type Flagger
description: 'Demo module that flags a particular node type in a View listing'
type: module
core: 8.x

In Drupal 7, whenever we wanted to create a custom field, filter, relationship, etc. for Views, we needed to implement hook_views_api() and declare the version of Views we were using. That is no longer necessary in Drupal 8. What we do now is create a file called module_name.views.inc in the root of our module and implement the Views-related hooks there.

To create a custom field for the node entity, we need to implement hook_views_data_alter():

node_type_flagger.views.inc:

/**
 * Implements hook_views_data_alter().
 */
function node_type_flagger_views_data_alter(array &$data) {
  $data['node']['node_type_flagger'] = array(
    'title' => t('Node type flagger'),
    'field' => array(
      'title' => t('Node type flagger'),
      'help' => t('Flags a specific node type.'),
      'id' => 'node_type_flagger',
    ),
  );
}

In this implementation we extend the node table definition by adding a new field called node_type_flagger. Although there are many more options you can specify here, these will be enough for our purpose. The most important thing to remember is the id key (under field), which marks the id of the Views plugin that will be used to handle this field. In Drupal 7 we had a handler key instead, in which we specified the class name.

In Drupal 8 we have something called plugins and many things have now been converted to plugins, including views handlers. So let's define ours inside the src/Plugin/views/field folder of our module:

src/Plugin/views/field/NodeTypeFlagger.php

<?php

/**
 * @file
 * Definition of Drupal\node_type_flagger\Plugin\views\field\NodeTypeFlagger.
 */

namespace Drupal\node_type_flagger\Plugin\views\field;

use Drupal\Core\Form\FormStateInterface;
use Drupal\node\Entity\NodeType;
use Drupal\views\Plugin\views\field\FieldPluginBase;
use Drupal\views\ResultRow;

/**
 * Field handler to flag the node type.
 *
 * @ingroup views_field_handlers
 *
 * @ViewsField("node_type_flagger")
 */
class NodeTypeFlagger extends FieldPluginBase {

  /**
   * {@inheritdoc}
   */
  public function query() {
    // Leave empty to avoid a query on this field.
  }

  /**
   * Define the available options.
   *
   * @return array
   */
  protected function defineOptions() {
    $options = parent::defineOptions();
    $options['node_type'] = array('default' => 'article');
    return $options;
  }

  /**
   * Provide the options form.
   */
  public function buildOptionsForm(&$form, FormStateInterface $form_state) {
    $types = NodeType::loadMultiple();
    $options = [];
    foreach ($types as $key => $type) {
      $options[$key] = $type->label();
    }
    $form['node_type'] = array(
      '#title' => $this->t('Which node type should be flagged?'),
      '#type' => 'select',
      '#default_value' => $this->options['node_type'],
      '#options' => $options,
    );
    parent::buildOptionsForm($form, $form_state);
  }

  /**
   * {@inheritdoc}
   */
  public function render(ResultRow $values) {
    $node = $values->_entity;
    if ($node->bundle() == $this->options['node_type']) {
      return $this->t('Hey, I\'m of the type: @type', array('@type' => $this->options['node_type']));
    }
    else {
      return $this->t('Hey, I\'m something else.');
    }
  }

}

We define our NodeTypeFlagger class, which extends FieldPluginBase (the abstract base class that most Views field plugins extend). Just above the class declaration we use the @ViewsField annotation to specify the id of this plugin (the same one we declared in the hook_views_data_alter() implementation). We also use the @ingroup annotation to mark that this is a Views field handler.

In our example class, we have 4 methods (all overriding the parent class ones).

Query

First, we override the query() method but leave it empty. This is so that views does not try to include this field in the regular node table query (since the field is not backed by a table column).

DefineOptions

The second method is the defineOptions() method through which we specify what configuration options we need for this field. In our case one is enough: we need to specify the node type which we want flagged in the Views results. We set a sensible default as the article node type.

BuildOptionsForm

The third method, buildOptionsForm() is responsible for creating the form for the configuration options we declared earlier. In our case we just have a select list with which we can choose from the existing node types.

Render

Lastly, the render() method is the most important one, as it deals with output. We use it to actually render the content of the field for each result. Here is where we perform some business logic based on the currently configured node type option and flag, with a message, whether or not the current result is in fact of the type specified in the configuration.

The $values object passed to render() is an instance of Drupal\views\ResultRow, which contains the data returned for the current row by Views as well as the entity object at the base of the query (in our case the node). Based on this information we can perform our logic.

Keep in mind that you can use dependency injection to inject all sorts of services into this class and make use of them in your logic. Additionally, you can override various other methods of the parent class in order to further customize your field.
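As a rough, hypothetical illustration of that point (not code from this tutorial), a Views field plugin can implement ContainerFactoryPluginInterface and receive services through create(); the current_user service injected here is just an arbitrary example:

<?php

namespace Drupal\node_type_flagger\Plugin\views\field;

use Drupal\Core\Plugin\ContainerFactoryPluginInterface;
use Drupal\Core\Session\AccountInterface;
use Drupal\views\Plugin\views\field\FieldPluginBase;
use Symfony\Component\DependencyInjection\ContainerInterface;

/**
 * Example only: a Views field plugin that gets a service from the container.
 */
class NodeTypeFlaggerInjected extends FieldPluginBase implements ContainerFactoryPluginInterface {

  /**
   * The current user, injected instead of being fetched statically.
   *
   * @var \Drupal\Core\Session\AccountInterface
   */
  protected $currentUser;

  /**
   * Constructs the plugin and stores the injected service.
   */
  public function __construct(array $configuration, $plugin_id, $plugin_definition, AccountInterface $current_user) {
    parent::__construct($configuration, $plugin_id, $plugin_definition);
    $this->currentUser = $current_user;
  }

  /**
   * {@inheritdoc}
   */
  public static function create(ContainerInterface $container, array $configuration, $plugin_id, $plugin_definition) {
    return new static(
      $configuration,
      $plugin_id,
      $plugin_definition,
      $container->get('current_user')
    );
  }

}

The injected service could then be used inside render() or buildOptionsForm() instead of calling global functions.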

Conclusion

There you have it: a small custom module that demonstrates how to create a custom Views field (plugin). Relationships, filters, sorters and others work in a similar way. I will be covering those in later articles. Stay tuned.

Categories: Elsewhere

DrupalOnWindows: Node Comment and Forum working together to boost user participation

Planet Drupal - Mon, 19/01/2015 - 07:00

Customers frequently approach us asking for help rescuing their projects from site builders. Sometimes they have technological issues (mainly slow sites), but sometimes it's just plain bad usability or some wrong marketing concepts.

We were recently asked for help by a site that gets about 5,000 unique visitors a day. Despite visitor numbers that are not bad for their niche, the site was getting very little user interaction: barely a handful (<10) of comments and forum posts over a whole year.

Categories: Elsewhere

Drupal core announcements: Drupal core security release window on Wednesday, January 21

Planet Drupal - Mon, 19/01/2015 - 06:51
Start: 2015-01-21 (All day) America/New_York
Online meeting (eg. IRC meeting)
Organizers: David_Rothstein

The monthly security release window for Drupal 6 and Drupal 7 core will take place on Wednesday, January 21.

This does not mean that a Drupal core security release will necessarily take place on that date for either the Drupal 6 or Drupal 7 branches, only that you should prepare to look out for one (and be ready to update your Drupal sites in the event that the Drupal security team decides to make a release).

There will be no bug fix release on this date; the next window for a Drupal core bug fix release is Wednesday, February 4.

For more information on Drupal core release windows, see the documentation on release timing and security releases, and the discussion that led to this policy being implemented.

Categories: Elsewhere

Jonathan Wiltshire: Alcester BSP, day three

Planet Debian - Sun, 18/01/2015 - 23:27

We have had a rather more successful weekend than I feared, as you can see from our log on the wiki page. Steve reproduced and wrote patches for several installer/bootloader bugs, and Neil and I spent significant time in a maze of twisty zope packages (we have managed to provide more diagnostics on the bug, even if we couldn’t resolve it). Ben and Adam have ploughed through a mixture of bugs and maintenance work.

I wrongly assumed we would only be able to touch a handful of bugs, since they are now mostly quite difficult, so it was rather pleasant to recap our progress this evening and see that it’s not all bad after all.

Alcester BSP, day three is a post from: jwiltshire.org.uk | Flattr

Categories: Elsewhere

Gregor Herrmann: RC bugs 2014/51-2015/03

Planet Debian - Sun, 18/01/2015 - 22:41

I have to admit that I was a bit lazy when it came to working on RC bugs in the last weeks. Here's my not-so-stellar summary:

  • #729220 – pdl: "pdl: problems upgrading from wheezy due to triggers"
    investigate (unsuccessfully), later fixed by maintainer
  • #772868 – gxine: "gxine: Trigger cycle causes dpkg to fail processing"
    switch trigger from "interest" to "interest-noawait", upload to DELAYED/2
  • #774584 – rtpproxy: "rtpproxy: Deamon does not start as init script points to wrong executable path"
    adjust path in init script, upload to DELAYED/2
  • #774791 – src:xine-ui: "xine-ui: Creates dpkg trigger cycle via libxine2-ffmpeg, libxine2-misc-plugins or libxine2-x"
    add trigger patch from Michael Gilbert, upload to DELAYED/2
  • #774862 – ciderwebmail: "ciderwebmail: unhandled symlink to directory conversion: /usr/share/ciderwebmail/root/static/images/mimeicons"
    use dpkg-maintscript-helper to fix symlink_to_dir conversion (pkg-perl)
  • #774867 – lirc-x: "lirc-x: unhandled symlink to directory conversion: /usr/share/doc/PACKAGE"
    use dpkg-maintscript-helper to fix symlink_to_dir conversion, upload to DELAYED/2
  • #775640 – src:libarchive-zip-perl: "libarchive-zip-perl: FTBFS in jessie: Tests failures"
    start to investigate (pkg-perl)
Categories: Elsewhere

Mark Brown: Heating the Internet of Things

Planet Debian - Sun, 18/01/2015 - 22:23

The Internet of Things seems to be trendy these days: people like the shiny apps for controlling things, and there are typically claims that the devices will perform better than their predecessors by offloading things to the cloud – but this makes some people worry about potential security issues, and it's not always clear that internet usage actually delivers benefits over something local. One of the more widely deployed applications is smart thermostats for central heating, which is something I've been playing with. I'm using Tado; there are also at least Nest and Hive who do similar things, all relying on being connected to the internet for operation.

The main thing I've noticed has been that the temperature regulation in my flat is better: my previous thermostat allowed the temperature to vary by a couple of degrees around the target temperature in winter, which got noticeable, whereas with this one the temperature generally seems to vary by a fraction of a degree at most. That does use the internet connection to get the temperature outside, though I'm fairly sure that most of this is just a better algorithm (the thermostat monitors how quickly the flat heats up when heating and uses this to decide when to turn off, rather than waiting for the temperature to hit the target and then seeing it rise further as the radiators cool down), and performance would still be substantially improved without it.

The other thing that these systems deliver which does benefit much more from the internet connection is that it’s easy to control them remotely. This in turn makes it a lot easier to do things like turn the heating off when it’s not needed – you can do it remotely, and you can turn the heating back on without being in the flat so that you don’t need to remember to turn it off before you leave or come home to a cold building. The smarter ones do this automatically based on location detection from smartphones so you don’t need to think about it.

For example, when I started this post I was sitting in a coffee shop, so the heating had been turned off based on me taking my phone with me, and as a result the temperature had gone down a bit. By the time I got home the flat was back up to normal temperature, all without any meaningful intervention or visible difference on my part. This is particularly attractive for me given that I work from home – I can't easily set a schedule to turn the heating off during the day like someone who works in an office, so the heating would be on a lot of the time. Tado and Nest will, to varying extents, try to do this automatically; I don't know about Hive. The Tado one at least works very well, I can't speak to the others.

I’ve not had a bill for a full winter yet but I’m fairly sure looking at the meter that between the two features I’m saving a substantial amount of energy (and hence money and/or the environment depending on what you care about) and I’m also seeing a more constant temperature within the flat, my guess would be that most of the saving is coming from the heating being turned off when I leave the flat. For me at least this means that having the thermostat internet connected is worthwhile.

Categories: Elsewhere

3C Web Services: Creating a Drag & Drop Sorting Interface for a Drupal View

Planet Drupal - Sun, 18/01/2015 - 21:45
How to create a drag and drop sorting interface for a Drupal 7 View.
Categories: Elsewhere

Danny Englander: Drupal Drush Segmentation Fault 11 Error: Avoiding the Rabbit Hole

Planet Drupal - Sun, 18/01/2015 - 21:02

I've been doing a lot lately with Grunt and LibSass within my drupal.org contrib theme, Gratis. Yesterday, I updated my Node modules locally. Shortly thereafter, I started getting a nasty Drush error.

line 1: 48475 Segmentation fault: 11  
/opt/local/bin/php /Users/danny/.composer/vendor/drush/drush/drush.php
--php=/opt/local/bin/php --backend=2
--root=/Users/danny/Sites/Drupal/gratis2/gratis2-site
--uri=http://default pm-updatestatus 2>&1

or sometimes just:

Segmentation fault: 11

Not only that but my local site's admin UI started WSODing. I didn't immediately connect the Node NPM update to the drush error. So I looked in my MacPorts Apache log and saw hundreds of these streaming down every few seconds:

[Sat Jan 17 13:03:56 2015] [notice] child pid 49312 exit signal Segmentation fault (11)

No joy

Doing a Google search led me to some varied and vague issues with regard to Apache and MySQL, but none of them really rang true to what I was experiencing. I decided to check some of my other local sites and they all seemed fine; no errors, WSODs, or otherwise. Bizarre! I worked on it for about an hour, but still no joy; I was headed down a rabbit hole. That being said, I let this rest for a while. I always let a problem sit for a bit if I can't fix it right away or ask for help. More often than not, I'll come back later and end up fixing it.

The search

I got out for some air and went to downtown San Diego to take some photos. That usually gets my mind off things and is relaxing. Arriving back later in the day, I got back into it and decided to search for drush cache clear segmentation fault theme. Bingo! (And 50 browser tabs later.) I don't know why I didn't search for this earlier in the day; I had just been searching for the pure Apache log error, which knows nothing of drush.

Sure enough it's an error related to Node modules (from the node_modules folder) having a .info file. Drush sees that and thinks it's supposed to be part of Drupal. The problem is, in a Drush world, these files are malformed. Thus the errors. Right about now, I was wishing there was some kind of .drushignore file along the same lines as .gitignore.

With this new search, here are the relevant posts I found:

In turn, these led me to the main issue, Themes should not crash when .info file appears inside node_modules

It turns out there is a proposed patch for core to prevent this error. I somehow don't see this getting in anytime soon but there are some workarounds on the Node / Grunt end of things.

Custom script

Here is the fix that I arrived at based on all the suggestions and comments in this last issue. First, we need to write a Node NPM cleanup Bash script. The script will find any .info files and rename them to .inf0 (with a zero). This will not have any negative effects, as you don't commit the node_modules folder to your repo and the info files are not actually needed for Grunt to run properly. So we'll call our script npm_post.sh:

#!/bin/sh
# npm_post.sh

# This script finds any .info files in the node_modules directory and renames them so they don't
# conflict with drush. package.json runs this on completion of npm install.
# These files, if any, are not actually needed to run grunt and compile LibSass.
# See this issue for more info: https://www.drupal.org/node/2329453

find -L ./node_modules -type f -name "*.info" -print0 | while IFS= read -r -d '' FNAME; do
    mv -- "$FNAME" "${FNAME%.info}.inf0"
done

Once you have this in the same folder as your package.json file (in my case the root of my theme), you'll need to call it with a postinstall method from your package.json file.

  "scripts": {
    "postinstall": "sh npm_post.sh"
  },

One caveat here is that you may run into an error that the script won't run. To solve this you can either run sudo npm install --unsafe-perm or alternatively create an .npmrc file with the code:

unsafe-perm = true

and then run sudo npm install as usual.

Conclusion

Running into errors like this is definitely not fun, but I learned a lot in the process. I am not sure if this is the best fix in the world, but it seems to work fine for my use case. It's also a reminder not to get tunnel vision when trying to fix a development problem.

Tags 
  • Drupal
  • Grunt
  • Node NPM
  • Drupal Planet
Categories: Elsewhere

Dirk Eddelbuettel: Running UBSAN tests via clang with Rocker

Planet Debian - Sun, 18/01/2015 - 18:37

Every now and then we get reports from CRAN about our packages failing a test there. A challenging one concerns UBSAN, or Undefined Behaviour Sanitizer. For background on UBSAN, see this RedHat blog post for gcc and this one from LLVM about clang.

I had written briefly about this before in a blog post introducing the sanitizers package for tests, as well as on the corresponding package page for sanitizers, which clearly predates our follow-up Rocker.org repo / project, described in this initial announcement and again when we became the official R container for Docker.

Rocker had support for SAN testing, but UBSAN was not working yet. So following a recent CRAN report against our RcppAnnoy package, I was unable to replicate the error and asked for help on r-devel in this thread.

Martyn Plummer and Jan van der Laan kindly sent their configurations in the same thread and off-list; Jeff Horner did so too following an initial tweet offering help. None of these worked for me, but further trials eventually led me to the (already mentioned above) RedHat blog post with its mention of -fno-sanitize-recover to actually have an error abort a test. This, coupled with the settings used by Martyn, was what worked for me: clang-3.5 -fsanitize=undefined -fno-sanitize=float-divide-by-zero,vptr,function -fno-sanitize-recover.

This is now part of the updated Dockerfile of the R-devel-SAN-Clang repo behind the r-devel-ubsan-clang container. It contains these settings, as well as a new support script check.r for littler, which enables testing right out of the box.

Here is a complete example:

docker                         # run Docker (any recent version, I use 1.2.0)
  run                          # launch a container
  --rm                         # remove Docker temporary objects when done
  -ti                          # use a terminal and interactive mode
  -v $(pwd):/mnt               # mount the current dir. as /mnt in the container
  rocker/r-devel-ubsan-clang   # using the rocker/r-devel-ubsan-clang container
  check.r                      # launch the check.r command from littler (in container)
  --setwd /mnt                 # with a setwd() to the /mnt directory
  --install-deps               # installing all package dependencies before the test
  RcppAnnoy_0.0.5.tar.gz       # and test this tarball

I know. It's a mouthful. But it really is merely the standard practice of running Docker to launch a single command. And while I frequently make this the /bin/bash command (hence the -ti options I always use) to work and explore interactively, here we do one better thanks to the (pretty useful so far) check.r script I wrote over the last two days.

check.r does about the same as R CMD check. If you look inside check you will see a call to a (non-exported) function from the (R base-internal) tools package. We call the same function here. But to make things more interesting we also first install the package we test, to really ensure we have all build-dependencies from CRAN met. (And we plan to extend check.r to support additional apt-get calls in case other libraries etc. are needed.) We use the dependencies=TRUE option to have R smartly install Suggests: as well, but only one level deep (see help(install.packages) for details). With that prerequisite out of the way, the test can proceed as if we had done R CMD check (and an additional R CMD INSTALL as well). The result for this (known-bad) package:

edd@max:~/git$ docker run --rm -ti -v $(pwd):/mnt rocker/r-devel-ubsan-clang check.r --setwd /mnt --install-deps RcppAnnoy_0.0.5.tar.gz
also installing the dependencies ‘Rcpp’, ‘BH’, ‘RUnit’

trying URL 'http://cran.rstudio.com/src/contrib/Rcpp_0.11.3.tar.gz'
Content type 'application/x-gzip' length 2169583 bytes (2.1 MB)
opened URL
==================================================
downloaded 2.1 MB

trying URL 'http://cran.rstudio.com/src/contrib/BH_1.55.0-3.tar.gz'
Content type 'application/x-gzip' length 7860141 bytes (7.5 MB)
opened URL
==================================================
downloaded 7.5 MB

trying URL 'http://cran.rstudio.com/src/contrib/RUnit_0.4.28.tar.gz'
Content type 'application/x-gzip' length 322486 bytes (314 KB)
opened URL
==================================================
downloaded 314 KB

trying URL 'http://cran.rstudio.com/src/contrib/RcppAnnoy_0.0.4.tar.gz'
Content type 'application/x-gzip' length 25777 bytes (25 KB)
opened URL
==================================================
downloaded 25 KB

* installing *source* package ‘Rcpp’ ...
** package ‘Rcpp’ successfully unpacked and MD5 sums checked
** libs
clang++-3.5 -fsanitize=undefined -fno-sanitize=float-divide-by-zero,vptr,function -fno-sanitize-recover -I/usr/local/lib/R/include -DNDEBUG -I../inst/include/ -I/usr/local/include -fpic -pipe -Wall -pedantic -g -c Date.cpp -o Date.o
[...]
* checking examples ... OK
* checking for unstated dependencies in ‘tests’ ... OK
* checking tests ...
  Running ‘runUnitTests.R’ ERROR
Running the tests in ‘tests/runUnitTests.R’ failed.
Last 13 lines of output:
  + if (getErrors(tests)$nFail > 0) {
  +   stop("TEST FAILED!")
  + }
  + if (getErrors(tests)$nErr > 0) {
  +   stop("TEST HAD ERRORS!")
  + }
  + if (getErrors(tests)$nTestFunc < 1) {
  +   stop("NO TEST FUNCTIONS RUN!")
  + }
  + }
  Executing test function test01getNNsByVector ...
  ../inst/include/annoylib.h:532:40: runtime error: index 3 out of bounds for type 'int const[2]'
* checking PDF version of manual ... OK
* DONE
Status: 1 ERROR, 2 WARNINGs, 1 NOTE
See ‘/tmp/RcppAnnoy/..Rcheck/00check.log’ for details.
root@a7687c014e55:/tmp/RcppAnnoy#

The log shows that, thanks to check.r, we first download and then install the required packages Rcpp, BH, RUnit and RcppAnnoy itself (in the CRAN release). Rcpp is installed first; we then cut out the middle until we get to ... the failure we set out to confirm.

Now having a tool to confirm the error, we can work on improved code.

One such fix currently under inspection in a non-release version 0.0.5.1 then passes with the exact same invocation (but pointing at RcppAnnoy_0.0.5.1.tar.gz):

edd@max:~/git$ docker run --rm -ti -v $(pwd):/mnt rocker/r-devel-ubsan-clang check.r --setwd /mnt --install-deps RcppAnnoy_0.0.5.1.tar.gz
also installing the dependencies ‘Rcpp’, ‘BH’, ‘RUnit’
[...]
* checking examples ... OK
* checking for unstated dependencies in ‘tests’ ... OK
* checking tests ...
  Running ‘runUnitTests.R’ OK
* checking PDF version of manual ... OK
* DONE
Status: 1 WARNING
See ‘/mnt/RcppAnnoy.Rcheck/00check.log’ for details.
edd@max:~/git$

This proceeds the same way from the same pristine, clean container for testing. It first installs the four required packages, and then proceeds to test the new and improved tarball, which passes the test that failed above with no issues. Good.

So we now have an "appliance" container anybody can download for free from the Docker hub, and deploy as we did here in order to have a fully automated, one-command setup for testing for UBSAN errors.

UBSAN is a very powerful tool. We are only beginning to deploy it. There are many more useful configuration settings. I would love to hear from anyone who would like to work on building this out via the R-devel-SAN-Clang GitHub repo. Improvements to the littler scripts are similarly welcome (and I plan on releasing an updated littler package "soon").

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: Elsewhere

EvolvisForge blog: Debian/m68k hacking weekend cleanup

Planet Debian - Sun, 18/01/2015 - 17:16

OK, time to clean up ↳ tarent so people can work again tomorrow.

Not much to clean though (the participants were nice and cleaned up after themselves ☺), so it’s mostly putting stuff back to where it belongs. Oh, and drinking more of the cool Belgian beer Geert (Linux upstream) brought ☻

We were productive, reporting and fixing kernel bugs, fixing hardware, swapping and partitioning discs, upgrading software, getting buildds (mostly Amiga) back to work, trying X11 (kdrive) on a bare metal Atari Falcon (and finding a window manager that works with it), etc. – I hope someone else writes a report; for now we have a photo and a screenshot (made with trusty xwd). Watch the debian-68k mailing list archives for things to come.

I think that, issues with electric cars aside, everyone liked the food places too ;-)

Categories: Elsewhere

Andreas Metzler: Another new toy

Planet Debian - Sun, 18/01/2015 - 16:35

Given that snow is still a bit sparse for snowboarding and the weather could be improved on, I have made myself a late Christmas present:

It is a rather sporty rodel (Torggler TS 120 Tourenrodel Spezial 2014/15, 9 kg weight, with fast (non-stainless) "racing rails" and a 22° angle of the runners) but not a competition model. I wish I had bought this years ago. It is a lot more comfortable than a classic sled ("Davoser Schlitten"), since one is sitting in instead of on top of the sled, somewhat like in a hammock. Being able to steer without putting a foot into the snow has the nice side effect that the snow stays on the ground instead of ending up in my face. Obviously it is also faster, which is a huge improvement even for recreational riding, since it makes the difference between riding the sledge or pulling it on flattish stretches. Strongly recommended.

FWIW I ordered this via rodelfuehrer.de (they started with a guidebook of luge tracks, which translates to "Rodelführer"), where I would happily order again.

Categories: Elsewhere

Chris Lamb: Adjusting a backing track with SoX

Planet Debian - Sun, 18/01/2015 - 13:28

Earlier today I came across some classical sheet music that included a "playalong" CD, just like a regular recording except it omits the solo cello part. After a quick listen it became clear there were two problems:

  • The recording was made at A=442, rather than the more standard A=440.
  • The tempi of the movements were not to my taste, either too fast or too slow.

SoX, the "Swiss Army knife of sound processing programs", can easily adjust the latter, but to remedy the former it must be provided with a dimensionless "cent" unit—ie. 1/100th of a semitone—rather than the 442Hz and 440Hz reference frequencies.

First, we calculate the cent difference between the two reference pitches: 1200 × log2(442/440) ≈ 7.85 cents.

Next, we rip the material from the CD:

$ sudo apt-get install ripit flac
[..]
$ ripit --coder 2 --eject --nointeraction
[..]

And finally we adjust the tempo and pitch:

$ apt-get install sox libsox-fmt-mp3
[..]
$ sox 01.flac 01.mp3 pitch -7.85 tempo 1.00   # (Tuning notes)
$ sox 02.flac 02.mp3 pitch -7.85 tempo 0.95   # Too fast!
$ sox 03.flac 03.mp3 pitch -7.85 tempo 1.01   # Close..
$ sox 04.flac 04.mp3 pitch -7.85 tempo 1.03   # Too slow!

(I'm converting to MP3 at the same time, as it'll be more convenient on my phone.)

Categories: Elsewhere

Alexander Mikhailian: Data-mining Drupal users in a screenful of code

Planet Drupal - Sun, 18/01/2015 - 10:53
Objective

Select like-minded users from a local community website.

Pre-requisites
  1. A Drupal website with the votingapi module enabled and at least a few dozen votes by registered users.
  2. A working installation of the R language.
Extract data

For each user, select all other users that voted on the same nodes and comments:

SELECT v1.uid uid1, v2.uid uid2, u.name name2, v2.entity_id entity_id, v1.value value1, v2.value value2
FROM votingapi_vote v1
JOIN (votingapi_vote v2, users u)
  ON (v1.uid != v2.uid AND v1.entity_id = v2.entity_id AND v1.entity_type = v2.entity_type AND v2.uid = u.uid)
WHERE v1.uid

This produces a table

Categories: Elsewhere

Ian Campbell: Using Grub 2 as a bootloader for Xen PV guests on Debian Jessie

Planet Debian - Sun, 18/01/2015 - 10:23

I recently wrote a blog post on using grub 2 as a Xen PV bootloader for work. See Using Grub 2 as a bootloader for Xen PV guests over on https://blog.xenproject.org.

Rather than repeat the whole thing here I'll just briefly cover the stuff which is of interest for Debian users (if you want all full background and the stuff on building grub from source etc then see the original post).

TL;DR: With Jessie, install grub-xen-host in your domain 0 and grub-xen in your PV guests. Then, in your guest configuration, depending on whether you want a 32- or 64-bit PV guest, write either:

kernel = "/usr/lib/grub-xen/grub-i386-xen.bin"

or

kernel = "/usr/lib/grub-xen/grub-x86_64-xen.bin"

(instead of bootloader = ... or any other kernel = ...; also omit ramdisk = ... and any command-line related settings such as root = ..., extra = ..., or cmdline = ...), and your guests will boot using Grub 2, much like on native.

In slightly more detail:

The forthcoming Debian 8.0 (Jessie) release will contain support for both host and guest pvgrub2. This was added in version 2.02~beta2-17 of the package (bits were present before then, but -17 ties it all together).

The package grub-xen-host contains grub binaries configured for the host, these will attempt to chainload an in-guest grub image (following the Xen x86 PV Bootloader Protocol) and fall back to searching for a grub.cfg in the guest filesystems. grub-xen-host is Recommended by the Xen meta-packages in Debian or can be installed by hand.

The package grub-xen-bin contains the grub binaries for both the i386-xen and x86_64-xen platforms, while the grub-xen package integrates this into the running system by providing the actual pvgrub2 image (i.e. running grub-install at the appropriate times to create an image tailored to the system) and integration with the kernel packages (i.e. running update-grub at the right times), so it is the grub-xen which should be installed in Debian guests.

At this time the grub-xen package is not installed in a guest automatically so it will need to be done manually (something which perhaps could be addressed for Stretch).

Categories: Elsewhere

Pages

Subscribe to the jfhovinne aggregator