Feed aggregator

Triquanta Web Solutions: SEO and CDN

Planet Drupal - 4 hours 50 min ago

So you decided to start using a CDN provider for your website. A good idea! But many CDN providers give you a custom URL that you are supposed to point a CNAME record at once everything is set up properly.

Take for instance Fastly and CloudFront, two big CDN providers.

When I add this website to Fastly, it gives me the URL: www.triquanta.nl.global.prod.fastly.net
For CloudFront it will be something like: d67something714.cloudfront.net

Once the CNAME is in place, people will most likely never see these URLs. But it can happen that these domains get indexed by search engines.

And there you have it: duplicate content.
This means that your CDN provider is competing with your actual main domain. You don't want this, because it is bad for your Search Engine Optimization (SEO).

To prevent this, use the canonical meta tag on all of your content pages (see https://support.google.com/webmasters/answer/139066?hl=en&rd=1 for more info).
In Drupal this can be done using the Metatag module (https://www.drupal.org/project/metatag), which can add the canonical tag and a lot of other desired meta tags (see https://groups.drupal.org/node/229413 for the full list).

Now your content is okay, but what about your files (images, PDF, Word, etc.)?
Since 2011 Google (and the rest followed Google) also supports the canonical link when it is supplied in the response headers. The next step is therefore to add the header to the files, which can be done on your own server.
Apache .htaccess example, with mod_rewrite and mod_headers enabled:

  <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf|webp|html)(\.gz)?(\?.*)?$">
    <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteCond %{HTTPS} !=on
      RewriteRule .* - [E=CANONICAL:http://www.triquanta.nl%{REQUEST_URI},NE]
      RewriteCond %{HTTPS} =on
      RewriteRule .* - [E=CANONICAL:https://www.triquanta.nl%{REQUEST_URI},NE]
    </IfModule>
    <IfModule mod_headers.c>
      Header set Link "<%{CANONICAL}e>; rel=\"canonical\""
    </IfModule>
  </FilesMatch>

Nginx example:

  location ~ \.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf|webp|html)(\.gz)?(\?.*)?$ {
    add_header Link "<$scheme://www.triquanta.nl$request_uri>; rel=\"canonical\"";
  }

When a file is accessed through the CDN URL, it will now be served with the proper canonical header, and you will not have any duplicate content issues.
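To verify that the header is actually being served, you can request a file and inspect the response headers. Here is a minimal sketch in PHP; the file path is a made-up example:

  <?php
  // Fetch only the response headers of a file served via the CDN URL and
  // print the Link header, which should point back at the main domain.
  $url = 'http://www.triquanta.nl.global.prod.fastly.net/sites/default/files/logo.png';
  foreach (get_headers($url) ?: array() as $header) {
    if (stripos($header, 'Link:') === 0) {
      // Expected, roughly:
      // Link: <http://www.triquanta.nl/sites/default/files/logo.png>; rel="canonical"
      echo $header, "\n";
    }
  }

The same check can be done on the command line with curl -I.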

Categories: Elsewhere

Chen Hui Jing: Drupal 101: Creating an iTunes podcast feed

Planet Drupal - 14 hours 2 min ago

Podcast listenership has been steadily increasing in recent years, and some are even predicting that we’re on the verge of a podcasting explosion. With that being said, it’s pretty likely you’ll get tasked with creating an iTunes podcast feed. Luckily, it’s quite simple to create one on your Drupal site with Views.

Required modules

Create/Modify content type for feed
  1. Install and enable the required modules:
     drush en views views_ui views_rss views_rss_core views_rss_itunes libraries getid3 -y
    • Create a new folder in your libraries folder like so:...
Categories: Elsewhere

DrupalOnWindows: Drupal: Fields or Properties (or something else)

Planet Drupal - Sun, 03/05/2015 - 19:00

Making Drupal scale is hard. It is even harder if your application is big and complex. One of the main problems is that usually not enough attention is paid to data storage. But let me tell you that the storage model you pick is the backbone of your application, its heart, its soul.

No fancy UI is ever going to compensate for a slow, unmaintainable and crappily engineered piece of software.

More articles...
Categories: Elsewhere

orkjerns blogg: Drupal and IoT. Code examples, part 1

Planet Drupal - Sun, 03/05/2015 - 15:39

As promised, I am posting the code for all the examples in the article about Drupal and the Internet of Things. Since I figured this could also be a good excuse to exemplify different approaches to securing these communication channels, I decided to use a different strategy for each code example. So here is the disclaimer: these posts (and maybe especially this one) will not necessarily contain the best practices for establishing a communication channel from your "thing" to your Drupal site. But this is one example, and depending on the use case, who knows, it might be the easiest and most practical one for you.

So, the first example we will look at is how to turn on and off your Drupal site with a TV remote control. If you did not read the previous article, or if you did not see the example video, here it is:

Overview of technology and communication flow

This is basically what is happening:

  • I click the on/off button on my TV remote.
  • A Tessel microcontroller reads the IR signal
  • The IR signal is analyzed to see if it indeed is the "on/off" button
  • A request is sent to my Drupal site
  • The Drupal site has enabled a module that defines an endpoint for toggling the site maintenance mode on and off
  • The Drupal site is toggled either on or off (depending on the previous state).
See any potential problems? Good. Let's start at the beginning.

Receiving IR and communicating with Drupal

OK, so this is a Drupal blog, and not a microcontroller or JavaScript blog. I won't go through this part in detail here, but the fully commented source code is on GitHub. If you want to use it, you will need a Tessel board, though. If you have one, and want to give it a go, the easiest way to get started is probably to read through the tests. Let's just sum it up in a couple of bullet points, real quick:

  • All IR signals are collected by the Tessel. Fun fact: There will be indications of IR signals even when you are not pressing the remote.
  • IR signals from the same button are rarely completely identical, so some fuzzy matching is needed to identify a button press.
  • Figuring out the "signature" of your "off-button" might require some research.
  • Configure the code to pass along the config for your site, so that when we know we want to toggle maintenance mode (the correct button is pressed), we send a request to the Drupal site.
Receiving a request to toggle maintenance mode

Now to the obvious problem. If you exposed a URL that would turn the site on and off, what is to stop any random person from toggling your site status just for kicks? This is the part where I want to talk about different methods of authentication. Let us compare this to the actual administration form where you can toggle maintenance mode. What is to stop people from just using that? Access control. You have to log in and have the correct permission (administer site configuration) to be able to see that page. Now, logging in with a microcontroller is of course possible, but it is slightly more impractical than for a human. So let's explore our options, in 3 posts, this being the first. Since this is the first one, we will start with the least flexible option, but perhaps the most lo-fi one with the lowest barrier to entry: we are still going to use the permission system.

Re-using your browser login from the IR receiver

These paragraphs are included in case someone reading this needs background info about this part. If this seems very obvious, please skip ahead 2 paragraphs.

Web apps these days do not require log-ins on each page (that would be very impractical), but instead use a cookie to indicate that you are still trusted to be the same user as when you logged in. So, for example, when I am writing this, it is because I have a session cookie stored in my browser, and this indicates I am authorised to post nodes on this site. When I request a page, the cookie is passed along with the request. We can do the same passing of a cookie on a microcontroller.

Sending fully authenticated requests without a browser

To figure out how to still be authenticated as an admin user, you can use the browser dev tools of your choice. Open a browser where you are logged in as a user allowed to put the site into maintenance mode. Now open your dev tools (for example with Cmd-Alt-I in Chrome on a Mac). In the dev tools there will be a network tab. Keep this active while loading a page you want to get the session cookie from. You can now inspect one of the requests and see what headers your browser passed along to the server. One of these is the Cookie header. It will include something along these lines (it starts with SESS):

SESS51337Tr0lloll110l00l1=acbdef123abc1337H4XX

Since I am a fan of animated gifs, here is the same explanation illustrated:

This is the session cookie for your session as an authenticated user on your site. Since we now know this, we can request the path for the toggle functionality from our microcontroller, passing this cookie along as a header, and toggle the site as if we were accessing it through the browser.
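To make that concrete, here is a sketch of such a request in PHP with cURL. On the actual device this is JavaScript running on the Tessel, but the HTTP mechanics are identical; the hostname is the placeholder used in the module description below, and the cookie value is the dummy one above:

  <?php
  // Call the toggle endpoint while impersonating the logged-in browser
  // session by passing its session cookie along as a request header.
  $ch = curl_init('http://example.com/maintenance_mode_ir');
  curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
  curl_setopt($ch, CURLOPT_HTTPHEADER, array(
    'Cookie: SESS51337Tr0lloll110l00l1=acbdef123abc1337H4XX',
  ));
  $response = curl_exec($ch);
  curl_close($ch);

  // The module answers with a JSON response describing the new state.
  var_dump(json_decode($response, TRUE));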

The maintenance_mode_ir module

As promised, I have also posted the Drupal part of the code. It is a module for Drupal 8, and it can be found on GitHub.

So what is happening in that module? It is a very basic module, actually mostly generated by the super awesome Drupal Console. To again sum it up in bullet points (a sketch of the controller follows the list):

  • It defines a route in maintenance_mode_ir.routing.yml (example.com/maintenance_mode_ir)
  • The route requires the permission "administer site configuration"
  • The route controller checks the StateInterface for the current state of maintenance mode, toggles it and returns a JSON response about the new state
  • The route (and so the toggling) will never be accessible for anonymous users (unless you give the anonymous users the permission "administer site configuration", in which case you probably have other issues anyway)
  • There are also tests to make sure this works as expected
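To make the controller part concrete, here is a minimal sketch of what such a controller can look like. The class and method names here are illustrative, not copied from the module; the code on GitHub is the authoritative version:

  <?php

  namespace Drupal\maintenance_mode_ir\Controller;

  use Drupal\Core\Controller\ControllerBase;
  use Symfony\Component\HttpFoundation\JsonResponse;

  /**
   * Toggles maintenance mode through the state system.
   */
  class MaintenanceModeIrController extends ControllerBase {

    /**
     * Flips the maintenance mode state and reports the new value.
     */
    public function toggle() {
      $state = \Drupal::state();
      $new_value = !$state->get('system.maintenance_mode');
      $state->set('system.maintenance_mode', $new_value);
      return new JsonResponse(array('maintenance_mode' => $new_value));
    }

  }

The permission check itself lives in the routing YAML as a _permission requirement, so the controller never even runs for users without "administer site configuration".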
When do you want to use this, and what are the considerations and compromises?

Now, your first thought might be: would it not be even simpler to just expose a route where requests would turn the site on and off? We wouldn't need to bother with finding the session cookie, passing it along and so on. A legitimate question, and of course true in the sense that it would be simpler. But this is really the core of any communication taking place between your "things" and Drupal (or any other backend): you want to make sure it is secured in some way. Of course, being able to toggle maintenance mode is probably not something you would want to expose anyway, but you should use some sort of authentication even if all you were doing was monitoring temperature. Securing it through the access control in Drupal gives you a battle-tested foundation for doing this.

Limitations and considerations

This method has some limitations. Say, for example, you are storing your sessions in a typical cache storage (like Redis). Your session will expire at some point. Or, if you are running Redis without persistence, the session will just be dropped as soon as Redis restarts. Maybe you are limited by your PHP session lifetime settings. Or maybe you just accidentally log out of the session where you "found" the cookie. Many things can make this authenticated request stop working. But if all you are doing is hooking up a remote control reader to make a video to put on your blog, this will work.

Another thing to consider is the connection of your "thing". Is your site served over a non-secure connection while you are sending requests with your "thing" connected through a public wifi? You might want to reconsider your tactics. Also, keep in mind that if your session is compromised, it is not only the toggling of maintenance mode that is compromised, but the actual administrator user. This would not necessarily be the case with another form of authentication.

Now, the next section presented to you will actually be the comments section, where you are encouraged to comment on inconsistencies, forgotten security concerns or praise about well-chosen gif animations. Let me just first remind you of the disclaimer in the first paragraph, and the fact that this is a series of posts exploring different forms of device authentication. I would say the main takeaway from this first article is that exposing different aspects of your Drupal site to "the physical world", be it remote-controlled maintenance mode or temperature logging, requires you to think about how you want to protect these exposed endpoints. So please do that, enjoy this complementary animated gif (in the category "maintenance"), and then feel free to comment.

Categories: Elsewhere

OhTheHugeManatee: How to Build a New Source for Drupal Migrate 8

Planet Drupal - Sat, 02/05/2015 - 16:10

This week I wanted to accomplish a task in Drupal 8 that would be simple in Drupal 7: import several CSV files, each one related to the others by taxonomy terms. Most importantly, I wanted to do it with the Migrate module.

Migrate in Drupal 7 is a fantastic piece of code. It is not designed to be used from the GUI; rather, it provides a framework of “source”, “destination”, and “migration” classes so that even the most convoluted migration is 90% written for you. To create a migration in Drupal 7, you create a custom module, declare your migrations in a hook, and then extend the built-in “migration” class. You instantiate one of the given classes for the source material (is it a CSV? JSON? A direct connection to a custom DB?), then one of the classes for the destination (is it a content type? A taxonomy term?). Then you add one simple line of code mapping each field from source to destination. If you know what you’re doing, the task I had in mind shouldn’t take more than 15 minutes per source.

It’s not quite so easy in Drupal 8. First of all, with Migrate in core, we had to greatly simplify the goals for the module. The version of Migrate that is really functional and stable is specifically, and only, the basic framework. There is a separate migrate_drupal module that provides everything you need for migrating from Drupal 6 or 7. This has been a laser-tight focus on just the essentials, which means there’s no UI, very little drush support, and definitely no nice extras like the ability to read non-Drupal sources.

My task this week became to write the first CSV source for Drupal 8 Migrate.

Drupal 8 Migrate Overview

Drupal 8 migrations, when you’re using classes that already exist, are actually even easier than in Drupal 7. All you do is write a single YAML file for each kind of data you’re transferring, and put it in a custom module’s config/install directory. Your YAML file gives your migration a name and a group, tells us what the source of the data is, maps source fields to destination fields, and tells us what the destination objects are. Here’s an example migration definition file from core. See if you can understand what’s being migrated here.

  id: d6_system_site
  label: Drupal 6 site configuration
  migration_groups:
    - Drupal 6
  source:
    plugin: variable
    variables:
      - site_name
      - site_mail
      - site_slogan
      - site_frontpage
      - site_403
      - site_404
      - drupal_weight_select_max
      - admin_compact_mode
  process:
    name: site_name
    mail: site_mail
    slogan: site_slogan
    'page/front': site_frontpage
    'page/403': site_403
    'page/404': site_404
    weight_select_max: drupal_weight_select_max
    admin_compact_mode: admin_compact_mode
  destination:
    plugin: config
    config_name: system.site

You probably figured it out: this migration takes the system settings (variables) from a Drupal 6 site and puts them into the Drupal 8 configuration. Not terribly hard, right? You can even do data transformations from the source field value to the destination.

Unfortunately, the only sources we have so far are for Drupal 6 and 7 sites, pulling directly from the database. If you want to use Migrate 8 the way we used Migrate 7, as an easy way to pull in data from arbitrary sources, you’ll have to contribute.

Writing a source plugin in Migrate_plus

Enter the Migrate Plus module. This is the place in contrib where we can fill out all the rest of the behavior we want from Migrate that’s not necessarily a core requirement. This is where we’ll be writing our source plugin.

To add a source plugin, just create a .php file in migrate_plus/src/Plugin/migrate/source. Drupal will discover the new plugin automatically the next time you rebuild the cache. The filename has to be the same as the name of the class, so choose carefully! My file is called CSV.php. Here’s the top of the file you need for a basic source plugin:

  <?php

  /**
   * @file
   * Contains \Drupal\migrate_plus\Plugin\migrate\source\CSV.
   */

  namespace Drupal\migrate_plus\Plugin\migrate\source;

  use Drupal\migrate\Plugin\migrate\source\SourcePluginBase;

  /**
   * Source for CSV files.
   *
   * @MigrateSource(
   *   id = "csv"
   * )
   */
  class CSV extends SourcePluginBase {

I’m calling this out separately because, for newbies to Drupal 8, this is the hard part. This is all the information that Drupal needs to be able to find your class when it needs it. The @file comment is important: it and the namespace below it have to match the actual location of the .php file.

Then you declare any other classes that you need, with their full namespace. To start with all you need is SourcePluginBase.

Finally you have to annotate the class with @MigrateSource(id = "csv"). This is how the Migrate module knows that this is a MigrateSource, and what the name of your plugin is. Don’t miss it!

Inside the class, you must have the following methods. I’ll explain a bit more about each afterwards.

  • initializeIterator() : Should return a valid Iterator object.
  • getIds() : Should return an array that defines the unique identifiers of your data source.
  • __toString() : Should return a simple string representation of the source.
  • fields() : Should return a definitive list of fields in the source.
  • __construct() : You don’t NEED this method, but you probably will end up using it.
initializeIterator()

An Iterator is a complicated-sounding word for an object that contains everything you need to read from a data source and go through it one line at a time. Maybe you’re used to calling fopen('path/to/file', 'r') to open a file, and then writing code for every possible operation with that file. An iterator takes care of all that for you. In the case of most file-based sources, you can just use the SplFileObject class that comes with PHP.
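For instance, once a CSV file is wrapped in an SplFileObject, the whole read loop collapses into a foreach. A tiny sketch, with a placeholder path:

  <?php
  // Wrap a CSV file in an iterator and read it row by row.
  $file = new SplFileObject('/path/to/data.csv');
  $file->setFlags(SplFileObject::READ_CSV);

  foreach ($file as $row) {
    // $row is an array of the column values on one line.
    print_r($row);
  }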

Any arguments that were passed in the source: section of the YAML file will be available under $this->configuration. So if my YAML looks like this:

  source:
    plugin: csv
    path: '/vagrant/import/ACS_13_1YR_B28002_with_ann.csv'

My initializeIterator() method can look like this:

  public function initializeIterator() {
    // File handler using our custom header-rows-respecting extension of SplFileObject.
    $file = new SplFileObject($this->configuration['path']);
    $file->setFlags(SplFileObject::READ_CSV);
    return $file;
  }

Not too complicated, right? This method is called right at the beginning of the migration, the first time Migrate wants to get any information out of your source. The iterator will be stored in $this->iterator.

getIds()

This method should return an array of all the unique keys for your source. A unique key is some value that’s unique for that row in the source material. Sometimes there’s more than one, which is why this is an array. Each key field name is also an array, with a child “type” declaration. This is hard to explain in English, but easy to show in code:

  public function getIDs() {
    $ids = array();
    foreach ($this->configuration['keys'] as $key) {
      $ids[$key]['type'] = 'string';
    }
    return $ids;
  }

We rely on the YAML author to tell us the key fields in the CSV, and we just reformat them accordingly. Type can be ‘string’, ‘float’, ‘integer’, whatever makes sense.

__toString()

This method has to return a simple string explanation of the source query. In the case of a file-based source, it makes sense to print the path to the file, like this:

  public function __toString() {
    return (string) $this->query;
  }

fields()

This method returns an array of available fields on the source. The keys should be the machine names, the values are descriptive, human-readable names. In the case of the CSV source, we look for headers at the top of the CSV file and build the array that way.
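The body of the method ends up tiny. A sketch of what it can look like, assuming the csvColumns array of machine-name/label pairs that the constructor below builds from the header row:

  public function fields() {
    $fields = array();
    // csvColumns holds array($machine_name, $label) pairs derived from the
    // header row (or from explicit configuration).
    foreach ($this->csvColumns as $column) {
      $fields[$column[0]] = $column[1];
    }
    return $fields;
  }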

__construct()

The constructor method is called whenever a class is instantiated. You don’t technically HAVE to have a constructor on your source class, but you’ll find it helpful. For the CSV source, I used the constructor to make sure we have all the configuration that we need. Then I try to set sane values for the fields, based on any header in the file.

  public function __construct(array $configuration, $plugin_id, $plugin_definition, MigrationInterface $migration) {
    parent::__construct($configuration, $plugin_id, $plugin_definition, $migration);

    // Path is required.
    if (empty($this->configuration['path'])) {
      throw new MigrateException('You must declare the "path" to the source CSV file in your source settings.');
    }

    // Key field(s) are required.
    if (empty($this->configuration['keys'])) {
      throw new MigrateException('You must declare the "keys" of the source CSV file in your source settings.');
    }

    // Set header rows from the migrate configuration.
    $this->headerRows = !empty($this->configuration['header_rows']) ? $this->configuration['header_rows'] : 0;

    // Figure out what CSV columns we have.
    // One can either pass in an explicit list of column names to use, or if
    // we have a header row we can use the names from that.
    if ($this->headerRows && empty($this->configuration['csvColumns'])) {
      $this->csvColumns = array();
      // Skip all but the last header row.
      for ($i = 0; $i < $this->headerRows - 1; $i++) {
        $this->getNextLine();
      }
      $row = $this->getNextLine();
      foreach ($row as $key => $header) {
        $header = trim($header);
        $this->getIterator()->csvColumns[] = array($header, $header);
      }
    }
    elseif ($this->configuration['csvColumns']) {
      $this->getIterator()->csvColumns = $this->configuration['csvColumns'];
    }
  }

Profit!

That’s it! Four simple methods, and you have a new source type for Drupal 8 Migrate. Of course, you will probably find complications that require a bit more work. For CSV, supporting a header row turned out to be a real pain: any time Migrate tried to “rewind” the source back to the first line, it would try to migrate the header row! I ended up extending SplFileObject just to handle this issue.

Here’s the YAML file I used to test this, importing a list of states from some US Census data.

  id: states
  label: States
  migration_groups:
    - US Census
  source:
    plugin: csv
    path: '/vagrant/import/ACS_13_1YR_B28002_with_ann.csv'
    header_rows: 2
    fields:
      - Id2
      - Geography
    keys:
      - Id2
  process:
    name: Geography
    vid:
      - plugin: default_value
        default_value: state
  destination:
    plugin: entity:taxonomy_term

You can see my CSV source and Iterator in the issue queue for migrate_plus. Good luck, and happy migrating!

Thanks

I learned a lot this week. Big thanks to the Migrate Documentation, but especially to chx, mikeryan, and the other good folks in #drupal-migrate who helped set me straight.

Categories: Elsewhere

Dimitri Fontaine: Quicklisp and debian

Planet Debian - Sat, 02/05/2015 - 16:06

Common Lisp users are very happy to use Quicklisp when it comes to downloading and maintaining dependencies between their own code and the libraries it uses.

Sometimes it is pointed out to me that, compared to other programming languages, Common Lisp is lacking a lot in the batteries-included area. After having had to package about 50 Common Lisp libraries for debian, I can tell you that I politely disagree with that.

And this post is about the tool and process I use to maintain all those libraries.

Quicklisp is good at ensuring a proper distribution of all the libs it supports, and it actually tests that they all compile and load together, so I've been using it as my upstream for debian packaging purposes. Using Quicklisp here makes my life much simpler, as I can grovel through its metadata and automate most of the maintenance of my cl related packages.

It's all automated in the ql-to-deb software which, unsurprisingly, has been written in Common Lisp itself. It's a kind of Quicklisp client that fetches Quicklisp's current list of releases with version numbers and compares it to the list of packages managed for debian, in order to then build new versions automatically.

The current workflow begins with using `ql-to-deb check` to list the work to be done today:

$ /vagrant/build/bin/ql-to-deb check
Fetching "http://beta.quicklisp.org/dist/quicklisp.txt"
Fetching "http://beta.quicklisp.org/dist/quicklisp/2015-04-07/releases.txt"
update: cl+ssl cl-csv cl-db3 drakma esrap graph hunchentoot local-time lparallel nibbles qmynd trivial-backtrace
upload: hunchentoot

After careful manual review of the automatic decisions, let's just `update` everything that `check` decided needs it:

$ /vagrant/build/bin/ql-to-deb update
Fetching "http://beta.quicklisp.org/dist/quicklisp.txt"
Fetching "http://beta.quicklisp.org/dist/quicklisp/2015-04-07/releases.txt"
Updating package cl-plus-ssl from 20140826 to 20150302.
     see logs in "//tmp/ql-to-deb/logs//cl-plus-ssl.log"
Fetching "http://beta.quicklisp.org/archive/cl+ssl/2015-03-02/cl+ssl-20150302-git.tgz"
Checksum test passed.
File: "/tmp/ql-to-deb/archives/cl+ssl-20150302-git.tgz" md5: 61d9d164d37ab5c91048827dfccd6835
Building package cl-plus-ssl
Updating package cl-csv from 20140826 to 20150302.
     see logs in "//tmp/ql-to-deb/logs//cl-csv.log"
Fetching "http://beta.quicklisp.org/archive/cl-csv/2015-03-02/cl-csv-20150302-git.tgz"
Checksum test passed.
File: "/tmp/ql-to-deb/archives/cl-csv-20150302-git.tgz" md5: 32f6484a899fdc5b690f01c244cd9f55
Building package cl-csv
Updating package cl-db3 from 20131111 to 20150302.
     see logs in "//tmp/ql-to-deb/logs//cl-db3.log"
Fetching "http://beta.quicklisp.org/archive/cl-db3/2015-03-02/cl-db3-20150302-git.tgz"
Checksum test passed.
File: "/tmp/ql-to-deb/archives/cl-db3-20150302-git.tgz" md5: 578896a3f60f474742f240b703f8c5f5
Building package cl-db3
Updating package cl-drakma from 1.3.11 to 1.3.13.
     see logs in "//tmp/ql-to-deb/logs//cl-drakma.log"
Fetching "http://beta.quicklisp.org/archive/drakma/2015-04-07/drakma-1.3.13.tgz"
Checksum test passed.
File: "/tmp/ql-to-deb/archives/drakma-1.3.13.tgz" md5: 3b548bce10728c7a058f19444c8477c3
Building package cl-drakma
Updating package cl-esrap from 20150113 to 20150302.
     see logs in "//tmp/ql-to-deb/logs//cl-esrap.log"
Fetching "http://beta.quicklisp.org/archive/esrap/2015-03-02/esrap-20150302-git.tgz"
Checksum test passed.
File: "/tmp/ql-to-deb/archives/esrap-20150302-git.tgz" md5: 8b198d26c27afcd1e9ce320820b0e569
Building package cl-esrap
Updating package cl-graph from 20141106 to 20150407.
     see logs in "//tmp/ql-to-deb/logs//cl-graph.log"
Fetching "http://beta.quicklisp.org/archive/graph/2015-04-07/graph-20150407-git.tgz"
Checksum test passed.
File: "/tmp/ql-to-deb/archives/graph-20150407-git.tgz" md5: 3894ef9262c0912378aa3b6e8861de79
Building package cl-graph
Updating package hunchentoot from 1.2.29 to 1.2.31.
     see logs in "//tmp/ql-to-deb/logs//hunchentoot.log"
Fetching "http://beta.quicklisp.org/archive/hunchentoot/2015-04-07/hunchentoot-1.2.31.tgz"
Checksum test passed.
File: "/tmp/ql-to-deb/archives/hunchentoot-1.2.31.tgz" md5: 973eccfef87e81f1922424cb19884d63
Building package hunchentoot
Updating package cl-local-time from 20150113 to 20150407.
     see logs in "//tmp/ql-to-deb/logs//cl-local-time.log"
Fetching "http://beta.quicklisp.org/archive/local-time/2015-04-07/local-time-20150407-git.tgz"
Checksum test passed.
File: "/tmp/ql-to-deb/archives/local-time-20150407-git.tgz" md5: 7be4a31d692f5862014426a53eb1e48e
Building package cl-local-time
Updating package cl-lparallel from 20141106 to 20150302.
     see logs in "//tmp/ql-to-deb/logs//cl-lparallel.log"
Fetching "http://beta.quicklisp.org/archive/lparallel/2015-03-02/lparallel-20150302-git.tgz"
Checksum test passed.
File: "/tmp/ql-to-deb/archives/lparallel-20150302-git.tgz" md5: dbda879d0e3abb02a09b326e14fa665d
Building package cl-lparallel
Updating package cl-nibbles from 20141106 to 20150407.
     see logs in "//tmp/ql-to-deb/logs//cl-nibbles.log"
Fetching "http://beta.quicklisp.org/archive/nibbles/2015-04-07/nibbles-20150407-git.tgz"
Checksum test passed.
File: "/tmp/ql-to-deb/archives/nibbles-20150407-git.tgz" md5: 2ffb26241a1b3f49d48d28e7a61b1ab1
Building package cl-nibbles
Updating package cl-qmynd from 20141217 to 20150302.
     see logs in "//tmp/ql-to-deb/logs//cl-qmynd.log"
Fetching "http://beta.quicklisp.org/archive/qmynd/2015-03-02/qmynd-20150302-git.tgz"
Checksum test passed.
File: "/tmp/ql-to-deb/archives/qmynd-20150302-git.tgz" md5: b1cc35f90b0daeb9ba507fd4e1518882
Building package cl-qmynd
Updating package cl-trivial-backtrace from 20120909 to 20150407.
     see logs in "//tmp/ql-to-deb/logs//cl-trivial-backtrace.log"
Fetching "http://beta.quicklisp.org/archive/trivial-backtrace/2015-04-07/trivial-backtrace-20150407-git.tgz"
Checksum test passed.
File: "/tmp/ql-to-deb/archives/trivial-backtrace-20150407-git.tgz" md5: 762b0acf757dc8a2a6812d2f0f2614d9
Building package cl-trivial-backtrace

Quite simple.

To be totally honest, I first had a problem with the parser generator library esrap, whose README documentation changed to a README.org file, and I had to tell my debian packaging about that. See commit 0ef669579cf7c07280eae7fe6f61f1bd664d337e to ql-to-deb for details.

What about trying to install those packages locally? That's usually a very good test. Sometimes some dependencies are missing at the dpkg command line, so a follow-up apt-get install -f is needed:

$ /vagrant/build/bin/ql-to-deb install
sudo dpkg -i /tmp/ql-to-deb/cl-plus-ssl_20150302-1_all.deb /tmp/ql-to-deb/cl-csv_20150302-1_all.deb /tmp/ql-to-deb/cl-csv-clsql_20150302-1_all.deb /tmp/ql-to-deb/cl-csv-data-table_20150302-1_all.deb /tmp/ql-to-deb/cl-db3_20150302-1_all.deb /tmp/ql-to-deb/cl-drakma_1.3.13-1_all.deb /tmp/ql-to-deb/cl-esrap_20150302-1_all.deb /tmp/ql-to-deb/cl-graph_20150407-1_all.deb /tmp/ql-to-deb/cl-hunchentoot_1.2.31-1_all.deb /tmp/ql-to-deb/cl-local-time_20150407-1_all.deb /tmp/ql-to-deb/cl-lparallel_20150302-1_all.deb /tmp/ql-to-deb/cl-nibbles_20150407-1_all.deb /tmp/ql-to-deb/cl-qmynd_20150302-1_all.deb /tmp/ql-to-deb/cl-trivial-backtrace_20150407-1_all.deb
(Reading database ... 79689 files and directories currently installed.)
Preparing to unpack .../cl-plus-ssl_20150302-1_all.deb ...
Unpacking cl-plus-ssl (20150302-1) over (20140826-1) ...
Selecting previously unselected package cl-csv.
Preparing to unpack .../cl-csv_20150302-1_all.deb ...
Unpacking cl-csv (20150302-1) ...
Selecting previously unselected package cl-csv-clsql.
Preparing to unpack .../cl-csv-clsql_20150302-1_all.deb ...
Unpacking cl-csv-clsql (20150302-1) ...
Selecting previously unselected package cl-csv-data-table.
Preparing to unpack .../cl-csv-data-table_20150302-1_all.deb ...
Unpacking cl-csv-data-table (20150302-1) ...
Selecting previously unselected package cl-db3.
Preparing to unpack .../cl-db3_20150302-1_all.deb ...
Unpacking cl-db3 (20150302-1) ...
Preparing to unpack .../cl-drakma_1.3.13-1_all.deb ...
Unpacking cl-drakma (1.3.13-1) over (1.3.11-1) ...
Preparing to unpack .../cl-esrap_20150302-1_all.deb ...
Unpacking cl-esrap (20150302-1) over (20150113-1) ...
Preparing to unpack .../cl-graph_20150407-1_all.deb ...
Unpacking cl-graph (20150407-1) over (20141106-1) ...
Preparing to unpack .../cl-hunchentoot_1.2.31-1_all.deb ...
Unpacking cl-hunchentoot (1.2.31-1) over (1.2.29-1) ...
Preparing to unpack .../cl-local-time_20150407-1_all.deb ...
Unpacking cl-local-time (20150407-1) over (20150113-1) ...
Preparing to unpack .../cl-lparallel_20150302-1_all.deb ...
Unpacking cl-lparallel (20150302-1) over (20141106-1) ...
Preparing to unpack .../cl-nibbles_20150407-1_all.deb ...
Unpacking cl-nibbles (20150407-1) over (20141106-1) ...
Preparing to unpack .../cl-qmynd_20150302-1_all.deb ...
Unpacking cl-qmynd (20150302-1) over (20141217-1) ...
Preparing to unpack .../cl-trivial-backtrace_20150407-1_all.deb ...
Unpacking cl-trivial-backtrace (20150407-1) over (20120909-2) ...
Setting up cl-plus-ssl (20150302-1) ...
dpkg: dependency problems prevent configuration of cl-csv:
 cl-csv depends on cl-interpol; however:
  Package cl-interpol is not installed.
dpkg: error processing package cl-csv (--install):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of cl-csv-clsql:
 cl-csv-clsql depends on cl-csv; however:
  Package cl-csv is not configured yet.
dpkg: error processing package cl-csv-clsql (--install):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of cl-csv-data-table:
 cl-csv-data-table depends on cl-csv; however:
  Package cl-csv is not configured yet.
dpkg: error processing package cl-csv-data-table (--install):
 dependency problems - leaving unconfigured
Setting up cl-db3 (20150302-1) ...
Setting up cl-drakma (1.3.13-1) ...
Setting up cl-esrap (20150302-1) ...
Setting up cl-graph (20150407-1) ...
Setting up cl-local-time (20150407-1) ...
Setting up cl-lparallel (20150302-1) ...
Setting up cl-nibbles (20150407-1) ...
Setting up cl-qmynd (20150302-1) ...
Setting up cl-trivial-backtrace (20150407-1) ...
Setting up cl-hunchentoot (1.2.31-1) ...
Errors were encountered while processing:
 cl-csv
 cl-csv-clsql
 cl-csv-data-table

Let's make sure that our sid users will be happy with the update here:

$ sudo apt-get install -f
Reading package lists... Done
Building dependency tree
Reading state information... Done
Correcting dependencies... Done
The following packages were automatically installed and are no longer required:
  g++-4.7 git git-man html2text libaugeas-ruby1.8 libbind9-80 libclass-isa-perl
  libcurl3-gnutls libdns88 libdrm-nouveau1a libegl1-mesa-drivers libffi5
  libgraphite3 libgssglue1 libisc84 libisccc80 libisccfg82 liblcms1 liblwres80
  libmpc2 libopenjpeg2 libopenvg1-mesa libpoppler19 librtmp0 libswitch-perl
  libtiff4 libwayland-egl1-mesa luatex openssh-blacklist openssh-blacklist-extra
  python-chardet python-debian python-magic python-pkg-resources python-six
  ttf-dejavu-core ttf-marvosym
Use 'apt-get autoremove' to remove them.
The following extra packages will be installed:
  cl-interpol
The following NEW packages will be installed:
  cl-interpol
0 upgraded, 1 newly installed, 0 to remove and 51 not upgraded.
3 not fully installed or removed.
Need to get 20.7 kB of archives.
After this operation, 135 kB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://ftp.fr.debian.org/debian/ sid/main cl-interpol all 0.2.1-2 [20.7 kB]
Fetched 20.7 kB in 0s (84.5 kB/s)
debconf: unable to initialize frontend: Dialog
debconf: (Dialog frontend will not work on a dumb terminal, an emacs shell buffer, or without a controlling terminal.)
debconf: falling back to frontend: Readline
Selecting previously unselected package cl-interpol.
(Reading database ... 79725 files and directories currently installed.)
Preparing to unpack .../cl-interpol_0.2.1-2_all.deb ...
Unpacking cl-interpol (0.2.1-2) ...
Setting up cl-interpol (0.2.1-2) ...
Setting up cl-csv (20150302-1) ...
Setting up cl-csv-clsql (20150302-1) ...
Setting up cl-csv-data-table (20150302-1) ...

All looks fine, so it is time to sign those packages. There's a trick here: you want to be sure you're using a GnuPG setup that allows you to enter your passphrase only once. See the ql-to-deb vm setup for details, and the usual documentation about all that if you're interested in the details.

$ /vagrant/build/bin/ql-to-deb sign
 signfile /tmp/ql-to-deb/cl-plus-ssl_20150302-1.dsc 60B1CB4E
 signfile /tmp/ql-to-deb/cl-plus-ssl_20150302-1_amd64.changes 60B1CB4E
Successfully signed dsc and changes files
 signfile /tmp/ql-to-deb/cl-csv_20150302-1.dsc 60B1CB4E
 signfile /tmp/ql-to-deb/cl-csv_20150302-1_amd64.changes 60B1CB4E
Successfully signed dsc and changes files
 signfile /tmp/ql-to-deb/cl-db3_20150302-1.dsc 60B1CB4E
 signfile /tmp/ql-to-deb/cl-db3_20150302-1_amd64.changes 60B1CB4E
Successfully signed dsc and changes files
 signfile /tmp/ql-to-deb/cl-drakma_1.3.13-1.dsc 60B1CB4E
 signfile /tmp/ql-to-deb/cl-drakma_1.3.13-1_amd64.changes 60B1CB4E
Successfully signed dsc and changes files
 signfile /tmp/ql-to-deb/cl-esrap_20150302-1.dsc 60B1CB4E
 signfile /tmp/ql-to-deb/cl-esrap_20150302-1_amd64.changes 60B1CB4E
Successfully signed dsc and changes files
 signfile /tmp/ql-to-deb/cl-graph_20150407-1.dsc 60B1CB4E
 signfile /tmp/ql-to-deb/cl-graph_20150407-1_amd64.changes 60B1CB4E
Successfully signed dsc and changes files
 signfile /tmp/ql-to-deb/hunchentoot_1.2.31-1.dsc 60B1CB4E
 signfile /tmp/ql-to-deb/hunchentoot_1.2.31-1_amd64.changes 60B1CB4E
Successfully signed dsc and changes files
 signfile /tmp/ql-to-deb/cl-local-time_20150407-1.dsc 60B1CB4E
 signfile /tmp/ql-to-deb/cl-local-time_20150407-1_amd64.changes 60B1CB4E
Successfully signed dsc and changes files
 signfile /tmp/ql-to-deb/cl-lparallel_20150302-1.dsc 60B1CB4E
 signfile /tmp/ql-to-deb/cl-lparallel_20150302-1_amd64.changes 60B1CB4E
Successfully signed dsc and changes files
 signfile /tmp/ql-to-deb/cl-nibbles_20150407-1.dsc 60B1CB4E
 signfile /tmp/ql-to-deb/cl-nibbles_20150407-1_amd64.changes 60B1CB4E
Successfully signed dsc and changes files
 signfile /tmp/ql-to-deb/cl-qmynd_20150302-1.dsc 60B1CB4E
 signfile /tmp/ql-to-deb/cl-qmynd_20150302-1_amd64.changes 60B1CB4E
Successfully signed dsc and changes files
 signfile /tmp/ql-to-deb/cl-trivial-backtrace_20150407-1.dsc 60B1CB4E
 signfile /tmp/ql-to-deb/cl-trivial-backtrace_20150407-1_amd64.changes 60B1CB4E
Successfully signed dsc and changes files

Ok, with everything tested and signed, it's time to upload our packages to the debian servers so that our dear debian users can use newer and better versions of their beloved Common Lisp libraries:

$ /vagrant/build/bin/ql-to-deb upload
Trying to upload package to ftp-master (ftp.upload.debian.org)
Checking signature on .changes
gpg: Signature made Sat 02 May 2015 05:06:48 PM MSK using RSA key ID 60B1CB4E
gpg: Good signature from "Dimitri Fontaine <dim@tapoueh.org>"
Good signature on /tmp/ql-to-deb/cl-plus-ssl_20150302-1_amd64.changes.
Checking signature on .dsc
gpg: Signature made Sat 02 May 2015 05:06:46 PM MSK using RSA key ID 60B1CB4E
gpg: Good signature from "Dimitri Fontaine <dim@tapoueh.org>"
Good signature on /tmp/ql-to-deb/cl-plus-ssl_20150302-1.dsc.
Uploading to ftp-master (via ftp to ftp.upload.debian.org):
  Uploading cl-plus-ssl_20150302-1.dsc: done.
  Uploading cl-plus-ssl_20150302.orig.tar.gz: done.
  Uploading cl-plus-ssl_20150302-1.debian.tar.xz: done.
  Uploading cl-plus-ssl_20150302-1_all.deb: done.
  Uploading cl-plus-ssl_20150302-1_amd64.changes: done.
Successfully uploaded packages.

Of course more or less the same output is then repeated for all the other packages.

Enjoy using Common Lisp in debian!

Oh and remember, the only reason I've written ql-to-deb and signed myself up to maintain those umpteen Common Lisp libraries as debian packages is to be able to properly package pgloader in debian, as you can see at https://packages.debian.org/sid/pgloader and in particular in the Other Packages Related to pgloader section of the debian source package for pgloader at https://packages.debian.org/source/sid/pgloader.

That level of effort is made to ensure that we respect the Debian Social Contract, wherein debian ensures its users that it is possible to rebuild anything from the sources found in the debian repositories.

Categories: Elsewhere

Dirk Eddelbuettel: Rcpp 0.11.6

Planet Debian - Sat, 02/05/2015 - 15:59

The new release 0.11.6 of Rcpp arrived on the CRAN network for GNU R yesterday; the corresponding Debian package has also been uploaded.

Rcpp has become the most popular way of enhancing GNU R with C++ code. As of today, 373 packages on CRAN depend on Rcpp for making analyses go faster and further; BioConductor adds another 57 packages, and casual searches on GitHub suggest many more.

This version adds a little more polish and refinement around things we worked on in previous releases to solidify builds, installation and the run-time experience. It does not bring anything new or major; the release continues the 0.11.* cycle, adding another large number of small bug fixes, polishes and enhancements. As always, you can follow the development via the GitHub repo and particularly the Issue tickets and Pull Requests. And any discussions, questions, ... regarding Rcpp are always welcome at the rcpp-devel mailing list.

See below for a detailed list of changes extracted from the NEWS file.

Changes in Rcpp version 0.11.6 (2015-05-01)
  • Changes in Rcpp API:

    • The unwinding of exceptions was refined to protect against inadvertent memory leaks.

    • Header files now try even harder not to let macro definitions leak.

    • Matrices have a new default constructor for zero-by-zero dimension matrices (via a pull request by Dmitrii Meleshko).

    • A new empty() string constructor was added (via another pull request).

    • Better support for Vectors with a storage policy different from the default, i.e. NoProtectStorage, was added.

  • Changes in Rcpp Attributes:

    • Rtools 3.3 is now supported.

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page, which also leads to the downloads page, the browseable doxygen docs, and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: Elsewhere

Dirk Eddelbuettel: RcppArmadillo 0.5.100.1.0

Planet Debian - Sat, 02/05/2015 - 15:46

A new minor release 5.100.1 of Armadillo was released by Conrad yesterday. Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab's.

Our corresponding RcppArmadillo release 0.5.100.1.0 also reached CRAN and Debian yesterday. See below for the brief list of changes.

Changes in RcppArmadillo version 0.5.100.1.0 (2015-05-01)
  • Upgraded to Armadillo release 5.100.1 ("Ankle Biter Deluxe")

    • added interp1() for 1D interpolation

    • added .is_sorted() for checking whether a vector or matrix has sorted elements

    • updated physical constants to NIST 2010 CODATA values

Courtesy of CRANberries, there is also a diffstat report for the most recent CRAN release. As always, more detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: Elsewhere

Niels Thykier: The release of Debian Jessie from an RM’s PoV

Planet Debian - Sat, 02/05/2015 - 09:12

It was quite an experience to partake in the Jessie release – and also a rather long “Saturday”.  This post is mostly a timeline of how I spent my release day doing the actual release.  I have glossed over some details – the post is long enough without them. :)

We started out at 8 (UTC) with a final “dinstall” run, which took nearly 2 hours.  It was going to take longer, but we decided to skip the synchronisation to “coccia.debian.org” (the server hosting the DD-accessible mirror of release.debian.org).

The release itself started with the FTP masters renaming the aliases of Squeeze, Wheezy and Jessie to oldoldstable, oldstable and stable respectively.   While they worked, the release team reviewed and double checked their work.  After an hour (~11), the FTP masters reported that the stable releases were ready for the final review and the SRMs signed the relevant “Release” files.

Then the FTP masters pushed the stable releases to our CD build server, where Steve McIntyre started building the installation images.  While Steve started with the CDs, the FTP masters and the release team continued with creating a suite for Stretch.  On the FTP/release side, we finished shortly before 12:30.  At this point, our last ETA from Steve suggested that the installation media would take another 11 and a half hours to complete.  We could have opened for mirror synchronisation then, but we decided to wait for the installation media.

At 12:30, there was a long “intermission” for the release team in the release process.  That was an excellent time to improve some of our tools, but that is for another post. :)

We slowly started to resume around 22:20, when we tried to figure out when to open the mirror synchronisation so as to time it with the installation media.  We agreed to start the mirror sync at 23:00, despite the installation media not being completely done by then.  They followed half an hour later, when Steve reported that the last CD was complete.

At this point, “all” that was left was to update the website and send out the press announcement.  Sadly, we were hit by some (minor) issues then.  First, I had underestimated the work involved in updating the website.  Secondly, we had no one online at the time who could trigger an “out of band” rebuild of the website.  Steve and I spent an hour and a half solving website issues (like arm64 and ppc64el not being listed as part of the release).  Unsurprisingly, I decided to expand the “release checklist” to be slightly more verbose on this particular topic.

My “Saturday” had passed its 16th hour when I thought we had fixed all the website issues (of course, I would be wrong) and we would now just be waiting for an automatic rebuild.  I was tempted to just punt it and go to bed when Paul Wise rejoined us at about 01:25.  He quickly got up to speed and offered to take care of the rest.  An offer I thankfully accepted, and I checked out 15 minutes later at 01:40 UTC.

That more or less covers the Jessie release day from my PoV.  After a bit of reflection inside the release team, we have found several points where we can improve the process.  This part certainly deserves its own post as well, which will also give us some time to flesh out some of the ideas a bit more. :)


Filed under: Debian, Release-Team
Categories: Elsewhere

DrupalCon News: Learn through contribution. Come sprint with us.

Planet Drupal - Fri, 01/05/2015 - 21:30

DrupalCon is where you can make contributions to the Drupal project. Each convention is filled with inspiration, networking, and fun — and while cutting-edge sessions are the meat of DrupalCon, sprints are the beating heart.

Categories: Elsewhere

Daniel Kahn Gillmor: Preferred Packaging Practices

Planet Debian - Fri, 01/05/2015 - 20:45
I just took a few minutes to write up my preferred Debian packaging practices.

The basic gist is that I like to use git-buildpackage (gbp) with the upstream source included in the repo, both as tarballs (with pristine-tar branches) and including upstream's native VCS history (Joey's arguments about syncing with upstream git are worth reading if you're not already convinced this is a good idea).

I also started using gbp-pq recently -- the patch-queue feature is really useful for at least three things:

  • rebasing your debian/patches/ files when a new version comes out upstream -- you can use all your normal git rebase habits! and
  • facilitating sending patches upstream, hopefully reducing the divergence, and
  • cherry-picking new as-yet-unreleased upstream bugfix patches into a debian release.

My preferred packaging practices document is a work in progress. I'd love to improve it. If you have suggestions, please let me know.

Also, if you've written up your own preferred packaging practices, send me a link! I'm hoping to share and learn tips and tricks around this kind of workflow at debconf 15 this year.

Categories: Elsewhere

Petter Reinholdtsen: What would it cost to store all phone calls in Norway?

Planet Debian - Fri, 01/05/2015 - 19:30

Many years ago, a friend of mine calculated how much it would cost to store the sound of all phone calls in Norway, and came up with a cost of around 20 million NOK (2.4 million EUR) for all the calls in a year. I got curious and wondered what the same calculation would look like today. To do so, one needs an idea of how much data storage is needed for each minute of sound, how many minutes all the calls in Norway sum up to, and the cost of data storage.

The 2005 numbers are from digi.no, the 2012 numbers are from a NKOM report, and I got the 2013 numbers after asking NKOM via email. I was told the numbers for 2014 will be presented May 20th, and decided not to wait for those, as I doubt they will be very different from the numbers from 2013.

The amount of data storage per minute of sound depends on the wanted quality, and for phone calls it is generally believed that 8 Kbit/s is enough. See for example a summary on voice quality from Cisco for some alternatives. 8 Kbit/s is 60 Kbytes per minute, and this can be multiplied by the number of call minutes to get the storage requirements.

Storage prices vary a lot, depending on speed, backup strategies, availability requirements etc. But a simple way to calculate is to use the price of a TiB disk (around 1000 NOK / 120 EUR) and double it to take space, power and redundancy into account. It could be much higher with high speed and strict redundancy requirements.

But back to the question, What would it cost to store all phone calls in Norway? Not much. Here is a small table showing the estimated cost, which is within the budget constraint of most medium and large organisations:

  Year   Call minutes      Size     Price in NOK / EUR
  2005   24 000 000 000    1.3 PiB  3 mill / 358 000
  2012   18 000 000 000    1.0 PiB  2.2 mill / 262 000
  2013   17 000 000 000    950 TiB  2.1 mill / 250 000
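These estimates are easy to reproduce. Here is a back-of-the-envelope sketch in PHP using the assumptions above (8 Kbit/s of audio, roughly 2000 NOK per stored TiB); its output lands close to, but not exactly on, the rounded figures in the table:

  <?php
  // 8 Kbit/s of audio is 1000 bytes per second, i.e. 60 000 bytes per minute.
  $bytes_per_minute = 8000 / 8 * 60;
  // A TiB disk costs around 1000 NOK; double it for space, power and redundancy.
  $nok_per_tib = 2 * 1000;

  $call_minutes = array(2005 => 24000000000, 2012 => 18000000000, 2013 => 17000000000);

  foreach ($call_minutes as $year => $minutes) {
    $tib = $minutes * $bytes_per_minute / pow(1024, 4);
    printf("%d: %.2f PiB, %.1f million NOK\n", $year, $tib / 1024, $tib * $nok_per_tib / 1000000);
  }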

This is the cost of buying the storage. Maintenance needs to be taken into account too, but calculating that is left as an exercise for the reader. It is obvious to me from these numbers that recording the sound of all phone calls in Norway is not going to be stopped because it is too expensive. I wonder if someone already is collecting the data?

Categories: Elsewhere

Lisandro Dami&aacute;n Nicanor P&eacute;rez Meyer: Qt4's status and Qt4's webkit removal in Stretch

Planet Debian - Fri, 01/05/2015 - 19:05
Hi everyone! As you might know, Qt4 has been deprecated (in the sense of "you had better start porting your code") since Qt5's first release on December 19th, 2012. Since that point Qt4 has received only bugfixes. Upstream is about to release the last point release, 4.8.7. This means that from then on only severe bugs, like security issues, will get a chance to be solved.

Moreover, upstream recommends keeping Qt4 until 2017. If we get a Debian release every ±2 years, Jessie will become oldstable in 2017 and be deprecated in 2018. This means we should really consider starting to port code using Qt4 to Qt5 during Stretch's development life cycle.

It is important to note that Qt4 depends on a number of libraries whose maintainers might want to remove them from the archive for similar reasons. In that case we will not hesitate to drop support for them, as long as Qt4 itself keeps building. This normally doesn't mean API/ABI breakage, but missing plugins that will diminish functionality in your apps, maybe even key functionality. As an example, take the **hypothetical** case in which the libasound2 maintainers switch to a new libasound3 which is not API-compatible, removing libasound2 in the process. In this case we would have no choice but to remove the dependency and drop the functionality it provides. This is another of the important reasons why you should be switching to Qt5.

Qt4's webkit removal

Webkit is definitely not an easy piece of code to maintain. For starters, it means having a full copy of the code in the archive for both Qt4 and Qt5. Now add to that the fact that the code evolves quickly, and thus getting upstream support even for security bugs will become harder and harder. So we decided to remove Qt4's webkit from the archive. Of course we still have a lot of KDE stuff using Qt4's webkit, so it won't disappear "soon", but it will at some point.

Porting

Some of us were involved in various Qt4 to Qt5 migrations [0] and we know for sure that porting stuff from Qt4 to Qt5 is much, much easier and less painful than it was from Qt3 to Qt4.

We also understand that there is still a lot of software using Qt4. In order to ease the transition we have provided Wheezy backports of Qt5.

Don't forget to take a look at the C++ API changes page [1] whenever you start porting your application.

[0] http://pkg-kde.alioth.debian.org/packagingqtstuff.html
[1] http://doc.qt.io/qt-5/sourcebreaks.html

Temporarily shipping both Qt4 and Qt5 builds of your library

In case you maintain a library, chances are that upstream already provides a way to build it using Qt5. Please note there is no point in shipping an application built with both flavours; please use Qt5 whenever possible. This double compilation should be reserved for libraries.

You can't mix Qt4 and Qt5 in the same binary, but you may provide libraries compiled against one or the other. For example, your source package foo could provide both libqt4foo1 and libqt5foo1. You need to mangle your debian/rules and/or build system accordingly to achieve this.

A good example, both of upstream code allowing both styles of compilation and of debian packaging, is phonon. Take a look at the CMakeLists.txt files to see how a source can be built against both flavours, and at debian/rules for an example of how to handle the compilation. Just bear in mind that you need to replace $(overridden_command) with the command itself; that variable substitution comes from internal stuff in our team and you should not be using it without a very good reason. If in doubt, feel free to ask us on IRC [2] or on the mailing list [3].

[2] irc.debian.org #debian-kde
[3] debian-kde@lists.debian.org

Timeline

We plan to start filing wishlist bugs soon. Once we get most of the KDE stuff running with Qt5's webkit, we will start raising the severities.
Categories: Elsewhere

Red Crackle: How to add an Ubuntu apt-get key from behind a firewall

Planet Drupal - Fri, 01/05/2015 - 19:00
Have you ever tried to install an apt package from a third-party repository from behind a firewall? If you run the apt-key command with a key server, the firewall will block it. Read this post to find out how to get past the firewall and import the key for a third-party apt package.
Categories: Elsewhere

SitePoint PHP Drupal: Automated Testing of Drupal 8 Modules

Planet Drupal - Fri, 01/05/2015 - 18:00

In this article we are going to look at automated testing in Drupal 8. More specifically, we are going to write a few integration tests for some of the business logic we wrote in the previous Sitepoint articles on Drupal 8 module development. You can find the latest version of that code in this repository along with the tests we write today.

But before doing that, we will talk a bit about what kinds of tests we can write in Drupal 8 and how they actually work.

Simpletest (Testing)

Simpletest is the Drupal-specific testing framework. For Drupal 6 it was a contributed module, but since Drupal 7 it has been part of the core package. Simpletest is now an integral part of Drupal core development, allowing for safe API modifications thanks to extensive test coverage of the codebase.

Right off the bat I will mention the authoritative documentation page for Drupal testing with Simpletest. There you can find a hub of information related to how Simpletest works, how you can write tests for it, what API methods you can use, etc.

By default, the Simpletest module that comes with Drupal core is not enabled, so we will have to enable it ourselves if we want to run tests. It can be found on the Extend page under the name Testing.

Once that is done, we can head to admin/config/development/testing and see all the tests currently available for the site. These include both core and contrib module tests. At the very bottom there is also the Clean environment button, which we can use if any of our tests quit unexpectedly and leave leftover test tables in the database.
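To give a feel for what such a test looks like, here is a minimal sketch of a Simpletest web test for a hypothetical module called mymodule (the module and class names are placeholders, not taken from the articles mentioned above):

  <?php

  namespace Drupal\mymodule\Tests;

  use Drupal\simpletest\WebTestBase;

  /**
   * Tests basic behavior of mymodule.
   *
   * @group mymodule
   */
  class MyModuleTest extends WebTestBase {

    /**
     * Modules to enable for this test.
     */
    public static $modules = array('mymodule');

    /**
     * Tests that the front page is reachable.
     */
    public function testFrontPage() {
      $this->drupalGet('<front>');
      $this->assertResponse(200);
    }

  }

Each such test spins up a full sandboxed Drupal installation, which is what makes these integration tests rather than plain unit tests.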

Continue reading %Automated Testing of Drupal 8 Modules%

Categories: Elsewhere

Lullabot: Lullabot Named Top Development And Design Agency

Planet Drupal - Fri, 01/05/2015 - 17:01

Clutch is a research firm that analyzes and reviews software and professional services agencies, covering more than 500 companies in over 50 different markets. Like a Consumer Reports for the agency sector, they do independent research and publish their results at Clutch.co. Recently, they reviewed Lullabot, interviewing our clients, and created a profile of Lullabot with the results. Lullabot received top marks across the board.

In January, Clutch published a press release listing Lullabot first overall on its international list of web development agencies. We've always been very proud of our work, but it's really amazing to be recognized like this by an independent research firm. In March, Clutch sent out another press release that lists Lullabot as top in Boston-area web design and development agencies. We'll take it!

Clutch also provides matrix-based research results comparing agencies on focus and ability to deliver. Lullabot floats to the top of both the Top Web Development Companies listing and the Top Web Design & Development Firms in Boston listing, showing high focus and the most proven ability to deliver of any agency in the listings.

Since 2006, we've built an incredible team at Lullabot, and I'd like to thank all of our employees for their contributions. We've also partnered with scores of magnificent clients over the years. We'd like to thank them all for their trust and collaboration. Of course, Clutch's listings are dynamic and ongoing. We can't sit back and expect to remain in the top position. We will continue to strive to be the best agency we can be, providing superlative results for our clients while continuing to provide a rewarding work environment for our talented team of expert developers, designers, and strategists.

Categories: Elsewhere

Drupal Association News: They’re back! Personalized membership certificates are here.

Planet Drupal - Fri, 01/05/2015 - 14:44

We all like to show our love for Drupal, and over the past few years, we’ve let Drupal show it loves you back! Due to popular demand, we are bringing back a fun gift for Drupal Association members -- personalized certificates of membership. In 2014, we delivered these downloadable certificates to over 600 members, and we hope to break that record during May and June this year.

Join as a member or renew your current membership anytime before June 30 and you’ll receive a personalized certificate of membership.

Sign me up

This is a token of our gratitude for your support of Drupal and the community. Thanks for everything you do for Drupal.

Personal blog tags: Membership
Categories: Elsewhere

BlackMesh: DrupalCon LA has 5 dedicated sprint days. Should you be there? YES!

Planet Drupal - Fri, 01/05/2015 - 14:18
What is a Sprint?

Sprints are times dedicated to focused work on a project or topic. People in the same location (a physical space, IRC, certain issue pages) work together to make progress, remove barriers to completion, and try to get things to "done". The goal is to get an issue to Reviewed and Tested By the Community (RTBC), and keep it there. (Keeping it there means quickly addressing feedback, such as when someone moves an issue back from RTBC to Needs Work or Needs Review.)

Instead of a person working by themselves on many things, we work together, with others, on a few things to get them done. We do not measure success by the number of patches posted. We measure success by the reviews we do, issue summaries we update, change records we post, and the number of issues we get to RTBC.

DrupalCon is a supportive environment to begin contributing

Though sprints might sound intimidating, the sprints at DrupalCon have a history of both informal and formal ways of integrating new contributors into this process. If you have experience with Drupal, then you have the skills we need to help with contribution to the Drupal project. And DrupalCon is the most supportive environment to start your contribution journey.

Who?

We welcome project managers, bug reporters, site builders, themers, QA testers, writers to help with documentation, accessibility specialists, usability specialists, developers, and contributors to other open source projects to help fix upstream bugs.

When?
  • Extended sprints begin Saturday, May 9 and Sunday, May 10 at an off-site location: IndieDesk.
  • Monday, May 11, sprints in Room 408AB at the conference venue run concurrently with the community summit and other trainings.
  • There is a lounge for work throughout the event at the venue in Room 408AB, plus an off-site lounge, open 24 hours, in the Ballroom on the 3rd floor of the Westin Bonaventure Hotel.
  • Then Friday, May 15 is the most awesome sprint day, which you should not miss an hour of.
  • Followed by more extended sprints at IndieDesk Saturday, May 16 and Sunday, May 17.
Why should you sprint?

Sprinting with others allows barriers to contributing to be removed in real time. You will learn a lot just by sitting at a table with others and seeing how they work. Having others see how you work might result in them giving you time-saving tips that will help you contribute and... might also help when you return to your regular routine after the sprint. It is fun. You will get to network and meet the real human people who work on Drupal itself. Sprints include some down time. We have to eat! Conversations at lunch and dinner can be both enjoyable and eye-opening. Finally, trust that you do have skills that will be appreciated at the sprints. We are really good at finding ways for everyone to contribute.

Topics

In addition to generally "contributing", there are some specific topics in Core, contrib modules, themes, and infrastructure, with people organizing work in those areas and gathering others to collaborate with them.

Core

Some of the focus for Core during the DrupalCon sprints will be:
  • Getting D8 done
  • Multilingual
  • Front-end United
Getting D8 done

For getting D8 done, there are different ways of being involved. One way is working on D8 criticals. Read the page on helping with Drupal 8. Angie Byron (webchick) is also giving a session to update people on the status of the current criticals. We are also looking for experienced contributors to join a group of people who will triage issues with the priority of Major. (We are still working on instructions for doing the major triage. Join us at the event!)

Multilingual

The best way to get involved with the Drupal 8 Multilingual Initiative (D8MI) is to start on the D8MI website and then look at the issues we are currently focusing on. Jump right in, or come by our weekly meeting in IRC in the #drupal-i18n channel. The next meeting is May 6th at 4pm UTC.

Front-end United

Front-end United people will be working on: CSS, JavaScript, Twig, UX, themes, the theme system, markup, and stuff!

Contrib

Some of the focus for contrib during the DrupalCon sprints will be:
  • Content Staging in D8
  • Media in D8
Content Staging in D8

Information about content staging in Drupal 8 is on the deploy project page.

Media in D8

Current focus for media is on entity_embed, entity_browser, media_entity and fallback_formatter. Work is happening in the drupal-media project on GitHub. The presentation by Chandan Singh has a recent update.

Drupal.org

Drupal.org is running on Drupal 7. If you are familiar with Drupal 7 and do not want to dive into Drupal 8 yet, you can still really help with the release of Drupal 8 by working on Drupal.org itself. Drupal.org has a lot of projects where it tracks issues: there are 42 Drupal.org-related projects, each with its own issue queue. Searching for the d.o hitlist tag (across all projects) yields a list of issues that are relevant for the sprint.

Infrastructure

The big push for infrastructure right now includes blockers to Drupal 8. The issue [meta] Drupal.org (websites/infra) blockers to a Drupal 8 release has lots of details, but can be a little overwhelming to dive into. Conversations happen in IRC in the #drupal-infrastructure channel. One of the blockers is the new testbot continuous integration system (testbot CI 2.0). There are 7 projects related to DrupalCI; DrupalCI: Modernizing Testbot Initiative is the main overall project. Updates and conversations happen in the Drupal.org Testing Infrastructure group on groups.drupal.org and in IRC in the #drupal-testing channel.

Sign-up

Sign up to help with a topic, or lead a topic, in the DrupalCon LA sprint spreadsheet.

Photo credit: Jeremy Thorson

DrupalSprints
Categories: Elsewhere

Code Karate: Mobile friendly navigation using the Mmenu Module

Planet Drupal - Fri, 01/05/2015 - 13:28
Episode Number: 206

As a request from David over at luxor.me, we checked out the Mmenu module. This little gem allows you to use various JavaScript libraries to create mobile-friendly navigation. The navigation it produces is similar to the slide-in menus you find inside a lot of mobile applications.
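
Under the hood, menus like this are driven by the jQuery.mmenu plugin. As a rough sketch of what the library itself does (the #mobile-menu selector and the options are hypothetical, and the Drupal module normally wires this up for you):

  jQuery(document).ready(function ($) {
    // Turn an ordinary nested <ul> into an off-canvas, slide-in menu.
    $('#mobile-menu').mmenu({
      offCanvas: {
        position: 'left'
      }
    });
  });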

Tags: Drupal, Blocks, Drupal 7, Site Building, Drupal Planet
Categories: Elsewhere
