Elsewhere

Microserve: Caching beyond the norm in Drupal 7

Planet Drupal - Tue, 09/02/2016 - 10:19

When developing in a Drupal-centric environment, there are two general methods for caching information in modules: drupal_static(), which persists data while a page is loading, and cache_set()/cache_get(), which persist it for longer periods.

I have used some other simple alternatives and, from reading the notes above the drupal_static() function declaration in bootstrap.inc, found ways of getting even more efficient caching.

First I'll address drupal_static(); it's explained very neatly in this Lullabot article.

It is not always required, though. If you have a simple function in your code that needs to remember a setting, and speed is a must, you can just use a static variable.

function get_a_value($value_name) {
  static $function_settings = array();

  if (!isset($function_settings[$value_name])) {
    // Get the variable from the database and set it to $fetched_value.
    $function_settings[$value_name] = $fetched_value;
  }

  return $function_settings[$value_name];
}

The static variable $function_settings stays persistent throughout the execution of your page, and the caching stays localised to your function / module.

Another way to do something similar, while still involving the drupal_static() function, is the advanced drupal_static() pattern (this is mentioned in the drupal_static() notes).

function user_access($string, $account = NULL) {
  // Use the advanced drupal_static() pattern, since this is called very often.
  static $drupal_static_fast;
  if (!isset($drupal_static_fast)) {
    $drupal_static_fast['perm'] = &drupal_static(__FUNCTION__);
  }
  $perm = &$drupal_static_fast['perm'];
  //...
}

This is taken directly from the notes. As in the previous example we declare a local static variable, but if it is not yet set we point it at drupal_static() by reference. Code outside the function can now call drupal_static_reset() to reset the locally declared static variable, while we still retain the efficiency of keeping the static variable local, as in the previous example.

The last method I would like to cover is using a cache inside an object:

class SomeClass {

  /**
   * @var array
   */
  protected $localCache = array();

  public function getData($cid) {
    if ($data = $this->getCache($cid)) {
      return $data;
    }
    // Nothing set: do the time-intensive task here and set it to $data.
    $this->cache($cid, $data);
    return $data;
  }

  /**
   * @param type $cid
   * @param type $data
   */
  protected function cache($cid, $data) {
    $this->setLocalCache($cid, $data);
    cache_set($cid, $data, 'cache', strtotime('+7 days', time()));
  }

  protected function getCache($cid) {
    if ($data = $this->getLocalCache($cid)) {
      return $data;
    }
    if ($cache = cache_get($cid)) {
      return $cache->data;
    }
    return FALSE;
  }

  /**
   * @param any $cid
   * @param any $data
   */
  protected function setLocalCache($cid, $data) {
    $this->localCache[$cid] = $data;
  }

  /**
   * @param any $cid
   * @return any
   */
  protected function getLocalCache($cid) {
    return isset($this->localCache[$cid]) ? $this->localCache[$cid] : FALSE;
  }

}

In this example, I do all the local caching within the object, but if need be I retrieve from and save to the system cache when the required data is not available locally. One thing I have omitted is methods to clear the local cache. This local caching essentially uses a class property.

If you wanted to clear the cache externally, you could create a public method or a static method.
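For example, a minimal sketch of what such methods might look like (clearLocalCache() and clearCache() are hypothetical names I am adding to the class above, not part of the original example):

class SomeClass {

  // ... properties and methods from the example above ...

  /**
   * Clears one entry, or everything, from the local cache only.
   */
  public function clearLocalCache($cid = NULL) {
    if ($cid === NULL) {
      $this->localCache = array();
    }
    else {
      unset($this->localCache[$cid]);
    }
  }

  /**
   * Clears an entry from both the local cache and the persistent cache bin.
   */
  public function clearCache($cid) {
    $this->clearLocalCache($cid);
    cache_clear_all($cid, 'cache');
  }

}

Anything holding a reference to the object can then call $object->clearCache($cid) after saving new data, so both layers stay in sync.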

So that about wraps this one up. Have you used any different methods? Can it be done better? Tell me what you think in the comment section below.

Written by: Darren Whittington, Senior Developer

Microserve is a Drupal Agency based in Bristol, UK. We specialise in Drupal Development, Drupal Site Audits and Health Checks, and Drupal Support and Maintenance. Contact us for further information.

Categories: Elsewhere

Deeson: Drupal Focal Point Module: Making the most of your images

Planet Drupal - Tue, 09/02/2016 - 10:15

Focal point is a Drupal module that allows site administrators to select an important portion of an image to focus on.

It’s similar in many ways to the Image field focus module. But rather than giving a square box with crosshairs for focusing and another for cropping (which you can only do inside the focus area and can be quite confusing), focal point allows you to select a single point on the image to focus on. It is also fully compatible with the Media module.

User experience

Let's take a look at focal point from an administrator's perspective. The user can click on the image at any point which adds an icon over that particular area, representing the chosen focal point (see below).

From this, the administrator can then select the “Image preview” link below the image which will display a page with both the original image and how the image will look with the different image styles.

As you can see below, the image has now been focused upon the parrot on the right.

Setup and configuration

Firstly, you need to download and enable the focal point module (https://www.drupal.org/project/focal_point).

Upon enabling the module you will find a new image style called “Focal Point Preview”. This is used for the admin page and is the default preview style for setting the focal point. It rescales the image width to 250px with upscaling allowed.

You will also have two new image effects available for cropping, “Focal Point Crop” and “Focal Point Scale And Crop”, within the Drupal image styles at admin/config/media/image-styles.

These both crop down to the point the user has selected on the image, and ensure that the chosen focal point will not be cropped out of the image.

Now you can create a new image style with one of these image effects, and then apply this image style to an image field within the manage display page of your content type. The images will then crop to the selection the user has chosen.
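If you ever need the cropped derivative outside of the normal field display, for example in a custom template, a minimal sketch like the following should work on Drupal 7 (the style name my_focal_crop and the file URI are hypothetical; it assumes you created that style with one of the focal point effects):

<?php
// Generate (on first request) and return the URL of the derivative that
// uses the hypothetical "my_focal_crop" image style.
$uri = 'public://images/parrots.jpg';
$url = image_style_url('my_focal_crop', $uri);

// Or render a themed <img> tag through the same style.
$output = theme('image_style', array(
  'style_name' => 'my_focal_crop',
  'path' => $uri,
  'alt' => 'Two parrots',
));

Either way the derivative is built from the stored focal point, so editors never have to re-crop images by hand.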

Media

To enable the compatibility with the media module, you should ensure that the “Media module image fields” setting at admin/config/media/focal_point is enabled.

Then, after adding an image via a field using the media browser widget, another step is provided in the media browser overlay. You can now add a focal point as you would in a standard image field.

You can also edit previously uploaded images to set an individual focal point.

Categories: Elsewhere

Orestis Ioannou: Using debsources API to determine the license of foo.bar

Planet Debian - Tue, 09/02/2016 - 09:45

Following up on Matthieu's hack - A one-liner to catch'em all! - and the recent features of Debsources, I got the idea to modify the one-liner a bit in order to retrieve the license of foo.bar.

The script will calculate the SHA256 hash of the file and then query the Debsources API in order to retrieve the license of that particular file.

Save the following in a file named license-of and add it to your $PATH:

#!/bin/bash

function license-of {
  readlink -f $1 |
    xargs dpkg-query --search |
    awk -F ": " '{print $1}' |
    xargs apt-cache showsrc |
    grep-dctrl -s 'Package' -n '' |
    awk -v sha="$(sha256sum $1 | awk '{ print $1 }')" -F " " '{print "https://sources.debian.net/copyright/api/sha256/?checksum="sha"&packagename="$1""}' |
    xargs curl -sS
}

CMD="$1"
license-of ${CMD}

Then you can try something like:

license-of /usr/lib/python2.7/dist-packages/pip/exceptions.py

Notes:
  • if the checksum is not found in the DB (compiled file, modified file, not part of any package) this will fail
  • if the debian/copyright file of the specific package is not machine readable then you are out of luck!
  • if there is more than one version of the package you will get all the available information. If you want just testing, then add "&suite=testing" after the &packagename="$1" in the debsources link.
Categories: Elsewhere

Ingo Juergensmann: rpcbind listening on all interfaces

Planet Debian - Mon, 08/02/2016 - 23:20

Currently I'm testing GlusterFS as a replicating network filesystem. GlusterFS depends on the rpcbind package. No problem with that, but I usually want the services that run on my machines to listen only on those addresses/interfaces that are needed to fulfill the task. This is especially important because rpcbind can be abused by remote attackers for RPC amplification attacks (DDoS). So, the rpcbind man page states:

-h : Specify specific IP addresses to bind to for UDP requests. This option may be specified multiple times and is typically necessary when running on a multi-homed host. If no -h option is specified, rpcbind will bind to INADDR_ANY, which could lead to problems on a multi-homed host due to rpcbind returning a UDP packet from a different IP address than it was sent to. Note that when specifying IP addresses with -h, rpcbind will automatically add 127.0.0.1 and if IPv6 is enabled, ::1 to the list.

Ok, although there is neither a /etc/default/rpcbind.conf nor a /etc/rpcbind.conf nor a sample-rpcbind.conf under /usr/share/doc/rpcbind, a quick web search revealed a sample config file. I'm using this one:

# /etc/init.d/rpcbind
OPTIONS=""

# Cause rpcbind to do a "warm start" utilizing a state file (default)
# OPTIONS="-w "

# Uncomment the following line to restrict rpcbind to localhost only for UDP requests
OPTIONS="${OPTIONS} -h 192.168.1.254"
#127.0.0.1 -h ::1"

# Uncomment the following line to enable libwrap TCP-Wrapper connection logging
OPTIONS="${OPTIONS} -l "

As you can see, I want to bind to 192.168.1.254. After an /etc/init.d/rpcbind restart, verifying with netstat that everything works as desired shows...

tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 0 2084266 30777/rpcbind
tcp6 0 0 :::111 :::* LISTEN 0 2084272 30777/rpcbind
udp 0 0 0.0.0.0:848 0.0.0.0:* 0 2084265 30777/rpcbind
udp 0 0 192.168.1.254:111 0.0.0.0:* 0 2084264 30777/rpcbind
udp 0 0 127.0.0.1:111 0.0.0.0:* 0 2084260 30777/rpcbind
udp6 0 0 :::848 :::* 0 2084271 30777/rpcbind
udp6 0 0 ::1:111 :::* 0 2084267 30777/rpcbind

Whoooops! Although I've specified that rpcbind should only listen to 192.168.1.254 (and localhost as described by the man page) rpcbind is still listening on all addresses. Quick check if the process is using the correct options: 

root     30777  0.0  0.0  37228  2360 ?        Ss   16:11   0:00 /sbin/rpcbind -h 192.168.1.254 -l

Hmmm, yes, -h 192.168.1.254 is specified. Ok, something is going wrong here...

According to an entry in Ubuntu's Launchpad I'm not the only one who has experienced this problem. However, that Launchpad entry mentions that upstream seems to have a fix in version 0.2.3, yet I experienced the same behaviour in stable as well as in unstable, where the package version is 0.2.3-0.2. Apparently the problem still exists in Debian unstable.

I'm somewhat undecided whether to file a normal bug against rpcbind or if I should label it as a security bug, because it opens a service to the public that can be abused for amplification attacks, although you might have configured rpcbind to just listen on internal addresses.

Kategorie: Debian | Tags: Debian, Software, Network, Bug
Categories: Elsewhere

Niels Thykier: Performance tuning of lintian, take 3

Planet Debian - Mon, 08/02/2016 - 23:19

About 7 months ago, I wrote about how we had improved Lintian’s performance. In 2.5.41, we are doing another memory reduction, where we primarily reduce the memory consumption of data about ELF binaries. As previously, the memory reductions follow the “less is more” pattern.

My initial test subject was linux-image-4.4.0-trunk-rt-686-pae_4.4-1~exp1_i386.deb. It had a somewhat unique property that the ELF data made up a little over half the cache.

  • We do away with a lot of unnecessary default values [f4c57bb, 470875f]
    • That removed about ~3MB (out of 10.56MB) of that ELF data cache
  • Discard section information we do not use [3fd98d9]
    • This reduced the ELF data cache to 2MB (down from the 7MB).
  • Then we stop caching the output of file(1) twice [7c2bee4]
    • While a fairly modest reduction (only 0.80MB out of 16MB total), it also affects packages without ELF binaries.

At this point, we had reduced the total memory usage from 18.35MB to 8.92MB (the ELF data going from 10.56MB to 1.98MB)[1]. At this point, I figured that I was happy with the improvement and discarded my test subject.

While impressive, the test subject was unsurprisingly a special case. The improvement in “regular” packages[2] (with ELF binaries) was closer to 8% in total. Not being satisfied with that, I pulled one more trick.

  • Keep only “UND” and “.text” symbols [2b21621]
    • This brought coreutils (just the lone deb) another 10% memory reduction in total.

In the grand total, coreutils 8.24-1 amd64 went from 4.09MB to 3.48MB.  The ELF data cache went from 3.38MB to 2.84MB.  Similarly, libreoffice/4.2.5-1 (including its ~170 binaries) has also seen a 8.5% reduction in total cache size[3] and is now down to 260.48MB (from 284.83MB).

 

[1] If you are wondering why I in 3fd98d9 wrote “The total “cache” memory usage is approaching 1/3 of the original for that package”, then you are not alone.  I am not sure myself any more, but it seems obviously wrong.

[2] FTR: The sample size of “regular packages” is 2 in this case.  Of which one of them being coreutils…

[3] Admittedly, since “take 2” and not since 2.5.40.2 like the rest.


Filed under: Debian, Lintian
Categories: Elsewhere

DrupalCon News: You are the Coding & Development Track

Planet Drupal - Mon, 08/02/2016 - 18:40

With core Drupal 8 now in full swing and the contrib space rapidly maturing, now is an excellent time to get more deeply involved with one of the world’s largest open-source development communities. The Coding and Development track is focused on educating developers on the latest techniques and tools for increasing the quality and efficacy of their projects.

Categories: Elsewhere

Joachim Breitner: Protecting static content with mod_rewrite

Planet Debian - Mon, 08/02/2016 - 17:39

For fourteen years, I have been photographing digitally and putting the pictures on my webpage. Back then, online privacy was not a big deal, but things have changed, and I had to at least mildly protect the innocent. In particular, I wanted to prevent search engines from accessing some of my pictures.

As I did not want my friends and family to have to create an account and remember a password, I set up an OpenID-based scheme five years ago. This way, they could use any of their OpenID-enabled accounts, e.g. their Google Mail account, to log in, without disclosing any data to me. As my photo album consists of just static files, I created two copies on the server: the real one with everything, and a bunch of symbolic links representing the publicly visible parts. I then used mod_auth_openid to prevent access to the protected files unless the users logged in. I never got around to actually limiting who could log in, so strangers were still able to see all photos, but at least search engine spiders were locked out.

But, very unfortunately, OpenID never really caught on, Google even stopped being a provider, and other promising decentralized authentication schemes like Mozilla Persona are also being phased out. So I needed an alternative.

A very simple scheme would be a single password that my friends and family can get from me and that unlocks the pictures. I could have done that using HTTP Auth, but that is not very user-friendly, and the login does not persist (at least not without the help of the browser). Instead, I wanted something that involves a simple HTML form. But I also wanted to avoid server-side programming, for performance and security reasons. I love serving static files whenever it is feasible.

Then I found that mod_rewrite, Apache’s all-around-tool for URL rewriting and request mangling, supports reading and writing cookies! So I came up with a scheme that implements the whole login logic in the Apache server configuration. I’d like to describe this setup here, in case someone finds it inspiring.

I created a login.html with a simple HTML form:

<form method="GET" action="/bilder/login.html">
  <div style="text-align:center">
    <input name="password" placeholder="Password" />
    <button type="submit">Sign-In</button>
  </div>
</form>

It sends the user to the same page again, putting the password into the query string, hence the method="GET" – mod_rewrite unfortunately cannot read the parameters of a POST request.

The Apache configuration is as follows:

RewriteMap public "dbm:/var/www/joachim-breitner.de/bilder/publicfiles.dbm"

<Directory /var/www/joachim-breitner.de/bilder>
  RewriteEngine On

  # This is a GET request, trying to set a password.
  RewriteCond %{QUERY_STRING} password=correcthorsebatterystaple
  RewriteRule ^login.html /bilder/loggedin.html [L,R,QSD,CO=bilderhp:correcthorsebatterystaple:www.joachim-breitner.de:2000000:/bilder]

  # This is a GET request, trying to set a wrong password.
  RewriteCond %{QUERY_STRING} password=
  RewriteRule ^login.html /bilder/notloggedin.html [L,R,QSD]

  # No point in logging in if there is already the right password
  RewriteCond %{HTTP:Cookie} bilderhp=correcthorsebatterystaple
  RewriteRule ^login.html /bilder/loggedin.html [L,R]

  # If protected file is requested, check for cookie.
  # If no cookie present, redirect pictures to replacement picture
  RewriteCond %{HTTP:Cookie} !bilderhp=correcthorsebatterystaple
  RewriteCond ${public:$0|private} private
  RewriteRule ^.*\.(png|jpg)$ /bilder/pleaselogin.png [L]

  RewriteCond %{HTTP:Cookie} !bilderhp=correcthorsebatterystaple
  RewriteCond ${public:$0|private} private
  RewriteRule ^.+$ /bilder/login.html [L,R]
</Directory>

The publicfiles.dbm file is generated from a text file with lines like

login.html.en 1
login.html.de 1
pleaselogin.png 1
thumbs/20030920165701_thumb.jpg 1
thumbs/20080813225123_thumb.jpg 1
...

using

/usr/sbin/httxt2dbm -i publicfiles.txt -o publicfiles.dbm

and whitelists all files that are visible without login. Make sure it contains the login page, otherwise you’ll get a redirect circle.

The other directives in the above configuration fulfill these tasks:

  • If the password (correcthorsebatterystaple) is in the query string, the server redirects the user to a logged-in-page that tells him that the login was successful and instructs him to reload the photo album. It also sets a cookie that will last very long -- after all, I want this to be convenient for my visitors. The query string parsing is not very strict (e.g. a password of correcthorsebatterystaplexkcdrules would also work), but that’s ok.
  • The next request detects an attempt to set a password. It must be wrong (otherwise the first rule would have matched), so we redirect the user to a variant of the login page that tells him so.
  • If the user tries to access the login page with a valid cookie, just log him in.
  • The next two rules implement the actual protection. If there is no valid cookie and the accessed file is not whitelisted, then access is forbidden. For requests to images, we do an internal redirect to a placeholder image, while for everything else we redirect the user to the login page.

And that’s it! No resource-hogging web frameworks, no security-dubious scripting languages, and a dead-simple way to authenticate.

Oh, and if you believe you know me well enough to be allowed to see all photos: The real password is not correcthorsebatterystaple; just ask me what it is.

Categories: Elsewhere

Lunar: Reproducible builds: week 41 in Stretch cycle

Planet Debian - Mon, 08/02/2016 - 16:43

What happened in the reproducible builds effort this week:

Toolchain fixes

After remarks from Guillem Jover, Lunar updated his patch adding generation of .buildinfo files in dpkg.

Packages fixed

The following packages have become reproducible due to changes in their build dependencies: dracut, ent, gdcm, guilt, lazarus, magit, matita, resource-agents, rurple-ng, shadow, shorewall-doc, udiskie.

The following packages became reproducible after getting fixed:

  • disque/1.0~rc1-5 by Chris Lamb, noticed by Reiner Herrmann.
  • dlm/4.0.4-2 by Ferenc Wágner.
  • drbd-utils/8.9.6-1 by Apollon Oikonomopoulos.
  • java-common/0.54 by Emmanuel Bourg.
  • libjibx1.2-java/1.2.6-1 by Emmanuel Bourg.
  • libzstd/0.4.7-1 by Kevin Murray.
  • python-releases/1.0.0-1 by Jan Dittberner.
  • redis/2:3.0.7-2 by Chris Lamb, noticed by Reiner Herrmann.
  • tetex-brev/4.22.github.20140417-3 by Petter Reinholdtsen.

Some uploads fixed some reproducibility issues, but not all of them:

  • anarchism/14.0-4 by Holger Levsen.
  • hhvm/3.11.1+dfsg-1 by Faidon Liambotis.
  • netty/1:4.0.34-1 by Emmanuel Bourg.

Patches submitted which have not made their way to the archive yet:

  • #813309 on lapack by Reiner Herrmann: removes the test log and sorts the files packed into the static library locale-independently.
  • #813345 on elastix by akira: suggest to use the $datetime placeholder in Doxygen footer.
  • #813892 on dietlibc by Reiner Herrmann: remove gzip headers, sort md5sums file, and sort object files linked in static libraries.
  • #813912 on git by Reiner Herrmann: remove timestamps from documentation generated with asciidoc, remove gzip headers, and sort md5sums and tclIndex files.

reproducible.debian.net

For the first time, we've reached more than 20,000 packages with reproducible builds for sid on amd64 with our current test framework.

Vagrant Cascadian has set up another test system for armhf, enabling four more builder jobs to be added to Jenkins. (h01ger)

Package reviews

233 reviews have been removed, 111 added and 86 updated in the previous week.

36 new FTBFS bugs were reported by Chris Lamb and Alastair McKinstry.

New issue: timestamps_in_manpages_generated_by_yat2m. The description for the blacklisted_on_jenkins issue has been improved. Some packages are also now tagged with blacklisted_on_jenkins_armhf_only.

Misc.

Steven Chamberlain gave an update on the status of FreeBSD and variants after the BSD devroom at FOSDEM’16. He also discussed how jails can be used for easier and faster reproducibility tests.

The video for h01ger's talk in the main track of FOSDEM’16 about the reproducible ecosystem is now available.

Categories: Elsewhere

blog.studio.gd: Views Plugins (Part 1) : Simple area handler plugin

Planet Drupal - Mon, 08/02/2016 - 11:56
In this series I will show you how to make use of the new Drupal 8 Plugin system, we begin with a simple example : the views area handler plugins.
Categories: Elsewhere

blog.studio.gd: Overview of CMI in Drupal 8

Planet Drupal - Mon, 08/02/2016 - 11:56
Some notes about the new Configuration management system in Drupal 8
Categories: Elsewhere

Orestis Ioannou: Debian - your patches and machine readable copyright files are available on Debsources

Planet Debian - Mon, 08/02/2016 - 10:33

TL;DR All Debian license and patches are belong to us. Discover them here and here.

In case you hadn't already stumbled upon sources.debian.net in the past, Debsources is a simple web application that allows publishing an unpacked Debian source mirror on the Web. On the live instance you can browse the contents of Debian source packages with syntax highlighting, search files matching a SHA-256 hash or a ctag, query its API, highlight lines, and view accurate statistics and graphs. It was initially developed at IRILL by Stefano Zacchiroli and Matthieu Caneill.

During GSOC 2015 I helped introduce two new features.

License Tracker

Since Debsources has all the debian/copyright files, and many of them have adopted the DEP-5 suggestion (machine-readable copyright files), it was interesting to exploit them for end users. You may find the following features interesting:

  • an API that allows users to find the license of file "foo" or the licenses for a bunch of packages, using filenames or SHA-256 hashes

  • a better looking interface for debian/copyright files

Have a look at the documentation to discover more!
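As a rough illustration of the checksum endpoint (the same one used by the license-of one-liner earlier on this page), here is a minimal sketch in PHP; the package name is only a guess for this example and error handling is omitted:

<?php
// Look up the licence of a local file on Debsources by its SHA-256 checksum.
$file = '/usr/lib/python2.7/dist-packages/pip/exceptions.py';
$checksum = hash_file('sha256', $file);
$url = 'https://sources.debian.net/copyright/api/sha256/'
  . '?checksum=' . $checksum
  . '&packagename=python-pip';
$result = json_decode(file_get_contents($url), TRUE);
print_r($result);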

Patch tracker

The old patch tracker unfortunately died a while ago. Since Debsources stores all the patches, it was only natural for it to exploit them and present them over the web. You can navigate through packages by prefix or by searching them here. Among the use cases:

  • a summary which contains all the patches of a package together with their diffs and summaries/subjects
  • links to view and download (quilt-3.0) patches.

Read more about the API!

Coming ...
  • In the future this information will be added to the DB. This will allow:

    • the license tracker to provide interesting statistics and graphs about licensing trends (What do Pythonistas usually choose as a license, how many GPL-3 files are in Jessie etc). Those are going to be quite accurate since they will take into account each file in a given package and not just the "general" license of the package.

    • the patch tracker to produce a list of packages that contain patches - this will enable providing links from PTS to the patch tracker.

  • Not far on the horizon there is also initial work on exporting debian/copyright files into SPDX documents. You can have a look at a beta / testing version on debsources-dev. (Example)

I hope you find these new features useful. Don't hesitate to report any bugs or suggestions you come across.

Categories: Elsewhere

Janez Urevc: janezurevc.name runs on Drupal 8!

Planet Drupal - Mon, 08/02/2016 - 07:30

Drupal 8 was officially released last November. Since then I had been planning to migrate my blog from the previous version of this great CMS. Drupal 8 comes with many improvements and I definitely wanted to leverage those on my site too.

Besides that, I have always used my personal site as an experimental sandbox where I test new Drupal modules, themes and technologies. Even though I am a very active contributor to Drupal core and contributed modules, and I've been working on an enterprise Drupal 8 project at work, I had never actually migrated a site to Drupal 8 before. It was definitely something I wanted to try.

The previous version of janezurevc.name was running on Drupal 7. It is important to note that migration from 7 to 8 isn't officially supported yet. Drupal 7 won't reach EOL for at least a few more years, which makes this migration non-critical. However, migrations from Drupal 6 have been fully supported since the day 8 was released. 6 will reach EOL this month, which makes migration from 6 to 8 an absolute priority.

Migration

My site is actually very basic. I am using content (2 content types), taxonomy (1 vocabulary), a few contributed modules and that is really it. It turns out that everything I needed migrates reliably.

I started the process by reading the official documentation. Besides the Migrate and Migrate Drupal modules that come with core, I needed a few contributed modules: Drupal upgrade, Migrate tools and Migrate plus.

The migration itself was extremely easy. I installed a Drupal 8 site, enabled the migrate modules, started the migration and waited for a few minutes. That's it! At least for core stuff. There are some glitches when it comes to contributed modules, but even those were fairly easy to resolve.

I can only thank everyone who contributed to Migrate in Drupal core. You did an awesome job!

Theme

The Drupal 7 version of my blog used the Sky theme, which is unfortunately not ported to 8 yet. For that reason I needed to search the theme repository, and came across Bootstrap clean blog.

It looked nice and it had a Drupal 8 -dev release. Despite that, it works like a charm. I even contributed minor patches and am planning to contribute a few more.

How do you like the theme?

Modules

Like almost every Drupal website out there, mine also uses a few contributed modules. Let's see how that went.

Disqus

The Disqus module has been ported as part of a Google Summer of Code project, which I mentored in 2014. The module itself works very well. We changed the architecture a bit; instead of having a custom database table we used a dedicated field type. This approach comes with many benefits. By doing this we're not limited to nodes any more; Disqus can be used on any entity type now.

Even though the port was there, migration support was not. I used this opportunity to dig into this part of Drupal a bit more and wrote 7 to 8 migration support for everything Disqus needs. This includes general configuration, fields on entities, statuses and identifiers. My code is already committed and you can give it a try.

Did you try the Disqus migration? Let me know how it worked for you.

Pathauto and Redirect

D8 ports are available on their Drupal.org project pages. They work like a charm. While core migrates existing aliases, alias patterns, redirects and other configuration aren't supported yet. I had just 3 alias patterns and fewer than 10 redirects on my old site, so this wasn't hard to fix manually.

If you meet @Berdir please buy him a beer. He did an awesome job porting these (and many other) modules.

Media

I was using Media to embed images in WYSIWYG, which uses a legacy embed token in Drupal 7. This part was unfortunately not ported yet. I was using it in fewer than 10 places, so I decided to fix this manually too. I used a simple SQL query to get the node IDs of content that used the legacy token, then simply changed it to a standard tag with data-entity-* attributes, which Drupal 8 uses for its own image embeds.
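As a rough sketch of that lookup, assuming a standard body field on nodes (node__body and body_value are the Drupal 8 defaults) and the Drupal 7 Media legacy token format [[{"fid":...}]]:

<?php
// Find node IDs whose body still contains a Drupal 7 Media legacy embed
// token such as [[{"fid":"42",...}]], so they can be fixed by hand.
$nids = \Drupal::database()->query(
  'SELECT DISTINCT entity_id FROM {node__body} WHERE body_value LIKE :token',
  [':token' => '%[[{"fid"%']
)->fetchCol();
print_r($nids);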

Markdown filter

Recently I found out that I prefer Markdown when producing written content. It doesn't interfere with my writing flow nearly as much as WYSIWYG editors do. When using Markdown I focus on content instead of the appearance (for the same reason I really liked LaTeX during my university years).

Guess what? There is a module for that! Markdown filter comes with a text filter that will convert Markdown syntax to HTML when displaying it. And it also has a Drupal 8 port. Download, install, configure, use. It was as easy as that!

How does Markdown work for you? Do you prefer it over a WYSIWYG editor as much as I do?

Other modules

I use a few other modules on the site. All of them have some kind of Drupal 8 release.

All of them work without any problem. I downloaded, installed and configured them. Google Analytics even comes with support for migration (which meant the third step was not needed).

Great work maintainers and contributors!

Other interesting stuff

I also used this migration to move my blog to a new hosting solution. The old blog was hosted on a VPS that I used in the past and am slowly moving away from. Most of my sites and services are currently hosted on a dedicated server at Hetzner (they provide excellent value for the price, so I'd definitely recommend them).

Recently I started using Docker for my development environments and I wanted to try it in production too (I mentioned I am (ab)using my personal site for experimenting, right? :)). As a result of that I'm hosting janezurevc.name in a Dockerized environment managed via Docker compose. Compose is super nice as it allows you to describe your infrastructure stack in a single YAML file. This allows you to commit it to a VCS repository and replicate infrastructure anywhere you want. But this is already a topic for some of my future posts.

Did you try to migrate your Drupal 6 or 7 sites to 8? How did it go? Which contributed modules are you using?

slashrsm - Mon, 08.02.2016 - 07:30 - Tags: Drupal


Categories: Elsewhere

Russ Allbery: Converted personal web sites to TLS

Planet Debian - Mon, 08/02/2016 - 05:44

I've been in favor of using TLS and encryption for as much as possible for a while, but I never wanted to pay money to the certificate cartel. I'd been using certificates from CAcert, but they're not recognized by most browsers, so it felt rude to redirect everything to TLS with one of their certificates.

Finally, the EFF and others put together Let's Encrypt with free, browser-recognized certificates and even a really solid automatic renewal system. That's perfect, and also eliminated my last excuse to go do the work, so now all of my personal web sites use TLS and HTTPS by default and redirect to the encrypted version of the web site. And better yet, all the certificates should just renew themselves automatically, meaning one less thing I have to keep track of and deal with periodically.

Many thanks to Wouter Verhelst for his short summary of how to get the Let's Encrypt client to work properly from the command line without doing all the other stuff it wants to do in order to make things easier for less sophisticated users. Also useful was the SSL Labs server test to make sure I got the modern TLS configuration right. (All my sites should now be an A. I decided not to cut off support for Internet Explorer older than version 11 yet.)

For my own convenience, I imported into my personal Debian repository copies of the Debian packages needed to install the Let's Encrypt package on Debian jessie that weren't already in Debian backports, but they're also there for anyone else.

Oh, that reminds me: this also affects the archives.eyrie.org APT repository (the one linked above), so if any of you were using that, you'll now need to install apt-transport-https and might want to change the URL to use HTTPS.

Categories: Elsewhere

Mike Hommey: SSH through jump hosts, revisited

Planet Debian - Mon, 08/02/2016 - 00:26

Close to 7 years ago, I wrote about SSH through jump hosts. Twice. While the method used back then still works, OpenSSH has grown a new option in version 5.3 that allows it to be simplified a bit, by not using nc.

So here is an updated rule, version 2016:

Host *+*
  ProxyCommand ssh -W $(echo %h | sed 's/^.*+//;s/^\([^:]*$\)/\1:22/') $(echo %h | sed 's/+[^+]*$//;s/\([^+%%]*\)%%\([^+]*\)$/\2 -l \1/;s/:\([^:+]*\)$/ -p \1/')

The syntax you can use to connect through jump hosts hasn’t changed compared to previous blog posts:

  • With one jump host:
    $ ssh login1%host1:port1+host2:port2 -l login2
  • With two jump hosts:
    $ ssh login1%host1:port1+login2%host2:port2+host3:port3 -l login3
  • With three jump hosts:
    $ ssh login1%host1:port1+login2%host2:port2+login3%host3:port3+host4:port4 -l login4
  • etc.

Logins and ports can be omitted.

Update: Add missing port to -W flag when one is not given.

Categories: Elsewhere

Iain R. Learmonth: After FOSDEM 2016

Planet Debian - Sun, 07/02/2016 - 23:55

FOSDEM was fun. It was great to see all these open source projects coming together in one place and it was really good to talk to people that were just as enthusiastic about the FOSS activities they do as I am about mine.

Thanks go to Saúl Corretgé who looked after the real-time communications dev room and made sure everything ran smoothly. I was very pleased to find that I had to stand for a couple of talks as the room was full with people eager to learn more about the world of RTC.

I was again pleased on the Sunday when I had such a great audience for my talk in the distributions dev room. Everyone was very welcoming and after the talk I had some corridor discussions with a few people that were really interesting and have given me a few new things to explore in the near future.

A few highlights from FOSDEM:

  • ReactOS: Since I last looked at this project it has really matured and is getting to be rather stable. It may be possible to start seriously considering replacing Windows XP/Vista machines with ReactOS where the applications being run just cannot be used with later versions of Windows.
  • Haiku: I used BeOS a long long time ago on my video/music PC. I can't say that I was using it over a Linux or BSD distribution for any particular reason but it worked well. I saw a talk that discussed how Haiku was keeping up-to-date with drivers and also there was a talk, that I didn't see, that talked about the new Haiku package management system. I think I may check out Haiku again in the near future, even if only for the sake of nostalgia.
  • Kolab: Continuing with the theme of things that have matured since I last looked at them, I visited the Kolab stand at FOSDEM and I was impressed with how far it has come. In fact, I was so impressed that I'm looking at using it for my primary email and calendaring in the near future.
  • picoTCP: When I did my Honours project at University, I was playing with Contiki. This looks a lot easier to get started with, even if it's perhaps missing parts of the stack that Contiki implements well. If I ever find time for doing some IoT hacking, this will be on the list of things to try out first.

This is just some of the highlights, and I know I'm missing out a lot here. One of the main things that FOSDEM has done for me is open my eyes as to how wide and diverse our community is and it has served as a reminder that there is tons of cool stuff out there if you take a moment to look around.

Also, thanks to my trip to FOSDEM, I now have four new t-shirts to add into the rotation: FOSDEM 2016, Debian, XMPP and twiki.org.

Categories: Elsewhere

Joey Hess: letsencrypt support in propellor

Planet Debian - Sun, 07/02/2016 - 23:10

I've integrated letsencrypt into propellor today.

I'm using the reference letsencrypt client. While I've seen complaints that it has a lot of dependencies and is too complicated, it seemed to only need to pull in a few packages, and use only a few megabytes of disk space, and it has fewer options than ls does. So seems fine. (Although it would be nice to have some alternatives packaged in Debian.)

I ended up implementing this:

letsEncrypt :: AgreeTOS -> Domain -> WebRoot -> Property NoInfo

This property just makes the certificate available, it does not configure the web server to use it. This avoids relying on the letsencrypt client's apache config munging, which is probably useful for many people, but not those of us using configuration management systems. And so avoids most of the complicated magic that the letsencrypt client has a reputation for.

Instead, any property that wants to use the certificate can just use letsEncrypt to get it and set up the server when it makes a change to the certificate:

letsEncrypt (LetsEncrypt.AgreeTOS (Just "me@my.domain")) "example.com" "/var/www" `onChange` setupthewebserver

(Took me a while to notice I could use onChange like that, and so divorce the cert generation/renewal from the server setup. onChange is awesome! This blog post has been updated accordingly.)

In practice, the http site has to be brought up first, and then letsencrypt run, and then the cert installed and the https site brought up using it. That dance is automated by this property:

Apache.httpsVirtualHost "example.com" "/var/www" (LetsEncrypt.AgreeTOS (Just "me@my.domain"))

That's about as simple a configuration as I can imagine for such a website!

The two parts of letsencrypt that are complicated are not the fault of the client really. Those are renewal and rate limiting.

I'm currently rate limited for the next week because I asked letsencrypt for several certificates for a domain, as I was learning how to use it and integrating it into propellor. So I've not quite managed to fully test everything. That's annoying. I also worry that rate limiting could hit at an inopportune time once I'm relying on letsencrypt. It's especially problematic that it only allows 5 certs for subdomains of a given domain per week. What if I use a lot of subdomains?

Renewal is complicated mostly because there's no good way to test it. You set up your cron job, or whatever, and wait three months, and hopefully it worked. Just as likely, you got something wrong, and your website breaks. Maybe letsencrypt could offer certificates that will only last an hour, or a day, for use when testing renewal.

Also, what if something goes wrong with renewal? Perhaps letsencrypt.org is not available when your certificate needs to be renewed.

What I've done in propellor to handle renewal is, it runs letsencrypt every time, with the --keep-until-expiring option. If this fails, propellor will report a failure. As long as propellor is run periodically by a cron job, this should result in multiple failure reports being sent (for 30 days I think) before a cert expires without getting renewed. But, I have not been able to test this.

Categories: Elsewhere

Iustin Pop: mt-st project new homepage

Planet Debian - Sun, 07/02/2016 - 21:34

A short public notice: mt-st project new homepage at https://github.com/iustin/mt-st. Feel free to forward your distribution-specific patches for upstream integration!

Context: a while back I bought a tape unit to help me with backups. Yay, tape! All good, except that I later found out that the Debian package was orphaned, so I took over the maintenance.

All good once more, but there were a number of patches in the Debian package that were not Debian-specific, but rather valid for upstream. And there was no actual upstream project homepage, as this was quite an old project, with no (visible) recent activity; the canonical place for the project source code was an ftp site (ibiblio.org). I spoke with Kai Mäkisara, the original author, and he agreed to let me take over the maintenance of the project (and that's what I intend to do: maintenance mostly, merging of patches, etc. but not significant work). So now there's a github project for it.

There was no VCS history for the project, so I did my best to partially recreate the history: I took the Debian releases from snapshots.debian.org and used the .orig.tar.gz files as a bulk import; the versions 0.7, 0.8, 0.9b and 1.1 have separate commits in the tree.

I also took the Debian and Fedora patches and applied them, and with a few other cleanups, I've just published the 1.2 release. I'll update the Debian packaging soon as well.

So, if you somehow read this and are the maintainer of mt-st in another distribution, feel free to send patches my way for integration; I know this might be late, as some distributions have dropped it (e.g. Arch Linux).

Categories: Elsewhere

Ben Armstrong: Bluff Trail icy dawn: Winter 2016

Planet Debian - Sun, 07/02/2016 - 15:45

Before the rest of the family was up, I took a brief excursion to explore the first kilometre of the Bluff Trail and check out conditions. I turned at the ridge, satisfied I had seen enough to give an idea of what it’s like out there, and then walked back the four kilometres home on the BLT Trail.

I saw three joggers and their three dogs just before I exited the Bluff Trail on the way back, and later, two young men on the BLT with day packs approaching. The parking lot had gained two more cars for a total of three as I headed home. Exercising appropriate caution and judgement, the first loop is beautiful and rewarding, and I’m not alone in feeling the draw of its delights this crisp morning.

Click the first photo below to start the slideshow.

[Slideshow captions: At the parking lot, some ice, but passable with caution · Trail head: a few mm of sleet · Many footprints since last snowfall · Thin ice encrusts the bog · The boardwalk offers some loose traction · Mental note: buy crampons · More thin bog ice · Bubbles captured in the bog ice · Shelves hang above receding water · First challenging boulder ascent · Rewarding view at the crest · Time to turn back here · Flowing runnels alongside BLT Trail · Home soon to fix breakfast · If it looks like a tripod, it is · Not a very adjustable tripod, however · Pretty, encrusted pool · The sun peeks out briefly · Light creeps down the rock face · Shimmering icy droplets and feathery moss · Capped with a light dusting of sleet]
Categories: Elsewhere
