Elsewhere

Laura Arjona: How contributors.debian.org helped in my email address migration

Planet Debian - Sun, 08/02/2015 - 01:30

Some months ago I changed my preferred email address. I updated my profile in different sites to point to the new address, and changed my subscriptions in mailing lists.

I forgot about subscriptions to Debian bugs.

I like that you don’t need a “user” account to participate in the Debian BTS (you just need an email address), but I learned that there’s no way to get a list of the bugs you’re subscribed to (for mailing lists it is possible: send mail to majordomo at lists.debian.org with “which your.email.address” in the body).
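For reference, such a majordomo message would look something like this (the sender address below is a placeholder):

To: majordomo@lists.debian.org

which you@example.org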

Then, I remembered that I’m listed in contributors.debian.org as BTS contributor, and that there is an “extra info” link, so I went there and got redirected to:

https://bugs.debian.org/cgi-bin/pkgreport.cgi?correspondent=MYOLDADDRESS

which lists all the bugs for which I sent an email (not only the bugs that I submitted). So now, I have a list of bug numbers to send -unsubscribe mails from my old address and then -subscribe mails from the new address.

I’m probably subscribed to some more bugs in which I didn’t participate (just lurking or interested in how people deal with them) but I suppose they are not many.

(I could have retrieved the list of bugs from the BTS interface, but contributors.debian.org came first to my mind, and it’s nice to have that link handy there, isn’t it?)


Filed under: Tools Tagged: Bugs, Contributing to libre software, Debian, Email, English
Categories: Elsewhere

Dirk Eddelbuettel: rfoaas 0.1.3

Planet Debian - Sun, 08/02/2015 - 01:26

A brand new version of rfoaas is now on CRAN. It shadows the 0.1.3 release of FOAAS, just as an earlier 0.1.2 had done (but there was something not quite right at the server backend, which we coded around with an interim release 0.1.2.1; neither of these was ever released to CRAN).

The rfoaas package provides an interface for R to the most excellent FOAAS service--which provides a modern, scalable and RESTful web service for the frequent need to tell someone to f$#@ off. Release 0.1.3 of FOAAS brings support for filters, with the initial support going to the absolutely outstanding shoutcloud.io service. This can be enabled by adding filter="shoutcloud" as an argument to any of the access functions. And thanks to shoutcloud.io, the result will be LOUD AND CLEAR.
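A minimal sketch, using the off() accessor (the same filter argument should work with any of the other access functions):

library(rfoaas)
## ask the FOAAS service for an "off" message, shouted via shoutcloud.io
off("Bob", "Alice", filter = "shoutcloud")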

As usual, CRANberries provides a diff to the previous CRAN release. Questions, comments etc should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: Elsewhere

Dirk Eddelbuettel: drat Tutorial: First Steps towards Lightweight R Repositories

Planet Debian - Sun, 08/02/2015 - 01:01

Now that drat is on CRAN and I got a bit of feedback (or typo corrections) in three issue tickets, I thought I could show how to quickly post such an interim version in a drat repository.

Now, I obviously already have a checkout of drat. If you, dear reader, wanted to play along and create your own drat repository, one rather simple way would be to clone my repo, as this gets you the desired gh-pages branch with the required src/contrib/ directories. Otherwise just do it by hand.
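That clone is a one-liner (assuming the GitHub location implied by the repository URL used below):

git clone https://github.com/eddelbuettel/drat.git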

Back to a new interim version. I just pushed commit fd06293 which bumps the version and date for the new interim release, based mostly on the three tickets addressed right after the initial release 0.0.1. So by building it we get a new version 0.0.1.1:

edd@max:~/git$ R CMD build drat
* checking for file ‘drat/DESCRIPTION’ ... OK
* preparing ‘drat’:
* checking DESCRIPTION meta-information ... OK
* checking for LF line-endings in source and make files
* checking for empty or unneeded directories
* building ‘drat_0.0.1.1.tar.gz’

edd@max:~/git$

Because I want to use the drat repo next, I need to now switch from master to gh-pages; a step I am omitting as we can assume that your drat repo will already be on its gh-pages branch.
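For completeness, that omitted branch switch is just:

git checkout gh-pages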

Next we simply call the drat function to add the release:

edd@max:~/git$ r -e 'drat:::insert("drat_0.0.1.1.tar.gz")'
edd@max:~/git$

As expected, we now have two updated PACKAGES files (compressed and plain) and a new tarball:

edd@max:~/git/drat(gh-pages)$ git status
On branch gh-pages
Your branch is up-to-date with 'origin/gh-pages'.
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)

        modified:   src/contrib/PACKAGES
        modified:   src/contrib/PACKAGES.gz

Untracked files:
  (use "git add <file>..." to include in what will be committed)

        src/contrib/drat_0.0.1.1.tar.gz

no changes added to commit (use "git add" and/or "git commit -a")
edd@max:~/git/drat(gh-pages)$

All that is left to do is to add, commit, and push---either as I usually do via the spectacularly useful editor mode, or on the command line, or by simply adding commit=TRUE in the call to insert() or insertPackage().

I prefer to use littler's r for command-line work, so I am setting the desired repos in ~/.littler.r which is read on startup (since the recent littler release 0.2.2) with these two lines:

## add RStudio CRAN mirror
drat:::add("CRAN", "http://cran.rstudio.com")

## add Dirk's drat
drat:::add("eddelbuettel")

After that, repos are set as I like them (at home at least):

edd@max:~/git$ r -e'print(options("repos"))'
$repos
                             CRAN                          eddelbuettel
        "http://cran.rstudio.com" "http://eddelbuettel.github.io/drat/"

edd@max:~/git$

And with that, we can just call update.packages() specifying the package directory to update:

edd@max:~/git$ r -e 'update.packages(ask=FALSE, lib.loc="/usr/local/lib/R/site-library")'
trying URL 'http://eddelbuettel.github.io/drat/src/contrib/drat_0.0.1.1.tar.gz'
Content type 'application/octet-stream' length 5829 bytes
opened URL
==================================================
downloaded 5829 bytes

* installing *source* package ‘drat’ ...
** R
** inst
** preparing package for lazy loading
** help
*** installing help indices
** building package indices
** testing if installed package can be loaded
* DONE (drat)
The downloaded source packages are in
        ‘/tmp/downloaded_packages’
edd@max:~/git$

and presto, a new version of a package we have installed (here the very drat interim release we just pushed above) is updated.

Writing this up made me realize I need to update the handy update.r script (see e.g. the littler examples page for more), as it hard-wires just one repo, which needs to be relaxed for drat. Maybe in install2.r, which already has docopt support...

Categories: Elsewhere

Eddy Petrișor: Using Gentoo to create a cross toolchain for the old NSLU2 systems (armv5te)

Planet Debian - Sat, 07/02/2015 - 20:07
This is mostly written so I don't forget how to create a custom (Arm) toolchain the Gentoo way (in a Gentoo chroot).

I have been a Debian user since 2001, and I like it a lot. Yet I have had my share of problems with it, mostly because, due to lack of time, I have very little disposition to track unstable or testing, so I am forced to use stable.

This led me to become a fan of Russ Allbery's backport script and to create a lot of local backports of packages that are already in unstable or testing.

But this does not help when packages are simply missing from Debian, or for needs like creating an ARM uClibc-based system that should be kept up to date from a security PoV.

I have experience with Buildroot and I must say I like it a lot for creating custom root filesystems and even toolchains. It allows a lot of flexibility that binary distros like Debian don't offer, and it does its designated work of creating root filesystems well. But Buildroot is not appropriate for a system that should be kept up to date, because it lacks a mechanism for updating to new versions of packages without recompiling the entire rootfs.

So I was hearing from the guys of the Linux Action Show (and Linux Unplugged - by the way, Jupiter Broadcasting, why do I need scripts enabled from several sites just to see the links for the shows?) how great Arch is: it is a binary rolling release, and you can customize packages by building your own from source using makepkg. I tried it, but ARM support is provided only for some specific (modern) devices, my venerable Linksys NSLU2s (I have 2 of them) not being among them.

So I tried Arch in a chroot, then dropped it in favour of a Gentoo chroot, since I had the feeling that running Arch from a chroot wasn't such a great idea and I don't want to install Arch on my SSD.

I successfully used Gentoo in the past to create an arm-unknown-linux-gnueabi chroot back in 2008, and I always liked the idea of Gentoo's USE flags, so I knew I could do this.


So here it goes:


# create a local portage overlay - necessary for cross tools
export LP=/usr/local/portage
mkdir -p $LP/{metadata,profiles}
echo 'mycross' > $LP/profiles/repo_name
echo 'masters = gentoo' > $LP/metadata/layout.conf
chown -R portage:portage $LP
echo 'PORTDIR_OVERLAY="'$LP' ${PORTDIR_OVERLAY}"' >> /etc/portage/make.conf
unset LP

# install crossdev, setup for the desired target, build toolchain
emerge crossdev
crossdev --init-target -t arm-softfloat-linux-gnueabi -oO /usr/local/portage/mycross
crossdev -t arm-softfloat-linux-gnueabi
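
# A quick sanity check that the resulting cross compiler actually
# targets ARM (hello.c is just a throwaway test file):
echo 'int main(void) { return 0; }' > hello.c
arm-softfloat-linux-gnueabi-gcc -o hello hello.c
file hello   # should report an ARM EABI executable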

 


Categories: Elsewhere

Ben Hutchings: Debian LTS work, January 2015

Planet Debian - Sat, 07/02/2015 - 17:53

This was my second month working on Debian LTS, paid for by Freexian's Debian LTS initiative via Codethink. I spent 11.75 hours working on the kernel package (linux-2.6) and committed my changes but did not complete an update. I or another developer will probably release an update soon.

I have committed fixes for CVE-2013-6885, CVE-2014-7822, CVE-2014-8133, CVE-2014-8134, CVE-2014-8160, CVE-2014-9419, CVE-2014-9420, CVE-2014-9584, CVE-2014-9585 and CVE-2015-1421. In the process of looking at CVE-2014-9419, I noticed that Linux 2.6.32.y is missing a series of fixes to FPU/MMX/SSE/AVX state management that were made in Linux 3.3 and backported to 3.2.y some time ago. These addressed possible corruption of these registers when switching tasks, although it's less likely to happen in 2.6.32.y. The fix for CVE-2014-9419 depends on them. So I've backported and committed all these changes, but may yet decide that they're too risky to include in the next update.

Categories: Elsewhere

Richard Hartmann: Release Critical Bug report for Week 06

Planet Debian - Sat, 07/02/2015 - 03:44

Belated post due to meh real life situations.

As you may have heard, if a package is removed from testing now, it will not be able to make it back into Jessie. Also, a lot of packages are about to be removed for being buggy. If those are gone, they are gone.

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1066 (Including 187 bugs affecting key packages)
    • Affecting Jessie: 161 (key packages: 123) That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 109 (key packages: 90) Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 25 bugs are tagged 'patch'. (key packages: 23) Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 6 bugs are marked as done, but still affect unstable. (key packages: 5) This can happen due to missing builds on some architectures, for example. Help investigate!
        • 78 bugs are neither tagged patch, nor marked done. (key packages: 62) Help make a first step towards resolution!
      • Affecting Jessie only: 52 (key packages: 33) Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 19 bugs are in packages that are unblocked by the release team. (key packages: 14)
        • 33 bugs are in packages that are not unblocked. (key packages: 19)

How do we compare to the Squeeze and Wheezy release cycles?

Week  Squeeze        Wheezy          Jessie
43    284 (213+71)   468 (332+136)   319 (240+79)
44    261 (201+60)   408 (265+143)   274 (224+50)
45    261 (205+56)   425 (291+134)   295 (229+66)
46    271 (200+71)   401 (258+143)   427 (313+114)
47    283 (209+74)   366 (221+145)   342 (260+82)
48    256 (177+79)   378 (230+148)   274 (189+85)
49    256 (180+76)   360 (216+155)   226 (147+79)
50    204 (148+56)   339 (195+144)   ???
51    178 (124+54)   323 (190+133)   189 (134+55)
52    115 (78+37)    289 (190+99)    147 (112+35)
1     93 (60+33)     287 (171+116)   140 (104+36)
2     82 (46+36)     271 (162+109)   157 (124+33)
3     25 (15+10)     249 (165+84)    172 (128+44)
4     14 (8+6)       244 (176+68)    187 (132+55)
5     2 (0+2)        224 (132+92)    175 (124+51)
6     release!       212 (129+83)    161 (109+52)
7     release+1      194 (128+66)
8     release+2      206 (144+62)
9     release+3      174 (105+69)
10    release+4      120 (72+48)
11    release+5      115 (74+41)
12    release+6      93 (47+46)
13    release+7      50 (24+26)
14    release+8      51 (32+19)
15    release+9      39 (32+7)
16    release+10     20 (12+8)
17    release+11     24 (19+5)
18    release+12     2 (2+0)

Graphical overview of bug stats thanks to azhag.

Categories: Elsewhere

Antoine Beaupré: Migrating from Drupal to Ikiwiki

Planet Debian - Sat, 07/02/2015 - 00:02

TLPL; j'ai changé de logiciel pour la gestion de mon blog.

TLDR; I have changed my blog from Drupal to Ikiwiki.

Note: since this post uses ikiwiki syntax (i just copied it over here), you may want to read the original version instead of this one.

The old blog will continue operating for a while to give feed
aggregators a chance to catch that article. It will also give the
Internet Archive time to catch up with the static stylesheets (it
turns out it doesn't like Drupal's CSS compression at all!). An
archive will therefore continue to be available on the Internet
Archive for people who miss the old stylesheet.

Eventually, I will simply redirect the anarcat.koumbit.org URL to
the new blog location. This will likely be my last blog post written
on Drupal, and all new content will be available at the new URL. RSS
feed URLs should not change.

Why

I am migrating away from Drupal because it is basically impossible to
upgrade my blog from Drupal 6 to Drupal 7. Or if it is, I'll have to
redo the whole freaking thing again when Drupal 8 comes along.

And frankly, I don't really need Drupal to run a blog. A blog was
originally a really simple thing: a web log. A set of articles
written on the corner of a table. Now with Drupal, I can add
ecommerce, a photo gallery and whatnot to my blog, but why would I do
that? And why does it need to be a dynamic CMS at all, if I get so
few comments?

So I'm switching to ikiwiki, for the following reasons:

  • no upgrades necessary: well, not exactly true, I still need to
    upgrade ikiwiki, but that's covered by the Debian package
    maintenance and I only have one patch to it, and there's no data migration! (the last such migration in ikiwiki was in 2009 and was fully supported)
  • offline editing: this is a big thing for me: I can just note
    things down and push them when I get back online
  • one place for everything: this blog is where I keep my notes, it's
    getting annoying to have to keep track of two places for that stuff
  • future-proof: extracting content from ikiwiki is amazingly
    simple. every page is a single markdown-formatted file. that's it.

Migrating will mean abandoning the
barlow theme, which was
seeing a declining usage anyways.

What

So what should be exported, exactly? There's a bunch of crap in the old
blog that I don't want: users, caches, logs, "modules", and the list
goes on. Maybe it's better to create a list of what I need to extract:

  • nodes
    • title ([[ikiwiki/directive/meta]] title and guid tags, guid to avoid flooding aggregators)
    • body (need to check for "break comments")
    • nid (for future reference?)
    • tags (should be added as \[[!tag foo bar baz]] at the bottom)
    • URL (to keep old addresses)
    • published date ([[ikiwiki/directive/meta]] date directive)
    • modification date ([[ikiwiki/directive/meta]] updated directive)
    • revisions?
    • attached files
  • menus
    • RSS feed
    • contact
    • search
  • comments
    • author name
    • date
    • title
    • content
  • attached files
    • thumbnails
    • links
  • tags
    • each tag should have its own RSS feed and latest posts displayed
When

Some time before summer 2015.

Who

Well, me, who else? You probably really don't care about that, so let's
get to the meat of it.

How

How to perform this migration... There are multiple paths:

  • MySQL commandline: extracting data using the commandline mysql tool (drush sqlq ...)
  • Views export: extracting "standard format" dumps from Drupal and
    parse it (JSON, XML, CSV?)

Both approaches had issues, and I found a third way: talk directly to
mysql and generate the files directly, in a Python script. But first,
here are the two previous approaches I know of.

MySQL commandline

LeLutin switched using MySQL queries,
although he doesn't specify how the content itself was migrated.
Comment importing is done with this script:

echo "select n.title, concat('| [[!comment format=mdwn|| username=\"', c.name, '\"|| ip=\"', c.hostname, '\"|| subject=\"', c.subject, '\"|| date=\"', FROM_UNIXTIME(c.created), '\"|| content=\"\"\"||', b.comment_body_value, '||\"\"\"]]') from node n, comment c, field_data_comment_body b where n.nid=c.nid and c.cid=b.entity_id;" | drush sqlc | tail -n +2 | while read line; do if [ -z "$i" ]; then i=0; fi; title=$(echo "$line" | sed -e 's/[ ]\+|.*//' -e 's/ /_/g' -e 's/[:(),?/+]//g'); body=$(echo "$line" | sed 's/[^|]*| //'); mkdir -p ~/comments/$title; echo -e "$body" &gt; ~/comments/$title/comment_$i._comment; i=$((i+1)); done

Kind of ugly, but it beats what I had before (which was "nothing").

I do think it is the right direction to take: simply talk to the
MySQL database, maybe with a native Python script. I know the Drupal
database schema pretty well (still! this is D6 after all) and it's
simple enough that this should just work.

Views export

[[!img 2015-02-03-233846_1440x900_scrot.png class="align-right" size="300x" align="center" alt="screenshot of views 2.x"]]

mvc recommended views data export on LeLutin's
blog. Unfortunately, my experience with the views export interface has
been somewhat mediocre so far. Yet another reason why I don't like
using Drupal anymore is this kind of obtuse dialog:

I clicked through those for about an hour to get JSON output that
turned out to be provided by views bonus instead of
views_data_export. And confusingly enough, the path and
format_name fields are null in the JSON output
(whyyy!?). views_data_export unfortunately only supports XML,
which seems hardly better than SQL for structured data, especially
considering I am going to write a script for the conversion anyways.

Basically, it doesn't seem like any amount of views mangling will
provide me with what i need.

Nevertheless, here's the [[failed-export-view.txt]] that I was able to
come up with, may it be useful for future freedom fighters.

Python script

I ended up making a fairly simple Python script to talk directly to
the MySQL database.

The script exports only nodes and comments, and nothing else. It makes
a bunch of assumptions about the structure of the site, and is
probably only going to work if your site is a simple blog like mine,
but could probably be improved significantly to encompass larger and
more complex datasets. History is not preserved so no interaction is
performed with git.

Generating dump

First, I imported the MySQL dump file into my local MySQL server for easier
development. It is 13.9 MiB!!

mysql -e 'CREATE DATABASE anarcatblogbak;'
ssh aegir.koumbit.net "cd anarcat.koumbit.org ; drush sql-dump" | pv | mysql anarcatblogbak

I decided not to import revisions. The majority (70%) of the content has
1 or 2 revisions, and those with two revisions are likely just from when
the node was actually published, with minor changes. ~80% have 3
revisions or less, 90% have 5 or less, 95% have 8 or less, and 98% have 10 or
less. Only 5 articles have more than 10 revisions, with two having the
maximum of 15 revisions.

Those stats were generated with:

SELECT title,count(vid) FROM anarcatblogbak.node_revisions group by nid;

Then throwing the output in a CSV spreadsheet (thanks to
mysql-workbench for the easy export), adding a column numbering the
rows (B1=1,B2=B1+1), another for generating percentages
(C1=B1/count(B$2:B$218)) and generating a simple graph with
that. There were probably ways of doing that more cleanly with R,
and I broke my promise to never use a spreadsheet again, but then
again it was Gnumeric and it's just to get a rough idea.

There are 196 articles to import, with 251 comments, which means an
average of 1.15 comments per article (not much!). Unpublished articles
(5!) are completely ignored.

Summaries are also not imported as such (break comments are
ignored) because ikiwiki doesn't support post summaries.

Calling the conversion script

The script is in [[drupal2ikiwiki.py]]. It is called with:

./drupal2ikiwiki.py -u anarcatblogbak -d anarcatblogbak blog -vv

The -n and -l1 options were used for first tests as well. Use this
command to generate HTML from the result without having to commit and
push it all:

ikiwiki --plugin meta --plugin tag --plugin comments --plugin inline . ../anarc.at.html

More plugins are of course enabled in the blog; see the setup file for
more information, or just enable plugins as needed to unbreak
things. Use the --rebuild flag on subsequent runs. The actual
invocation I use is more like:

ikiwiki --rebuild --no-usedirs --plugin inline --plugin calendar --plugin postsparkline --plugin meta --plugin tag --plugin comments --plugin sidebar . ../anarc.at.html

I had problems with dates, but it turns out that I wasn't setting
dates in redirects... Instead of doing that, I started adding a
"redirection" tag that gets ignored by the main page.

Files and old URLs

The script should keep the same URLs, as long as pathauto is enabled
on the site. Otherwise, some logic should be easy to add to point to
node/N.

To redirect to the new blog, rewrite rules, on original blog, should
be as simple as:

Redirect / http://anarc.at/blog/

When we're sure:

Redirect permanent / http://anarc.at/blog/

Now, on the new blog, some magic needs to happen for files. Both
/files and /sites/anarcat.koumbit.org/files need to resolve
properly. We can't use symlinks because
ikiwiki drops symlinks on generation.

So I'll just drop the files in /blog/files directly, the actual
migration is:

cp $DRUPAL/sites/anarcat.koumbit.org/files $IKIWIKI/blog/files
rm -r .htaccess css/ js/ tmp/ languages/
rm foo/bar # wtf was that.
rmdir *
sed -i 's#/sites/anarcat.koumbit.org/files/#/blog/files/#g' blog/*.mdwn
sed -i 's#http://anarcat.koumbit.org/blog/files/#/blog/files/#g' blog/*.mdwn
chmod -R -x blog/files
sudo chmod -R +X blog/files

A few pages to test images:

  • http://anarcat.koumbit.org/node/157
  • http://anarcat.koumbit.org/node/203

There are some pretty big files in there, 10-30MB MP3s - but those are
already in this wiki! so do not import them!

Running fdupes on the result helps find oddities.

The meta guid directive is used to keep the aggregators from finding
duplicate feed entries. I tested it with Liferea, but it may freak out
some other sites.

Remaining issues
  • postsparkline and calendar archive disrespect meta(date)
  • merge the files in /communication with the ones in /blog/files
    before import
  • import non-published nodes
  • check nodes with a format different than markdown (only a few 3=Full
    HTML found so far)
  • replace links to this wiki in blog posts with internal links

More progress information in [[the script|drupal2ikiwiki.py]] itself.

Categories: Elsewhere

Daniel Pocock: Lumicall's 3rd Birthday

Planet Drupal - Fri, 06/02/2015 - 21:33

Today, 6 February, is the third birthday of the Lumicall app for secure SIP on Android.

Happy birthday

Lumicall's 1.0 tag was created in the Git repository on this day in 2012. It was released to the Google Play store, known as the Android Market back then, while I was in Brussels, the day after FOSDEM.

Since then, Lumicall has also become available through the F-Droid free software marketplace for Android and this is the recommended way to download it.

An international effort

Most of the work on Lumicall itself has taken place in Switzerland. Many of the building blocks come from Switzerland's neighbours:

  • The ice4j ICE/STUN/TURN implementation comes from the amazing Jitsi softphone, which is developed in France.
  • The ZORG open source ZRTP stack comes from PrivateWave in Italy.
  • Lumicall itself is based on the Sipdroid project that has a German influence, while Sipdroid is based on MjSIP which comes out of Italy.
  • The ENUM dialing logic uses code from ENUMdroid, published by Nominet in the UK. The UK is not exactly a neighbour of Switzerland but there is a tremendous connection between the two countries.
  • Google's libPhoneNumber has been developed by the Google team in Zurich and helps Lumicall format phone numbers for dialing through international VoIP gateways and ENUM.

Lumicall also uses the reSIProcate project for server-side infrastructure. The repro SIP proxy and TURN server run on secure and reliable Debian servers in a leading Swiss data center.

An interesting three years for free communications

Free communications is not just about avoiding excessive charges for phone calls. Free communications is about freedom.

In the three years Lumicall has been promoting freedom, the issue of communications privacy has grabbed more headlines than I could have ever imagined.

On 5 June 2013 I published a blog about the Gold Standard in Free Communications Technology. Just hours later a leading British newspaper, The Guardian, published damning revelations about the US Government spying on its own citizens. Within a week, Edward Snowden was a household name.

Google's Eric Schmidt had previously told us that "If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place.". This statement is easily debunked: as CEO of a corporation listed on a public stock exchange, Schmidt and his senior executives are under an obligation to protect commercially sensitive information that could be used for crimes such as insider trading.

There is no guarantee that Lumicall will keep the most determined NSA agent out of your phone, but nonetheless using a free and open source application for communications does help to avoid the de facto leakage of your conversations to a plethora of marketing and profiling companies that occurs when using a regular phone service or messaging app.

How you can help free communications technology evolve

As I mentioned in my previous blog on Lumicall, the best way you can help Lumicall is by helping the F-Droid team. F-Droid provides a wonderful platform for distributing free software for Android and my own life really wouldn't be the same without it. It is a privilege for Lumicall to be featured in the F-Droid eco-system.

That said, if you try Lumicall and it doesn't work for you, please feel free to send details from the Android logs through the Lumicall issue tracker on GitHub and they will be looked at. It is impossible for Lumicall developers to test every possible phone, but where errors are obvious in the logs some attempt can be made to fix them.

Beyond regular SIP

Another thing that has emerged in the three years since Lumicall was launched is WebRTC, browser based real-time communications and VoIP.

In its present form, WebRTC provides tremendous opportunities on the desktop but it does not displace the need for dedicated VoIP apps on mobile handsets. WebRTC applications using JavaScript are a demanding solution that doesn't integrate as seamlessly with the Android UI as a native app, and they currently tend to be heavier users of the battery.

Lumicall users can receive calls from desktop users with a WebRTC browser using the free calling from browser to mobile feature on the Lumicall web site. This service is powered by JSCommunicator and DruCall for Drupal.

Categories: Elsewhere

Dries Buytaert: Growing Drupal in Latin America

Planet Drupal - Fri, 06/02/2015 - 20:45

When I visited Brazil in 2011, I was so impressed by the Latin American Drupal community and how active and passionate the people are. The region is fun and beautiful, with some of the most amazing sites I have seen anywhere in the world. It also happens to be a strategic region for the project.

Latin American community members are doing their part to grow the project and the Drupal community. In 2014, the region hosted 19 Global Training Day events to recruit newcomers, and community leaders coordinated many Drupal camps to help convert those new Drupal users into skilled talent. Members of the Latin American community help promote Drupal at local technology and Open Source events, visiting events like FISL (7,000+ participants), Consegi (5,000+ participants) and Latinoware (4,500+ participants).

You can see the results of all the hard work in the growth of the Latin American Drupal business ecosystem. The region has a huge number of talented developers working at agencies large and small. When they aren't creating great Drupal websites like the one for the Rio 2016 Olympics, they are contributing code back to the project. For example, during our recent Global Sprint Weekend, communities in Bolivia, Colombia, Costa Rica, and Nicaragua participated and made valuable contributions.

The community has also been instrumental in translation efforts. On localize.drupal.org, the top translation is Spanish with 500 contributors, and a significant portion of those contributors come from the Latin America region. Community members are also investing time and energy translating Drupal educational videos, conducting camps in Spanish, and even publishing a Drupal magazine in Spanish. All of these efforts lower the barrier to entry for Spanish speakers, which is incredibly important because Spanish is one of the top spoken languages in the world. While the official language of the Drupal project is English, there can be a language divide for newcomers who primarily speak other languages.

Last but not least, I am excited that we are bringing DrupalCon to Latin America next week. This is the fruit of many hours spent by passionate volunteers in the Latin American local communities, working together with the Drupal Association to figure out how to make a DrupalCon happen in this part of the world. At every DrupalCon we have had so far, we have seen an increase in energy for the project and a bump in engagement. Come for the software, stay for the community! Hasta pronto!

Categories: Elsewhere

Aten Design Group: Removing Duplicate Content Across Multiple Drupal Views

Planet Drupal - Fri, 06/02/2015 - 19:31

Views is an indispensable and powerful module at the heart of Drupal that you can use to quickly generate structured tables or lists of consistently formatted content, and filter and group that content by simple or complex logic. But in pushing Views to do ever more complex and useful things, we can sort of paint ourselves into a corner sometimes. For instance, I have many times created multiple Views displays on a single page that contain overlapping content. My homepage has a Views display of manually curated content, using Nodequeue or a similar module. On the same homepage, I have a Views display of news content that shows the most recent content. Since the two different Views displays pull from the same bucket of content, it is very possible to have duplicate content across the displays. Here is an example:

Notice the underlined duplicate titles across the two Views displays.

This is what we want:

Notice the missing featured titles from the deduped Views display.

By creating a custom Drupal module and utilizing a Views hook, we can remove the duplicate content across the two Views displays. We programmatically check exactly which pieces of content are in one View, and we feed that information to a filter in the second View that excludes it.

Before diving into my example, I want to cover a few assumptions I’m making about you.
  • You are using Drupal 7
  • You are familiar with Views module
  • You know how to install modules
  • You know at least a touch of PHP
Steps to Follow Along

View Example Code on Github

Step 1

My example code assumes that you have created two Views displays.

  • Featured - A View display of manually curated content. This display will be used to generate a list of content to exclude from our automated Views display.
  • Automated - A View display of news content that shows the most recent content. This display will accept a list of content to be excluded.

You can of course adapt the Views displays to your exact needs.

After creating the Views you wish to use, you’ll need to know the machine name of the View and View display.

One way to retrieve these names is from the view edit URL. While editing your view, notice the URL:

/admin/structure/views/view/automated_news/edit/block

In my case, automated_news is the view name and block is the view display name.

Make a note of your machine names for Step 3

Step 2

On the view you wish to dedup or exclude content from, you’ll need to add and configure a contextual filter.

  1. Navigate to edit the automated content view
  2. Under “Advanced” & “Contextual Filters”, click add and select “Content: Nid (The node ID.)”
  3. Select “Provide default value” and choose “Fixed value”.
  4. Leave the Fixed value empty as we’ll provide this in code
  5. Under “More” select “Allow multiple values” and “Exclude”
  6. Save the view
Step 3

Enable your custom module that contains the deduping code. You are welcome to download the example module on Github and use it, or add the code to an existing custom module if it makes more sense. In any case, you’ll need to customize the module a little bit to work with your Views.

  1. Update the machine name variables from Step 1. See $featured_view_name, $featured_view_display, $automated_view_name and $automated_view_display
  2. Save your module
  3. Enable your module
  4. Clear your Drupal cache

If everything was configured correctly, you should see your Views displays properly deduped.

Code Explained

View Example Code on Github

The code relies on hook_views_pre_view(), a Views hook. Using this hook, we can pass values to the Views display contextual filter set in Step 2. Here is a version where content IDs (NIDs) 1, 2, 5 & 6 are manually being passed to a view for exclusion.

/**
 * Implements hook_views_pre_view().
 *
 * Replace "mymodule" with your module's machine name.
 *
 * https://api.drupal.org/api/views/views.api.php/function/hook_views_pre_view/7
 */
function mymodule_views_pre_view(&$view, &$display_id, &$args) {
  // Check for the specific View name and display.
  if ($view->name == 'automated_news' && $display_id == 'block') {
    // Exclude nodes 1, 2, 5 and 6; with "Allow multiple values" enabled,
    // Views parses '+'-separated values as multiple arguments.
    $args[] = '1+2+5+6';
  }
}

There are many ways you could dynamically build a list of NIDs you wish to exclude. In my example, we are loading another Views display to build a list of NIDs. The function views_get_view() loads a Views display in code and provides access to the result set.

// Load the view
// https://api.drupal.org/api/views/views.module/function/views_get_view/7
$view = views_get_view('automated_news');
$view->set_display('block');
$view->pre_execute();
$view->execute();

// Get the results
$results = $view->result;
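From here, a hedged sketch of gluing the pieces together inside the hook shown earlier ($results is the result set loaded above; the variable names are illustrative, not from the example module):

// Collect the NIDs from the loaded view's result set.
$exclude = array();
foreach ($results as $row) {
  $exclude[] = $row->nid;
}
// Views accepts multiple NIDs joined by '+' for the contextual filter.
$args[] = implode('+', $exclude);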

Drupal Views is a powerful module, and I like the ability to extend it even further using the extensive Views hooks API. In the case of my example, we can keep using Views without writing complex database queries.

Categories: Elsewhere

Carl Chenet: Backup Checker 1.0, the fully automated backup checker

Planet Debian - Fri, 06/02/2015 - 19:08

Follow me on Identi.ca  or Twitter  or Diaspora*

Backup Checker is the new name of the Brebis project.

Backup Checker is a CLI software developed in Python 3.4, allowing users to verify the integrity of archives (tar, gz, bz2, lzma, zip, tree of files) and the state of the files inside an archive, in order to find corruption, or intentional or accidental changes of state or removal of files inside an archive.

Brebis version 0.9 was downloaded 1092 times. In order to keep the project growing, several steps were adopted recently:

  • Brebis was renamed Backup Checker, the latter being more explicit.
  • Mercurial, the project's distributed version control system, was replaced by Git.
  • The project switched from an old self-hosted Redmine to GitHub. Here is the GitHub project page.

This new version 1.0 does not only bring project changes. Starting from 1.0, Backup Checker now verifies the owner name and the owner group name of a file inside an archive, extending the possible checks for both an archive and a tree of files.

Moreover, the recent version 0.10 of Brebis, published 9 days ago, provided the following features:

  • The default behaviour of calculating the hash sums of every file in the archive or the tree of files was discontinued, because of poor performance when using Backup Checker on large archives.
  • You can force the old behaviour by using the new --hashes option.
  • The new --exceptions-file option allows the user to provide a list of files inside the archive for which to compute hash sums.
  • The documentation of the project is now available on Readthedocs.

As usual, any feedback is welcome, through bug reports, emails to the author, or comments on this blog.


Categories: Elsewhere

Gunnar Wolf: On the number of attempts on brute-force login attacks

Planet Debian - Fri, 06/02/2015 - 18:51

I would expect brute-force login attacks to be more common. And yes, at some point I got tired of ssh scans, so I added rate-limiting firewall rules and even switched the daemon to a nonstandard port... But I have very seldom received an IMAP brute-force attack. I have received countless phishing scams aimed at my users, and I know some of them have bitten, because the scammers then use their passwords on my servers to send tons of spam. This activity is clearly atypical.

Anyway, yesterday we got a brute-force attack on IMAP. A very childish attack, attempted from an IP in the largest ISP in Mexico, but using only usernames that would not belong in our culture (mostly English first names and some usual service account names).

What I find interesting is that each login was attempted a limited (and different) number of times: four account names were attempted only once, eight were attempted twice, and so on — following this pattern:

 1 •
 2 ••
 3 ••
 4 •••••
 5 •••••••
 6 ••••••
 7 •••••
 8 ••••••••
 9 •••••••••
10 ••••••••
11 ••••••••
12 ••••••••••
13 •••••••
14 ••••••••••
15 •••••••••
16 ••••••••••••
17 •••••••••••
18 ••••••••••••••
19 •••••••••••••••
20 ••••••••••••
21 ••••••••••••
22 ••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••

(each dot represents four account names)

So... What's significant in all this? Very little, if anything at all. But for such a naïve login attack, it's interesting to see that the number of attempted passwords per login varies so much. Yes, 273 (over ¼ of the total) did 22 requests, and another 200 did 18 or more. The rest fell quite short.

In case you want to play with the data, you can grab the list of attempts with the number of requests. I filtered out all other data, as it was basically meaningless. This file is the result of:

$ grep LOGIN /var/log/syslog.1 |
    grep 'FAILED.*201.163.94.42' |
    awk '{print $7 " " $8}' |
    sort | uniq -c

Attachment: logins.txt (27.97 KB)
Categories: Elsewhere

Annertech: 5 Tips for a Responsive Website

Planet Drupal - Fri, 06/02/2015 - 18:36

Last month I wrote about why we care about responsive websites, and why you should too. This month I'm going to scratch the surface of how one might achieve such a goal.

Responsive Buzzword Bingo

I'm not about to go knee-deep into the semantics of the various jargon words surrounding this topic and their pros and cons, but here are broad descriptions of some of the approaches.

Categories: Elsewhere

Dcycle: Two tips for debugging Simpletest tests

Planet Drupal - Fri, 06/02/2015 - 15:52

I have been using Simpletest on Drupal 7 for several years and, used well, it can greatly enhance the quality of your code. I like to practice test-driven development: writing a failing test first, then running it multiple times, each time tweaking the code, until the test passes.

Simpletest works by spawning a completely new Drupal site (ignoring your current database), running tests, and destroying the database. Sometimes, a test will fail and you're not quite sure why. Here are two tips to help you debug why your tests are failing:

Tip #1: debug()

The Drupal debug() function can be placed anywhere in your test or your source code, and the result will appear on the test results page in the GUI.

For example, if things work fine when you are playing around with the dev version of your site, but in the test a specific node contains invalid data, you can add this line anywhere in your test or source code which is being called during your test:

...
debug($node);
...

This will provide formatted output of your $node variable, alongside your test results.

Tip #2: die()

Sometimes the temporary test environment's behaviour seems to make no sense. And it can be frustrating to not be able to simply log into it and play around with it, because it is destroyed after the test is over.

To understand this technique, here is a quick primer on how Simpletest works:

  • In Drupal 7, running a test requires a host site and database. This is basically an installed Drupal site with Simpletest enabled, and your module somewhere in the modules directory (the module you are testing does not have to be enabled).
  • When you run a test, Simpletest creates a brand-new installation of Drupal using a special prefix simpletest123456 where 123456 is a random number. This allows Simpletest to have an isolated environment where to run tests, but on the same database and with the same credentials as the host.
  • When your test does something, like call a function, or load a page with, for example, $this->drupalGet('user'), the host environment is ignored and the temporary environment (which uses the prefixed database tables) is used. In the previous example, the test loads the "user" page using real HTTP calls. Simpletest knows to use the temporary environment because the call is made using a specially-crafted user agent.
  • When the test is over, all tables with the prefix simpletest123456 are destroyed.

If you have ever tried to run a test on a host environment which already contains a prefix, you will understand why you can get "table name too long" errors in certain cases: Simpletest is trying to add a prefix to another prefix. That's one reason to avoid prefixes when you can, but I digress.

Now you can try this: somewhere in your test code, add die(); this will kill Simpletest, leaving the temporary database intact.

Here is an example: a colleague recently was testing a feature which exported a view. In the dev environment, the view was available to users with the role manager, as was expected. However when the test logged in as a manager user and attempted to access the view, the result was an "Access denied" page.

Because we couldn't easily figure it out, I suggested adding die() to play around in the environment:

...
$this->drupalLogin($manager);
$this->drupalGet('inventory');
die();
$this->assertNoText('denied', 'A manager accessing the inventory page does not see "access denied"');
...

Now, when the test was run, we could:

  • wait for it to crash,
  • then examine our database to figure out which prefix the test was using,
  • change the database prefix in sites/default/settings.php from '' to (for example) 'simpletest73845', as sketched after this list.
  • run drush uli to get a one-time login.
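
In Drupal 7, that prefix change in sites/default/settings.php looks something like this (the connection details are placeholders; only the prefix line matters here):

$databases['default']['default'] = array(
  'driver' => 'mysql',
  'database' => 'drupal',
  'username' => 'user',
  'password' => 'password',
  'host' => 'localhost',
  // Point Drupal at the temporary tables Simpletest left behind.
  'prefix' => 'simpletest73845',
);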

Now, it was easier to debug the source of the problem by visiting the views configuration for inventory: it turns out that features exports views with access by role using the role ID, not the role name (the role ID can be different for each environment). Simply changing the access method for the view from "by role" to "by permission" made the test pass, and prevented a potential security flaw in the code.

(Another reason to avoid "by role" access in views is that User 1 often does not have the role required, and it is often disconcerting to be user 1 and have "access denied" to a view.)

So in conclusion, Simpletest is great when it works as expected and when you understand what it does, but when you don't, it is always good to know a few techniques for further investigation.

Tags: blogplanet
Categories: Elsewhere

Olivier Berger: Configuring the start of multiple docker container with Vagrant in a portable manner

Planet Debian - Fri, 06/02/2015 - 12:42

I’ve mentioned earlier the work that our students did on migrating part of the elements of the Database MOOC lab VM to docker.

While docker seems quite cool, let’s face it, participants in the MOOCs aren’t all using Linux, where docker can be available directly. Hence the need to use boot2docker, for instance on Windows.

Then we’re back quite close to the architecture of the Vagrant VM, which also relies on a VirtualBox VM to run a Linux machine (boot2docker does exactly that with a minimal Linux which runs docker).

If VirtualBox is to be kept around, then why not stick with Vagrant too, as it offers a docker provider. This docker provider for Vagrant helps configure basic parameters of docker containers in a Vagrantfile, and basically uses the vagrant up command instead of docker build + docker run. On Linux, it only triggers docker; otherwise, it’ll start boot2docker (or any other Linux box) in between.
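As a rough sketch (the attribute names come from the Vagrant docker provider; the Dockerfile location and port mapping are illustrative), such a Vagrantfile could look like:

# Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    d.build_dir = "."           # build the container from a local Dockerfile
    d.ports = ["8080:80"]       # illustrative host:container port mapping
    # d.force_host_vm = true    # force the intermediary boot2docker VM even on Linux
  end
end

Running vagrant up then builds and starts the container, going through boot2docker where needed.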

This somehow offers a unified invocation command, which makes the documentation a bit more portable.

Now, there are some tricks when using this docker provider, in particular for debugging what’s happening inside the VM.

One nice feature is that you can debug on Linux what is to be executed on Windows, by explicitly requiring the start of the intermediary boot2docker VM even if it’s not really needed.

By using a custom secondary Vagrantfile for that VM, it is possible to tune some of its parameters (like its graphics memory, to allow starting it with a GUI you can connect to — another alternative is to “ssh -p 2222 docker@localhost” once you know that its password is ‘tcuser’).

I’ve committed an example of such a setup in the moocbdvm project’s Git, which duplicates the docker provisioning files that our students had already published in the dedicated GitHub repo.

Here’s an interesting reference post about Vagrant + docker and multiple containers, btw.

Categories: Elsewhere

OpenLucius: A robot in your Drupal social intranet / extranet – why and how?

Planet Drupal - Fri, 06/02/2015 - 10:15

If you work with a team on projects, then there are (obviously) tasks to share, including tasks to be followed up by your clients.

For example: the delivery of a design in Photoshop/fireworks for their new social intranet.

Now it can happen that somebody does not follow up on his/her task in time, resulting in problems for your planning. Usually this is not on purpose; often they simply 'forgot'.

Categories: Elsewhere

Drupal core announcements: Princeton Critical Sprint Recap

Planet Drupal - Fri, 06/02/2015 - 02:35

At the end of January, 2015, sprinters gathered in Princeton, NJ, USA for a focused D8 Accelerate sprint designed to accelerate work on critical and upgrade-path-blocking issues related to menus, menu links, and link generation.

The sprint was coordinated with the 4th annual DrupalCamp NJ. pwolanin, dawehner, kgoel, xjm, Wim Leers, mpdonadio, YesCT, effulgentsia, and tim.plunkett participated onsite. (In addition to the D8 Accelerate Group, local Drupalists davidhernandez, cilefen, crowdcg, wheatpenny, ijf8090, and HumanSky joined the sprint primarily to work on Drupal 8 Twig and theme issues, and EclipseGC and evolvingweb dropped in too.)

The sprint benefitted from pre-sprint planning meetings and discussion with the sprinters and a broader group of contributors (including webchick and catch, as well as amateescu, larowlan, Gábor Hojtsy, Bojhan, and Crell), and daily support from webchick to track, summarize, and unblock progress with issue posts and commits so the sprinters could move on to the next steps.

Thanks to the pre-sprint planning, sprint focus, and the tremendous experience of the participants and their history of working together on hard issues in the past, this sprint achieved a very high level and breadth of success. Sprinters worked on a total of 17 critical issues (14 of which are now fixed) as well as 27 other related bugs and DX fixes. All the issues opened or worked on during the sprint can be seen under the tag D8 Accelerate NJ.

Take-away lessons

Identifying key issues in advance made the sprint more productive, as did meeting via video chat and in IRC to discuss possible solutions ahead of time. The pending deadline of the sprint helped push contributors to forge consensus and begin work on the issues before the event even happened. Never underestimate the value of a hard deadline!

As always, having the group in the same room (and timezone) with a whiteboard allowed resolution of discussions that would have taken weeks via issue comments and online meetings. We also were able to scale our progress with occasional pair programming and pair code review - very effective for ramping up skilled sprinters to unfamiliar and difficult problem spaces.

In addition, while the sprint was happening at the same time as DrupalCamp NJ activities (and for 2 days in the same building), the sprinters deliberately avoided the presentations or general Drupal mentoring they might have done in other circumstances. This relative lack of distractions was part of what we learned made the prior Ghent sprint a success and it helped maintain the focus at this sprint as well.

The sprinters stayed in 2 adjoining hotels, which made coordination easy.

Changing the sprint room each day initially seemed like it might be a drawback, but instead seemed to keep things a bit fresher. Note, however, that every room had windows and natural light - especially important the first days as people were dealing with jet lag.

It's off-season for New Jersey in January, so flight costs were low, which allowed us to fund many more people to come and also accommodated people who made travel plans as late as a week prior to the event. This allowed us to recruit more participants even with a very short time frame for planning. (When the sprint was first given the D8 Accelerate Grant at the end of December, we had only 3 confirmed attendees and just a rough idea of the issues and goals to be addressed.)

Sponsors

The sprint was sponsored by a Drupal Association grant and by Princeton University Web Development Services providing space and logistical support.

In addition, Black Mesh sponsored all travel costs for YesCT, Forum One provided time off for kgoel, Night Kitchen Interactive provided time off for mpdonadio, and Acquia provided several employees' time (pwolanin, effulgentsia, xjm, tim.plunkett, and Wim Leers).

Daily sprint updates from webchick

These daily issue summaries were originally provided by webchick on [meta] Finalize the menu links system.

January 27

A very hyped snow storm led to the cancellation of all 3 flights coming from Europe - but the snow fell further north and east, so all 3 participants were able to reschedule for the next day.

January 28

Most participants arrived in Princeton and settled in.

January 29

Day one of the sprint! Occupying the lounge at the NE corner of 701 Carnegie, part of the facilities of Princeton University.

Dinner plans were inspired by the DrupalCamp NJ theme for 2015 - a New Jersey diner! Just reading the menu was an exotic treat for the Europeans.

January 30

Occupying a multi-purpose room at the SE Corner of 701 Carnegie.

At the same time, about 70 people participated in 4 Drupal training courses in other rooms on the ground floor.

Thanks to the prompting of Tim Plunkett, dinner was real New Jersey pizza at Nino's Pizza Star in Princeton (a local favorite among the Central NJ Drupal meetup regulars). EclipseGC even treated the group to a Nutella pizza for dessert!

January 31

Occupying room 111 at the Friend Engineering Center, on the campus of Princeton University. In the neighboring rooms the sessions and BoFs were happening for the 4th annual DrupalCamp NJ. The sprinters were counted among the 257 registered attendees.

February 1

Occupying a (paid) meeting room at the hotel where most sprinters were staying.

Apparently there was some football game going on too.

While most people are headed home tomorrow, there are a few stalwart hangers-on who are staying through to Tuesday.


February 2

People worked together at the hotel or remotely. A farewell lunch in Princeton was followed by a brief look at the Princeton University campus as a scenic amount of snow fell again.

Categories: Elsewhere

Mediacurrent: Introducing the Mediacurrent Dropcast!

Planet Drupal - Thu, 05/02/2015 - 22:03

Our inaugural episode. Team Kool-Aide starts a podcast and we talk about a variety of topics taken from The Weekly Drop.

Episode 0 Audio Download Link

 

Categories: Elsewhere

more onion - devblog: Stale static cache - you're likely to have seen this bug!

Planet Drupal - Thu, 05/02/2015 - 21:18

This week I've finally found the core of several issues that I've had in the past. Are you using install-profiles or features? Then this bug is likely to have affected you too.

Tags:
Categories: Elsewhere
