Feed aggregator

DrupalOnWindows: Getting #2,000 requests per second without varnish

Planet Drupal - Sun, 08/02/2015 - 07:00

Now that Varnish is finally "the free version of a proprietary software", it's time to look somewhere else, and the answer is right in front of our noses. When the word proprietary starts to appear, let's stick to the big guys. If you want to know how to get #2,000 requests per second without depending on anything else but your current webserver (yes, the same one you are using to serve your pages, no need to set up anything else here) then keep on reading.

Categories: Elsewhere

Hideki Yamane: ThinkPad X121e firmware bug with UEFI

Planet Debian - Sun, 08/02/2015 - 04:59
Okay, I was wrong: d-i 8 rc1 can be installed on the ThinkPad X121e, thanks.

However, I recommend installing Debian (or another distro) in legacy BIOS mode instead of UEFI, because the X121e has a firmware bug: the Fn key is always enabled after you come back from suspend; see the posts on the Lenovo users forum.

It would probably annoy you (especially with Vim - oh, "Esc"! ;)
Categories: Elsewhere

Laura Arjona: How contributors.debian.org helped in my email address migration

Planet Debian - Sun, 08/02/2015 - 01:30

Some months ago I changed my preferred email address. I updated my profile in different sites to point to the new address, and changed my subscriptions in mailing lists.

I forgot about subscriptions to Debian bugs.

I like that you don’t need a “user” account to participate in the Debian BTS (you just need an email address), but I learned that there’s no way to get a list of the bugs you’re subscribed to (for mailing lists it is possible: send a mail to majordomo at lists.debian.org with "which your.email.address" in the body).
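For illustration, that majordomo query could be sent with something along these lines (a rough sketch only: a mailx-style mail command is assumed, the address is a placeholder, and the mail must come from the subscribed address):

echo "which my.old.address@example.org" | mail -s which majordomo@lists.debian.org
# majordomo replies with the lists.debian.org lists that address is subscribed to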

Then, I remembered that I’m listed in contributors.debian.org as BTS contributor, and that there is an “extra info” link, so I went there and got redirected to:

https://bugs.debian.org/cgi-bin/pkgreport.cgi?correspondent=MYOLDADDRESS

which lists all the bugs for which I sent an email (not only the bugs that I submitted). So now I have a list of bug numbers, to which I can send -unsubscribe mails from my old address and then -subscribe mails from the new address.
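A minimal shell sketch of that bulk un/subscription, assuming the bug numbers were pasted into a bugs.txt file and a mailx-style mail command is available; the BTS answers each mail with a confirmation request:

# bugs.txt holds one bug number per line, copied from the pkgreport.cgi listing
while read bug; do
  # sent from the old address; the -subscribe variant is the same, sent from the new address
  echo unsubscribe | mail -s unsubscribe "${bug}-unsubscribe@bugs.debian.org"
done < bugs.txt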

I’m probably subscribed to some more bugs in which I didn’t participate (just lurking or interested in how people deal with them) but I suppose they are not many.

(I could have retrieved the list of bugs from the BTS interface, but contributors.debian.org came first to my mind, and it’s nice to have that link handy there, isn’t it?)


Filed under: Tools Tagged: Bugs, Contributing to libre software, Debian, Email, English
Categories: Elsewhere

Dirk Eddelbuettel: rfoaas 0.1.3

Planet Debian - Sun, 08/02/2015 - 01:26

A brand new version of rfoaas is now on CRAN. It shadows the 0.1.3 release of FOAAS just as an earlier 0.1.2 had done (but there was something not quite right at the server backend, which we coded around with an interim release 0.1.2.1; neither of these was ever released to CRAN).

The rfoaas package provides an interface for R to the most excellent FOAAS service--which provides a modern, scalable and RESTful web service for the frequent need to tell someone to f$#@ off. Release 0.1.3 of FOAAS brings support for filters, with the initial support going to the absolutely outstanding shoutcloud.io service. This can be enabled by adding filter="shoutcloud" as an argument to any of the access functions. And thanks to shoutcloud.io, the result will be LOUD AND CLEAR.
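From the command line that could look roughly like this (a sketch only: off() is assumed here to be one of the accessor functions, and the reply is assumed to come back as a character string):

r -e 'library(rfoaas); print(off("Tom", "Dirk", filter="shoutcloud"))'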

As usual, CRANberries provides a diff to the previous CRAN release. Questions, comments etc should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: Elsewhere

Dirk Eddelbuettel: drat Tutorial: First Steps towards Lightweight R Repositories

Planet Debian - Sun, 08/02/2015 - 01:01

Now that drat is on CRAN and I got a bit of feedback (or typo corrections) in three issue tickets, I thought I could show how to quickly post such an interim version in a drat repository.

Now, I obviously already have a checkout of drat. If you, dear reader, wanted to play along and create your own drat repository, one rather simple way would be to clone my repo, as this gets you the desired gh-pages branch with the required src/contrib/ directories. Otherwise just set it up by hand.
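A rough sketch of that clone-and-repoint approach (the GitHub URLs are assumptions, substitute your own account):

git clone https://github.com/eddelbuettel/drat.git
cd drat
git checkout gh-pages                                   # the branch GitHub serves as <user>.github.io/drat/
# after creating an empty 'drat' repository on your own account:
git remote set-url origin git@github.com:YOURUSER/drat.git
git push -u origin gh-pages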

Back to a new interim version. I just pushed commit fd06293, which bumps the version and date for the new interim release, based mostly on the three tickets addressed right after the initial release 0.0.1. So by building it we get a new version 0.0.1.1:

edd@max:~/git$ R CMD build drat
* checking for file ‘drat/DESCRIPTION’ ... OK
* preparing ‘drat’:
* checking DESCRIPTION meta-information ... OK
* checking for LF line-endings in source and make files
* checking for empty or unneeded directories
* building ‘drat_0.0.1.1.tar.gz’
edd@max:~/git$

Because I want to use the drat repo next, I need to now switch from master to gh-pages; a step I am omitting as we can assume that your drat repo will already be on its gh-pages branch.

Next we simply call the drat function to add the release:

edd@max:~/git$ r -e 'drat:::insert("drat_0.0.1.1.tar.gz")'
edd@max:~/git$

As expected, we now have two updated PACKAGES files (compressed and plain) and a new tarball:

edd@max:~/git/drat(gh-pages)$ git status
On branch gh-pages
Your branch is up-to-date with 'origin/gh-pages'.
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)

        modified:   src/contrib/PACKAGES
        modified:   src/contrib/PACKAGES.gz

Untracked files:
  (use "git add <file>..." to include in what will be committed)

        src/contrib/drat_0.0.1.1.tar.gz

no changes added to commit (use "git add" and/or "git commit -a")
edd@max:~/git/drat(gh-pages)$

All that is left to do is to add, commit and push---either as I usually do via the spectacularly useful editor mode, or on the command line, or by simply adding commit=TRUE in the call to insert() or insertPackage().
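For reference, the one-step variant would then look roughly like this (a sketch; insertPackage() is left to fall back on its default repodir here):

r -e 'drat:::insertPackage("drat_0.0.1.1.tar.gz", commit=TRUE)'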

I prefer to use littler's r for command-line work, so I set the desired repos in ~/.littler.r, which is read on startup (since the recent littler release 0.2.2), with these two calls:

## add RStudio CRAN mirror
drat:::add("CRAN", "http://cran.rstudio.com")
## add Dirk's drat
drat:::add("eddelbuettel")

After that, repos are set as I like them (at home at least):

edd@max:~/git$ r -e'print(options("repos"))'
$repos
                                CRAN                          eddelbuettel
          "http://cran.rstudio.com"  "http://eddelbuettel.github.io/drat/"

edd@max:~/git$

And with that, we can just call update.packages() specifying the package directory to update:

edd@max:~/git$ r -e 'update.packages(ask=FALSE, lib.loc="/usr/local/lib/R/site-library")'
trying URL 'http://eddelbuettel.github.io/drat/src/contrib/drat_0.0.1.1.tar.gz'
Content type 'application/octet-stream' length 5829 bytes
opened URL
==================================================
downloaded 5829 bytes

* installing *source* package ‘drat’ ...
** R
** inst
** preparing package for lazy loading
** help
*** installing help indices
** building package indices
** testing if installed package can be loaded
* DONE (drat)

The downloaded source packages are in
        ‘/tmp/downloaded_packages’
edd@max:~/git$

and presto, a new version of a package we have installed (here the very drat interim release we just pushed above) is updated.

Writing this up made me realize I need to update the handy update.r script (see e.g. the littler examples page for more), as it hard-wires just one repo, which needs to be relaxed for drat. Maybe in install2.r, which already has docopt support...

Categories: Elsewhere

Eddy Petrișor: Using Gentoo to create a cross toolchain for the old NSLU2 systems (armv5te)

Planet Debian - Sat, 07/02/2015 - 20:07
This is mostly written so I don't forget how to create a custom (Arm) toolchain the Gentoo way (in a Gentoo chroot).

I have been a Debian user since 2001, and I like it a lot. Yet I have had my share of problems with it, mostly because, due to lack of time, I have very little disposition to track unstable or testing, so I am forced to use stable.

This led me to become a fan of Russ Allbery's backport script and to create a lot of local backports of packages that are already in unstable or testing.

But this does not help when packages are simply missing from Debian, or when I want something like an ARM uclibc-based system that should be kept up to date from a security point of view.

I have experience with Buildroot and I must say I like it a lot for creating custom root filesystems and even toolchains. It allows a lot of flexibility that binary distros like Debian don't offer, and it does its designated job of creating root filesystems well. But Buildroot is not appropriate for a system that should be kept up to date, because it lacks a mechanism to update to new versions of packages without recompiling the entire rootfs.

So I was hearing from the guys of the Linux Action Show (and Linux Unplugged - by the way, Jupiter Broadcasting, why do I need scripts enabled from several sites just to see the links for the shows?) how great Arch is, that it is a binary rolling release, and that you can customize packages by building your own from source using makepkg. I tried it, but Arm support is provided only for some specific (modern) devices, and my venerable Linksys NSLU2s (I have two of them) are not among them.

So I tried Arch in a chroot, then dropped it in favour of a Gentoo chroot, since I had the feeling that running Arch from a chroot wasn't such a great idea and I don't want to install Arch on my SSD.

I successfully used Gentoo in the past to create an arm-unknown-linux-gnueabi chroot back in 2008, and I have always liked the idea of Gentoo's USE flags, so I knew I could do this.


So here it goes:


# create a local portage overlay - necessary for cross tools
export LP=/usr/local/portage
mkdir -p $LP/{metadata,profiles}
echo 'mycross' > $LP/profiles/repo_name
echo 'masters = gentoo' > $LP/metadata/layout.conf
chown -R portage:portage $LP
echo 'PORTDIR_OVERLAY="'$LP' ${PORTDIR_OVERLAY}"' >> /etc/portage/make.conf
unset LP

# install crossdev, setup for the desired target, build toolchain
emerge crossdev
crossdev --init-target -t arm-softfloat-linux-gnueabi -oO /usr/local/portage/mycross
crossdev -t arm-softfloat-linux-gnueabi
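
If everything went fine, a quick smoke test of the new toolchain could look like this (a sketch only; the arm-softfloat-linux-gnueabi- prefix matches the tuple above, and the -march flag is my assumption for the NSLU2's armv5te core):

# compile a trivial program with the cross compiler and check the result
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { printf("hello from armv5te\n"); return 0; }
EOF
arm-softfloat-linux-gnueabi-gcc -march=armv5te -o hello hello.c
file hello    # should report an ARM, EABI executable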

Categories: Elsewhere

Ben Hutchings: Debian LTS work, January 2015

Planet Debian - Sat, 07/02/2015 - 17:53

This was my second month working on Debian LTS, paid for by Freexian's Debian LTS initiative via Codethink. I spent 11.75 hours working on the kernel package (linux-2.6) and committed my changes but did not complete an update. I or another developer will probably release an update soon.

I have committed fixes for CVE-2013-6885, CVE-2014-7822, CVE-2014-8133, CVE-2014-8134, CVE-2014-8160, CVE-2014-9419, CVE-2014-9420, CVE-2014-9584, CVE-2014-9585 and CVE-2015-1421. In the process of looking at CVE-2014-9419, I noticed that Linux 2.6.32.y is missing a series of fixes to FPU/MMX/SSE/AVX state management that were made in Linux 3.3 and backported to 3.2.y some time ago. These addressed possible corruption of these registers when switching tasks, although it's less likely to happen in 2.6.32.y. The fix for CVE-2014-9419 depends on them. So I've backported and committed all these changes, but may yet decide that they're too risky to include in the next update.

Categories: Elsewhere

Richard Hartmann: Release Critical Bug report for Week 06

Planet Debian - Sat, 07/02/2015 - 03:44

Belated post due to meh real life situations.

As you may have heard, if a package is removed from testing now, it will not be able to make it back into Jessie. Also, a lot of packages are about to be removed for being buggy. If those are gone, they are gone.

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1066 (Including 187 bugs affecting key packages)
    • Affecting Jessie: 161 (key packages: 123) That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 109 (key packages: 90) Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 25 bugs are tagged 'patch'. (key packages: 23) Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 6 bugs are marked as done, but still affect unstable. (key packages: 5) This can happen due to missing builds on some architectures, for example. Help investigate!
        • 78 bugs are neither tagged patch, nor marked done. (key packages: 62) Help make a first step towards resolution!
      • Affecting Jessie only: 52 (key packages: 33) Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 19 bugs are in packages that are unblocked by the release team. (key packages: 14)
        • 33 bugs are in packages that are not unblocked. (key packages: 19)

How do we compare to the Squeeze and Wheezy release cycles?

Week  Squeeze        Wheezy         Jessie
43    284 (213+71)   468 (332+136)  319 (240+79)
44    261 (201+60)   408 (265+143)  274 (224+50)
45    261 (205+56)   425 (291+134)  295 (229+66)
46    271 (200+71)   401 (258+143)  427 (313+114)
47    283 (209+74)   366 (221+145)  342 (260+82)
48    256 (177+79)   378 (230+148)  274 (189+85)
49    256 (180+76)   360 (216+155)  226 (147+79)
50    204 (148+56)   339 (195+144)  ???
51    178 (124+54)   323 (190+133)  189 (134+55)
52    115 (78+37)    289 (190+99)   147 (112+35)
1     93 (60+33)     287 (171+116)  140 (104+36)
2     82 (46+36)     271 (162+109)  157 (124+33)
3     25 (15+10)     249 (165+84)   172 (128+44)
4     14 (8+6)       244 (176+68)   187 (132+55)
5     2 (0+2)        224 (132+92)   175 (124+51)
6     release!       212 (129+83)   161 (109+52)
7     release+1      194 (128+66)
8     release+2      206 (144+62)
9     release+3      174 (105+69)
10    release+4      120 (72+48)
11    release+5      115 (74+41)
12    release+6      93 (47+46)
13    release+7      50 (24+26)
14    release+8      51 (32+19)
15    release+9      39 (32+7)
16    release+10     20 (12+8)
17    release+11     24 (19+5)
18    release+12     2 (2+0)

Graphical overview of bug stats thanks to azhag:

Categories: Elsewhere

Antoine Beaupré: Migrating from Drupal to Ikiwiki

Planet Debian - Sat, 07/02/2015 - 00:02

TLPL; (French for TL;DR) I have changed the software that manages my blog.

TLDR; I have changed my blog from Drupal to Ikiwiki.

Note: since this post uses ikiwiki syntax (i just copied it over here), you may want to read the original version instead of this one.

The old site will continue operating for a while, to give feed
aggregators a chance to catch that article. It will also give the
Internet Archive time to catch up with the static stylesheets (it
turns out it doesn't like Drupal's CSS compression at all!). An
archived copy will therefore remain available on the Internet Archive
for people who miss the old stylesheet.

Eventually, I will simply redirect the anarcat.koumbit.org URL to
the new blog location, anarc.at. This will likely be my
last blog post written on Drupal, and all new content will be
available at the new URL. RSS feed URLs should not change.

Why

I am migrating away from Drupal because it is basically impossible to
upgrade my blog from Drupal 6 to Drupal 7. Or if it is, I'll have to
redo the whole freaking thing again when Drupal 8 comes along.

And frankly, I don't really need Drupal to run a blog. A blog was
originally a really simple thing: a web log. A set of articles
written on the corner of a table. Now with Drupal, I can add
ecommerce, a photo gallery and whatnot to my blog, but why would I do
that? And why does it need to be a dynamic CMS at all, if I get so
few comments?

So I'm switching to ikiwiki, for the following reasons:

  • no upgrades necessary: well, not exactly true, I still need to
    upgrade ikiwiki, but that's covered by the Debian package
    maintenance and I only have one patch to it, and there's no data migration! (the last such migration in ikiwiki was in 2009 and was fully supported)
  • offline editing: this is a big thing for me: I can just note
    things down and push them when I get back online
  • one place for everything: this blog is where I keep my notes, and it's
    getting annoying to have to keep track of two places for that stuff
  • future-proof: extracting content from ikiwiki is amazingly
    simple. Every page is a single markdown-formatted file. That's it.

Migrating will mean abandoning the
barlow theme, which was
seeing a declining usage anyways.

What

So what should be exported, exactly? There's a bunch of crap in the old
blog that I don't want: users, caches, logs, "modules", and the list
goes on. Maybe it's better to create a list of what I need to extract:

  • nodes
    • title ([[ikiwiki/directive/meta]] title and guid tags, guid to avoid flooding aggregators)
    • body (need to check for "break comments")
    • nid (for future reference?)
    • tags (should be added as \[[!tag foo bar baz]] at the bottom)
    • URL (to keep old addresses)
    • published date ([[ikiwiki/directive/meta]] date directive)
    • modification date ([[ikiwiki/directive/meta]] updated directive)
    • revisions?
    • attached files
  • menus
    • RSS feed
    • contact
    • search
  • comments
    • author name
    • date
    • title
    • content
  • attached files
    • thumbnails
    • links
  • tags
    • each tag should have its own RSS feed and latest posts displayed
When

Some time before summer 2015.

Who

Well, me, who else? You probably really don't care about that, so let's
get to the meat of it.

How

How to perform this migration... There are multiple paths:

  • MySQL commandline: extracting data using the commandline mysql tool (drush sqlq ...)
  • Views export: extracting "standard format" dumps from Drupal and
    parse it (JSON, XML, CSV?)

Both approaches had issues, and I found a third way: talk directly to
mysql and generate the files directly, in a Python script. But first,
here are the two previous approaches I know of.

MySQL commandline

LeLutin switched using MySQL queries,
although he doesn't specify how the content itself was migrated. Comment
importing is done with this script:

echo "select n.title, concat('| [[!comment format=mdwn|| username=\"', c.name, '\"|| ip=\"', c.hostname, '\"|| subject=\"', c.subject, '\"|| date=\"', FROM_UNIXTIME(c.created), '\"|| content=\"\"\"||', b.comment_body_value, '||\"\"\"]]') from node n, comment c, field_data_comment_body b where n.nid=c.nid and c.cid=b.entity_id;" | drush sqlc | tail -n +2 | while read line; do if [ -z "$i" ]; then i=0; fi; title=$(echo "$line" | sed -e 's/[ ]\+|.*//' -e 's/ /_/g' -e 's/[:(),?/+]//g'); body=$(echo "$line" | sed 's/[^|]*| //'); mkdir -p ~/comments/$title; echo -e "$body" &gt; ~/comments/$title/comment_$i._comment; i=$((i+1)); done

Kind of ugly, but it beats what I had before (which was "nothing").

I do think it is the right direction to take: simply talk to the
MySQL database, maybe with a native Python script. I know the Drupal
database schema pretty well (still! this is D6 after all) and it's
simple enough that this should just work.
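
As a hedged sketch of what such a script would start from, the node export boils down to a join like this (D6 schema; the 'blog' content type and the exact column list are assumptions about my site, not a general recipe):

mysql anarcatblogbak -e "
SELECT n.nid, n.title, FROM_UNIXTIME(n.created), FROM_UNIXTIME(n.changed), r.body
FROM node n JOIN node_revisions r ON n.vid = r.vid
WHERE n.type = 'blog' AND n.status = 1;"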

Views export

[[!img 2015-02-03-233846_1440x900_scrot.png class="align-right" size="300x" align="center" alt="screenshot of views 2.x"]]

mvc recommended views data export on LeLutin's
blog. Unfortunately, my experience with the views export interface has
been somewhat mediocre so far. Yet another reason why I don't like
using Drupal anymore is this kind of obtuse dialog:

I clicked through those for about an hour to get JSON output that
turned out to be provided by views bonus instead of
views_data_export. And confusingly enough, the path and
format_name fields are null in the JSON output
(whyyy!?). views_data_export unfortunately only supports XML,
which seems hardly better than SQL for structured data, especially
considering I am going to write a script for the conversion anyways.

Basically, it doesn't seem like any amount of views mangling will
provide me with what i need.

Nevertheless, here's the [[failed-export-view.txt]] that I was able to
come up with, may it be useful for future freedom fighters.

Python script

I ended up making a fairly simple Python script to talk directly to
the MySQL database.

The script exports only nodes and comments, and nothing else. It makes
a bunch of assumptions about the structure of the site, and is
probably only going to work if your site is a simple blog like mine,
but could probably be improved significantly to encompass larger and
more complex datasets. History is not preserved so no interaction is
performed with git.

Generating dump

First, I imported the MySQL dump file into my local MySQL server for easier
development. It is 13.9 MiB!!

mysql -e 'CREATE DATABASE anarcatblogbak;'
ssh aegir.koumbit.net "cd anarcat.koumbit.org ; drush sql-dump" | pv | mysql anarcatblogbak

I decided not to import revisions. The majority (70%) of the content has
1 or 2 revisions, and for those with two revisions the second is likely just
the moment the node was actually published, with minor changes. ~80% have 3
revisions or less, 90% have 5 or less, 95% have 8 or less, and 98% have 10 or
less. Only 5 articles have more than 10 revisions, with two having the
maximum of 15 revisions.

Those stats were generated with:

SELECT title,count(vid) FROM anarcatblogbak.node_revisions group by nid;

Then I threw the output into a CSV spreadsheet (thanks to
mysql-workbench for the easy export), added a column numbering the
rows (B1=1, B2=B1+1), another one generating percentages
(C1=B1/count(B$2:B$218)), and generated a simple graph with
that. There were probably ways of doing that more cleanly with R,
and I broke my promise to never use a spreadsheet again, but then
again it was Gnumeric and it's just to get a rough idea.
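
For the record, a littler one-liner along these lines would give the same cumulative percentages without a spreadsheet (the file name and a renamed "count" column are my assumptions about the export above):

r -e 'rev <- read.csv("revisions.csv"); print(round(100 * cumsum(table(rev$count)) / nrow(rev), 1))'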

There are 196 articles to import, with 251 comments, which means an
average of 1.15 comment per article (not much!). Unpublished articles
(5!) are completely ignored.

Summaries are also not imported as such (break comments are
ignored) because ikiwiki doesn't support post summaries.

Calling the conversion script

The script is in [[drupal2ikiwiki.py]]. It is called with:

./drupal2ikiwiki.py -u anarcatblogbak -d anarcatblogbak blog -vv

The -n and -l1 options were also used for the first tests. Use this
command to generate HTML from the result without having to commit and
push everything:

ikiwiki --plugin meta --plugin tag --plugin comments --plugin inline . ../anarc.at.html

More plugins are of course enabled in the blog; see the setup file for
more information, or just enable plugins as needed to unbreak
things. Use the --rebuild flag on subsequent runs. The actual
invocation I use is more like:

ikiwiki --rebuild --no-usedirs --plugin inline --plugin calendar --plugin postsparkline --plugin meta --plugin tag --plugin comments --plugin sidebar . ../anarc.at.html

I had problems with dates, but it turns out that I wasn't setting
dates in redirects... Instead of doing that, I started adding a
"redirection" tag that gets ignored by the main page.

Files and old URLs

The script should keep the same URLs, as long as pathauto is enabled
on the site. Otherwise, some logic should be easy to add to point to
node/N.

To redirect to the new blog, the rewrite rules on the original blog should
be as simple as:

Redirect / http://anarc.at/blog/

When we're sure:

Redirect permanent / http://anarc.at/blog/

Now, on the new blog, some magic needs to happen for files. Both
/files and /sites/anarcat.koumbit.org/files need to resolve
properly. We can't use symlinks because
ikiwiki drops symlinks on generation.

So I'll just drop the files in /blog/files directly, the actual
migration is:

cp $DRUPAL/sites/anarcat.koumbit.org/files $IKIWIKI/blog/files
rm -r .htaccess css/ js/ tmp/ languages/
rm foo/bar # wtf was that.
rmdir *
sed -i 's#/sites/anarcat.koumbit.org/files/#/blog/files/#g' blog/*.mdwn
sed -i 's#http://anarcat.koumbit.org/blog/files/#/blog/files/#g' blog/*.mdwn
chmod -R -x blog/files
sudo chmod -R +X blog/files

A few pages to test images:

  • http://anarcat.koumbit.org/node/157
  • http://anarcat.koumbit.org/node/203

There are some pretty big files in there, 10-30MB MP3s - but those are
already in this wiki! so do not import them!

Running fdupes on the result helps find oddities.
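e.g. (the recursive flag being my assumption of what is wanted here):

fdupes -r blog/files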

The meta guid directive is used to keep the aggregators from finding
duplicate feed entries. I tested it with Liferea, but it may freak out
some other sites.

Remaining issues
  • postsparkline and calendar archive disrespect meta(date)
  • merge the files in /communication with the ones in /blog/files
    before import
  • import non-published nodes
  • check nodes with a format different than markdown (only a few 3=Full
    HTML found so far)
  • replace links to this wiki in blog posts with internal links

More progress information in [[the script|drupal2ikiwiki.py]] itself.

Categories: Elsewhere
