Elsewhere

nielsdefeyter.nl: Watch DrupalCon New Orleans sessions on YouTube

Planet Drupal - Fri, 13/05/2016 - 01:47
As always, the Drupal Association puts video recordings of most sessions on the DrupalCon YouTube channel. That is also true for the now ongoing DrupalCon New Orleans 2016. An excellent option for learning and understanding Drupal (and following the event!) Recorded video sessions on YouTube Link DrupalCon New...
Categories: Elsewhere

Antoine Beaupré: Notmuch, offlineimap and Sieve setup

Planet Debian - Fri, 13/05/2016 - 01:29

I've been using Notmuch since about 2011, switching away from Mutt to deal with the monstrous amount of email I was, and still am, dealing with on the computer. I have contributed a few patches and configs on the Notmuch mailing list, but basically, I have given up on merging patches and instead have a custom config in Emacs that extends it the way I want. In the last 5 years, Notmuch has progressed significantly, so I haven't found the need to patch it or make sweeping changes.

The huge INBOX of death

The one thing that is problematic with my use of Notmuch is that I end up with a ridiculously large INBOX folder. Before the cleanup I did this morning, I had over 10k emails in there, out of about 200k emails overall.

Since I mostly work from my laptop these days, the Notmuch tags are only on the laptop and not propagated to the server. This makes accessing the mail spool directly, from webmail or simply through a local client (say Mutt) on the server, really inconvenient, because it has to load a very large spool of mail, which is very slow in Mutt. Even worse, a bunch of mail that was archived in Notmuch shows up in the spool, because archiving in Notmuch just removes tags: the mails are still in the inbox, even though they are marked as read.

So I was hoping that Notmuch would help me deal with the giant inbox of death problem, but in fact, when I don't use Notmuch, it actually makes the problem worse. Today, I did a bunch of improvements to my setup to fix that.

The first thing I did was to kill procmail, which I was surprised to discover has been dead for over a decade. I switched over to Sieve for filtering, having already switched to Dovecot a while back on the server. I tried to use the procmail2sieve.pl conversion tool but it didn't work very well, so I basically rewrote the whole file. Since I was mostly using Notmuch for filtering, there wasn't much left to convert.

Sieve filtering

But this is where things got interesting: Sieve is so much simpler and more intuitive to use that I started doing more interesting stuff in bridging the filtering system (Sieve) with the tagging system (Notmuch). Basically, I use Sieve to split large chunks of emails off my main inbox, to try to remove as much spam, bulk email, notifications and mailing lists as possible from the larger flow of emails. Then Notmuch comes in and does some fine-tuning, assigning tags to specific mailing lists or topics, and being generally the awesome search engine that I use on a daily basis.

Dovecot and Postfix configs

For all of this to work, I had to tweak my mail servers to talk sieve. First, I enabled sieve in Dovecot:

--- a/dovecot/conf.d/15-lda.conf
+++ b/dovecot/conf.d/15-lda.conf
@@ -44,5 +44,5 @@ protocol lda {
   # Space separated list of plugins to load (default is global mail_plugins).
-  #mail_plugins = $mail_plugins
+  mail_plugins = $mail_plugins sieve
 }

Then I had to switch from procmail to Dovecot for local delivery. That was easy, in Postfix's perennial main.cf:

#mailbox_command = /usr/bin/procmail -a "$EXTENSION"
mailbox_command = /usr/lib/dovecot/dovecot-lda -a "$RECIPIENT"

Note that dovecot takes the full recipient as an argument, not just the extension. That's normal. It's clever, it knows that kind of stuff.

One last tweak I did was to enable automatic mailbox creation and subscription, so that the automatic extension filtering (below) can create mailboxes on the fly:

--- a/dovecot/conf.d/15-lda.conf
+++ b/dovecot/conf.d/15-lda.conf
@@ -37,10 +37,10 @@
 #lda_original_recipient_header =
 
 # Should saving a mail to a nonexistent mailbox automatically create it?
-#lda_mailbox_autocreate = no
+lda_mailbox_autocreate = yes
 
 # Should automatically created mailboxes be also automatically subscribed?
-#lda_mailbox_autosubscribe = no
+lda_mailbox_autosubscribe = yes
 
 protocol lda {
   # Space separated list of plugins to load (default is global mail_plugins).

Sieve rules

Then I had to create a Sieve ruleset. That thing lives in ~/.dovecot.sieve, since I'm running Dovecot. Your provider may accept an arbitrary ruleset like this, or you may need to go through a web interface, or who knows. I'm assuming you're running Dovecot and have a shell from now on.
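Before feeding real mail through it, you can check the syntax of the ruleset by hand with the compiler that ships with Dovecot's Pigeonhole tools (a minimal sketch, assuming those tools are installed; on Debian the package name is, I believe, dovecot-sieve):

# quick sanity check: sievec compiles the ruleset and reports syntax
# errors without delivering any mail (part of Dovecot's Pigeonhole tools)
sievec ~/.dovecot.sieve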

The first part of the file is simply to enable a bunch of extensions, as needed:

# Sieve Filters
# http://wiki.dovecot.org/Pigeonhole/Sieve/Examples
# https://tools.ietf.org/html/rfc5228

require "fileinto";
require "envelope";
require "variables";
require "subaddress";
require "regex";
require "vacation";
require "vnd.dovecot.debug";

Some of those are not used yet; for example, I haven't tested the vacation module, but I have good hopes that I can use it as a way to announce a special "urgent" mailbox while I'm traveling. The rationale is to have a distinct mailbox for urgent messages that is announced in the autoreply, one that hopefully won't be parsable by bots.

Spam filtering

Then I filter spam using this fairly standard expression:

########################################################################
# spam
# possible improvement, server-side:
# http://wiki.dovecot.org/Pigeonhole/Sieve/Examples#Filtering_using_the_spamtest_and_virustest_extensions
if header :contains "X-Spam-Flag" "YES" {
    fileinto "junk";
    stop;
} elsif header :contains "X-Spam-Level" "***" {
    fileinto "greyspam";
    stop;
}

This puts stuff into the junk or greyspam folder, based on the severity. I am very aggressive with spam: stuff often ends up in the greyspam folder, which I need to check from time to time, but it beats having too much spam in my inbox.

Mailing lists

Mailing lists are generally put into a lists folder, with some mailing lists getting their own folder:

########################################################################
# lists
# converted from procmail
if header :contains "subject" "FreshPorts" {
    fileinto "freshports";
} elsif header :contains "List-Id" "alternc.org" {
    fileinto "alternc";
} elsif header :contains "List-Id" "koumbit.org" {
    fileinto "koumbit";
} elsif header :contains ["to", "cc"] ["lists.debian.org", "anarcat@debian.org"] {
    fileinto "debian";
# Debian BTS
} elsif exists "X-Debian-PR-Message" {
    fileinto "debian";
# default lists fallback
} elsif exists "List-Id" {
    fileinto "lists";
}

The idea here is that I can safely subscribe to lists without polluting my mailbox by default. Further processing is done in Notmuch.

Extension matching

I also use the magic +extension tag on emails. If you send email to, say, foo+extension@example.com then the email ends up in the extension folder. This is done with the help of the following recipe:

########################################################################
# wildcard +extension
# http://wiki.dovecot.org/Pigeonhole/Sieve/Examples#Plus_Addressed_mail_filtering
if envelope :matches :detail "to" "*" {
    # Save name in ${name} in all lowercase except for the first letter.
    # Joe, joe, jOe thus all become 'Joe'.
    set :lower "name" "${1}";
    fileinto "${name}";
    #debug_log "filed into mailbox ${name} because of extension";
    stop;
}

This is actually very effective: any time I register to a service, I try as much as possible to add a +extension that describes the service. Of course, spammers and marketers (it's the same really) are free to drop the extension and I suspect a lot of them do, but it helps with honest providers and actually sorts a lot of stuff out of my inbox into topically-defined folders.

It is also a security issue: someone could flood my filesystem with tons of mail folders, which would cripple the IMAP server and eat all the inodes, 4 times faster than just sending emails. But I guess I'll cross that bridge when I get there: anyone can flood my address and I have other mechanisms to deal with this.

The trick is to then assign tags to all folders so that they appear in the Notmuch-emacs welcome view:

echo tagging folders
for folder in $(ls -ad $HOME/Maildir/${PREFIX}*/ | egrep -v "Maildir/${PREFIX}(feeds.*|Sent.*|INBOX/|INBOX/Sent)\$"); do
    tag=$(echo $folder | sed 's#/$##;s#^.*/##')
    notmuch tag +$tag -inbox tag:inbox and not tag:$tag and folder:${PREFIX}$tag
done

This is part of my notmuch-tag script that includes a lot more fine-tuned filtering, detailed below.

Automated reports filtering

Another thing I get a lot of is machine-generated "spam". Well, it's not commercial spam, but it's a bunch of Nagios, cron jobs, and god knows what software thinks it's important to send me emails every day. I get a lot less of those these days since I'm off work at Koumbit, but still, those can be useful for others as well:

if anyof (exists "X-Cron-Env",
          header :contains ["subject"] ["security run output",
                                        "monthly run output",
                                        "daily run output",
                                        "weekly run output",
                                        "Debian Package Updates",
                                        "Debian package update",
                                        "daily mail stats",
                                        "Anacron job",
                                        "nagios",
                                        "changes report",
                                        "run output",
                                        "[Systraq]",
                                        "Undelivered mail",
                                        "Postfix SMTP server: errors from",
                                        "backupninja",
                                        "DenyHosts report",
                                        "Debian security status",
                                        "apt-listchanges"],
          header :contains "Auto-Submitted" "auto-generated",
          envelope :contains "from" ["nagios@", "logcheck@"]) {
    fileinto "rapports";
}
# imported from procmail
elsif header :comparator "i;octet" :contains "Subject" "Cron" {
    if header :regex :comparator "i;octet" "From" ".*root@" {
        fileinto "rapports";
    }
}
elsif header :comparator "i;octet" :contains "To" "root@" {
    if header :regex :comparator "i;octet" "Subject" "\\*\\*\\* SECURITY" {
        fileinto "rapports";
    }
}
elsif header :contains "Precedence" "bulk" {
    fileinto "bulk";
}

Refiltering emails

Of course, after all this I still had thousands of emails in my inbox, because the sieve filters apply only on new emails. The beauty of Sieve support in Dovecot is that there is a neat sieve-filter command that can reprocess an existing mailbox. That was a lifesaver. To run a specific sieve filter on a mailbox, I simply run:

sieve-filter .dovecot.sieve INBOX 2>&1 | less

Well, this doesn't do anything. To really execute the filters, you need the -e flag, and to write to the INBOX for real, you need the -W flag as well, so the real run looks something more like this:

sieve-filter -e -W -v .dovecot.sieve INBOX > refilter.log 2>&1

The funky output redirects are necessary because this outputs a lot of crap. Also note that, unfortunately, the fake run output differs from the real run and is actually more verbose, which makes it really less useful than it could be.

Archival

I also usually archive my mails every year, rotating my mailbox into an Archive.YYYY directory. For example, now all mails from 2015 are archived in an Archive.2015 directory. I used to do this with Mutt tagging and it was a little slow and error-prone. Now, I simply have this Sieve script:

require ["variables","date","fileinto","mailbox", "relational"]; # Extract date info if currentdate :matches "year" "*" { set "year" "${1}"; } if date :value "lt" :originalzone "date" "year" "${year}" { if date :matches "received" "year" "*" { # Archive Dovecot mailing list items by year and month. # Create folder when it does not exist. fileinto :create "Archive.${1}"; } }

I went from 15613 to 1040 emails in my real inbox with this process (including refiltering with the default filters as well).

Notmuch configuration

My Notmuch configuration is in three parts. First, I have small settings in ~/.notmuch-config. The gist of it is:

[new]
tags=unread;inbox;
ignore=

#[maildir]
# synchronize_flags=true

# tentative patch that was refused upstream
# http://mid.gmane.org/1310874973-28437-1-git-send-email-anarcat@koumbit.org
#reckless_trash=true

[search]
exclude_tags=deleted;spam;

I omitted the fairly trivial [user] section for privacy reasons and the [database] section to reduce clutter.

Then I have a notmuch-tag script symlinked into ~/Maildir/.notmuch/hooks/post-new. It does way too much stuff to describe in detail here, but here are a few snippets:

if hostname | grep angela > /dev/null; then
    PREFIX=Anarcat/
else
    PREFIX=.
fi

This sets a variable that makes the script work on my laptop (angela), where mailboxes are in Maildir/Anarcat/foo, or on the server, where mailboxes are in Maildir/.foo.

I also have special rules to tag my RSS feeds, which are generated by feed2imap, which is documented shortly below:

echo tagging feeds
( cd $HOME/Maildir/ && for feed in ${PREFIX}feeds.*; do
    name=$(echo $feed | sed "s#${PREFIX}feeds\\.##")
    notmuch tag +feeds +$name -inbox folder:$feed and not tag:feeds
done )

Another useful example is how to tag mailing lists: this removes the inbox tag and adds the notmuch tag to emails from the Notmuch mailing list.

notmuch tag +lists +notmuch -inbox tag:inbox and "to:notmuch@notmuchmail.org"

Finally, I have a bunch of special keybindings in ~/.emacs.d/notmuch-config.el:

;; autocompletion
(eval-after-load "notmuch-address"
  '(progn
     (notmuch-address-message-insinuate)))

; use fortune for signature, config is in custom
(add-hook 'message-setup-hook 'fortune-to-signature)
; don't remember what that is
(add-hook 'notmuch-show-hook 'visual-line-mode)

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;; keymappings
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(define-key notmuch-show-mode-map "S"
  (lambda ()
    "mark message as spam and advance"
    (interactive)
    (notmuch-show-tag '("+spam" "-unread"))
    (notmuch-show-next-open-message-or-pop)))

(define-key notmuch-search-mode-map "S"
  (lambda (&optional beg end)
    "mark message as spam and advance"
    (interactive (notmuch-search-interactive-region))
    (notmuch-search-tag (list "+spam" "-unread") beg end)
    (anarcat/notmuch-search-next-message)))

(define-key notmuch-show-mode-map "H"
  (lambda ()
    "mark message as not spam and advance"
    (interactive)
    (notmuch-show-tag '("-spam"))
    (notmuch-show-next-open-message-or-pop)))

(define-key notmuch-search-mode-map "H"
  (lambda (&optional beg end)
    "mark message as not spam and advance"
    (interactive (notmuch-search-interactive-region))
    (notmuch-search-tag (list "-spam") beg end)
    (anarcat/notmuch-search-next-message)))

(define-key notmuch-search-mode-map "l"
  (lambda (&optional beg end)
    "mark as read and advance"
    (interactive (notmuch-search-interactive-region))
    (notmuch-search-tag (list "-unread") beg end)
    (anarcat/notmuch-search-next-message)))

(define-key notmuch-search-mode-map "u"
  (lambda (&optional beg end)
    "undelete and advance"
    (interactive (notmuch-search-interactive-region))
    (notmuch-search-tag (list "-deleted") beg end)
    (anarcat/notmuch-search-next-message)))

(define-key notmuch-search-mode-map "d"
  (lambda (&optional beg end)
    "delete and advance"
    (interactive (notmuch-search-interactive-region))
    (notmuch-search-tag (list "+deleted" "-unread") beg end)
    (anarcat/notmuch-search-next-message)))

(define-key notmuch-show-mode-map "d"
  (lambda ()
    "delete current message and advance"
    (interactive)
    (notmuch-show-tag '("+deleted" "-unread"))
    (notmuch-show-next-open-message-or-pop)))

;; https://notmuchmail.org/emacstips/#index17h2
(define-key notmuch-show-mode-map "b"
  (lambda (&optional address)
    "Bounce the current message."
    (interactive "sBounce To: ")
    (notmuch-show-view-raw-message)
    (message-resend address)
    (kill-buffer)))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;; my custom notmuch functions
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(defun anarcat/notmuch-search-next-thread ()
  "Skip to next message from region or point

This is necessary because notmuch-search-next-thread just starts
from point, whereas it seems to me more logical to start from the
end of the region."
  ;; move line before the end of region if there is one
  (unless (= beg end)
    (goto-char (- end 1)))
  (notmuch-search-next-thread))

;; Linking to notmuch messages from org-mode
;; https://notmuchmail.org/emacstips/#index23h2
(require 'org-notmuch nil t)

(message "anarcat's custom notmuch config loaded")

This is way too long: in my opinion, a bunch of that stuff should be factored in upstream, but some features have been hard to get in. For example, Notmuch is really hesitant about marking emails as deleted. The community is also very strict about having unit tests for everything, which makes writing new patches a significant challenge for a newcomer, who will often need to be familiar with both Elisp and C. So for now I just have those configs that I carry around.

Emails marked as deleted or spam are processed with the following script named notmuch-purge which I symlink to ~/Maildir/.notmuch/hooks/pre-new:

#!/bin/sh

if hostname | grep angela > /dev/null; then
    PREFIX=Anarcat/
else
    PREFIX=.
fi

echo moving tagged spam to the junk folder
notmuch search --output=files tag:spam \
        and not folder:${PREFIX}junk \
        and not folder:${PREFIX}greyspam \
        and not folder:Koumbit/INBOX \
        and not path:Koumbit/** \
    | while read file; do
        mv "$file" "$HOME/Maildir/${PREFIX}junk/cur"
    done

echo unconditionally deleting deleted mails
notmuch search --output=files tag:deleted | xargs -r rm

Oh, and there's also customization for Notmuch:

;; -*- mode: emacs-lisp; auto-recompile: t; -*-
(custom-set-variables
 ;; from https://anarc.at/sigs.fortune
 '(fortune-file "/home/anarcat/.mutt/sigs.fortune")
 '(message-send-hook (quote (notmuch-message-mark-replied)))
 '(notmuch-address-command "notmuch-address")
 '(notmuch-always-prompt-for-sender t)
 '(notmuch-crypto-process-mime t)
 '(notmuch-fcc-dirs
   (quote
    ((".*@koumbit.org" . "Koumbit/INBOX.Sent")
     (".*" . "Anarcat/Sent"))))
 '(notmuch-hello-tag-list-make-query "tag:unread")
 '(notmuch-message-headers (quote ("Subject" "To" "Cc" "Bcc" "Date" "Reply-To")))
 '(notmuch-saved-searches
   (quote
    ((:name "inbox" :query "tag:inbox and not tag:koumbit and not tag:rt")
     (:name "unread inbox" :query "tag:inbox and tag:unread")
     (:name "unread" :query "tag:unread")
     (:name "freshports" :query "tag:freshports and tag:unread")
     (:name "rapports" :query "tag:rapports and tag:unread")
     (:name "sent" :query "tag:sent")
     (:name "drafts" :query "tag:draft"))))
 '(notmuch-search-line-faces
   (quote
    (("deleted" :foreground "red")
     ("unread" :weight bold)
     ("flagged" :foreground "blue"))))
 '(notmuch-search-oldest-first nil)
 '(notmuch-show-all-multipart/alternative-parts nil)
 '(notmuch-show-all-tags-list t)
 '(notmuch-show-insert-text/plain-hook
   (quote
    (notmuch-wash-convert-inline-patch-to-part
     notmuch-wash-tidy-citations
     notmuch-wash-elide-blank-lines
     notmuch-wash-excerpt-citations))))

I think that covers it.

Offlineimap

So of course the above works well on the server directly, but how do I run Notmuch on a remote machine that doesn't have access to the mail spool directly? This is where OfflineIMAP comes in. It allows me to incrementally synchronize a local Maildir folder hierarchy with a remote IMAP server. I am assuming you already have an IMAP server configured, since you already configured Sieve above.

Note that other synchronization tools exist. The other popular one is isync but I had trouble migrating to it (see courriels for details) so for now I am sticking with OfflineIMAP.

The configuration is fairly simple:

[general]
accounts = Anarcat
ui = Blinkenlights
maxsyncaccounts = 3

[Account Anarcat]
localrepository = LocalAnarcat
remoterepository = RemoteAnarcat
# refresh all mailboxes every 10 minutes
autorefresh = 10
# run notmuch after refresh
postsynchook = notmuch new
# sync only mailboxes that changed
quick = -1
## possible optimisation: ignore mails older than a year
#maxage = 365

# local mailbox location
[Repository LocalAnarcat]
type = Maildir
localfolders = ~/Maildir/Anarcat/

# remote IMAP server
[Repository RemoteAnarcat]
type = IMAP
remoteuser = anarcat
remotehost = anarc.at
ssl = yes
# without this, the cert is not verified (!)
sslcacertfile = /etc/ssl/certs/DST_Root_CA_X3.pem
# do not sync archives
folderfilter = lambda foldername: not re.search('(Sent\.20[01][0-9]\..*)', foldername) and not re.search('(Archive.*)', foldername)
# and only subscribed folders
subscribedonly = yes
# don't reconnect all the time
holdconnectionopen = yes
# get mails from INBOX immediately, doesn't trigger postsynchook
idlefolders = ['INBOX']

Critical parts are:

  • postsynchook: obviously, we want to run notmuch after fetching mail
  • idlefolders: receives emails immediately without waiting for the longer autorefresh delay, which means that other mailboxes don't see new emails for up to 10 minutes in the worst case. Unfortunately, it doesn't run the postsynchook, so I need to hit G in Emacs to see new mail
  • quick=-1, subscribedonly, holdconnectionopen: make most runs much, much faster, as they skip unchanged or unsubscribed folders and keep the connection to the server open

The other settings should be self-explanatory.
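Before trusting everything to the autorefresh loop, a one-shot run is a handy way to do the initial sync and to debug the configuration (a sketch; the flags are from OfflineIMAP's standard command line, nothing specific to this setup):

# run a single sync of the Anarcat account with a plain, non-curses UI;
# useful for the first full sync or when chasing configuration errors
offlineimap -o -a Anarcat -u basic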

RSS feeds

I gave up on RSS readers, or more precisely, I merged RSS feeds and email. The first time I heard of this, it sounded like a horrible idea, because it means yet more emails! But with proper filtering, it's actually a really nice way to process emails, since it leverages the distributed nature of email.

For this I use a fairly standard feed2imap, although I do not deliver to an IMAP server, but straight to a local Maildir. The configuration looks like this:

---
include-images: true
target-refix: &target "maildir:///home/anarcat/Maildir/.feeds."
feeds:
  - name: Planet Debian
    url: http://planet.debian.org/rss20.xml
    target: [ *target, 'debian-planet' ]

I obviously have more feeds; the above is just an example. This will deliver the feeds as emails in one mailbox per feed, in ~/Maildir/.feeds.debian-planet in the above example.
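Since feed2imap is a one-shot command that reads its configuration (by default ~/.feed2imaprc) and writes out the new items, a cron entry is enough to keep the mailboxes fed (a sketch; the half-hour interval is an arbitrary choice):

# hypothetical crontab entry: fetch feeds every 30 minutes
*/30 * * * * feed2imap >/dev/null 2>&1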

Troubleshooting

You will fail at writing the sieve filters correctly, and mail will (hopefully?) fall through to your regular mailbox. Syslog will tell you when things fail, as expected, and details are in the .dovecot.sieve.log file in your home directory.
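In practice, checking those two places after a failed delivery is usually enough to spot the offending rule; something like this (a sketch, the syslog path is distribution-dependent):

# the per-user Sieve error log, written next to the ruleset
tail -n 50 ~/.dovecot.sieve.log
# the LDA side of things ends up in the mail log (path varies by distribution)
grep -i sieve /var/log/mail.log | tail -n 20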

I also enabled debugging on the Sieve module:

--- a/dovecot/conf.d/90-sieve.conf
+++ b/dovecot/conf.d/90-sieve.conf
@@ -51,6 +51,7 @@ plugin {
   # deprecated imapflags extension in addition to all extensions were already
   # enabled by default.
   #sieve_extensions = +notify +imapflags
+  sieve_extensions = +vnd.dovecot.debug
 
   # Which Sieve language extensions are ONLY available in global scripts. This
   # can be used to restrict the use of certain Sieve extensions to administrator

This allowed me to use the debug_log function in the rulesets to output stuff directly to the logfile.

Further improvements

Of course, this is all done on the command line, but that is somewhat expected if you are already running Notmuch. It would be much easier to edit those filters through a GUI. Roundcube has a nice Sieve plugin, and Thunderbird has one as well. Since Sieve is a standard, there's a bunch of clients available. All those need you to set up some sort of thing on the server, which I didn't bother doing yet.
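For Dovecot, that server-side "thing" would presumably be the ManageSieve service from Pigeonhole, which those clients speak; a rough sketch of what enabling it could look like on Debian (untested here, package name assumed):

# hypothetical: install the ManageSieve service so the Roundcube or
# Thunderbird plugins can edit ~/.dovecot.sieve remotely (port 4190)
apt-get install dovecot-managesieved
doveconf protocols   # "sieve" should now appear among the protocols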

And of course, a key improvement would be to have Notmuch synchronize its state better with the mailboxes directly, instead of having the notmuch-purge hack above. Dovecot and Maildir formats support up to 26 flags, and there were discussions about using those flags to synchronize with notmuch tags so that multiple notmuch clients can see the same tags on different machines transparently.

This, however, won't make Notmuch work on my phone or webmail or any other more generic client: for that, Sieve rules are still very useful.

I still don't have webmail setup at all: so to read email, I need an actual client, which is currently my phone, which means I need to have Wifi access to read email. "Internet Cafés" or "this guy's computer" won't work as well, although I can always use ssh to login straight to the server and read mails with Mutt.

I am also considering using X509 client certificates to authenticate to the mail server without a passphrase. This involves configuring Postfix, which seems simple enough. Dovecot's configuration seems a little more involved and less well documented. It seems that both OfflineIMAP and K-9 Mail support client-side certs. OfflineIMAP prompts me for the password so it doesn't get leaked anywhere. I am a little concerned about building yet another CA, but I guess it would not be so hard...

The server side of things needs more documenting, particularly the spam filters. This is currently spread around this wiki, mostly in configuration.

Security considerations

The whole purpose of this was to make it easier to read my mail on other devices. This introduces a new vulnerability: someone may steal that device or compromise it to read my mail, impersonate me on different services and even get a shell on the remote server.

Thanks to the two-factor authentication I set up on the server, I feel a little more confident that just getting the passphrase to the mail account is no longer sufficient to gain shell access. It also allows me to log in with ssh on the server without trusting the machine too much, although that only goes so far... Of course, sudo is then out of the question and I must assume that everything I see is also seen by the attacker, who can also inject keystrokes and do all sorts of nasty things.

Since I also connected my email account on my phone, someone could steal the phone and start impersonating me. The mitigation here is that there is a PIN for the screen lock, and the phone is encrypted. Encryption isn't so great when the passphrase is a PIN, but I'm working on having a better key that is required on reboot, and the phone shuts down after 5 failed attempts. This is documented in my phone setup.

Client-side X509 certificates further mitigate those kinds of compromises, as the X509 certificate won't give shell access.

Basically, if the phone is lost, all hell breaks loose: I need to change the email password (or revoke the certificate), as I assume the account is about to be compromised. I do not trust Android security to give me protection indefinitely. In fact, one could argue that the phone is already compromised and putting the password there already enabled a possible state-sponsored attacker to hijack my email address. This is why I have an OpenPGP key on my laptop to authenticate myself for critical operations like code signatures.

The risk of identity theft from the state is, after all, a tautology: the state is the primary owner of identities, some could say by definition. So if a state-sponsored attacker would like to masquerade as me, they could simply issue a passport under my name and join an OpenPGP key signing party, and we'd have other problems to deal with, namely proper infiltration counter-measures and counter-snitching.

Categories: Elsewhere

Ingo Juergensmann: Xen randomly crashing server - part 2

Planet Debian - Thu, 12/05/2016 - 20:38

Some weeks ago I blogged about "Xen randomly crashing server". The problem back then was that I couldn't get any information why the server reboots. Using a netconsole was not possible, because netconsole refused to work with the bridge that is used for Xen networking. Luckily my colocation partner rrbone.net connected the second network port of my server to the network so that I could use eth1 instead of the bridged eth0 for netconsole.
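For reference, pointing netconsole at a dedicated interface looks roughly like this (a sketch with placeholder addresses and MAC, not the actual values used here):

# hypothetical values: send kernel messages out eth1 to a log host on UDP 514
modprobe netconsole netconsole=6666@192.0.2.10/eth1,514@192.0.2.20/00:11:22:33:44:55
# on the receiving machine, capture the stream (OpenBSD netcat syntax)
nc -u -l 514 | tee netconsole.log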

Today the server crashed several times and I was able to collect some more information than just the screenshots from IPMI/KVM console as shown in my last blog entry (full netconsole output is attached as a file): 

May 12 11:56:39 31.172.31.251 [829681.040596] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.16.0-4-amd64 #1 Debian 3.16.7-ckt25-2
May 12 11:56:39 31.172.31.251 [829681.040647] Hardware name: Supermicro X9SRE/X9SRE-3F/X9SRi/X9SRi-3F/X9SRE/X9SRE-3F/X9SRi/X9SRi-3F, BIOS 3.0a 01/03/2014
May 12 11:56:39 31.172.31.251 [829681.040701] task: ffffffff8181a460 ti: ffffffff81800000 task.ti: ffffffff81800000
May 12 11:56:39 31.172.31.251 [829681.040749] RIP: e030:[<ffffffff812b7e56>]
May 12 11:56:39 31.172.31.251  [<ffffffff812b7e56>] memcpy+0x6/0x110
May 12 11:56:39 31.172.31.251 [829681.040802] RSP: e02b:ffff880280e03a58  EFLAGS: 00010286
May 12 11:56:39 31.172.31.251 [829681.040834] RAX: ffff88026eec9070 RBX: ffff88023c8f6b00 RCX: 00000000000000ee
May 12 11:56:39 31.172.31.251 [829681.040880] RDX: 00000000000004a0 RSI: ffff88006cd1f000 RDI: ffff88026eec9422
May 12 11:56:39 31.172.31.251 [829681.040927] RBP: ffff880280e03b38 R08: 00000000000006c0 R09: ffff88026eec9062
May 12 11:56:39 31.172.31.251 [829681.040973] R10: 0100000000000000 R11: 00000000af9a2116 R12: ffff88023f440d00
May 12 11:56:39 31.172.31.251 [829681.041020] R13: ffff88006cd1ec66 R14: ffff88025dcf1cc0 R15: 00000000000004a8
May 12 11:56:39 31.172.31.251 [829681.041075] FS:  0000000000000000(0000) GS:ffff880280e00000(0000) knlGS:ffff880280e00000
May 12 11:56:39 31.172.31.251 [829681.041124] CS:  e033 DS: 0000 ES: 0000 CR0: 0000000080050033
May 12 11:56:39 31.172.31.251 [829681.041153] CR2: ffff88006cd1f000 CR3: 0000000271ae8000 CR4: 0000000000042660
May 12 11:56:39 31.172.31.251 [829681.041202] Stack:
May 12 11:56:39 31.172.31.251 [829681.041225]  ffffffff814d38ff
May 12 11:56:39 31.172.31.251  ffff88025b5fa400
May 12 11:56:39 31.172.31.251  ffff880280e03aa8
May 12 11:56:39 31.172.31.251  9401294600a7012a
May 12 11:56:39 31.172.31.251 
May 12 11:56:39 31.172.31.251 [829681.041287]  0100000000000000
May 12 11:56:39 31.172.31.251  ffffffff814a000a
May 12 11:56:39 31.172.31.251  000000008181a460
May 12 11:56:39 31.172.31.251  00000000000080fe
May 12 11:56:39 31.172.31.251 
May 12 11:56:39 31.172.31.251 [829681.041346]  1ad902feff7ac40e
May 12 11:56:39 31.172.31.251  ffff88006c5fd980
May 12 11:56:39 31.172.31.251  ffff224afc3e1600
May 12 11:56:39 31.172.31.251  ffff88023f440d00
May 12 11:56:39 31.172.31.251 
May 12 11:56:39 31.172.31.251 [829681.041407] Call Trace:
May 12 11:56:39 31.172.31.251 [829681.041435]  <IRQ>
May 12 11:56:39 31.172.31.251 
May 12 11:56:39 31.172.31.251 [829681.041441]
May 12 11:56:39 31.172.31.251  [<ffffffff814d38ff>] ? ndisc_send_redirect+0x3bf/0x410
May 12 11:56:39 31.172.31.251 [829681.041506]  [<ffffffff814a000a>] ? ipmr_device_event+0x7a/0xd0
May 12 11:56:39 31.172.31.251 [829681.041548]  [<ffffffff814bc74c>] ? ip6_forward+0x71c/0x850
May 12 11:56:39 31.172.31.251 [829681.041585]  [<ffffffff814c9e54>] ? ip6_route_input+0xa4/0xd0
May 12 11:56:39 31.172.31.251 [829681.041621]  [<ffffffff8141f1a3>] ? __netif_receive_skb_core+0x543/0x750
May 12 11:56:39 31.172.31.251 [829681.041729]  [<ffffffff8141f42f>] ? netif_receive_skb_internal+0x1f/0x80
May 12 11:56:39 31.172.31.251 [829681.041771]  [<ffffffffa0585eb2>] ? br_handle_frame_finish+0x1c2/0x3c0 [bridge]
May 12 11:56:39 31.172.31.251 [829681.041821]  [<ffffffffa058c757>] ? br_nf_pre_routing_finish_ipv6+0xc7/0x160 [bridge]
May 12 11:56:39 31.172.31.251 [829681.041872]  [<ffffffffa058d0e2>] ? br_nf_pre_routing+0x562/0x630 [bridge]
May 12 11:56:39 31.172.31.251 [829681.041907]  [<ffffffffa0585cf0>] ? br_handle_local_finish+0x80/0x80 [bridge]
May 12 11:56:39 31.172.31.251 [829681.041955]  [<ffffffff8144fb65>] ? nf_iterate+0x65/0xa0
May 12 11:56:39 31.172.31.251 [829681.041987]  [<ffffffffa0585cf0>] ? br_handle_local_finish+0x80/0x80 [bridge]
May 12 11:56:39 31.172.31.251 [829681.042035]  [<ffffffff8144fc16>] ? nf_hook_slow+0x76/0x130
May 12 11:56:39 31.172.31.251 [829681.042067]  [<ffffffffa0585cf0>] ? br_handle_local_finish+0x80/0x80 [bridge]
May 12 11:56:39 31.172.31.251 [829681.042116]  [<ffffffffa0586220>] ? br_handle_frame+0x170/0x240 [bridge]
May 12 11:56:39 31.172.31.251 [829681.042148]  [<ffffffff8141ee24>] ? __netif_receive_skb_core+0x1c4/0x750
May 12 11:56:39 31.172.31.251 [829681.042185]  [<ffffffff81009f9c>] ? xen_clocksource_get_cycles+0x1c/0x20
May 12 11:56:39 31.172.31.251 [829681.042217]  [<ffffffff8141f42f>] ? netif_receive_skb_internal+0x1f/0x80
May 12 11:56:39 31.172.31.251 [829681.042251]  [<ffffffffa063f50f>] ? xenvif_tx_action+0x49f/0x920 [xen_netback]
May 12 11:56:39 31.172.31.251 [829681.042299]  [<ffffffffa06422f8>] ? xenvif_poll+0x28/0x70 [xen_netback]
May 12 11:56:39 31.172.31.251 [829681.042331]  [<ffffffff8141f7b0>] ? net_rx_action+0x140/0x240
May 12 11:56:39 31.172.31.251 [829681.042367]  [<ffffffff8106c6a1>] ? __do_softirq+0xf1/0x290
May 12 11:56:39 31.172.31.251 [829681.042397]  [<ffffffff8106ca75>] ? irq_exit+0x95/0xa0
May 12 11:56:39 31.172.31.251 [829681.042432]  [<ffffffff8135a285>] ? xen_evtchn_do_upcall+0x35/0x50
May 12 11:56:39 31.172.31.251 [829681.042469]  [<ffffffff8151669e>] ? xen_do_hypervisor_callback+0x1e/0x30
May 12 11:56:39 31.172.31.251 [829681.042499]  <EOI>
May 12 11:56:39 31.172.31.251 
May 12 11:56:39 31.172.31.251 [829681.042506]
May 12 11:56:39 31.172.31.251  [<ffffffff810013aa>] ? xen_hypercall_sched_op+0xa/0x20
May 12 11:56:39 31.172.31.251 [829681.042561]  [<ffffffff810013aa>] ? xen_hypercall_sched_op+0xa/0x20
May 12 11:56:39 31.172.31.251 [829681.042592]  [<ffffffff81009e7c>] ? xen_safe_halt+0xc/0x20
May 12 11:56:39 31.172.31.251 [829681.042627]  [<ffffffff8101c8c9>] ? default_idle+0x19/0xb0
May 12 11:56:39 31.172.31.251 [829681.042666]  [<ffffffff810a83e0>] ? cpu_startup_entry+0x340/0x400
May 12 11:56:39 31.172.31.251 [829681.042705]  [<ffffffff81903076>] ? start_kernel+0x497/0x4a2
May 12 11:56:39 31.172.31.251 [829681.042735]  [<ffffffff81902a04>] ? set_init_arg+0x4e/0x4e
May 12 11:56:39 31.172.31.251 [829681.042767]  [<ffffffff81904f69>] ? xen_start_kernel+0x569/0x573
May 12 11:56:39 31.172.31.251 [829681.042797] Code:
May 12 11:56:39 31.172.31.251  <f3>
May 12 11:56:39 31.172.31.251 
May 12 11:56:39 31.172.31.251 [829681.043113] RIP
May 12 11:56:39 31.172.31.251  [<ffffffff812b7e56>] memcpy+0x6/0x110
May 12 11:56:39 31.172.31.251 [829681.043145]  RSP <ffff880280e03a58>
May 12 11:56:39 31.172.31.251 [829681.043170] CR2: ffff88006cd1f000
May 12 11:56:39 31.172.31.251 [829681.043488] ---[ end trace 1838cb62fe32daad ]---
May 12 11:56:39 31.172.31.251 [829681.048905] Kernel panic - not syncing: Fatal exception in interrupt
May 12 11:56:39 31.172.31.251 [829681.048978] Kernel Offset: 0x0 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffff9fffffff)

I'm not that good at reading this kind of output, but to me it seems that ndisc_send_redirect is at fault. When googling for "ndisc_send_redirect" you can find a patch on lkml.org and Debian bug #804079; both seem to be related to IPv6.

When looking at the linux kernel source mentioned in the lkml patch I see that this patch is already applied (line 1510): 

        if (ha) 
                ndisc_fill_addr_option(buff, ND_OPT_TARGET_LL_ADDR, ha);

So, while the patch was intended to prevent "leading to data corruption or in the worst case a panic when the skb_put failed", it does not help in my case or in the case of #804079.

Any tips are appreciated!

PS: I'll contribute to that bug in the BTS, of course!

Attachment: syslog-xen-crash.txt (24.27 KB)
Kategorie: Debian | Tags: Debian, Software, Xen, Server, Bug
Categories: Elsewhere

Dries Buytaert: State of Drupal presentation (May 2016)

Planet Drupal - Thu, 12/05/2016 - 20:25

DrupalCon New Orleans comes at an important time in the history of Drupal. Now that Drupal 8 has launched, we have a lot of work to do to accelerate Drupal 8's adoption as well as plan what is next.

In my keynote presentation, I shared my thoughts on where we should focus our efforts in order for Drupal to continue its path to become the leading platform for assembling the world's best digital experiences.

Based on recent survey data, I proposed key initiatives for Drupal, as well as shared my vision for building cross-channel customer experiences that span various devices, including conversational technologies like Amazon Echo.

You can watch a recording of my keynote (starting at 3:43) or download a copy of my slides (162 MB).

Take a look, and as always feel free to leave your opinions in the comments!

Categories: Elsewhere

ThinkDrop Consulting: Onward with OpenDevShop Inc

Planet Drupal - Thu, 12/05/2016 - 18:27

Today I am awaking to the last "official" day of DrupalCon New Orleans with a huge new wind at my back.

It felt like an appropriate time to post what is likely my last blog post as ThinkDrop Consulting LLC.

My partners and I have been on a whirlwind tour of the convention, spreading the news of our product and our new company: OpenDevShop Inc. In order to focus entirely on development and hosting tools, I am closing up ThinkDrop Consulting.

We've been building the OpenDevShop platform since late 2011 for my clients and myself, and in January of this year, we incorporated.

Our mission: to make hosting, testing, and scaling websites as easy as possible; to make infrastructure management as easy to deal with as content; and to foster a community around these types of tools.

OpenDevShop Inc is a member of the newly formed Aegir Coop. The cooperative is a group of companies and individuals who have organized together to support and develop the Aegir ecosystem.

We are working hard not only to grow this business but to grow the Aegir community, both users and especially contributors. We have a lot of work to do if Aegir is going to live up to modern expectations of infrastructure management and deployment tools.

Today we have two Birds of a Feather sessions back to back at DrupalCon: one for OpenDevShop and one for the Aegir Coop.

The energy behind the Drupal Community has never been higher. Let's follow in their footsteps and bring together everyone that cares about better Infrastructure Management and DevOps.

Come join the Aegir & OpenDevShop communities in room 292 (AshDay) from 1pm - 3pm, and maybe later if they let us stay.

Please check out our new company website, opendevshop.com.

We will see you around the community!

Tags: devshop, Planet Drupal
Categories: Elsewhere

LevelTen Interactive: Learn with LevelTen: DrupalCon Session Twitter Recap

Planet Drupal - Thu, 12/05/2016 - 16:50

The LevelTen team used the hashtag #learnwithl10 to document the various sessions they attended and what they learned on Tuesday and Wednesday of DrupalCon New Orleans.

...Read more
Categories: Elsewhere

DrupalEasy: DrupalEasy Podcast: New Orleans Day 1

Planet Drupal - Thu, 12/05/2016 - 16:50

Direct .mp3 file download.

Hosts Ryan Price, Mike Anello, Ted Bowman and Kelley Curry are joined by guests Mike Herchel (of the Lullabot Podcast) and Steve Edwards (formerly of the Acquia Podcast) to discuss the events on Day 1 of DrupalCon. We start with the Prenote and Driesnote, and then discuss sessions each person was interested in throughout the day.

Check in later this week for more episodes from DrupalCon New Orleans 2016.

Follow us on Twitter Intro Music

House of Drupalcon from #Prenote

By Adam Juran, Campbell Vertessi and Jeremy "JAM" Macguire

Subscribe

Subscribe to our podcast on iTunes, Google Play or Miro. Listen to our podcast on Stitcher.

If you'd like to leave us a voicemail, call 321-396-2340. Please keep in mind that we might play your voicemail during one of our future podcasts. Feel free to call in with suggestions, rants, questions, or corrections. If you'd rather just send us an email, please use our contact page.

Categories: Elsewhere

Matthew Garrett: Convenience, security and freedom - can we pick all three?

Planet Debian - Thu, 12/05/2016 - 16:40
Moxie, the lead developer of the Signal secure communication application, recently blogged on the tradeoffs between providing a supportable federated service and providing a compelling application that gains significant adoption. There's a set of perfectly reasonable arguments around that that I don't want to rehash - regardless of feelings on the benefits of federation in general, there's certainly an increase in engineering cost in providing a stable intra-server protocol that still allows for addition of new features, and the person leading a project gets to make the decision about whether that's a valid tradeoff.

One voiced complaint about Signal on Android is the fact that it depends on the Google Play Services. These are a collection of proprietary functions for integrating with Google-provided services, and Signal depends on them to provide a good out-of-band notification protocol to allow Signal to be notified when new messages arrive, even if the phone is otherwise in a power saving state. At the time this decision was made, there were no terribly good alternatives for Android. Even now, nobody's really demonstrated a free implementation that supports several million clients and has no negative impact on battery life, so if your aim is to write a secure messaging client that will be adopted by as many people as possible, keeping this dependency is entirely rational.

On the other hand, there are users for whom the decision not to install a Google root of trust on their phone is also entirely rational. I have no especially good reason to believe that Google will ever want to do something inappropriate with my phone or data, but it's certainly possible that they'll be compelled to do so against their will. The set of people who will ever actually face this problem is probably small, but it's probably also the set of people who benefit most from Signal in the first place.

(Even ignoring the dependency on Play Services, people may not find the official client sufficient - it's very difficult to write a single piece of software that satisfies all users, whether that be down to accessibility requirements, OS support or whatever. Slack may be great, but there's still people who choose to use Hipchat)

This shouldn't be a problem. Signal is free software and anybody is free to modify it in any way they want to fit their needs, and as long as they don't break the protocol code in the process it'll carry on working with the existing Signal servers and allow communication with people who run the official client. Unfortunately, Moxie has indicated that he is not happy with forked versions of Signal using the official servers. Since Signal doesn't support federation, that means that users of forked versions will be unable to communicate with users of the official client.

This is awkward. Signal is deservedly popular. It provides strong security without being significantly more complicated than a traditional SMS client. In my social circle there's massively more users of Signal than any other security app. If I transition to a fork of Signal, I'm no longer able to securely communicate with them unless they also install the fork. If the aim is to make secure communication ubiquitous, that's kind of a problem.

Right now the choices I have for communicating with people I know are either convenient and secure but require non-free code (Signal), convenient and free but insecure (SMS) or secure and free but horribly inconvenient (gpg). Is there really no way for us to work as a community to develop something that's all three?

Categories: Elsewhere

OSTraining: Drupal 8 CookieConsent EU Law module

Planet Drupal - Thu, 12/05/2016 - 13:53

One of our OSTraining members asked how to add a cookie notification to a Drupal 8 site.

CookieConsent is a module that provides a solution for dealing with the EU Cookie Law.

It is particularly useful if you want to use the SuperCookie module.

Categories: Elsewhere

Wunderkraut blog: Dropcat - the configuration files

Planet Drupal - Thu, 12/05/2016 - 13:26

In a series of blog posts I am going to present our new tool for doing Drupal deploys. It is developed internally by the ops team at Wunderkraut Sweden; when we started doing Drupal 8 deploys we tried to rethink how we had mostly done Drupal deploys before, because we had some issues with what we already had. This is part 2.

The idea with dropcat is that you use it with options and/or configuration files. I would recommend using config files, with only minor settings passed as options.

You could just use a default settings file, which should be dropcat.yml, or, as in most cases, have one config file for each environment – dev, stage, prod etc.

You can use an environment variable, DROPCAT_ENV, to set which environment to use. To use the prod environment you could set that variable in the terminal with:
export DROPCAT_ENV=prod

Normally we set this environment variable in our Jenkins build, but you could also pass it as a parameter to dropcat like:
dropcat backup --env=prod

That will use the dropcat.prod.yml file

By default, dropcat uses dropcat.yml if you don't set an environment.
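Put together, a Jenkins shell build step could look something like this (just a sketch using the commands mentioned so far; both dropcat prepare and dropcat backup come up again in the settings described below):

# hypothetical Jenkins "Execute shell" build step
export DROPCAT_ENV=stage   # presumably picks dropcat.stage.yml from the project root
dropcat prepare            # creates the drush alias in drush_folder
dropcat backup             # writes a backup to backup_path before deploying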

There will be more in the next blog posts, but for now let's look into a minimal config file. In our root dir we could have a dropcat.yml file with this config:

app_name: mysite
local:
  environment:
    tmp_path: /tmp
    seperator: _
    drush_folder: /home/myuser/.drush
remote:
  environment:
    server: mytarget.server.com
    ssh_user: myuser
    ssh_port: 22
    identity_file: /home/myuser/.ssh/id_rsa
    web_root: /var/www/webroot
    temp_folder: /tmp
    alias: mysite_latest_stage
site:
  environment:
    drush_alias: mysitestage
    backup_path: /backup
    original_path: /srv/www/shared/mysite_stage/files
    symlink: /srv/www/mysite_latest_stage/web/sites/default/files
    url: http://mysite.com
    name: mysitestage
mysql:
  environment:
    host: mymysql.host.com
    database: my_db
    user: my_db_user
    password: my_db_password
    port: 3306

The settings are grouped in a way that should explain what they are used for – local.environment is where we deploy from, remote.environment is where we deploy to, site.environment is for drush and symlinks (which we use for the files folder), and mysql.environment is for… yeah, you guessed correctly – mysql/mariadb.

app_name

This is the application name, used for creating a tar-file with that name (with some more information, like build date and build number).

local

These are the settings for where we deploy from; it could be locally, or it could be a build server like Jenkins.

tmp_path

Where we temporarily store stuff.

seperator

Used in the name of the folder to deploy, as a separator, like myapp_DATE.


drush_folder

Where the drush settings live on the machine you deploy from, normally in your home folder (for Jenkins normally /var/lib/jenkins/.drush); this is also the path the drush alias is saved to on dropcat prepare.

remote

server

The server you deploy your code to.

ssh_user

User to use with ssh to your remote server

ssh_port

Port used for ssh to your server

identity_file

Which private ssh-key to use to login to your remote server

web_root

Path your site is going to be deployed to.

temp_folder

Temp folder on remote server, used for unpacking tar file.

alias

Symlink alias for your site


site

drush_alias

Name of your drush alias, used from the 'local' server. The drush alias is created as a part of dropcat prepare.

backup_path

Backup path on the 'local' server. Used by dropcat backup.

original_path

Existing path to point a symlink to – we use this for the files folder

symlink

Symlink path that points to original_path

url

URL for your site, used in the drush alias

name

Name of site in drush alias.


mysql

host

name of db host

database

Database to use

user

Database user

password

password for db user to host

port

Port to use with mysql

We are still on a very abstract level; next time we will go through what is needed in a normal Jenkins build.

Categories: Elsewhere

Michal &#268;iha&#345;: Changed Debian repository signing key

Planet Debian - Thu, 12/05/2016 - 09:10

After getting complaints from apt and users, I've finally decided to upgrade the signing key on my Debian repository to something more decent than DSA. If you are using that repository, you will now have to fetch the new key to make it work again.

The old DSA key was there really because of my laziness, as I didn't want users to reimport the key, but I think it's really good that apt started to complain about it (it doesn't complain about DSA itself, but rather about using SHA1 signatures, which is the most you can get out of a DSA key).

Anyway the new key ID is DCE7B04E7C6E3CD9 and fingerprint is 4732 8C5E CD1A 3840 0419 1F24 DCE7 B04E 7C6E 3CD9. It's signed by my GPG key, so you can verify it this way. Of course instruction on my Debian repository page have been updated as well.
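For example, fetching and verifying the new key could look like this (a sketch; it assumes the key is available on the public keyservers, otherwise grab it from the repository page):

# fetch the new key, check it against the fingerprint above, and hand it to apt
gpg --keyserver hkp://pool.sks-keyservers.net --recv-keys DCE7B04E7C6E3CD9
gpg --fingerprint DCE7B04E7C6E3CD9
gpg --export DCE7B04E7C6E3CD9 | sudo apt-key add -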

Filed under: Debian, English

Categories: Elsewhere

Petter Reinholdtsen: Debian now with ZFS on Linux included

Planet Debian - Thu, 12/05/2016 - 07:30

Today, after many years of hard work from many people, ZFS for Linux finally entered Debian. The package status can be seen on the package tracker for zfs-linux and the team status page. If you want to help out, please join us. The source code is available via git on Alioth. It would also be great if you could help out with the dkms package, as it is an important piece of the puzzle to get ZFS working.
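In practice, getting it onto a machine should be roughly this simple (a sketch; it assumes the binary packages built from zfs-linux keep the names below and that contrib is enabled in your apt sources):

# assumed package names: zfs-dkms builds the kernel module, zfsutils-linux
# ships the userland tools; both live in contrib
apt-get update
apt-get install zfs-dkms zfsutils-linux
modprobe zfs && zpool status   # quick check that the module loads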

Categories: Elsewhere
