Gregor Herrmann: RC bugs 2014/45-46

Planet Debian - Sun, 16/11/2014 - 23:07

I was not much at home during the last two weeks, so there is not much to report about RC bug activities. Some general observations:

  1. the RC bug count is still relatively low, even after lucas' archive rebuild.
  2. the release team is extremely fast in handling unblock requests - kudos! – pro tip: file them quickly after uploading, or some{thing,one} else might be faster :)

my small contributions:

  • #765327 – libnet-dns-perl: "lintian fails if the machine has a link-local IPv6 nameserver configured"
    discuss possible fix with upstream (pkg-perl)
  • #768683 – src:libmoosex-storage-perl: "libmoosex-storage-perl: missing runtime dependencies cause (build) failures in other packages"
    move packages from Recommends to Depends (pkg-perl)
  • #768692 – src:libaudio-mpd-perl: "libaudio-mpd-perl: FTBFS in jessie: Failed test '10 seconds worth of music in the db': got: '9'"
    add patch from Simon McVittie (pkg-perl)
  • #768712 – src:libpoe-component-client-mpd-perl: "libpoe-component-client-mpd-perl: FTBFS in jessie: Failed test '10 seconds worth of music in the db': got: '9'"
    add patch from Simon McVittie (pkg-perl)
  • #769003 – libgluegen2-build-java: "libjogl2-java: FTBFS on arm64, ppc64el, s390x"
    add patch from Colin Watson (pkg-java)
Categories: Elsewhere

Gizra.com: Behat - The Right Way

Planet Drupal - Sun, 16/11/2014 - 23:00

Behat is a wonderful tool for automatic testing. It allows you to write your user stories and scenarios in proper English, which is then parsed by Behat and transformed to a set of clicks or other operations that mimic a real user.

If you don't have automated tests on your project, I would argue that you're doing it wrong (I explain why in The Gizra Way presentation). Even having a single test is much better than none.

With that said, it's super easy to abuse Behat. We are developers and we think sort of like machines (not really, but you get my point). If you would like to test login to your site you could easily do

Given I visit "/user/login"
# fill the username and password input fields, and click submit
When I fill "username" with "foo"
And I fill "password" with "bar"
And I press "Login"
Then I should get a "200" HTTP response

Your test will return green, but it could be improved:

Continue reading…

Categories: Elsewhere

Drupal Commerce: Commerce 2.x Stories: Taxes

Planet Drupal - Sun, 16/11/2014 - 21:40

"Why doesn’t Commerce/Magento/$otherSolution handle my taxes properly? That’s the most basic feature!” - many people, often.

When it comes to eCommerce, nobody likes taxes. We expect taxes to “just work”, so we can finish our projects and get on with our lives. At the same time, no other topic is as complex.

Selling online puts us at the crossroads of different (and sometimes conflicting) laws with many rules and even more exceptions. All eCommerce systems provide the basic tools (“Define your tax rates and specify when to apply them”) and make the site developer responsible for tax compliance. The developer usually passes that responsibility to the client, sometimes implicitly. The client consults an accountant, sometimes. But the buck has to stop somewhere, and it often comes back to the developer, 5 days after launch.

As taxes become more and more complex, there is a need for smarter tax handling, where the application does more and the site administrator does less. In the Commerce 1.x lifecycle we’ve built the commerce_vat module to handle the increasingly complex VAT taxes. For 2.x, we’re bringing this approach back into core, and releasing several libraries to share the solution with the wider PHP community.


Categories: Elsewhere

Daniel Leidert: Getting the audio over HDMI to work for the HP N54L microserver running Debian Wheezy and a Sapphire Radeon HD 6450

Planet Debian - Sun, 16/11/2014 - 20:13
Conclusion: Sound over HDMI works with the Sapphire Radeon HD 6450 card in my HP N54L microserver. It requires a recent kernel and firmware from Wheezy 7.7 backports and the X.org server. There is no sound without X.org, even if audio has been enabled for the radeon kernel module.

Last year I couldn't get audio over HDMI to work after I installed a Sapphire Radeon HD 6450 1 GB (11190-02-20g) card into my N54L microserver. The cable connecting the HDMI interfaces of the card and the TV monitor supports HDMI 1.3, so audio should have been possible even then. However, I didn't get any audio output from XBMC playing video or music files. Nothing happened with stock Wheezy 7.1 and X.org/XBMC installed. So I removed the latter two, used the server as a stock server without X/desktop, and delayed my plans for an HTPC.

Now I tried again after I found some new hints that made me curious enough for a second try :) Imagine my joy when (finally) speaker-test produced noise on the TV! So here is my configuration and a step-by-step guide to

  • enable Sound over HDMI for the Radeon HD 6450
  • install a graphical environment
  • install XBMC
  • automatically start XBMC on boot

The latter two will be covered by a second post. Also note that there is a lot of information out there on how to achieve the above tasks, so this is only about my configuration. Some packages below are marked as optional. A few are necessary only for the N54L microserver (firmware), and for a few I'm not sure they are necessary at all.

Step 1 - Prepare the system

At this point I don't have any desktop nor any other graphical environment (X.org) installed. First I purged pulseaudio and related packages completely, so that only ALSA is used:

# apt-get autoremove --purge pulseaudio pulseaudio-utils pulseaudio-module-x11 gstreamer0.10-pulseaudio
# apt-get install alsa-base alsa-utils alsa-oss

Next I installed a recent linux kernel and recent firmware from Wheezy backports:

# apt-get install -t wheezy-backports linux-image-amd64 firmware-linux-free firmware-linux firmware-linux-nonfree firmware-atheros firmware-bnx2 firmware-bnx2x

This put linux-image-3.16-0.bpo.3-amd64 and recent firmware onto my system. I chose to upgrade the linux-image-amd64 metapackage rather than pick a specific (recent) kernel package from Wheezy backports, so that I stay up to date with recent kernels from there.

Then I enabled the audio output of the radeon kernel module. There are at least three ways to do this; I chose to set the audio parameter in /etc/modprobe.d/radeon.conf. The hw_i2c parameter is disabled: I read that it might cause trouble with the audio output, although I never experienced that myself:

options radeon audio=1 hw_i2c=0

JFTR: This is how I boot the N54L by default:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi=force pcie_aspm=force nmi_watchdog=0"

After rebooting I see this for the Radeon card in question:

# lsmod | egrep snd\|radeon\|drm | awk '{print $1}' | sort
# lspci -k
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Caicos [Radeon HD 6450/7450/8450 / R5 230 OEM]
Subsystem: PC Partner Limited / Sapphire Technology Device e204
Kernel driver in use: radeon
01:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Caicos HDMI Audio [Radeon HD 6400 Series]
Subsystem: PC Partner Limited / Sapphire Technology Radeon HD 6450 1GB DDR3
Kernel driver in use: snd_hda_intel
# cat /sys/module/radeon/parameters/audio
# cat /sys/module/radeon/parameters/hw_i2c
# aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: HDMI [HDA ATI HDMI], device 3: HDMI 0 [HDMI 0]
Subdevices: 0/1
Subdevice #0: subdevice #0
# aplay -L
Discard all samples (playback) or generate zero samples (capture)
PulseAudio Sound Server
HDMI Audio Output

At this point, without having the X.org server installed, I still have no audio output to the connected monitor. Running alsamixer I only see the S/PDIF bar for the HDA ATI HDMI device, showing a value of 00. I can mute and un-mute this device but not change the value. No need to worry, sound comes with step two.

Step 2 - Install a graphical environment (X.org server)

Next is to install a graphical environment, basically the X.org server. In Debian this is done by the desktop task. Unfortunately tasksel makes use of APT::Install-Recommends="true" and would install a full desktop environment and some more recommended packages. At the moment I don't want this, only X. So basically I installed only the task-desktop package and its dependencies:

# apt-get install task-desktop xfonts-cyrillic

Next is to install a display manager. I've chosen lightdm:

# apt-get install lightdm accountsservice

Done. Now (re-)start the X server. Simply ...

# service lightdm restart

... should do. And now there is sound, probably due to the X.org Radeon driver. The following command created noise on the two monitor speakers :)

# speaker-test -c2 -D hdmi:0 -t pink

Finally there is sound over HDMI!

Step 3 - Install XBMC

To be continued ...

Categories: Elsewhere

Bernhard R. Link: Enabling Change

Planet Debian - Sun, 16/11/2014 - 16:51

Big changes are always complicated to get done, and the bigger or more diverse the organization they take place in, the harder they can be.


Ideally every change is communicated early and openly. Leaving people in the dark about what will change and when means they have much less time to get comfortable with it or to come to terms with it mentally. Especially harmful is extending the scope of the change later or shortening transition periods. Letting people think they have some time to transition, only to force them to rush later, will remove any credibility you have and severely reduce their ability to believe you are not crossing them. Making a new way optional is a great way to create security (see below), but making it obligatory before it has even reached them as an option will not make them very willing to embrace change.

Take responsibility

Every transformation has costs. Even if a change only improved things and made nothing worse once implemented (the ideal change you will never meet in reality), its deployment still costs: processes have adapted to the old way, people have to relearn how to do things, how to detect if something goes wrong, how to fix it, documentation has to be adapted, and so on. Even if the change brings more good than costs to the whole organization (let's hope it does; I hope you wouldn't try to do something whose total benefit is negative), the benefits, and thus the benefit-to-cost ratio, will differ between the different parts of your organization and the different people within it. It is hardly avoidable that for some people there will not be much benefit, and much less perceived benefit, compared to the costs they have to bear for it. Those are the people whose good will you want to fight for, not the people you want to fight against.

They have to pay with their labor/resources, and thus their good will, for the overall benefit.

This is much easier if you acknowledge that fact. If you blame them for bearing the costs, claim their situation does not even exist, or ridicule them for not embracing change, you only prepare yourself for frustration. You might be able to persuade yourself that everyone who is not willing to invest in the change is just acting out of malevolent self-interest. But you will hardly be able to persuade people that it is evil not to help your cause if you treat them as enemies.

And once you have ignored or played down costs that later actually materialize, your credibility in being able to see the big picture will simply cease to exist for the next change.

Allow different metrics

People have different opinions about priorities, about what is important, about how much something costs, and even about what constitutes a problem. If you want to persuade them, try to take that into account. If you do not understand why something is given as a reason, it might be because the point is stupid. But it might also be that you are missing something. And often there is simply a different valuation of what is important, what the costs are, and what the problems are. If you want to persuade people, it is worth trying to understand those valuations.

If all you want to do is persuade some leader or some majority, then ridiculing their concerns might get you somewhere. But how do you want to win people over if you do not even appear to understand their problems? Why should people trust you that their costs will be worth the overall benefits if you tell them the costs they clearly see do not exist? How credible is referring to the bigger picture if the part of the picture they can see does not match what you say the bigger picture looks like?

Don't get trolled and don't troll

There will always be people who are unreasonable or who even try to provoke you. Don't allow yourself to be provoked. Remember that for successful changes you need to win broad support. Feeling personally attacked, or feeling presented with a large number of pointless arguments, easily results in not giving proper responses or not actually looking at arguments. If someone is only trolling and purely malevolent, they will tease you best by bringing actual concerns of people in a way that makes you likely to degrade yourself and your point in answering. Becoming impertinent with the troll is like attacking the annoying little goblin hiding next to the city guards with area damage.

When you cannot persuade people, it is also far too easy to consider them to be acting in bad faith and/or to attack them personally. This can only escalate things further. In the worst case you frustrate someone acting in good faith. In most cases you poison the discussion so much that people who actually are in good faith will no longer contribute to it. It might be rewarding short-term, because after some escalation only obviously unreasonable people will speak against you, but it makes it much harder to find solutions together that could benefit everyone, and almost impossible to persuade those who simply left the discussion.

Give security

Last but not least, remember that humans are quite risk-averse. In general they might invest in (even small) chances to win, but they go a long way to avoid risks. Thus an important part of enabling change is to reduce risks, both real and perceived, and give people a feeling of security.

In the end, almost every measure boils down to that: You give people security by giving them the feeling that the whole picture is considered in decisions (by bringing them early into the process, by making sure their concerns are understood and part of the global profit/cost calculation and making sure their experiences with the change are part of the evaluation). You give people security by allowing them to predict and control things (by transparency about plans, how far the change will go and guaranteed transitions periods, by giving them enough time so they can actually plan and do the transition). You give people security by avoiding early points of no return (by having wide enough tests, rollback scenarios,...). You give people security by not letting them alone (by having good documentation, availability of training, ...).

Especially side-by-side availability of old and new is an extremely powerful tool, as it fits all of the above: It allows people to actually test the change (and not some little prototype mostly but not quite totally unrelated to reality) so their feedback can be heard. It makes things more predictable, as all the new ways can be tried before the old ones stop working. It is the ultimate roll-back scenario (just switch off the new). And it allows learning the new before losing the old.

Of course giving people a feeling of security requires resources. But it is a very powerful way to get people to embrace the change.

Also, in my experience, people fearing only for themselves will usually stay passive, not pushing forward and trying to avoid or escape the changes. (After all, working against something costs energy, so purely egoistic behavior is quite limiting in that regard.) Most people actively pushing back do it because they fear for something larger than just themselves. And any measure that makes them fear less that you will ruin the overall organization not only avoids unnecessary hurdles in rolling out the change, but also gives you some small chance of not running into disaster with closed eyes.

Categories: Elsewhere

Vincent Bernat: Staging a Netfilter ruleset in a network namespace

Planet Debian - Sun, 16/11/2014 - 16:28

A common way to build a firewall ruleset is to run a shell script calling iptables and ip6tables. This is convenient since you get access to variables and loops. There are three major drawbacks with this method:

  1. While the script is running, the firewall is temporarily incomplete. Even if existing connections can be arranged to be left untouched, the new ones may not be allowed to be established (or unauthorized flows may be allowed). Also, essential NAT rules or mangling rules may be absent.

  2. If an error occurs, you are left with a half-working firewall. Therefore, you should ensure that some rules authorizing remote access are set very early, or implement some kind of automatic rollback system.

  3. Building a large firewall can be slow. Each ip{,6}tables command will download the ruleset from the kernel, add the rule and upload the whole modified ruleset to the kernel.

Using iptables-restore

A classic way to solve these problems is to build a rule file that will be read by iptables-restore and ip6tables-restore1. Those tools send the ruleset to the kernel in one pass. The kernel applies it atomically. Usually, such a file is built with ip{,6}tables-save but a script can fit the task.

The ruleset syntax understood by ip{,6}tables-restore is similar to the syntax of ip{,6}tables but each table has its own block and chain declaration is different. See the following example:

$ iptables -P FORWARD DROP
$ iptables -t nat -A POSTROUTING -s -j MASQUERADE
$ iptables -N SSH
$ iptables -A SSH -p tcp --dport ssh -j ACCEPT
$ iptables -A INPUT -i lo -j ACCEPT
$ iptables -A OUTPUT -o lo -j ACCEPT
$ iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
$ iptables -A FORWARD -j SSH
$ iptables-save
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s -j MASQUERADE
COMMIT
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
:SSH - [0:0]
-A INPUT -i lo -j ACCEPT
-A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -j SSH
-A OUTPUT -o lo -j ACCEPT
-A SSH -p tcp -m tcp --dport 22 -j ACCEPT
COMMIT

As you see, we have one block for the nat table and one block for the filter table. The user-defined chain SSH is declared at the top of the filter block, along with the builtin chains.

Here is a script diverting ip{,6}tables commands to build such a file (heavily relying on some Zsh-fu2):

#!/bin/zsh
set -e
work=$(mktemp -d)
trap "rm -rf $work" EXIT

# ➊ Redefine ip{,6}tables
iptables() {
    # Intercept -t
    local table="filter"
    [[ -n ${@[(r)-t]} ]] && {
        # Which table?
        local index=${(k)@[(r)-t]}
        table=${@[(( index + 1 ))]}
        argv=( $argv[1,(( $index - 1 ))] $argv[(( $index + 2 )),$#] )
    }
    [[ -n ${@[(r)-N]} ]] && {
        # New user chain
        local index=${(k)@[(r)-N]}
        local chain=${@[(( index + 1 ))]}
        print ":${chain} -" >> ${work}/${0}-${table}-userchains
        return
    }
    [[ -n ${@[(r)-P]} ]] && {
        # Policy for a builtin chain
        local index=${(k)@[(r)-P]}
        local chain=${@[(( index + 1 ))]}
        local policy=${@[(( index + 2 ))]}
        print ":${chain} ${policy}" >> ${work}/${0}-${table}-policy
        return
    }
    # iptables-restore only handles double quotes
    echo ${${(q-)@}//\'/\"} >> ${work}/${0}-${table}-rules #'
}
functions[ip6tables]=${functions[iptables]}

# ➋ Build the final ruleset that can be parsed by ip{,6}tables-restore
save() {
    for table (${work}/${1}-*-rules(:t:s/-rules//)) {
        print "*${${table}#${1}-}"
        [ ! -f ${work}/${table}-policy ] || cat ${work}/${table}-policy
        [ ! -f ${work}/${table}-userchains ] || cat ${work}/${table}-userchains
        cat ${work}/${table}-rules
        print "COMMIT"
    }
}

# ➌ Execute rule files
for rule in $(run-parts --list --regex '^[.a-zA-Z0-9_-]+$' ${0%/*}/rules); do
    . $rule
done

# ➍ Load the generated rulesets
ret=0
save iptables | iptables-restore || ret=$?
save ip6tables | ip6tables-restore || ret=$?
exit $ret

In ➊, a new iptables() function is defined and will shadow the iptables command. It will try to locate the -t parameter to know which table should be used. If such a parameter exists, the table is remembered in the $table variable and removed from the list of arguments. Defining a new chain (with -N) is also handled as well as setting the policy (with -P).

In ➋, the save() function will output a ruleset that should be parseable by ip{,6}tables-restore. In ➌, user rules are executed. Each ip{,6}tables command will call the previously defined function. When no error has occurred, in ➍, ip{,6}tables-restore is invoked. The command will either succeed or fail.

This method works just fine3. However, the second method, described below, is more elegant.

Using a network namespace

A hybrid approach is to build the firewall rules with ip{,6}tables in a newly created network namespace, save it with ip{,6}tables-save and apply it in the main namespace with ip{,6}tables-restore. Here is the gist (still using Zsh syntax):

#!/bin/zsh
set -e
alias main='/bin/true ||'
[ -n $iptables ] || {
    # ➊ Execute ourself in a dedicated network namespace
    iptables=1 unshare --net -- \
        $0 4> >(iptables-restore) 6> >(ip6tables-restore)
    # ➋ In main namespace, disable iptables/ip6tables commands
    alias iptables=/bin/true
    alias ip6tables=/bin/true
    alias main='/bin/false ||'
}

# ➌ In both namespaces, execute rule files
for rule in $(run-parts --list --regex '^[.a-zA-Z0-9_-]+$' ${0%/*}/rules); do
    . $rule
done

# ➍ In test namespace, save the rules
[ -z $iptables ] || {
    iptables-save >&4
    ip6tables-save >&6
}

In ➊, the current script is executed in a new network namespace. Such a namespace has its own ruleset that can be modified without altering the one in the main namespace. The $iptables environment variable tells us in which namespace we are. In the new namespace, we execute all the rule files (➌). They contain classic ip{,6}tables commands. If an error occurs, we stop here and nothing happens, thanks to the use of set -e. Otherwise, in ➍, the rulesets of the new namespace are saved using ip{,6}tables-save and sent to dedicated file descriptors.

Now, the execution in the main namespace resumes in ➊. The results of ip{,6}tables-save are fed to ip{,6}tables-restore. At this point, the firewall is mostly operational. However, we will replay the rule files (➌), but with the ip{,6}tables commands disabled (➋). Additional commands in the rule files, like enabling IP forwarding, will be executed.

The new namespace does not provide the same environment as the main namespace. For example, there is no network interface in it, so we cannot get or set IP addresses. A command that must not be executed in the new namespace should be prefixed by main:

main ip addr add dev lan-guest

You can look at a complete example on GitHub.

  1. Another nifty tool is iptables-apply which will apply a rule file and roll back after a given timeout unless the change is confirmed by the user. 

  2. As you can see in the snippet, Zsh comes with some powerful features to handle arrays. Another big advantage of Zsh is that it does not require quoting every variable to avoid field splitting. Hence, the script can handle values with spaces without a problem, making it far more robust. 

  3. If I were nitpicking, there are three small flaws with it. First, when an error occurs, it can be difficult to match the appropriate location in your script since you get the position in the ruleset instead. Second, a table can be used before it is defined. So, it may be difficult to spot some copy/paste errors. Third, the IPv4 firewall may fail while the IPv6 firewall is applied, and vice-versa. Those flaws are not present in the next method. 

Categories: Elsewhere

Vincent Bernat: Intel Wireless 7260 as an access point

Planet Debian - Sun, 16/11/2014 - 16:27

My home router acts as an access point with an Intel Dual-Band Wireless-AC 7260 wireless card. This card supports 802.11ac (on the 5 GHz band) and 802.11n (on both the 5 GHz and 2.4 GHz band). While this seems a very decent card to use in managed mode, this is not really a great choice for an access point.

$ lspci -k -nn -d 8086:08b1
03:00.0 Network controller [0280]: Intel Corporation Wireless 7260 [8086:08b1] (rev 73)
        Subsystem: Intel Corporation Dual Band Wireless-AC 7260 [8086:4070]
        Kernel driver in use: iwlwifi

TL;DR: Use an Atheros card instead.


First, the card is said to be “dual-band”, but you can only use one band at a time because there is only one radio. Almost all wireless cards have this limitation. If you want to use both the 2.4 GHz band and the less crowded 5 GHz band, two cards are usually needed.

5 GHz band

There is no support for setting up an access point on the 5 GHz band: the firmware doesn’t allow it. This can be checked with iw:

$ iw reg get
country CH: DFS-ETSI
        (2402 - 2482 @ 40), (N/A, 20), (N/A)
        (5170 - 5250 @ 80), (N/A, 20), (N/A)
        (5250 - 5330 @ 80), (N/A, 20), (0 ms), DFS
        (5490 - 5710 @ 80), (N/A, 27), (0 ms), DFS
        (57240 - 65880 @ 2160), (N/A, 40), (N/A), NO-OUTDOOR
$ iw list
Wiphy phy0
[...]
        Band 2:
                Capabilities: 0x11e2
                        HT20/HT40
                        Static SM Power Save
                        RX HT20 SGI
                        RX HT40 SGI
                        TX STBC
                        RX STBC 1-stream
                        Max AMSDU length: 3839 bytes
                        DSSS/CCK HT40
                Frequencies:
                        * 5180 MHz [36] (20.0 dBm) (no IR)
                        * 5200 MHz [40] (20.0 dBm) (no IR)
                        * 5220 MHz [44] (20.0 dBm) (no IR)
                        * 5240 MHz [48] (20.0 dBm) (no IR)
                        * 5260 MHz [52] (20.0 dBm) (no IR, radar detection)
                                DFS state: usable (for 192 sec)
                                DFS CAC time: 60000 ms
                        * 5280 MHz [56] (20.0 dBm) (no IR, radar detection)
                                DFS state: usable (for 192 sec)
                                DFS CAC time: 60000 ms
[...]

While the 5 GHz band is allowed by the CRDA, all frequencies are marked with no IR. Here is the explanation for this flag:

The no-ir flag exists to allow regulatory domain definitions to disallow a device from initiating radiation of any kind and that includes using beacons, so for example AP/IBSS/Mesh/GO interfaces would not be able to initiate communication on these channels unless the channel does not have this flag.

Multiple SSID

This card can only advertise one SSID. Managing several of them is useful to set up distinct wireless networks, like a public access (routed to Tor), a guest access and a private access. iw can confirm this:

$ iw list
        valid interface combinations:
                 * #{ managed } <= 1, #{ AP, P2P-client, P2P-GO } <= 1, #{ P2P-device } <= 1,
                   total <= 3, #channels <= 1

Here is the output of an Atheros card able to manage 8 SSIDs:

$ iw list
        valid interface combinations:
                 * #{ managed, WDS, P2P-client } <= 2048, #{ IBSS, AP, mesh point, P2P-GO } <= 8,
                   total <= 2048, #channels <= 1

Configuration as an access point

Except for those two limitations, the card works fine as an access point. Here is the configuration that I use for hostapd:

interface=wlan-guest
driver=nl80211

# Radio
ssid=XXXXXXXXX
hw_mode=g
channel=11

# 802.11n
wmm_enabled=1
ieee80211n=1
ht_capab=[HT40-][SHORT-GI-20][SHORT-GI-40][DSSS_CCK-40]

# WPA
auth_algs=1
wpa=2
wpa_passphrase=XXXXXXXXXXXXXXX
wpa_key_mgmt=WPA-PSK
wpa_pairwise=TKIP
rsn_pairwise=CCMP

Because of the use of channel 11, only 802.11n HT40- rate can be enabled. Look at the Wikipedia page for 802.11n to check if you can use either HT40-, HT40+ or both.
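
The channel arithmetic behind that restriction can be sketched as follows (my own back-of-the-envelope check, not from the original post): an HT40 secondary channel sits 20 MHz, i.e. four channel numbers, away from the primary.

```shell
#!/bin/sh
# 2.4 GHz channel N is centered at 2407 + 5*N MHz; an HT40 secondary
# channel lies 20 MHz (four channel numbers) above or below the primary.
primary=11
above=$(( primary + 4 ))   # 15: beyond the last 2.4 GHz channel, so no HT40+
below=$(( primary - 4 ))   # 7: a valid channel, so HT40- works
echo "HT40+ secondary: $above, HT40- secondary: $below"
```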

Categories: Elsewhere

Vincent Bernat: Replacing Swisscom router by a Linux box

Planet Debian - Sun, 16/11/2014 - 16:26

I have recently moved to Lausanne, Switzerland. Broadband Internet access is not as cheap as in France. Free, a French ISP, provides FTTH access with a bandwidth of 1 Gbps1 for about 38 € (including TV and phone service), while Swisscom provides roughly the same service for about 200 €2. Swisscom fiber access was available for my apartment, and I chose the 40 Mbps contract without phone service for about 80 €.

Like many ISPs, Swisscom provides an Internet box with an additional box for TV. I didn’t unpack the TV box as I have no use for it. The Internet box comes with some nice features like the ability to set up firewall rules, a guest wireless access and some file sharing possibilities. No shell access!

I have bought a small PC to act as router and replace the Internet box. I have loaded the upcoming Debian Jessie on it. You can find the whole software configuration in a GitHub repository.

This blog post only covers the Swisscom-specific setup (and QoS). Have a look at those two blog posts for related topics:


The Internet box is packed with a Siligence-branded 1000BX SFP3. This SFP receives and transmits data on the same fiber using a different wavelength for each direction.

Instead of using a network card with an SFP port, I bought a Netgear GS110TP which comes with 8 gigabit copper ports and 2 fiber SFP ports. It is a cheap switch bundled with many interesting features like VLAN and LLDP. It works fine if you don’t expect too much from it.


IPv4 connectivity is provided over VLAN 10. A DHCP client is mandatory. Moreover, the DHCP vendor class identifier option (option 60) needs to be advertised. This can be done by adding the following line to /etc/dhcp/dhclient.conf when using the ISC DHCP client:

send vendor-class-identifier "100008,0001,,Debian";

The first two numbers are here to identify the service you are requesting. I suppose this can be read as requesting the Swisscom residential access service. You can put whatever you want after that. Once you get a lease, you need to use a browser to identify yourself to Swisscom on the first use.


Swisscom provides IPv6 access through the 6rd protocol. This is a tunneling mechanism to facilitate IPv6 deployment across an IPv4 infrastructure. This kind of tunnel has been natively supported by Linux since kernel version 2.6.33.

To set up IPv6, you need the base IPv6 prefix and the 6rd gateway. Some ISPs provide those values through DHCP (option 212), but this is not the case for Swisscom. The gateway is 6rd.swisscom.com and the prefix is 2a02:1200::/28. After appending the IPv4 address to the prefix, you still get 4 bits for internal subnets.
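
As a worked example (using a made-up IPv4 lease, since the real address changes), appending the 32 address bits to the /28 prefix yields a /60 delegated prefix, with the remaining nibble left for internal subnets:

```shell
#!/bin/sh
# Hypothetical 6rd computation: prefix 2a02:1200::/28 plus the illustrative
# IPv4 address 1.2.3.4 (not a real lease).
ipv4="1.2.3.4"
# The IPv4 address as 8 hex nibbles: 01020304
hex=$(printf '%02x%02x%02x%02x' $(echo "$ipv4" | tr '.' ' '))
# The /28 prefix contributes the first 7 nibbles (2a02120); appending the
# 32 IPv4 bits makes 60 bits, leaving 4 bits (one nibble) for subnets.
echo "2a02120${hex}0" | sed 's/..../&:/g;s/:$/::\/60/'
# prints 2a02:1200:1020:3040::/60
```

This mirrors the shift by (64 + 32 - prefixlen) done in Ruby by the DHCP hook below.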

Swisscom doesn’t provide a fixed IPv4 address. Therefore, it is not possible to precompute the IPv6 prefix. When installed as a DHCP hook (in /etc/dhcp/dhclient-exit-hooks.d/6rd), the following script configures the tunnel:

sixrd_mtu=1472               # This is 1500 - 20 - 8 (PPPoE header)
sixrd_ttl=64
sixrd_prefix=2a02:1200::/28  # No way to guess, just have to know it.
sixrd_br=                    # That's "6rd.swisscom.com"

sixrd_down() {
    ip tunnel del internet6 || true
}

sixrd_up() {
    ipv4=${new_ip_address:-$old_ip_address}
    sixrd_subnet=$(ruby <<EOF
require 'ipaddr'
prefix = IPAddr.new "${sixrd_prefix}", Socket::AF_INET6
prefixlen = ${sixrd_prefix#*/}
ipv4 = IPAddr.new "${ipv4}", Socket::AF_INET
ipv6 = IPAddr.new (prefix.to_i + (ipv4.to_i << (64 + 32 - prefixlen))), Socket::AF_INET6
puts ipv6
EOF
)
    # Let's configure the tunnel
    ip tunnel add internet6 mode sit local $ipv4 ttl $sixrd_ttl
    ip addr add ${sixrd_subnet}1/64 dev internet6
    ip link set mtu ${sixrd_mtu} dev internet6
    ip link set internet6 up
    ip route add default via ::${sixrd_br} dev internet6
}

case $reason in
    BOUND|REBOOT)
        sixrd_down
        sixrd_up
        ;;
    RENEW|REBIND)
        if [ "$new_ip_address" != "$old_ip_address" ]; then
            sixrd_down
            sixrd_up
        fi
        ;;
    STOP|EXPIRE|FAIL|RELEASE)
        sixrd_down
        ;;
esac

The computation of the IPv6 prefix is offloaded to Ruby instead of trying to use the shell for that. Even if the ipaddr module is pretty “basic”, it suits the job.

Swisscom uses the same MTU for all clients. Because some of them are using PPPoE, the MTU is 1472 instead of 1480. You can easily check your MTU with this handy online MTU test tool.
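
The arithmetic behind those numbers is simple enough to sketch (values from the paragraph above):

```shell
#!/bin/sh
# The 6rd tunnel adds a 20-byte IPv4 header; PPPoE subscribers lose another
# 8 bytes, and Swisscom picks one MTU for everybody.
ethernet_mtu=1500
ipv4_header=20
pppoe_header=8
echo $(( ethernet_mtu - ipv4_header ))                 # 1480: 6rd alone
echo $(( ethernet_mtu - ipv4_header - pppoe_header ))  # 1472: value shared with PPPoE users
```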

It is not uncommon for PMTUD to be broken on some parts of the Internet. While not ideal, clamping the TCP MSS will alleviate any problem you may run into with an MTU of less than 1500:

ip6tables -t mangle -A POSTROUTING -o internet6 \
          -p tcp --tcp-flags SYN,RST SYN \
          -j TCPMSS --clamp-mss-to-pmtu

QoS

Once upon a time, QoS was a tricky subject. The Wonder Shaper was a common way to get a somewhat working setup. Nowadays, thanks to the work of the Bufferbloat project, there are two simple steps to get something quite good:

  1. Reduce the queue of your devices to something like 32 packets. This helps TCP to detect congestion and act accordingly while still being able to saturate a gigabit link.

    ip link set txqueuelen 32 dev lan
    ip link set txqueuelen 32 dev internet
    ip link set txqueuelen 32 dev wlan
  2. Change the root qdisc to fq_codel. A qdisc receives packets to be sent from the kernel and decides how they are handed to the network card. Packets can be dropped, reordered or rate-limited. fq_codel is a queuing discipline combining fair queuing and controlled delay. Fair queuing means that all flows get an equal chance to be served; in other words, a high-bandwidth flow won’t starve the queue. Controlled delay means that the queue size is limited to ensure the latency stays low, which is achieved by dropping packets more aggressively when the queue grows.

    tc qdisc replace dev lan root fq_codel
    tc qdisc replace dev internet root fq_codel
    tc qdisc replace dev wlan root fq_codel
  1. Maximum download speed is 1 Gbps, while maximum upload speed is 200 Mbps. 

  2. This is the standard Vivo XL package rated at CHF 169.– plus the 1 Gbps option at CHF 80.–. 

  3. There are two references on it: SGA 441SFP0-1Gb and OST-1000BX-S34-10DI. It transmits on the 1310 nm wavelength and receives on the 1490 nm one. 

Categories: Elsewhere

Dirk Eddelbuettel: Introducing RcppAnnoy

Planet Debian - Sun, 16/11/2014 - 15:36

A new package RcppAnnoy is now on CRAN.

It wraps the small, fast, and lightweight C++ template header library Annoy written by Erik Bernhardsson for use at Spotify.

While Annoy is set up for use from Python, RcppAnnoy offers the exact same functionality from R via Rcpp.

A new page for RcppAnnoy provides some more detail, example code and further links. See a recent blog post by Erik for a performance comparison of different approximate nearest neighbours libraries for Python.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Stefano Zacchiroli: Debsources Participation in FOSS Outreach Program

Planet Debian - Sun, 16/11/2014 - 13:45
Jingjie Jiang selected as OPW intern for Debsources

I'm glad to announce that Jingjie Jiang, AKA sophiejjj, has been selected as an intern to work on Debsources as part of the FOSS Outreach Program (formerly known as Outreach Program for Women, or OPW). I'll co-mentor her work together with Matthieu Caneill.

I've just added sophiejjj's blog to Planet Debian, so you will soon hear about her work in the Debian blogosphere.

I've been impressed by the interest that the Debsources proposal in this round of OPW has spawned. Together with Matthieu I have interacted with more than a dozen OPW applicants. Many of them have contributed useful patches during the application period, and those patches have been in production at http://sources.debian.net for quite a while now (see the commit log for details). A special mention goes to Akshita Jha, who has shown a lot of determination in tackling both simple and complex issues affecting Debsources. I hope there will be other chances to work with her in the future.

OPW internship will begin December 9th, fasten your seat belts for a boost in Debsources development!


Jonathan Wiltshire: Getting things into Jessie (#1)

Planet Debian - Sun, 16/11/2014 - 11:35
Make it easy for us to review your request

The release team gets a lot of mail at this time in the cycle. Make it easy for us by:

  • including as much information as you can think of
    • yes, even if you think it’s too much
    • remember we have probably never seen your package before
    • if you do write a lot, include a short summary at the top
  • not deviating from the freeze policy without a really good reason
    • explain why you deviated and why it’s really good in your request
  • demonstrating that you’ve considered and checked your changes carefully
    • that’s one (but not the sole) reason we ask for a source debdiff (and assume you’ve read it yourself)


Getting things into Jessie (#1) is a post from: jwiltshire.org.uk | Flattr


Matthew Palmer: A benefit of running an alternate init in Debian Jessie

Planet Debian - Sun, 16/11/2014 - 06:00

If you’re someone who doesn’t like Debian’s policy of automatically starting services on install (or its heinous cousin, the RUN or ENABLE variable in /etc/default/<service>), then running an init system other than systemd should work out nicely.


Jingjie Jiang: Start the new journey

Planet Debian - Sun, 16/11/2014 - 04:26

I’m very excited about being accepted to the Debsources project in OPW. I’ll record everything about my adventure here.

Cheers ^_^


Jo Shields: mono-project.com Linux packages – an update

Planet Debian - Sat, 15/11/2014 - 17:21

It’s been pointed out to me that many people aren’t aware of the current status of Linux packages on mono-project.com, so here’s a summary:

Stable packages

Mono 3.10.0, MonoDevelop, NuGet 2.8.1 and F# packages are available. Plus related bits. MonoDevelop on Linux does not currently include the F# addin (there are a lot of pieces to get in place for this to work).

These are built for x86-64 CentOS 7, and should be compatible with RHEL 7, openSUSE 12.3, and derivatives. I haven’t set up a SUSE 1-click install file yet, but I’ll do it next week if someone reminds me.

They are also built for Debian 7 – on i386, x86-64, and IBM zSeries processors. The same packages ought to work on Ubuntu 12.04 and above, and on any derivative of Debian or Ubuntu. Due to ABI changes, Ubuntu 12.04 and 12.10 need an extra compatibility extension repository to get anything to work, and Debian derivatives with Apache 2.4 (Debian 8+, Ubuntu 13.10+, and their derivatives) need a different compatibility extension repository if you want the mod-mono ASP.NET Apache module.

MonoDevelop 5.5 on Ubuntu 14.04

In general, see the install guide to get these going.


You may have seen Microsoft recently posting a guide to using ASP.NET 5 on Docker. Close inspection would show that this Docker image is based on our shiny new Xamarin Mono Docker image, which is based on Debian 7. The full details are on Docker Hub, but the short version is “docker pull mono:latest” gets you an image with the very latest Mono.

directhex@desire:~$ docker pull mono:latest
Pulling repository mono
9da8fc8d2ff5: Download complete
511136ea3c5a: Download complete
f10807909bc5: Download complete
f6fab3b798be: Download complete
3c43ebb7883b: Download complete
7a1f8e485667: Download complete
a342319da8ea: Download complete
3774d7ea06a6: Download complete
directhex@desire:~$ docker run -i -t mono:latest mono --version
Mono JIT compiler version 3.10.0 (tarball Wed Nov 5 12:50:04 UTC 2014)
Copyright (C) 2002-2014 Novell, Inc, Xamarin Inc and Contributors. www.mono-project.com
        TLS:           __thread
        SIGSEGV:       altstack
        Notifications: epoll
        Architecture:  amd64
        Disabled:      none
        Misc:          softdebug
        LLVM:          supported, not enabled.
        GC:            sgen

The Dockerfiles are on GitHub.


Andrew Cater: Formal key transition to 4096 bit key

Planet Debian - Sat, 15/11/2014 - 13:37
Following mini-Debconf and submitting my key to keyring-maint, here's a copy of the note marking the transition. I do retain a copy of the old key: it has not been compromised or revoked (until I'm sure my new key reaches the Debian keyring) and my vote in the GR is validly signed with the old key.

Date: Wed, 12 Nov 2014 08:11:53 +0000
From: "Andrew M.A. Cater"
To: keyring@rt.debian.org
Subject: Debian RT - new key for amacater
User-Agent: Mutt/1.5.23 (2014-03-12)

Hash: SHA1

pub   1024D/E93ADE7B 2001-07-04
      Key fingerprint = F3FA 2752 1327 7904 846D  C0DE 3233 C127 E93A DE7B
uid                  Andrew Cater (Andrew M.A. Cater)
sub   1024g/E8C8CC00 2001-07-04

pub   4096R/22EF1F0F 2014-08-29
      Key fingerprint = 5596 5E39 93E0 6E2B 5BA5  CD84 4AA8 FC24 22EF 1F0F
uid                  Andrew M.A. Cater (Andy Cater)
uid                  Andrew M.A. Cater (non-Debian email)
sub   4096R/923AB77E 2014-08-29

This is intended to replace the old key with the new key as part of a transition away from old, insecure keys.

All the best,

Version: GnuPG v1


This is because Google (and Planet Debian) are more reliable than my email inbox.

[Keys exchanged at the mini-Debconf have now been signed with the new 4096 bit key]


Don Armstrong: Adding a newcomer (⎈) tag to the BTS

Planet Debian - Sat, 15/11/2014 - 04:14

Some of you may already be aware of the gift tag which has been used for a while to indicate bugs which are suitable for new contributors to use as an entry point to working on specific packages. Unfortunately, some of us (including me!) were unaware that this tag even existed.

Luckily, Lucas Nussbaum clued me in to the existence of this tag, and after a brief bike-shed-naming thread and some voting using pocket_devotee, we decided to name the new tag newcomer. I have now added this tag to the BTS documentation, and tagged all of the bugs which were usertagged "gift" with it.

If you have bugs in your package which you think are ideal for new contributors to Debian (or your package) to fix, please tag them newcomer. If you're getting started in Debian, and working on bugs to fix, please search for the newcomer tag, grab the helm, and contribute to Debian.


Don Armstrong: Virginia King selected for Debbugs FOSS Outreach Program for Women

Planet Debian - Sat, 15/11/2014 - 01:23

I'm glad to announce that Virginia King has been selected as one of the three interns for this round of the FOSS Outreach Program for Women. Starting December 9th and continuing until March 9th, she'll be working on improving the documentation of Debian's bug tracking system.

The initial goal is to develop a Bug Triager Howto to help new contributors to Debian jump in and help existing teams triage bugs. We'll be getting in touch with some of the larger teams in Debian to help make this document as useful as possible. If you're a member of a team in Debian who would like this howto to address your specific workflow, please drop me an e-mail, and we'll keep you in the loop.

The secondary goals for this project are to:

  • Improve documentation under http://www.debian.org/Bugs
  • Document bug tags and categories
  • Improve upstream debbugs documentation

Drupal Association News: Don't Miss the Blink Reaction Membership Discount

Planet Drupal - Fri, 14/11/2014 - 23:09

At the Drupal Association, we love our members and want to show it. That’s why we team up with some of the best Drupal companies around every month to offer our members spectacular discounts.

This month, we’re pleased to announce that Drupal Association members can receive 30% off Blink Institute training classes from Blink Reaction. Using the discount code here, Drupal Association members can access fantastic training from Blink, led by veteran Drupalists who are expert trainers. Note: this offer cannot be combined with other promotional offers.

Blink Reaction is a premier provider of enterprise Drupal services to Fortune 1000 companies throughout the US. Their Drupal Training program is designed to help individuals, enterprise service providers and small business owners harness the power of Drupal.

The Blink Training program has taught beginner and advanced methods to hundreds of individuals and corporations. Blink is proud to offer free and nearly free training through Global Drupal Training Days and at Drupal Camps alongside their public and private training offerings.

Make sure you take advantage of this great opportunity while it lasts. Kudos to our friends over at Blink -- thanks for sharing the Drupal love!


Code Karate: Drupal 7 Imagefield Focus

Planet Drupal - Fri, 14/11/2014 - 22:31
Episode Number: 178

In this episode we look at the Imagefield Focus module. This module adds another option to the image styles on a content type field. With this module you are able to specify a focus and crop area for your image. Once you have selected either or both of those areas, the module resizes and focuses on the specific area you selected.

Tags: Drupal, Fields, Drupal 7, Image Handling, Drupal Planet, UI/Design

Midwestern Mac, LLC: Creating a contact form programmatically in Drupal 8

Planet Drupal - Fri, 14/11/2014 - 19:24

Drupal 8's expanded and broadly-used Entity API extends even to Contact Forms, and recently I needed to create a contact form programmatically as part of Honeypot's test suite. Normally, you can export a contact form as part of your site configuration, then when it's imported in a different site/environment, it will be set up simply and easily.

However, if you need to create a contact form programmatically (in code, dynamically), it's a rather simple affair:

First, use Drupal's ContactForm class at the top of the file so you can use the class in your code later:
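A minimal sketch of this pattern, assuming Drupal 8's config entity API (the form id, label, and recipient address here are invented for illustration, not taken from the original post):

```php
<?php
// Hypothetical example: the identifiers below are illustrative only.
use Drupal\contact\Entity\ContactForm;

// Create a new contact form config entity and persist it.
$contact_form = ContactForm::create([
  'id' => 'feedback',
  'label' => 'Feedback',
  'recipients' => ['webmaster@example.com'],
  'reply' => '',
  'weight' => 0,
]);
$contact_form->save();
```

This fragment needs a bootstrapped Drupal 8 site to run (for example from a test or an install hook), so it is shown as a sketch rather than a standalone script.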


