Elsewhere

Gunnar Wolf: UNAM. Viva México, viva en paz.

Planet Debian - Thu, 20/11/2014 - 19:38

We have had terrible months in Mexico; I don't know how much has appeared about our country in the international media. The latest incidents started in the last days of September, when 43 students at a school for rural teachers were forcibly disappeared (in our Latin American countries, this means they were taken by force and no authority can yet prove whether they are alive or dead; forced disappearance is one of the saddest and most recognized traits of the brutal military dictatorships South America had in the 1970s) in the Iguala region (Guerrero state, in the south of the country) and three were killed on site. An Army regiment was stationed a few blocks from there and refused to help.

And yes, we live in a country where (incredibly) this news by itself would not seem so unheard of... But in this case, there is ample evidence they were taken by the local police forces, not by a gang of (assumed) wrongdoers. And they were handed over to a very violent gang afterwards. Several weeks later, after a far-from-thorough investigation, we were told they had been killed, burnt and thrown into a river.

The Iguala city mayor ran away, and was later captured, but it's not clear why he was captured at two different places. The Guerrero state governor resigned and a new governor was appointed. But this was not the result of a single person behaving far from what their voters would expect — it's a symptom of a broken society where policemen will kill when so ordered, where military personnel will look away when pointed to the obvious, and where the drug dealers have captured vast regions of the country in which they are stronger than the formal powers.

And then, instead of dealing with the issue personally as everybody would expect, the president goes on a commercial mission to China. Oh, to fix some issues with a building company. A company that, coincidentally or not, was selling a super-luxury house to his wife. A house that she, several days later, decided to sell because it was tarnishing her family's honor and image.

And while the president is in China, the person who dealt with the social pressure, and who told us about the probable (but not proven!) horrible crime in which the "bad guys", for some strange and yet unknown reason (even with tens of them already captured), decided to kill, burn, dissolve and disappear 43 future rural teachers, presents his version and finishes his speech by saying, "I'm already tired of this topic".

Of course, our University is known for its solidarity with social causes; students in our different schools are the first activists in many protests, and we have had a very tense time as the protests are right at home here at the university. This last weekend, supposed policemen entered our main campus with a stupid, unbelievable pretext (they were looking for a phone reported as stolen three days earlier), got into an argument with some students, and ended up firing shots at them; one of the students was wounded in the leg.

And the university is now almost under siege: There are policemen surrounding us. We are working as usual, and will most likely finish the semester with normality, but the intimidation (in a country where seeing a policeman is practically never a good sign) is strong.

And... Oh, I could go on a lot. Things feel really desperate and out of place.

Today I will join probably tens or hundreds of thousands of Mexicans sick of this simulation, sick of this violence, in a demonstration downtown. What will this achieve? Very little, if anything at all. But we cannot just sit here watching how things go from bad to worse. I do not accept living in a state of exception.

So, this picture is just right: A bit over a month ago, two dear friends from Guadalajara city came, and we had a nice walk in the University. Our national university is not only huge, it's also beautiful and loaded with sights. And being so close to home, it's our favorite place to go with friends to show around. This is a fragment of the beautiful mural in the Central Library. And, yes, the University stands for "Viva México". And the university stands for "Peace". And we need it all. Desperately.

Categories: Elsewhere

Another Drop in the Drupal Sea: A new approach to Drupal training

Planet Drupal - Thu, 20/11/2014 - 19:30

There are many paid and free Drupal training sites on the internet. To the best of my knowledge, none of them is open source. And I'm quite certain none of them is "ridiculously open."

read more

Categories: Elsewhere

Acquia: Custom Distributions on Acquia Cloud: Part 2 -- Updating with Drush Make

Planet Drupal - Thu, 20/11/2014 - 18:57

In the first post of this series on Drush Make, we looked at building a custom Drupal install profile on Acquia Cloud using Drush Make. In this installment, we look at managing and updating the code in your install profile and deploying it to Acquia Cloud. Keeping up with new releases is one of the most important aspects of maintaining any site, and leveraging Drush Make can dramatically reduce the effort involved in that process.
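As a quick refresher on what that workflow looks like, the heart of a Drush Make based profile is a .make file in which every project is pinned to an explicit release, so an update is largely a matter of bumping a version line and rebuilding. The fragment and file names below are only an illustrative sketch, not code from the series:

core = 7.x
api = 2
projects[drupal][version] = 7.34
projects[views][version] = 3.9
projects[ctools][version] = 1.5

Rebuilding the code base from such a file is then a single command, e.g. drush make myprofile.make docroot.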

Categories: Elsewhere

Drupal Association News: Better than FTP

Planet Drupal - Thu, 20/11/2014 - 18:42

As things stand today, Drupal.org's mirror network is an essential part of the Drupal.org infrastructure. The ftp.drupal.org infrastructure hosts millions of files, serving everything from Drupal Core to contributed modules and themes, but it's beginning to show its age.

Our current FTP mirrors (co-located in Oregon, Illinois, and New York) have been behaving erratically: projects have been failing to sync to the mirrors, being deleted before they are updated, and sometimes disappearing from the mirrors for hours or days at a time. Even when everything is working properly, replication from the primary to the additional mirrors can take as long as 45 minutes.

Compounding these issues is the fact that we do not have robust control or access to the existing architecture when problems arise.

So we've taken a step back to ask:

How can we deliver these files in a more reliable way?
On the modern web, the key elements of file delivery are:

  • High availability
  • Peering capacity designed for global delivery
  • Fast replication
  • HTTPS/TLS support

A Content Delivery Network is the answer to these problems, which is why we're evaluating MaxCDN to replace the ftp.drupal.org infrastructure.

But wait - does this mean the ftp:// protocol will no longer work?
Yes. The FTP protocol is aging as well...

  • In the month of October 2014, ftp:// had 96 unique visitors, and only 33 of them made more than 10 requests.
  • The ftp pathing differs from http, making the experience of using ftp:// confusing and inconsistent.
  • Replacing the ftp:// protocol with http will enable us to secure Drupal.org with HTTPS across all domains.

How you can help
We need users to help us test MaxCDN as an alternative for file delivery. You can track the issue here, and help us by testing the MaxCDN based downloads. Please report back your findings (good or bad) and let us know if there are any showstoppers.

To test, open /etc/hosts with elevated privileges and add the following entry:

~$ sudo vim /etc/hosts
198.232.124.192 ftp.drupal.org

And continue using ftp.drupal.org as you normally would through Drupal.org project pages, drush dl, etc.
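If you want to confirm the override is in effect before downloading anything, a quick sanity check might look like the following (the project tarball used here is only an example path):

getent hosts ftp.drupal.org
curl -sI http://ftp.drupal.org/files/projects/views-7.x-3.9.tar.gz | head -n 5

The first command should show the CDN address you added to /etc/hosts, and the second should return HTTP headers served from that address rather than from the old mirror network.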

Categories: Elsewhere

Mediacurrent: The Weather Channel’s Journey to Drupal

Planet Drupal - Thu, 20/11/2014 - 18:00

When my business partner, Paul Chason, and I joined forces over seven years ago, we had a rather simple vision for Mediacurrent. We were convinced that open-source software offered a superior value proposition over proprietary, license-based solutions. We had an ambitious goal of starting a digital agency that was going to revolutionize how companies thought about the way they managed their web properties. As Simon Sinek so eloquently describes, this was our "why" and purpose.

Categories: Elsewhere

Drupal Watchdog: Different, Not Difficult

Planet Drupal - Thu, 20/11/2014 - 17:36
Article

As AppNeta’s developer evangelist, I work with customers in five different programming languages to monitor application performance. Drupal is just one part of one language, but I’ll always have a soft spot for it because it’s where I learned to program. When I get a chance, I like to keep my skills sharp by contributing to the community-maintained TraceView integration module. Last spring, I decided to port it and learn Drupal 8 the hard way.

Like most Drupal developers, I’d never tried writing Symfony code or using Composer to manage packages. Before attempting it, I decided to research both Symfony in its own right and how it is being leveraged to rewrite Drupal. Thankfully, there were many rich tutorials on “the basics” even then, and, after a relatively painless porting process, I had the module running with a skeletal Symfony bundle inside it.

Initially, I relied on the same strategy as the Drupal 7 version of the TraceView module, which monitors hook execution time by installing two additional modules: an “early” module with a very low weight and a “late” module with a very high weight. As each hook was removed from core, I moved its implementations from the modules into the bundle and tagged that event with listeners at maximum and minimum priority.
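To make that concrete, here is a minimal sketch (not the actual TraceView code; the class and method names are invented) of how extreme listener priorities can play the role of the old "early" and "late" module weights in a Symfony-based setup:

namespace Drupal\traceview\EventSubscriber;

use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpKernel\KernelEvents;

class TraceviewTimingSubscriber implements EventSubscriberInterface {

  public static function getSubscribedEvents() {
    // A very high priority runs first ("early"), a very low one runs last ("late").
    return array(
      KernelEvents::REQUEST => array(
        array('onRequestEarly', 10000),
        array('onRequestLate', -10000),
      ),
    );
  }

  public function onRequestEarly($event) {
    // Record a start timestamp for this phase of the request (placeholder).
  }

  public function onRequestLate($event) {
    // Record the end timestamp and report the elapsed time (placeholder).
  }

}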

Categories: Elsewhere

Dries Buytaert: Weather.com using Drupal

Planet Drupal - Thu, 20/11/2014 - 17:06
Topic: Drupal, Acquia, Drupal sites

One of the world's most trafficked websites, with more than 100 million unique visitors every month and more than 20 million different pages of content, is now using Drupal. Weather.com is a top 20 U.S. site according to comScore. As far as I know, this is currently the biggest Drupal site in the world.

Weather.com has been an active Drupal user for the past 18 months; it started with a content creation workflow on Drupal to help its editorial team publish content to its existing website faster. With Drupal, Weather.com was able to dramatically reduce the number of steps required to publish content from 14 to just a few. Speed is essential in reporting the weather, and Drupal's content workflow provided much-needed velocity. The success of that initial project is what led to this week's migration of Weather.com from Percussion to Drupal.

The company has moved the entire website to Acquia Cloud, giving the site a resilient platform that can withstand sudden onslaughts of demand as unpredictable as the weather itself. As we learned from our work with New York City's MTA during Superstorm Sandy in 2012, “weather-proofing” the delivery of critical information to ensure the public stays informed during catastrophic events is really important and can help save lives.

The team at Weather.com worked with Acquia and Mediacurrent for its site development and migration.

Categories: Elsewhere

Acquia: Meet Cal Evans ... Meet Jeffrey A. "jam" McGuire

Planet Drupal - Thu, 20/11/2014 - 15:14

Voices of the ElePHPant / Acquia Podcast Ultimate Showdown Part 1 - Cal Evans and I got the chance to sit down at DrupalCon Amsterdam and talk (a lot!) about a range of topics we have in common. In this first part of a 2-part series, we talk Drupal, PHP convergence and the "PHP Renaissance", open source communities, proprietary vs. open source business and the ethics of helping, and more.

Why PHP?

According to Cal, PHP has three things going for it:

Categories: Elsewhere

Steve Kemp: An experiment in (re)building Debian

Planet Debian - Thu, 20/11/2014 - 14:28

I've rebuilt many Debian packages over the years, largely to fix bugs which affected me, or to add features which didn't make the cut in various releases. For example I made a package of fabric available for Wheezy, since it wasn't in the release. (Happily in that case a wheezy-backport became available. Similar cases involved repackaging gtk-gnutella when the protocol changed and the official package in the lenny release no longer worked.)

I generally release a lot of my own software as Debian packages, although I'll admit I've started switching to publishing Perl-based projects on CPAN instead - from which they can be debianized via dh-make-perl.

One thing I've not done for many years is a mass-rebuild of Debian packages. I did that once upon a time when I was trying to push for the stack-smashing-protection inclusion all the way back in 2006.

Having had a few interesting emails this past week I decided to do the job for real. I picked a random server of mine, rsync.io, which stores backups, and decided to rebuild it using "my own" packages.

The host has about 300 packages installed upon it:

root@rsync ~ # dpkg --list | grep ^ii | wc -l
294

I got the source to every package, patched the changelog to bump the version, and rebuilt every package from source. That took about three hours.
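The post doesn't include the script itself, but the per-package loop amounts to something like this sketch (it assumes deb-src entries are configured, build-dependencies are installed, and that the source directory matches the binary package name):

for pkg in $(dpkg-query -W -f '${Package}\n'); do
    apt-get source "$pkg"                      # fetch and unpack the source
    ( cd "$pkg"-*/ &&
      dch --local skx "Local rebuild." &&      # bumps the version to ...skx1
      dpkg-buildpackage -us -uc )              # build unsigned binary packages
done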

Every package has a "skx1" suffix now, and all the build-dependencies were also determined by magic and rebuilt:

root@rsync ~ # dpkg --list | grep ^ii | awk '{ print $2 " " $3}'| head -n 4
acpi 1.6-1skx1
acpi-support-base 0.140-5+deb7u3skx1
acpid 1:2.0.16-1+deb7u1skx1
adduser 3.113+nmu3skx1

The process was pretty quick once I started getting more and more of the packages built. The only shortcut was not explicitly updating the dependencies to rely upon my updates. For example, bash has a Debian control file that contains:

Depends: base-files (>= 2.1.12), debianutils (>= 2.15)

That should have been updated to say:

Depends: base-files (>= 2.1.12skx1), debianutils (>= 2.15skx1)

However I didn't do that, because I suspect that if I did want to do this decently, and I wanted to share the source trees and the generated packages, the way to go would not be messing about with Debian versions; instead I'd create a new Debian release: "alpha-apple", "beta-banana", "crunchy-carrot", "dying-dragonfruit", "easy-elderberry", or similar.

In conclusion: Importing Debian packages into git, much like Ubuntu did with bzr, is a fun project, and it doesn't take much to mass-rebuild if you're not making huge changes. Whether it is worth doing is an entirely different question of course.

Categories: Elsewhere

Daniel Pocock: Is Amnesty giving spy victims a false sense of security?

Planet Debian - Thu, 20/11/2014 - 13:48

Amnesty International is getting a lot of attention with the launch of a new tool to detect government and corporate spying on your computer.

I thought I would try it myself. I went to a computer running Microsoft Windows, an operating system that does not publish its source code for public scrutiny. I used the Chrome browser; users often express concern about Chrome sending data back to the vendor about the web sites they look for.

Without even installing the app, I would expect the Amnesty web site to recognise that I was accessing the site from a combination of proprietary software. Instead, I found a different type of warning.

Beware of Amnesty?

Instead, the online warning I received was from Amnesty's own cookies:

Even before I install the app to find out if the government is monitoring me, Amnesty is keen to monitor my behaviour themselves.

While cookies are used widely, their presence on a site like Amnesty's only further desensitizes Internet users to the downside risks of tracking technologies. By using cookies, Amnesty is effectively saying that a little bit of tracking is justified for the greater good. Doesn't that sound eerily like the justification we often hear from governments too?

Is Amnesty part of the solution or part of the problem?

Amnesty is a well known and widely respected name when human rights are mentioned.

However, their advice that you can install an app onto a Windows computer or iPhone to detect spyware is like telling people that putting a seatbelt on a motorbike will eliminate the risk of death. It would be much more credible for Amnesty to tell people to start by avoiding cloud services altogether, browse the web with Tor and only use operating systems and software that come with fully published source code under a free license. Only when 100% of the software on your device is genuinely free and open source can independent experts exercise the freedom to study the code and detect and remove backdoors, spyware and security bugs.

It reminds me of the advice Kim Kardashian gave after the Fappening, telling people they can continue trusting companies like Facebook and Apple with their private data just as long as they check the privacy settings (reality check: privacy settings in cloud services are about as effective as a band-aid on a broken leg).

Write to Amnesty

Amnesty became famous for their letter writing campaigns.

Maybe now is the time for people to write to Amnesty themselves, thank them for their efforts and encourage them to take more comprehensive action.

Feel free to cut and paste some of the following potential ideas into an email to Amnesty:

I understand you may not be able to respond to every email personally but I would like to ask you to make a statement about these matters on your public web site or blog.

I understand it is Amnesty's core objective to end grave abuses of human rights. Electronic surveillance, due to its scale and pervasiveness, has become a grave abuse in itself, and in a disturbing number of jurisdictions it is an enabler for other types of grave violations of human rights.

I'm concerned that your new app Detekt gives people a false sense of security and that your campaign needs to be more comprehensive to truly help people and humanity in the long term.

If Amnesty is serious about solving the problems of electronic surveillance by government, corporations and other bad actors, please consider some of the following:

  • Instead of displaying a cookie warning on Amnesty.org, display a warning to users who access the site from a computer running closed-source software and give them a link to download an open source web browser like Firefox.
  • Redirect all visitors to your web site to use the HTTPS encrypted version of the site.
  • Use spyware-free open source software such as the Linux operating system and LibreOffice for all of Amnesty's own operations, make a public statement about your use of free and open source software, and mention this in the closing paragraph of all press releases relating to surveillance topics.
  • Encourage Amnesty donors, members and supporters to choose similar software, especially when engaging in any political activities.
  • Make a public statement that Amnesty will not use cloud services such as SalesForce or Facebook to store, manage or interact with data relating to members, donors or other supporters.
  • Encourage the public to move away from centralized cloud services such as those provided by their smartphone or social networks and use de-centralized or federated services such as XMPP chat.

Given the immense threat posed by electronic surveillance, I'd also like to call on Amnesty to allocate at least 10% of annual revenue towards software projects releasing free and open source software that offers the public an alternative to the centralized cloud.

While publicity for electronic privacy is great, I hope Amnesty can go a step further and help people use trustworthy software from the ground up.

Categories: Elsewhere

Paul Booker: Creating your own API endpoint using Services

Planet Drupal - Thu, 20/11/2014 - 12:53
/**
 * Implements hook_services_resources().
 */
function mymodule_services_services_resources() {
  $api = array(
    'frontpage' => array(
      'operations' => array(
        'retrieve' => array(
          'help' => 'Retrieves front page',
          'callback' => '_mymodule_services_frontpage_retrieve',
          'access callback' => 'user_access',
          'access arguments' => array('access content'),
          'access arguments append' => FALSE,
          'args' => array(
            array(
              'name' => 'fn',
              'type' => 'string',
              'description' => 'Function to perform',
              'source' => array('path' => '0'),
              'optional' => TRUE,
              'default' => '0',
            ),
            array(
              'name' => 'nitems',
              'type' => 'int',
              'description' => 'Number of latest items to get',
              'source' => array('param' => 'nitems'),
              'optional' => TRUE,
              'default' => '0',
            ),
            array(
              'name' => 'since',
              'type' => 'int',
              'description' => 'Posts from the last number of days',
              'source' => array('param' => 'since'),
              'optional' => TRUE,
              'default' => '0',
            ),
          ),
        ),
      ),
    ),
  );
  return $api;
}

/**
 * Callback function for blog retrieve.
 */
function _mymodule_services_frontpage_retrieve($fn, $nitems, $timestamp) {
  // Check for mad values.
  $nitems = intval($nitems);
  $timestamp = intval($timestamp);
  return _mymodule_services_blog_items($nitems, $timestamp);
}

/**
 * Gets frontpage blog posts.
 */
function _mymodule_services_blog_items($nitems, $timestamp) {
  // Compose query.
  $query = db_select('node', 'n');
  $query->join('node_revision', 'v', '(n.nid = v.nid) AND (n.vid = v.vid)');
  $query->join('comment', 'c', 'c.nid = n.nid');
  $query->join('users', 'u', 'n.uid = u.uid');
  $query->fields('v', array('timestamp', 'title'));
  $query->addField('u', 'name', 'author');
  $query->addField('n', 'nid');
  $query->addField('n', 'title');
  $query->addField('n', 'uid');
  $query->addField('n', 'created');
  $query->addField('n', 'changed');
  $query->addField('u', 'picture');
  $query->addExpression('COUNT(c.cid)', 'comments');
  $query->condition('n.type', 'blog', '=');
  $query->groupBy('n.nid');
  // How many days ago?
  if ($timestamp) {
    $query->condition('v.timestamp', time() - ($timestamp * 60 * 60 * 24), '>');
  }
  $query->orderBy('v.timestamp', 'DESC');
  // Limited by items?
  if ($nitems) {
    $query->range(0, $nitems);
  }
  $items = $query->execute()->fetchAll();
  return $items;
}
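For reference, a resource defined this way is exposed through whichever Services endpoint you configure in the administration UI. Assuming a hypothetical endpoint at the path "api" with the REST server and JSON response formatter enabled, the retrieve operation above could be exercised like this:

curl "http://example.com/api/frontpage/0.json?nitems=5&since=7"

The first path argument maps to the 'fn' parameter (source: path 0), while nitems and since are picked up from the query string.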
Categories: Elsewhere

Jonathan Wiltshire: Getting things into Jessie (#5)

Planet Debian - Thu, 20/11/2014 - 11:30
Don’t assume another package’s unblock is a precedent for yours

Sometimes we’ll use our judgement when granting an unblock to a less-than-straightforward package. Lots of factors go into that, including the regression risk, desirability, impact on other packages (of both acceptance and refusal), and trust.

However, a judgement call on one package doesn’t automatically mean that the same decision will be made for another. Every single unblock request we get is called on its own merits.

Do by all means ask about your package in light of another. There may be cross-over that makes your change desirable as well.

Don’t take it personally if the judgement call ends up being not what you expected.

Getting things into Jessie (#5) is a post from: jwiltshire.org.uk | Flattr

Categories: Elsewhere

Stefano Zacchiroli: Thoughts on the Init System Coupling GR

Planet Debian - Thu, 20/11/2014 - 09:59
on perceived hysteria and silent sanity

As you probably already know by now, the results of the Debian init system coupling general resolution (GR) look like this:

Init system coupling GR: results (arrow from A to B means that voters preferred A to B by that margin)

Some random thoughts about them:

  • The turnout has been the highest since the 2010 DPL elections and the 2nd highest among all GRs (!= DPL elections) ever. The highest among all GRs dates back to 2004 and was about dropping non-free. In absolute terms this vote scores even better: it is the GR with the highest number of voters ever.

    Clearly there was a lot of interest within the project about this vote. The results appear to be as representative of the views of project members as we have been able to get in the second half of Debian history.

  • There is a total ordering of options (which is not always the case with our voting system). Starting with the winning option, each option in the results beats every subsequent option. The winning option ("General resolution is not required") beats the runner-up ("Support for other init systems is recommended, i.e., "you SHOULD NOT require a specific init") by a large margin: 100 votes, ~20.7% of the voters. The winning option wins over further options by increasingly large margins: 173 votes (~35.8%) against "Packages may require specific init systems if maintainers decide" (the MAY option); 176 (~36.4%) against "Packages may not require a specific init system" (the MUST NOT option); 263 (~54.5%) against "Further discussion" (the "let's keep on flaming" option).

    While judging from Debian mailing lists and news sites you might have gotten the impression that the project was evenly split on init system matters, at least w.r.t. the matter on the ballot, that doesn't seem to be the case.

  • The winning option is not as crazy as its label might imply (voting to declare that the vote was not required? WTH?). What the winning option actually says is more articulated than that; quoting from the ballot (highlight mine):

    Regarding the subject of this ballot, the Project affirms that the procedures for decision making and conflict resolution are working adequately and thus a General Resolution is not required.

    With this GR the Debian Project affirms that the procedures we have used to decide the default init system for Jessie and to arbitrate the ensuing conflicts are just fine. People might flame and troll debian-devel as much as they want (actually, I'm pretty sure we would all like them to stop, but that matter wasn't on the ballot so you'll have to take my word for it). People might write blog posts and make headlines corroborating the impression that Debian is still being torn apart by ongoing init system battles. But this vote says instead that the large majority of project members thinks our decision making and conflict-arbitration procedures, which most prominently include the Debian Technical Committee, have served us "adequately" well over the past troubled months.

    That of course doesn't mean that everyone in Debian is happy about every single recent decision, otherwise we wouldn't have had this GR in the first place. But it does mean that we consider our procedures good enough to (a) avoid getting in their way with a project-wide vote, and (b) keep on trusting them for the foreseeable future.

  • [ It is not the main focus of this post, but if you care specifically about the implications of this GR on SystemD adoption in Debian, I recommend reading this excellent GR commentary by Russ Allbery. ]

My take home message is that we are experiencing a huge gap between the public perception of the state of Debian (both from within and from without the project) and the actual beliefs of the silent majority of people that make Debian with their work, day after day.

In part this is old news. The most "senior" members of the project will remember that the topic of "vocal minorities vs silent majority" was a recurrent one in Debian 10+ years ago, when flames were periodically ravaging the project. Since then Debian has grown a lot though, and we are now part of a much larger and varied ecosystem. We are now at a scale at which there are plenty of FOSS "mass-media" covering daily what happens in Debian, inducing feedback loops with our own perception of ourselves which we do not fully grok yet. This is a new factor in the perception gap. This situation is not intrinsically bad, nor is there blame to assign here: after all, influential bloggers, news sites, etc., just do their job. And their attention also testifies to the huge interest that there is around Debian and our choices.

But we still need to adapt and learn to take perceived hysteria with a pinch (or two) of salt. It might just be time for our decennial check-up. Time to remind ourselves that our ways of doing things might in fact still be much more sane than we sometimes tend to believe.

We moved on 10+ years ago, after monumental flames. It looks like we are now ready to move on again, putting The Era of the Great SystemD Hysteria™ behind us.

Categories: Elsewhere

Matthew Palmer: Multi-level prefix delegation is not a myth! I've seen it!

Planet Debian - Thu, 20/11/2014 - 06:00

Unless you’ve been living under a firewalled rock, you know that IPv6 is coming. There’s also a good chance that you’ve heard that IPv6 doesn’t have NAT. Or, if you pay close attention to the minutiae of IPv6 development, you’ve heard that IPv6 does have NAT, but you don’t have to (and shouldn’t) use it.

So let’s say we’ll skip NAT for IPv6. Fair enough. However, let’s say you have this use case:

  1. A bunch of containers that need Internet access…

  2. That are running in a VM…

  3. On your laptop…

  4. Behind your home router!

For IPv4, you’d just layer on the NAT, right? While SIP and IPsec might have kittens trying to work through three layers of NAT, for most things it’ll Just Work.

In the Grand Future of IPv6, without NAT, how the hell do you make that happen? The answer is “Prefix Delegation”, which allows routers to “delegate” management of a chunk of address space to downstream routers, and allow those downstream routers to, in turn, delegate pieces of that chunk to downstream routers.

In the case of our not-so-hypothetical containers-in-VM-on-laptop-at-home scenario, it would look like this:

  1. My “border router” (a DNS-323 running Debian) asks my ISP for a delegated prefix, using DHCPv6. The ISP delegates a /56 [1]. One /64 out of that is allocated to the network directly attached to the internal interface, and the rest goes into “the pool”, as /60 blocks (so I’ve got 15 of them to delegate, if required).

  2. My laptop gets an address on the LAN between itself and the DNS-323 via stateless auto-addressing (“SLAAC”). It also uses DHCPv6 to request one of the /60 blocks from the DNS-323. The laptop puts one /64 from that block as the address space for the “virtual LAN” (actually a Linux bridge) that connects the laptop to all my VMs, and puts the other 15 /64 blocks into a pool for delegation.

  3. The VM that will be running the set of containers under test gets an address on the “all VMs virtual LAN” via SLAAC, and then requests a delegated /64 to use for the “all containers virtual LAN” (another bridge, this one running on the VM itself) that the containers will each connect to themselves.

Now, almost all of this Just Works. The current releases of ISC DHCP support prefix delegation just fine, and a bit of shell script plumbing between the client and server seals the deal – the client needs to rewrite the server’s config file to tell it the netblock from which it can delegate.
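For the curious, the server side of that plumbing is essentially a pool of /64s declared with a prefix6 statement. A hypothetical fragment of the laptop's dhcpd.conf (all addresses invented) might look like:

# The /64 shared with the VMs, plus the pool of /64s available for
# delegation out of the /60 the laptop itself received from upstream.
subnet6 2001:db8:0:10::/64 {
    range6 2001:db8:0:10::1000 2001:db8:0:10::1fff;
    prefix6 2001:db8:0:11:: 2001:db8:0:1f:: /64;
}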

Except for one teensy, tiny problem – routing. When the DHCP server delegates a netblock to a particular machine, the routing table needs to get updated so that packets going to that netblock actually get sent to the machine the netblock was delegated to. Without that, traffic destined for the containers (or the VM) won’t actually make it to its destination, and a one-way Internet connection isn’t a whole lot of use.
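Concretely, when a /64 is delegated to the VM, something on the laptop has to install the equivalent of the following route (interface name and addresses are examples only), pointing the delegated block at the VM's link-local address:

ip -6 route add 2001:db8:0:12::/64 via fe80::21c:42ff:fe00:1 dev br0

Without that route, packets destined for the containers are simply dropped at the laptop.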

I cannot understand why this problem hasn’t been tripped over before. It’s absolutely fundamental to the correct operation of the delegation system. Some people advocate running a dynamic routing protocol, but that’s a sledgehammer to crack a nut if ever I saw one.

Actually, I know this problem has been tripped over before, by OpenWrt. Their solution, however, was to use a PHP script to scan logfiles and add routes. Suffice it to say, that wasn’t an option I was keen on exploring.

Instead, I decided to patch ISC DHCP so that the server can run an external script to add the necessary routes, and perhaps modify firewall rules – and also to reverse the process when the delegation is released (or expired). If anyone else wants to play around with it, I’ve put it up on Github. I don’t make any promises that it’s the right way to do it, necessarily, but it works, and the script I’ve added in contrib/prefix-delegation-routing.rb shows how it can be used to good effect. By the way, if anyone knows how pull requests work over at ISC, drop me a line. From the look of their website, they don’t appear to accept (or at least encourage) external contributions.

So, that’s one small patch for DHCP, one giant leap for my home network.

  [1] The standard recommendation is for ISPs to delegate each end-user customer a /48 (giving 65,536 /64 networks); my ISP is being a little conservative in “only” giving me 256 /64s. It works fine for my purposes, but if you’re an ISP getting set for deploying IPv6, make life easy on your customers and give them a /48.

Categories: Elsewhere

Russ Allbery: Interpreting the Debian init system GR

Planet Debian - Thu, 20/11/2014 - 05:42

I originally posted this in a thread on debian-private, but on further reflection it seems appropriate for a broader audience. So I'm posting it here, as well as on debian-project.

There is quite a lot of discussion in various places about what the recent GR result means. Some are concluding that systemd won in some way that implies Debian is not going to support other init systems, or at least that support for other init systems is in immediate danger. A lot of that analysis concludes that the pro-systemd "side" in Debian won some sort of conclusive victory.

I have a different perspective.

I think we just had a GR in which the Debian developer community said that we, as a community, would like to work through all of the issues around init systems together, as a community, rather than having any one side of the argument win unambiguously and impose its views on those who disagree.

There were options on the ballot that clearly required loose coupling and that clearly required tight coupling. The top two options did neither of those things. The second-highest option said, effectively, that we should feel free to exercise our technical judgement for our own packages, but should do so with an eye to enabling people to make different choices, and should merge their changes and contributions where possible. The highest option said that we don't even want to say that, and would instead prefer to work this whole thing out through discussion, respect, consensus, and mutual support, without giving *anyone* a clear mandate or project-wide blessing for their approach.

In other words, the way I choose to look at this GR is that the project as a whole just voted to take away the sticks that we were using to beat each other with.

In a way, we just chose the *hardest* option. We didn't make a simplifying technical decision that provides clear guidance to everyone. Instead, we made a complicating social decision that says that, sorry, there's no short cut to avoid having to talk to each other, respect each other's views, and try to reach workable collaborative compromises. Even though it's really hard, even though everyone is raw and upset, that's what the project as a whole is asking us to do.

Are we up to the challenge?

Categories: Elsewhere

PreviousNext: Community gathering at DrupalCamp Melbourne

Planet Drupal - Thu, 20/11/2014 - 03:51

It's been a while since the last DrupalCamp in Melbourne, so the community came together recently to share what they know. Here's a brief wrap-up of the two-day event.

Categories: Elsewhere

Paul Booker: 10 commands that could help you to survive Drupageddon

Planet Drupal - Thu, 20/11/2014 - 01:18

It's been more than a month since Drupageddon, so I thought I would post an update to my previous post.


Commands that help with auditing:

Showing files that have changed on the live server:

git status

Looking for code execution attempts via menu_router:

select * from menu_router where access_callback = 'file_put_contents'

Another possible code execution attempt via menu_router:

select * from menu_router where access_callback = 'assert';

Showing which files are on the live server and not in version control:

diff -r docroot repo | grep 'Only in docroot'

Looking for PHP files in the files directory:

find . -path "*php"

Looking for additional roles and users:

select * from role;
select * from users_roles where rid = 123;

Checking the amount of time between when a user logged into your site and their most recent page visit:

select (s.timestamp - u.login) / 60 / 60 / 24 AS days_since_login, u.uid from sessions s inner join users u on s.uid = u.uid;



Commands that can help with recovery:

Apply the patch. Hotfix: (SA-CORE-2014-005)

curl https://www.drupal.org/files/issues/SA-CORE-2014-005-D7.patch | patch -p1

End active sessions, i.e., log everyone out:

TRUNCATE TABLE sessions;

Updating passwords:

update users set pass = concat('XYZ', sha(concat(pass, md5(rand()))));

If you need help regarding the recent Drupal vulnerability, feel free to contact me.

P.S.

Latest security advisory was today.

Categories: Elsewhere

Shomeya: How to Level Up from Nice Guy Dev to Awesome Guy Dev

Planet Drupal - Thu, 20/11/2014 - 01:05

If Barbie: I Can Be a Computer Engineer taught us anything, it taught us that Steven and Brian are nice guys. They just want to help, they know how to fix it, and they are there just when you need them to be. And worst of all, they don't mean anything by it.

So what's a nice guy to do? You care, you retweet the awesomest feminist blogs, you were ON it during #gamergate. But on a human-interaction level, how does it go? Here are some ways that you can level up from just that nice guy that I don't call out on everything, but who secretly makes me sad, to the awesome guy who makes my day, well... awesome.

Read more
Categories: Elsewhere

Thomas Goirand: Rotten tomatoes

Planet Debian - Thu, 20/11/2014 - 00:45

There are many ways to interpret the last GR. The way I see it is how Joey hoped Debian was: the outcome of the poll shows that we don't want to make technical decisions by voting. At the beginning of this GR I was supportive of it, and thought it was a good thing to enforce the rule that we care for non-systemd setups. I have slowly changed my mind, though, and I think that this final outcome is awesome. Science (and computer science) has never been about voting, otherwise the earth would be flat, without drifting continents.

So my hope is that the Debian project as a whole will allow itself to make mistakes, to try iteratively, to err, and to go back on any technical decision if it doesn't make sense anymore. When asked something, it's ok to reply: "I don't know", and it should be ok for the Debian project to have this alternative as one of the possible answers. I'm convinced that refusing to make a drastic choice at this point in time was exactly what we needed to do. And my hope is that Joey comes back after he realizes that we've all understood and embraced his position that science cannot be governed by polls.

For Stretch, I'm sure there's going to be a lot of new alternatives. Maybe uselessd, eudev and others. Maybe I'll have a bit of time to work on OpenRC Debian integration myself (hum… I'm dreaming here…). Maybe something else. Let's just wait. We have more than 300 bugs to fix before Jessie can be released. Let's happily work on that together, and forget about the init systems for a while…

P.S: Just to be on the safe side: the rotten tomatoes image was not about criticizing the people who started the poll, whom I respect a lot, especially Ian, who I am convinced is trying to do his best for Debian (hug).

Categories: Elsewhere

Drupal Watchdog: Drush: The Swiss Army Knife for Drupal

Planet Drupal - Wed, 19/11/2014 - 23:52
Article

Hello again, young MacGyver!

In the previous issue you learned how to install Drush, Drupal, and contributed modules. If you missed it, make sure you go back and read Part One from the previous issue.

Updates

Now that you've successfully installed Drupal and extended it with some awesome contributed modules, it's time to apply a few updates. With Drush, this is far easier than any method you might currently be using.

Let's get started: Make sure you are working from the root directory of your website. That would be the directory where you find index.php, and I'm going to assume that location for the remainder of this article.

Issue the following command:

drush pm-update

That command will check for new versions of core, themes, and all the contributed modules that are enabled on your site. A list of all available updates will be shown on the screen. Review the list and then press “y” at the prompt if you wish to proceed with the updates.

If you proceed with the updates, Drush will make a backup copy of all the out-of-date packages, download the new ones, and then run database updates, if any are required. It's all very quick and you don't even have to open an FTP client.

Alas, sometimes things go awry; often, very awry. That's why Drush stores a backup copy of the updated packages for you. Should an update fail, it will restore the previous versions and notify you there was a problem. Or, if you need to restore manually, you can find the backups in your user's home directory under “drush-backups”.

Now let's say you only want to update Drupal, but none of the contributed projects. Easy enough: this time we only check for Drupal core. Let's use the shorter version of the command, which I prefer:

drush up drupal

The command “up” is short for “pm-update”. As in the first example, Drush will backup the installed version, replace it with the latest, and then run database updates, if any are required. In this case, we specified “drupal”, so Drush will only check for updates for Drupal core.
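The same pattern works for a single contributed project. For example, using the Views module as a stand-in, the first command below checks only that project, and the second (supported in recent Drush releases) limits the report to security updates across the whole site:

drush up views
drush up --security-only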

Categories: Elsewhere

Pages

Subscribe to jfhovinne aggregator - Elsewhere