Feed aggregator

Bálint Réczey: Proposing amd64-hardened architecture for Debian

Planet Debian - Tue, 15/04/2014 - 12:02

Facing last week’s Heartbleed bug, the need to improve the security of our systems became more apparent than usual. Debian has widely used methods for hardening packages at build time and guidelines for improving the security of default installations.

Employing such methods usually comes at a cost, for example slower execution of binaries due to additional checks, or extra configuration steps when setting up a system. Balancing usability against security, Debian chose an approach intended to satisfy most users: using C/C++ features which only slightly decrease the execution speed of built binaries, and using reasonable defaults in package installations.

All the architectures supported by Debian aim to use the same methods for enhancing security, but it does not have to stay that way. Amd64 is the most widely used Debian architecture according to popcon, and amd64 hardware comes with powerful CPUs. I think there is a significant number of people (myself among them :-)) who would happily use a version of Debian with more security features enabled by default, sacrificing some CPU power and installing and configuring a few additional packages.

My proposal for serving those security-focused users is to introduce a new architecture targeting amd64 hardware, but with more security-related C/C++ features turned on for every package (currently hardening has to be enabled by the maintainers in some way) through compiler flags as a start.
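For context, this is roughly how an individual package opts in to hardening today via the dpkg-buildflags machinery, as a minimal debian/rules sketch (under the proposal, flags like these would simply become the architecture-wide default):

#!/usr/bin/make -f
# Enable every hardening option dpkg-buildflags knows about
# (stack protector, fortify, format warnings, relro, PIE, bindnow).
export DEB_BUILD_MAINT_OPTIONS = hardening=+all

%:
	dh $@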

Introducing the new architecture would also let package maintainers enable additional dependencies and build rules selectively for the new architecture, improving security further. On the users’ side, the advantage of a separate security-enhanced architecture over a Debian derivative is the ability to install a set of security-enhanced packages using multiarch. You could have a fast amd64 installation as a base and run Apache, or any other sensitive server, from the amd64-hardened packages!
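Mixing the two would lean on standard dpkg/apt multiarch mechanics, roughly like this (hypothetical, since no amd64-hardened dpkg architecture exists today; the command shapes are ordinary multiarch usage):

# add the proposed port as a foreign architecture
dpkg --add-architecture amd64-hardened
apt-get update
# install only the sensitive service from the hardened package set
apt-get install apache2:amd64-hardened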

I have also sent the proposal to the debian-devel list for discussion. Please join the discussion there or leave a comment here.

Categories: Elsewhere

David Herron: Static HTML website builders (AkashaCMS, etc.) slash web hosting costs to the bone

Planet Drupal - Tue, 15/04/2014 - 10:52

Today's web is supposedly about fancy software on both server and client, building amazingly flexible applications merging content and functionality from anywhere.  What, then, is the role of old-school HTML websites?  In particular, why am I wasting my time building AkashaCMS and not building websites with Drupal?


Categories: Elsewhere

flink: "But that's easy in Drupal, isn't it?"

Planet Drupal - Tue, 15/04/2014 - 10:17

Many a time a customer’s casual challenge of “But that’s easy in Drupal, isn’t it?” has resulted in us taking up the gauntlet and putting in some hard yards to indeed “make that easy”.

Clearly 8,500 D7 modules on drupal.org is not enough.

As we were working on a government project last year the question popped up again and we had to rise to the occasion. The project was eventually put on ice, but during its course another module baby was born. We called it Views Aggregator Plus and set her free in Drupalland. It seemed a waste not to share it.

One thing we've learned about releasing modules: you have no idea upfront which ones will thrive. We thought we had created a niche application and planned for one install, namely that single install on that site that never happened. But the module usage statistics prove that it has found applications way beyond its initial purpose. Not bad for a module that wasn't meant to see the light outside that one web site!

As so often in Drupal, further enhancements were very much driven by the needs the community raised, thus taking the module into directions we’d never anticipated, like its application as a Webform submissions post-processor.

For your inspiration we contacted some of the early adopters we came to know through the module's issue queue and collected from them some examples of how Drupal site builders all over the world get value out of Views Aggregator Plus.

We hope you enjoy the screenshots and explanations below. Among these may be just that thing you also needed to be easy in Drupal.

Special thanks to Nick Veenhof of Acquia and Mads Bordinggaard Christensen of Rillbar Records who happily shared screenshots and application stories.

Example 1

by Nick Veenhof of Acquia

"My View displays a list of Search Cores. I configured the module to apply grouping on the customer name. The aggregation functions applied on remaining fields were summations on the column cells of selected fields. This is how the Document Count and Query Count columns were created."

Example 2

by Mads Bordinggaard Christensen of Rillbar Records

"My company is primarily a wholesale distributor of music on vinyl and CD’s. After each quarter I have to send out statements to all the different Record labels who deliver their products to us. The attached PDF is an example of a dummy record company, presenting the amount of sold items within the period.

Dummy Records have 2 different products in their catalogue. The CD has sold 32 items and the vinyl 20 within the period. These sales are, however, spread across 6 different orders. So instead of having 6 table rows representing each order, I can use Views Aggregator Plus to compress and group the rows using a unique value, in this case the product SKU.

Before using Views Aggregator Plus each order of the given product had its own row, and it quickly became very unclear and messy. Some products sell a lot of items, but maybe only 1 or 2 per order, so it can quickly result in a lot of rows (especially for the record labels, who had to sum the number of items sold in order to invoice me).

So Views Aggregator Plus is a really important factor in creating these statements for our suppliers. Apart from that, it allows us to quickly pull an exact count of items sold from the database within a given period (using the date range as a contextual filter)."

Example 3

by Rik de Boer, founder of flink

Lastly the use-case for which Views Aggregator Plus was initially developed.

"I've illustrated the construction of the VAP View (bottom) through two intermediate phases, both normal Views. The top one shows a number of government projects (names omitted) by industry, their budgets and their durations (a duration is a Date field with start and end values).

After enabling the Views PHP module, a PHP code snippet field (see below) was added to turn the date ranges into more readable keywords: “not started”, “underway” or “closed”. A copy of the PHP field was added, thus creating an identical column (with a different title) in preparation for the next step. The duration column was excluded from display.

For the final View the format was flicked from “Table” to “Table with aggregation options”. The “Industry” field was grouped (compressed) and tallied, the budget value field was group-summed.

“No. projects underway” had the “Count (having regexp)” aggregation function applied with “underway” as the regular expression to count. The same aggregation function was applied to “No. projects closed” except that this time the counting parameter was “closed".

To make the “Totals” row, column sum functions were added for all fields except the first. And finally sorting was enabled… voila!"

<?php
$start_date = strtotime($data->field_field_duration[0]['raw']['value']);
$end_date = strtotime($data->field_field_duration[0]['raw']['value2']);
echo time() < $start_date ? 'not started' : (time() < $end_date ? 'underway' : 'closed');
?>
Categories: Elsewhere

ThinkShout: Refactoring The iATS Drupal Commerce Module

Planet Drupal - Tue, 15/04/2014 - 09:00

Last month, we wrapped up a project for the nonprofit-oriented payment processor iATS Payments. iATS Payments wanted to invest in gaining wider adoption of their services and enlisted ThinkShout's help in building a PHP wrapper for their existing SOAP API.

Being a bunch of software engineers who have implemented our fair share of APIs (both good and bad), we knew we had to achieve certain goals if we were going to ease the adoption of iATS Payments within PHP applications:

  • Comprehensive: The wrapper handles all communication with the iATS Payments SOAP API, validation of API calls, and error handling.
  • Well documented: We made use of phpDocumentor to generate easily browsable documentation from our code comments.
  • Reliable: Backed by a comprehensive PHPUnit test suite covering every API call.

With the new PHP wrapper finished and unit tests passing, our attention shifted to the project we felt would most benefit from the work we'd done: the Commerce iATS Drupal module. This module leverages Drupal Commerce to facilitate payment processing via iATS Payments on any Drupal website.

We had already integrated Commerce iATS into some of our clients' websites, so we knew it was a great module, but it was written before there was a standard iATS Payments PHP wrapper and contained some unwieldy code that could be eliminated by using the new PHP wrapper. With support from the community and sponsorship from iATS, we rewrote the module, drastically reducing complexity, which any engineer can appreciate, and improved stability, which site owners love even more. We're excited to replicate the success of our partnership with MailChimp, which created a win for the community, the vendor, and, yes, ThinkShout.

Refactoring Commerce iATS

In refactoring Commerce iATS, we didn't just plug in the PHP wrapper and call it a day. While Commerce iATS was originally written with support for only credit card payments, our PHP wrapper supports all payment methods provided by iATS Payments and we wanted to make sure Commerce iATS had room to grow and take advantage of those payment methods.

Some of the problems

Looking through the code of the existing Commerce iATS module, we realized the current design would not scale well as we added additional payment methods.

As an example, take a look at the 2.x-dev release of Commerce iATS.

Here the function commerce_iats_soap_process_submit_form_submit() handles a lot more logic than a form submit handler ideally would. Worse, a lot of the code in commerce_iats_soap_process_submit_form_submit() is duplicated when commerce_iats_customer_code_charge_submit_form_submit() is called.

The refactor

We set out to redesign the module's architecture and rebuild it with modularity and expansion in mind. Here's what we did.

Created a new standard payment processing function
  • This function handles the API call, response handling, transaction creation and logging.
  • To handle multiple payment methods, the function accepts a callback function as a parameter. This callback function is the function that makes the API call via the PHP Wrapper and returns the response.

The first lines of commerce_iats_process_payment() demonstrate how the callback function is used:

<?php
function commerce_iats_process_payment($payment_method, $payment_data, $order, $charge, $payment_callback) {
  // Process the payment using the defined callback method.
  $response = $payment_callback($payment_method, $payment_data, $order, $charge);

Broke payment methods out into their own include files

As an example, here's the credit card payment method. Each payment method file contains these standard Commerce functions (where credit_card is the payment method):

  • commerce_iats_credit_card_settings_form()
  • commerce_iats_credit_card_submit_form()
  • commerce_iats_credit_card_submit_form_validate()
  • commerce_iats_credit_card_submit_form_submit()

Then we added our own callback function, commerce_iats_process_credit_card_payment().

The callback function handles building the API request and getting a response from the API. To show how this works, here's a line from commerce_iats_credit_card_submit_form_submit():

<?php
return commerce_iats_process_payment($payment_method, $payment_data, $order, $charge, 'commerce_iats_process_credit_card_payment');

As you can see, all the payment information from the form submit handler is being passed into commerce_iats_process_payment(). That function then calls the callback function commerce_iats_process_credit_card_payment() to make the API call and get the response.

This design is very easy to extend and allows us to add as many additional payment methods as we need in a very clean way. We were able to use this design to implement Commerce Card on File as a submodule of Commerce iATS, eliminating that dependency from the base module.
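To make that extension point concrete, here is a minimal sketch of what a future payment method could look like under this design; the ACH/EFT function names are invented for illustration and are not the module's actual API:

<?php
/**
 * Payment callback (hypothetical): builds the API request for an
 * ACH/EFT charge via the PHP wrapper and returns the raw response.
 * Response handling, transaction creation and logging all stay in
 * commerce_iats_process_payment().
 */
function commerce_iats_process_ach_eft_payment($payment_method, $payment_data, $order, $charge) {
  // Call the wrapper's ACH/EFT service here and return its response.
}

/**
 * Form submit handler (hypothetical): delegates to the shared
 * processor, passing the name of the payment callback above.
 */
function commerce_iats_ach_eft_submit_form_submit($payment_method, $pane_form, $pane_values, $order, $charge) {
  return commerce_iats_process_payment($payment_method, $pane_values, $order, $charge, 'commerce_iats_process_ach_eft_payment');
}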

Roadmap and next steps

All our work on Commerce iATS is currently available in the 2.0-beta1 release. Please take a look and let us know if you have any feedback.

We're already hard at work with our partners at iATS Payments to integrate more of their payment processing facilities into the Commerce iATS module. While the module currently only supports credit card payments, ACH/EFT and direct debit payments will arrive before DrupalCon Austin. Speaking of which, both ThinkShout and iATS Payments will be attending and spending some time at the iATS booth, number 508. Come find us to say hello and talk some e-commerce.

Keep an eye on the Commerce iATS project page and this blog for more updates.

Categories: Elsewhere

Andrew Pollock: [life] Day 77: Port of Brisbane tour

Planet Debian - Tue, 15/04/2014 - 07:14

Sarah dropped Zoe around this morning at about 8:30am. She was still a bit feverish, but otherwise in good spirits, so I decided to stick with my plan for today, which was a tour of the Port of Brisbane.

Originally the plan had been to do it with Megan and her dad, Jason, but Jason had some work to do on his house, so I offered to take Megan with us to give him more time to work on the house uninterrupted.

I was casting around for something to do to pass the time until Jason dropped Megan off at 10:30am, and I thought we could do some foot painting. We searched high and low for something I could use as a foot-washing bucket, other than the mop bucket, which I didn't want to use because of potential chemical residue. I gave up because I couldn't find anything suitable, and we watched a bit of TV instead.

Jason dropped Megan around, and we immediately jumped in the car and headed out to the Port. I missed the on ramp for the M4 from Lytton Road, and so we took the slightly longer Lytton Road route, which was fine, because we had plenty of time to kill.

The plan was to get there for about 11:30am, have lunch in the observation cafe on the top floor of the visitor's centre building, and then get on the tour bus at 12:30pm. We ended up arriving much earlier than 11:30am, so we looked around the foyer of the visitor's centre for a bit.

It was quite a nice building. The foyer area had some displays, but the most interesting thing (for the girls) was an interactive webcam of the shore bird roost across the street. There was a tablet where you could control the camera and zoom in and out on the birds roosting on a man-made island. That passed the time nicely. One of the staff also gave the girls Easter eggs as we arrived.

We went up to the cafe for lunch next. The view was quite good from the 7th floor. On one side you could look out over the bay, notably Saint Helena Island, and on the other side you got quite a good view of the port operations and the container park.

Lunch didn't take all that long, and the girls were getting a bit rowdy, running around the cafe, so we headed back downstairs to kill some more time looking at the shore birds with the webcam, and then we boarded the bus.

It was just the three of us and three other adults, which was good. The girls were pretty fidgety, and I don't think they got that much out of it. The tour didn't really go anywhere that you couldn't go yourself in your own car, but you did get running commentary from the driver, which made all the difference. The girls spent the first 5 minutes trying to figure out where his voice was coming from (he was wired up with a microphone).

The thing I found most interesting about the port operations was the amount of automation. There were three container terminals, and the two operated by DP World and Hutchison Ports employed fully automated overhead cranes for moving containers around. Completely unmanned, they'd go pick a container from the stack and place it on a waiting truck below.

What I found even more fascinating was the Patrick terminal, which used fully automated straddle carriers. These would, completely autonomously, move about the container park, pick up a container, and then move over to a waiting truck in the loading area and place it on the truck. There were 27 of these things moving around the container park at a fairly decent clip.

Of course the girls didn't really appreciate any of this, and halfway through the tour Megan was busting to go to the toilet, despite going before we started. I was worried about her having an accident before we got back, but she didn't, so it was all good.

I'd say in terms of a successful excursion, I'd score it about a 4 out of 10, because the girls didn't really enjoy the bus tour all that much. I was hoping we'd see more ships, but there weren't many (if any) in port today. They did enjoy the overall outing. Megan spontaneously thanked me as we were leaving, which was sweet.

We picked up the blank cake I'd ordered from Woolworths on the way home, and then dropped Megan off. Zoe wanted to play, so we hung around for a little while before returning home.

Zoe watched a bit more TV while we waited for Sarah to pick her up. Her fever picked up a bit more in the afternoon, but she was still very perky.

Categories: Elsewhere

Dirk Eddelbuettel: BH release 1.54.0-2

Planet Debian - Tue, 15/04/2014 - 03:47
Yesterday's release of RcppBDT 0.2.3 led to an odd build error. If one used, at the same time, a 32-bit OS, a compiler as recent as g++ 4.7, and the Boost 1.54.0 headers (directly or via the BH package), then the file lexical_cast.hpp barked and failed to compile for lack of a 128-bit integer type (which is not a surprise on a 32-bit OS).

After looking at this for a bit, and at a related bug report, I came up with a simple fix (which I mentioned in an update to the RcppBDT 0.2.3 release post). Sleeping on it, and comparing against the Boost 1.55 file, showed that the hunch was right, and I have since made a new release 1.54.0-2 of the BH package which contains the fix.
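The shape of the fix, as a sketch (the exact change is in the package diff; this just illustrates keying on Boost's own feature-test macro rather than assuming any recent g++ provides a 128-bit type):

// Sketch only: decide on 128-bit support the way the rest of Boost
// does. BOOST_HAS_INT128 is defined by Boost.Config only when
// __int128 is actually usable, which it is not on a 32-bit OS.
#include <boost/config.hpp>

#if defined(BOOST_HAS_INT128)
typedef boost::int128_type    widest_int;  // modern g++, 64-bit OS
#else
typedef boost::long_long_type widest_int;  // e.g. a 32-bit OS
#endif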

Changes in version 1.54.0-2 (2014-04-14)
  • Bug fix to lexical_cast.hpp which now uses the test for INT128 which the rest of Boost uses, consistent with Boost 1.55 too.

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

Comments and suggestions are welcome via the mailing list or issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: Elsewhere

Colin Watson: Porting GHC: A Tale of Two Architectures

Planet Debian - Tue, 15/04/2014 - 03:45

We had some requests to get GHC (the Glasgow Haskell Compiler) up and running on two new Ubuntu architectures: arm64, added in 13.10, and ppc64el, added in 14.04. This has been something of a saga, and has involved rather more late-night hacking than is probably good for me.

Book the First: Recalled to a life of strange build systems

You might not know it from the sheer bulk of uploads I do sometimes, but I actually don't speak a word of Haskell and it's not very high up my list of things to learn. But I am a pretty experienced build engineer, and I enjoy porting things to new architectures: I'm firmly of the belief that breadth of architecture support is a good way to shake out certain categories of issues in code, that it's worth doing aggressively across an entire distribution, and that, even if you don't think you need something now, new requirements have a habit of coming along when you least expect them and you might as well be prepared in advance. Furthermore, it annoys me when we have excessive noise in our build failure and proposed-migration output and I often put bits and pieces of spare time into gardening miscellaneous problems there, and at one point there was a lot of Haskell stuff on the list and it got a bit annoying to have to keep sending patches rather than just fixing things myself, and ... well, I ended up as probably the only non-Haskell-programmer on the Debian Haskell team and found myself fixing problems there in my free time. Life is a bit weird sometimes.

Bootstrapping packages on a new architecture is a bit of a black art that only a fairly small number of relatively bitter and twisted people know very much about. Doing it in Ubuntu is specifically painful because we've always forbidden direct binary uploads: all binaries have to come from a build daemon. Compilers in particular often tend to be written in the language they compile, and it's not uncommon for them to build-depend on themselves: that is, you need a previous version of the compiler to build the compiler, stretching back to the dawn of time where somebody put things together with a big magnet or something. So how do you get started on a new architecture? Well, what we do in this case is we construct a binary somehow (usually involving cross-compilation) and insert it as a build-dependency for a proper build in Launchpad. The ability to do this is restricted to a small group of Canonical employees, partly because it's very easy to make mistakes and partly because things like the classic "Reflections on Trusting Trust" are in the backs of our minds somewhere. We have an iron rule for our own sanity that the injected build-dependencies must themselves have been built from the unmodified source package in Ubuntu, although there can be source modifications further back in the chain. Fortunately, we don't need to do this very often, but it does mean that as somebody who can do it I feel an obligation to try and unblock other people where I can.

As far as constructing those build-dependencies goes, sometimes we look for binaries built by other distributions (particularly Debian), and that's pretty straightforward. In this case, though, these two architectures are pretty new and the Debian ports are only just getting going, and as far as I can tell none of the other distributions with active arm64 or ppc64el ports (or trivial name variants) has got as far as porting GHC yet. Well, OK. This was somewhere around the Christmas holidays and I had some time. Muggins here cracks his knuckles and decides to have a go at bootstrapping it from scratch. It can't be that hard, right? Not to mention that it was a blocker for over 600 entries on that build failure list I mentioned, which is definitely enough to make me sit up and take notice; we'd even had the odd customer request for it.

Several attempts later and I was starting to doubt my sanity, not least for trying in the first place. We ship GHC 7.6, and upgrading to 7.8 is not a project I'd like to tackle until the much more experienced Haskell folks in Debian have switched to it in unstable. The porting documentation for 7.6 has bitrotted more or less beyond usability, and the corresponding documentation for 7.8 really isn't backportable to 7.6. I tried building 7.8 for ppc64el anyway, picking that on the basis that we had quicker hardware for it and it didn't seem likely to be particularly more arduous than arm64 (ho ho), and I even got to the point of having a cross-built stage2 compiler (stage1, in the cross-building case, is a GHC binary that runs on your starting architecture and generates code for your target architecture) that I could copy over to a ppc64el box and try to use as the base for a fully-native build, but it segfaulted incomprehensibly just after spawning any child process. Compilers tend to do rather a lot, especially when they're built to use GCC to generate object code, so this was a pretty serious problem, and it resisted analysis. I poked at it for a while but didn't get anywhere, and I had other things to do so declared it a write-off and gave up.

Book the Second: The golden thread of progress

In March, another mailing list conversation prodded me into finding a blog entry by Karel Gardas on building GHC for arm64. This was enough to be worth another look, and indeed it turned out that (with some help from Karel in private mail) I was able to cross-build a compiler that actually worked and could be used to run a fully-native build that also worked. Of course this was 7.8, since as I mentioned cross-building 7.6 is unrealistically difficult unless you're considerably more of an expert on GHC's labyrinthine build system than I am. OK, no problem, right? Getting a GHC at all is the hard bit, and 7.8 must be at least as capable as 7.6, so it should be able to build 7.6 easily enough ...

Not so much. What I'd missed here was that compiler engineers generally only care very much about building the compiler with older versions of itself, and if the language in question has any kind of deprecation cycle then the compiler itself is likely to be behind on various things compared to more typical code since it has to be buildable with older versions. This means that the removal of some deprecated interfaces from 7.8 posed a problem, as did some changes in certain primops that had gained an associated compatibility layer in 7.8 but nobody had gone back to put the corresponding compatibility layer into 7.6. GHC supports running Haskell code through the C preprocessor, and there's a __GLASGOW_HASKELL__ definition with the compiler's version number, so this was just a slog tracking down changes in git and adding #ifdef-guarded code that coped with the newer compiler (remembering that stage1 will be built with 7.8 and stage2 with stage1, i.e. 7.6, from the same source tree). More inscrutably, GHC has its own packaging system called Cabal which is also used by the compiler build process to determine which subpackages to build and how to link them against each other, and some crucial subpackages weren't being built: it looked like it was stuck on picking versions from "stage0" (i.e. the initial compiler used as an input to the whole process) when it should have been building its own. Eventually I figured out that this was because GHC's use of its packaging system hadn't anticipated this case, and was selecting the higher version of the ghc package itself from stage0 rather than the version it was about to build for itself, and thus never actually tried to build most of the compiler. Editing ghc_stage1_DEPS in ghc/stage1/package-data.mk after its initial generation sorted this out. One late night building round and round in circles for a while until I had something stable, and a Debian source upload to add basic support for the architecture name (and other changes which were a bit over the top in retrospect: I didn't need to touch the embedded copy of libffi, as we build with the system one), and I was able to feed this all into Launchpad and watch the builders munch away very satisfyingly at the Haskell library stack for a while.
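The __GLASGOW_HASKELL__ shims mentioned above follow a simple pattern; here is a made-up but self-contained example of the shape (not actual GHC source):

{-# LANGUAGE CPP #-}
module Compat (compilerNote) where

-- One source tree, two compilers: stage1 is built with GHC 7.8,
-- stage2 with the 7.6 compiler it produces, so both paths must build.
compilerNote :: String
#if __GLASGOW_HASKELL__ >= 708
compilerNote = "GHC >= 7.8: use the new interfaces directly"
#else
compilerNote = "GHC < 7.8: fall back to the compatibility layer"
#endif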

This was all interesting, and finally all that work was actually paying off in terms of getting to watch a slew of several hundred build failures vanish from arm64 (the final count was something like 640, I think). The fly in the ointment was that ppc64el was still blocked, as the problem there wasn't building 7.6, it was getting a working 7.8. But now I really did have other much more urgent things to do, so I figured I just wouldn't get to this by release time and stuck it on the figurative shelf.

Book the Third: The track of a bug

Then, last Friday, I cleared out my urgent pile and thought I'd have another quick look. (I get a bit obsessive about things like this that smell of "interesting intellectual puzzle".) slyfox on the #ghc IRC channel gave me some general debugging advice and, particularly usefully, a reduced example program that I could use to debug just the process-spawning problem without having to wade through noise from running the rest of the compiler. I reproduced the same problem there, and then found that the program crashed earlier (in stg_ap_0_fast, part of the run-time system) if I ran it with +RTS -Da -RTS. I nailed it down to a small enough region of assembly that I could see all of the assembly, the source code, and an intermediate representation or two from the compiler, and then started meditating on what makes ppc64el special.

You see, the vast majority of porting bugs come down to what I might call gross properties of the architecture. You have things like whether it's 32-bit or 64-bit, big-endian or little-endian, whether char is signed or unsigned, that sort of thing. There's a big table on the Debian wiki that handily summarises most of the important ones. Sometimes you have to deal with distribution-specific things like whether GL or GLES is used; often, especially for new variants of existing architectures, you have to cope with foolish configure scripts that think they can guess certain things from the architecture name and get it wrong (assuming that powerpc* means big-endian, for instance). We often have to update config.guess and config.sub, and on ppc64el we have the additional hassle of updating libtool macros too. But I've done a lot of this stuff and I'd accounted for everything I could think of. ppc64el is actually a lot like amd64 in terms of many of these porting-relevant properties, and not even that far off arm64 which I'd just successfully ported GHC to, so I couldn't be dealing with anything particularly obvious. There was some hand-written assembly which certainly could have been problematic, but I'd carefully checked that this wasn't being used by the "unregisterised" (no specialised machine dependencies, so relatively easy to port but not well-optimised) build I was using. A problem around spawning processes suggested a problem with SIGCHLD handling, but I ruled that out by slowing down the first child process that it spawned and using strace to confirm that SIGSEGV was the first signal received. What on earth was the problem?

From some painstaking gdb work, one thing I eventually noticed was that stg_ap_0_fast's local stack appeared to be being corrupted by a function call, specifically a call to the colourfully-named debugBelch. Now, when IBM's toolchain engineers were putting together ppc64el based on ppc64, they took the opportunity to fix a number of problems with their ABI: there's an OpenJDK bug with a handy list of references. One of the things I noticed there was that there were some stack allocation optimisations in the new ABI, which affected functions that don't call any vararg functions and don't call any functions that take enough parameters that some of them have to be passed on the stack rather than in registers. debugBelch takes varargs: hmm. Now, the calling code isn't quite in C as such, but in a related dialect called "Cmm", a variant of C-- (yes, minus), that GHC uses to help bridge the gap between the functional world and its code generation, and which is compiled down to C by GHC. When importing C functions into Cmm, GHC generates prototypes for them, but it doesn't do enough parsing to work out the true prototype; instead, they all just get something like extern StgFunPtr f(void);. In most architectures you can get away with this, because the arguments get passed in the usual calling convention anyway and it all works out, but on ppc64el this means that the caller doesn't generate enough stack space and then the callee tries to save its varargs onto the stack in an area that in fact belongs to the caller, and suddenly everything goes south. Things were starting to make sense.
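A small C sketch of that failure mode (names simplified, not GHC's actual code):

/* What the generated import effectively declared: */
extern void debug_belch(void);  /* wrong: the real callee is variadic */

/* The real definition elsewhere is:
 *   void debug_belch(const char *fmt, ...);
 * Under the ELFv2 ABI, a caller that believes debug_belch() takes no
 * arguments may omit the parameter-save area from its stack frame;
 * the variadic callee then spills its register arguments into that
 * missing area, i.e. into the caller's locals.
 *
 * The workaround: an unspecified parameter list, so the caller can
 * assume nothing about the arguments and keeps the full frame: */
extern void debug_belch();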

Now, debugBelch is only used in optional debugging code; but runInteractiveProcess (the function associated with the initial round of failures) takes no fewer than twelve arguments, plenty to force some of them onto the stack. I poked around the GCC patch for this ABI change a bit and determined that it only optimised away the stack allocation if it had a full prototype for all the callees, so I guessed that changing those prototypes to extern StgFunPtr f(); might work: it's still technically wrong, not least because omitting the parameter list is an obsolescent feature in C11, but it's at least just omitting information about the parameter list rather than actively lying about it. I tweaked that and ran the cross-build from scratch again. Lo and behold, suddenly I had a working compiler, and I could go through the same build-7.6-using-7.8 procedure as with arm64, much more quickly this time now that I knew what I was doing. One upstream bug, one Debian upload, and several bootstrapping builds later, and GHC was up and running on another architecture in Launchpad. Success!

Epilogue

There's still more to do. I gather there may be a Google Summer of Code project in Linaro to write proper native code generation for GHC on arm64: this would make things a good deal faster, but also enable GHCi (the interpreter) and Template Haskell, and thus clear quite a few more build failures. Since there's already native code generation for ppc64 in GHC, getting it going for ppc64el would probably only be a couple of days' work at this point. But these are niceties by comparison, and I'm more than happy with what I got working for 14.04.

The upshot of all of this is that I may be the first non-Haskell-programmer to ever port GHC to two entirely new architectures. I'm not sure if I gain much from that personally aside from a lot of lost sleep and being considered extremely strange. It has, however, been by far the most challenging set of packages I've ported, and a fascinating trip through some odd corners of build systems and undefined behaviour that I don't normally need to touch.

Categories: Elsewhere

Richard Hartmann: git-annex corner case: Changing commit messages retroactively and after syncing

Planet Debian - Tue, 15/04/2014 - 00:12

This is half a blog post and half a reminder for my future self.

So let's say you used the following commands:

git add foo
git annex add bar
git annex sync
# move to different location with different remotes available
git add quux
git annex add quuux
git annex sync

What I wanted to happen was simply to sync the already committed stuff to the other remotes. What happened instead was git annex sync's automagic commit feature (which you cannot disable, it seems) doing its job: commit what was added earlier and use "git-annex automatic sync" as the commit message.

This is not a problem in and of itself, but as this is my master annex and I have managed to maintain clean commit messages for the last few years, I felt the need to clean this mess up.

Changing old commit messages is easy:

git rebase --interactive HEAD~3

pick the r option for "reword" and amend the two commit messages. I did the same on my remote and on all the branches I could find with git branch -a. The problem is, git-annex pulls in changes from refs which are not shown as branches; run git annex sync and back come the old commits, along with a merge commit like an ugly cherry on top. Blegh.

I decided to leave my comfort zone and ended up with the following:

# always back up before poking refs
git clone --mirror repo backup
git reset --hard 1234
git show-ref | grep master
# for every ref returned, do:
git update-ref $ref 1234

Rinse and repeat for every remote, git annex sync, et voilà. And yes, I avoided using an actual loop on purpose; sometimes, doing things slowly and by hand just feels safer.

For good measure, I am running

git fsck && git annex fsck

on all my remotes now, and everything looks good so far.

Categories: Elsewhere

Forum One: Big on Drupal in the Big Apple – NYC Camp 2014

Planet Drupal - Mon, 14/04/2014 - 22:27

We’re back from another successful Drupal NYC Camp!

A great event as always (thanks to Forest Mars and all the other volunteers and organizers who made it possible), this year's camp attracted more than 800 attendees and was held at a truly awesome venue: the United Nations.

Forum One’s presence this year was bigger than ever! Five of us attended, four of whom spoke at five different sessions covering a variety of topics:
  • Keenan Holloway talked to a packed room with his tongue-twisting, alliteratively-titled session: Paraphrasing Panels, Panelizer and Panopoly;
  • Chaz Chumley showed his extensive knowledge of the upcoming Drupal 8 Theming system – look for his book on the same topic later this year;
  • Michaela Hackner joined forces with Chaz Chumley to highlight some of our recent work with the American Red Cross on the Global Disaster Preparedness Center in a session called Designing for Disasters: An Iterative Progression Towards a More Prepared World;
  • and William Hurley (that’s me!) was honored to have the opportunity to talk about Building Interactive Web Applications with Drupal, as well as Developing Locally with Virtual Machines at the DevOps summit (this latter one, sadly, wasn’t recorded due to some technical difficulties). I was blown away by the attendance at both of my sessions and was glad to share some of our challenges and solutions at each.

These camps aren’t solely about sessions, of course. While not all of us were able to stay the whole weekend, Kalpana Goel stayed through Monday to work on some of the Drupal 8 sprints that were going on.

We love the opportunity to give back to the community in as many ways as possible: through code contributions to Drupal 8 and contributed modules, and by attending and speaking whenever we can. If you appreciate our expertise and would like us to speak at an event, drop us a line at marketing (at) forumone (dot) com and we'll be happy to participate!

Categories: Elsewhere

Drupal Association News: Submit Your Design Proposals for DrupalCon Latin America!

Planet Drupal - Mon, 14/04/2014 - 21:19

Though DrupalCon Latin America - Bogotá, Colombia is just under a year away, we're already getting the ball rolling on planning and organization, and we need your help!

Categories: Elsewhere

ImageX Media: An inheritable install profile architecture for Drupal

Planet Drupal - Mon, 14/04/2014 - 20:55

Drupal core comes with a built-in structure called an installation profile. An install profile is a specific set of features and configurations that get built when the site is installed. Drupal has almost always had some variety of install profile, but with Drupal 7 they became a whole lot easier to create and understand.
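As a rough sketch, a Drupal 7 install profile is little more than a directory under profiles/ with an .info file, plus optional .install and .profile files for hooks. The profile and module names below are illustrative:

; profiles/myprofile/myprofile.info
name = My Profile
description = Installs a site preconfigured with this project's defaults.
core = 7.x
dependencies[] = block
dependencies[] = views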

Categories: Elsewhere

Frederick Giasson: Installing OSF for Drupal using the OSF Installer (Screencast)

Planet Drupal - Mon, 14/04/2014 - 20:01

The Open Semantic Framework (OSF) for Drupal is a middleware layer that allows structured data (RDF) and associated vocabularies (ontologies) to “drive” tailored tools and data displays within Drupal. The basic OSF for Drupal modules provide two types of capabilities. First, there is a series of connector modules such as OSF Entities, OSF SearchAPI and OSF Field Storage that integrate an OSF instance into Drupal’s core APIs. Second, there is a series of module tools used to administer all of these capabilities.

By using OSF for Drupal, you may create, read, update and delete any kind of content in a OSF instance. You may also search, browse, import and export structured datasets from an OSF instance.

OSF for Drupal connects to the underlying structured (RDF) data via the separately available open-source OSF Web Services. OSF Web Services is a mostly RESTful Web services layer that allows standalone or multiple Drupal installations to share structured data and collaborate with one another, governed by user access rights and privileges to registered datasets. Collaboration networks may also be established directly to distributed OSF Web Services servers, allowing non-Drupal installations to participate in the network.

OSF for Drupal can also act as a linked data platform. With Drupal’s other emerging RDF capabilities, content generated by Drupal can be ingested by the OSF Web Services and managed via the OSF for Drupal tools, including the publication and exposure on the Web of linked data with query and Web service endpoints.

OSF for Drupal depends on OSF Web Services, which means an operational OSF for Drupal website requires access to a fully operational OSF instance. You can watch the Installing Core OSF (Open Semantic Framework) screencast to see how to deploy your own OSF Web Services instance.

Installing OSF for Drupal using the OSF Installer

In this screencast, we will cover how to install OSF for Drupal using the OSF Installer command line tool.

Categories: Elsewhere

Daniel Kahn Gillmor: OTR key replacement (heartbleed)

Planet Debian - Mon, 14/04/2014 - 19:45
I'm replacing my OTR key for XMPP because of heartbleed (see below).

If the plain ASCII text below is mangled beyond verification, you can retrieve a copy of it from my web site that should be able to be verified.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

OTR Key Replacement for XMPP dkg@jabber.org
===========================================
Date: 2014-04-14

My main XMPP account is dkg@jabber.org. I prefer OTR [0] conversations when using XMPP for private discussions.

I was using irssi to connect to XMPP servers, and irssi relies on OpenSSL for the TLS connections. I was using it with versions of OpenSSL that were vulnerable to the "Heartbleed" attack [1]. It's possible that my OTR long-term secret key was leaked via this attack.

As a result, I'm changing my OTR key for this account. The new, correct OTR fingerprint for the XMPP account at dkg@jabber.org is:

  F8953C5D 48ABABA2 F48EE99C D6550A78 A91EF63D

Thanks for taking the time to verify your peers' fingerprints. Secure communication is important not only to protect yourself, but also to protect your friends, their friends and so on.

Happy Hacking,

  --dkg (Daniel Kahn Gillmor)

Notes:
[0] OTR: https://otr.cypherpunks.ca/
[1] Heartbleed: http://heartbleed.com/
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQJ8BAEBCgBmBQJTTBF+XxSAAAAAAC4AKGlzc3Vlci1mcHJAbm90YXRpb25zLm9w
ZW5wZ3AuZmlmdGhob3JzZW1hbi5uZXRFQjk2OTEyODdBN0FEREUzNzU3RDkxMUVB
NTI0MDFCMTFCRkRGQTVDAAoJEKUkAbEb/fpcYwkQAKLzEnTV1lrK6YrhdvRnuYnh
Bh9Ad2ZY44RQmN+STMEnCJ4OWbn5qx/NrziNVUZN6JddrEvYUOxME6K0mGHdY2KR
yjLYudsBuSMZQ+5crZkE8rjBL8vDj8Dbn3mHyT8bAbB9cmASESeQMu96vni15ePd
2sB7iBofee9YAoiewI+xRvjo2aRX8nbFSykoIusgnYG2qwo2qPaBVOjmoBPB5YRI
PkN0/hAh11Ky0qQ/GUROytp/BMJXZx2rea2xHs0mplZLqJrX400u1Bawllgz3gfV
qQKKNc3st6iHf3F6p6Z0db9NRq+AJ24fTJNcQ+t07vMZHCWM+hTelofvDyBhqG/r
l8e4gdSh/zWTR/7TR3ZYLCiZzU0uYNd0rE3CcxDbnGTUS1ZxooykWBNIPJMl1DUE
zzcrQleLS5tna1b9la3rJWtFIATyO4dvUXXa9wU3c3+Wr60cSXbsK5OCct2KmiWY
fJme0bpM5m1j7B8QwLzKqy/+YgOOJ05QDVbBZwJn1B7rvUYmb968yLQUqO5Q87L4
GvPB1yY+2bLLF2oFMJJzFmhKuAflslRXyKcAhTmtKZY+hUpxoWuVa1qLU3bQCUSE
MlC4Hv6vaq14BEYLeopoSb7THsIcUdRjho+WEKPkryj6aVZM5WnIGIS/4QtYvWpk
3UsXFdVZGfE9rfCOLf0F
=BGa1
-----END PGP SIGNATURE-----
Categories: Elsewhere

AGLOBALWAY: Mobile First?

Planet Drupal - Mon, 14/04/2014 - 18:57
Much has been said over the last number of years since the publication of Luke Wroblewski’s Mobile First in 2011, as part of the A Book Apart series, billed as “brief books for people who make websites.” The series offers valuable tools for designing for and working in the web business, and Luke’s contribution is no small one.

And while a few years have come and gone, has anything really changed? I don’t think so. But perhaps some clarification of terms is in order.

One of the hallmarks of “mobile first” is asking tough questions about what we actually put on the page. For example, if we determine that something is not necessary for the mobile experience of a website, it can be worth calling into question whether it is valuable for the “full desktop experience” as well.

Given the restrictions of the viewport on mobile devices, it makes perfect sense to limit the things that can take away from a quality experience of your website. Ideally, a user’s focus would be on the content, which (hopefully) is the reason to be on your site in the first place. So let’s get rid of everything else!

Behold the pendulum swinging, babies thrown out with the bathwater.

While nobody would deny the increase in the use of mobile devices, desktop browsers are still king of the hill when it comes to how people access the internet. Given the numbers (a quick Google search will give you a general idea), it is understandable that people fear that by eliminating things from the mobile experience of a site, we may be getting rid of too much. And indeed, there have no doubt been many cases of this happening.

Mobile first, not mobile only.

What needs bearing in mind, however, is the idea of designing for mobile first. I’m sure Mr. Wroblewski chose the terms carefully, deciding not to title his book Designing for Mobile, as though it were a separate thing - indeed, if it is separate, we now know it ought not be. Thankfully, he had the foresight to craft the right message, even if it fell on a few deaf ears.

More and more, mobile users are demanding that a complete experience be possible for them as well. This was certainly to be expected. Should we really assume that mobile users are necessarily “on the go” and therefore should not expect what they might experience on a desktop? We all know what they say about making assumptions…

There are many, many challenges when it comes to building responsive websites, and I believe that designing for the mobile experience is chief among them. No small part of that is understanding the technical implications of such designs - this is certainly justification for placing the mobile experience “first” in the design stage. And yet, rather than being limited by screen size in designing for mobile, we actually have an opportunity to take advantage of the power of the device. Perhaps the mobile experience could even be a superior one because of its capabilities.

So should we still be designing for mobile first? Yes - so long as it remains part of a holistic overall design for the user experience. I’m sure Luke would agree.

Tags: Mobile, Drupal Planet
Categories: Elsewhere

NYC Camp News & Announcements: Free Drupal trainings at NYC Camp

Planet Drupal - Mon, 14/04/2014 - 18:53
Did you know NYC Camp has a massive list of completely free Drupal trainings scheduled for Thursday, April 10th? Check out the line-up and sign up!

Don't Forget To Register!

Make sure you create an account and register for NYC Camp 2014. Registration is completely free, but UN security is fairly strict, so please register for the camp; then you can go ahead and sign up for a free training on any of the training description pages!

Date: Monday, April 14, 2014
Categories: Elsewhere

Fred Parke | The Web Developer: Creating content types and fields using a custom module in Drupal 7

Planet Drupal - Mon, 14/04/2014 - 18:44

I was writing a custom module recently which used a custom content type or two. I wanted to make the module as reusable as possible, but I also wanted to avoid bundling a feature inside the module just to add these content types.
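As a sketch of the alternative (module and type names invented), Drupal 7 lets a module create its content type directly in hook_install() using core's node API, with no exported feature required:

<?php
/**
 * Implements hook_install().
 */
function mymodule_install() {
  // Define and save a custom content type owned by this module.
  $type = node_type_set_defaults(array(
    'type' => 'mymodule_project',
    'name' => t('Project'),
    'base' => 'node_content',
    'description' => t('A project record created by mymodule.'),
    'custom' => 1,
  ));
  node_type_save($type);
  // Attach the standard body field to the new type.
  node_add_body_field($type);
}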

Categories: Elsewhere

Christine Spang: PyCon 2014 retrospective

Planet Debian - Mon, 14/04/2014 - 18:15

PyCon 2014 happened. (Sprints are still happening.)

This was my 3rd PyCon, but my first year as a serious contributor to the event, which led to an incredibly different feel. I also came as a person running a company building a complex system in Python, and I loved having the overarching mission of what I'm building driving my approach to what I chose to do. PyCon is one of the few conferences I go to where the feeling of acceptance and at-homeness mitigates the introvert overwhelm at nonstop social interaction. It's truly a special event and community.

Here are some highlights:

  • I gave a tutorial about search, which was recorded in its entirety... if you watch this video, I highly recommend skipping the hands-on parts where I'm just walking around helping people out.
  • I gave a talk! It's called Subprocess to FFI, and you can find the video here. Through three full iterations of dry runs with feedback, I had a ton of fun preparing this talk. I'd like to give more like it in the future as I continue to level up my speaking skills.
  • Allen Downey came to my talk and found me later to say hi. Omg amazing, made my day.
  • Aux Vivres and Dieu du Ciel, amazing eats and drink with great new and old friends. Special shout out to old Debian friends Micah Anderson, Matt Zimmerman, and Antoine Beaupré for a good time at Dieu du Ciel.
  • The Geek Feminism open space was a great place to chill out and always find other women to hang with, much thanks to Liz Henry for organizing it.
  • Talking to the community from the Inbox booth on Startup Row in the Expo hall on Friday. Special thanks to Don Sheu and Yannick Gingras for making this happen, it was awesome!
  • The PyLadies lunch. Wow, was that amazing. Not only did I get to meet Julia Evans (who also liked meeting me!), but there was an amazing lineup of amazing women telling everyone about what they're doing. This and Naomi Ceder's touching talk about openly transitioning while being a member of the Python community really show how the community walks the walk when it comes to diversity and is always improving.
  • Catching up with old friends like Biella Coleman, Selena Deckelmann, Deb Nicholson, Paul Tagliamonte, Jessica McKellar, Adam Fletcher, and even friends from the bay area who I don't see often. It was hard to walk places without getting too distracted running into people I knew; I got really good at waving and continuing on my way.

I didn't get to go to a lot of talks in person this year since my personal schedule was so full, but the PyCon video team is amazing as usual, so I'm looking forward to checking out the archive. It really is a gift to get the videos up while energy from the conference is still so high and people want to check out things they missed and share the talks they loved.

Thanks to everyone, hugs, peace out, et cetera!

Categories: Elsewhere

Appnovation Technologies: 12 Best Designed College Websites

Planet Drupal - Mon, 14/04/2014 - 17:08
Here's a look at 12 of the best designed college websites.
Categories: Elsewhere

Drupal Association News: Drupal Association Board Meeting this Wednesday

Planet Drupal - Mon, 14/04/2014 - 16:29

The month of March was pretty huge for the Association - we tackled a lot! Join us for the next Drupal Association board meeting, where we will review the work we accomplished and set the stage for even more. In addition to our review of March, we'll be discussing a new Marketing Committee charter and a new Procurement Policy, and reviewing some branding updates for the Association.

Categories: Elsewhere

Craig Small: mutt ate my i key

Planet Debian - Mon, 14/04/2014 - 15:11

I did a large upgrade tonight and noticed there was a mutt upgrade. No biggie really… except that I have for years (incorrectly?) used the “i” key when reading a specific email to jump back to the list of emails, or from pager to index in mutt speak.

Instead of my list of mails, I got “No news servers defined!” The fix is rather simple: in muttrc, put

bind pager i exit

and you’re back to using the i key the wrong way again like me.

Categories: Elsewhere
