Elsewhere

Drupal Association News: Drupal Association Board Meeting: 20 August 2014

Planet Drupal - Tue, 26/08/2014 - 01:46

We held our most recent monthly board meeting last Wednesday, 20 August, and we had a lot of news to report and a big agenda to cover. You can review the materials or check out the recording.

As the year continues to progress, our momentum as a team continues to build. We're accomplishing more and more with the community, which is fantastic to see. That said, it's also been a challenging year. This is the first year we have attempted to systematically measure the impact of our work. On the one hand, it's been wonderful to start to accumulate a baseline of data we can measure against for the future. On the other hand, the data is also a little all over the place. In some cases, we had very little to go on when setting the goals, which means that we aimed way too high or low. In other places, we have some areas of real concern to address. 

Here are the topics we tackled on Wednesday:

Drupal.org Improvements

Overall, we've begun to see some significant improvements to the stability and performance of Drupal.org. Although many of our metrics related to performance are still in the red for the year, the last few months have seen real gains in page load times and the like. In short, things ARE getting better. Additionally, as the tech team has gelled under new CTO Josh Mitchell's leadership, it has begun to rapidly turn out great work on the feature side of Drupal.org. We've tackled a remarkable number of issues in just the last few months:

  • Implemented user pictures on Drupal.org profiles
  • Conducted 30 User Research interviews and began developing personas for a skill acquisition model of user design (more to come from the DCWG)
  • Implemented RESTws API for Drupal.org
  • Implemented Semantic Versioning for Drupal 8
  • Added Supporting Organizations field on projects (entity reference to an organization with an explanation field - we need to promote this change as it was part of the overall efforts to give credit to organizations)
  • Took over maintenance of the PIFT and PIFR testbots so that the community could continue with improvements to a modern, best-practice alternative
  • Updated the Bakery module to allow us to better integrate with subsites like Drupal Jobs
  • Responded to spam on several subsites where the basic Mollom configurations were overwhelmed by human spammers
  • Responded to and deployed several security release updates including the recent XMLRPC response where we teamed up with WordPress
  • Launched a new landing page on Drupal.org for designers and front-end developers
  • Automated process for publishing supporting partners on Drupal.org

Although Drupal.org is chock full of data, this is an area where we had very little longitudinal or granular data to guide our goal setting. Combined with our slow hiring cycle, we've had a tough time really making a dent in some of our red numbers, but we ARE making progress and most importantly will know so much more for next year than we did for this year. 

DrupalCons

We shared a very in-depth review of DrupalCon Austin at this board meeting, as well as trends for Amsterdam. The long and short of it is that we had, in almost every way, a very successful DrupalCon in Texas. We were able to compare evaluation, finance, and attendance numbers to Portland and show our year-over-year trends, which was very helpful. While there is a lot to be happy about, we also have reason for concern. Although DrupalCons have sustained growth year over year since their beginning, Texas was basically flat compared to Portland in terms of attendees. Looking ahead at the Amsterdam numbers, we're also finding that we may be at or slightly below our Prague numbers.

There are many reasons we could be seeing a plateau in these numbers. It could be a natural by-product of where we are in the product development cycle: no Drupal 8 and a really mature Drupal 7 mean there is less to talk about and learn. It may be that our demographics are shifting and the Con is not meeting their needs. It may be a combination of many things.

What we DO know is that we need to get to the bottom of the issue so that we can adjust our strategy accordingly. After Amsterdam, you will see a survey from the Association to help us understand your DrupalCon motivations. So whether you've always gone to DrupalCon or have never entertained the notion, we want to hear from you.

Licensing Issues on Drupal.org

I've heard from lots of volunteers on Drupal.org recently that our policies for enforcing GPL v2 licensing on Drupal.org have been problematic. In short, there are too many issues, they are reported inconsistently, and volunteers are not trained on our licensing policies, so remediation is applied inconsistently. It's a pretty typical story - great volunteers getting stuck in an escalating situation.

To help mitigate these issues, I pulled together a call with folks who had been working on them, to get advice about how we can fix the process. The advice of the group is to form a Licensing Team (like the Security Team) that receives training from the Association's lawyers and works together to resolve licensing issues quickly and consistently. We would create a separate queue for licensing issues and get this off the plates of the webmasters queue volunteers (where most issues end up now).

The board agreed that this would be the logical next step, and a meeting has been scheduled for September 9th to begin work on a charter for the group. We'll share more details as we have them.

Quarterly Financials

Finally, in executive session we reviewed and approved the financials for the second quarter of 2014. Here they are:

Next Meeting

The next board meeting was scheduled for 17 September 2014. However, given the proximity to the 3 October board meeting at DrupalCon Amsterdam, the board decided to cancel that meeting. Remember, though, you can always review the minutes and meeting materials in the public GDrive.

Flickr photo: xjm

Categories: Elsewhere

Robert Douglass: Robert Douglass takes the ALS Ice Bucket Challenge in Köln

Planet Drupal - Mon, 25/08/2014 - 20:23

Baris Wanschers called me out, and here it is, my ALS Ice Bucket Challenge.

Thank you to the Drupal Community for 10 years of prosperity: I hope you take this challenge too, and find it in your heart to give to ALSA.org or a charitable organization of your choice.

This video is dedicated to Aaron Winborn. Aaron's family could also use your donations, as he is suffering from ALS.

Finally, I expect to see Dries Buytaert, Kieran Lal, and Jeffrey "jam" McGuire complete this challenge within 24 hours!

Tags: Drupal Planet
Categories: Elsewhere

Drupal.org Featured Case Studies: Tech Coast Angels

Planet Drupal - Mon, 25/08/2014 - 18:54
Completed Drupal site or project URL: http://www.techcoastangels.com/

Tech Coast Angels is the largest angel investment organization in the United States. Its more than 300 members throughout Southern California have invested over $120 million in over 200 startup companies since the organization's inception in 1997.

Since 2013, Exaltation of Larks has been working with Tech Coast Angels on their online systems, including an extensive Drupal web application that their members use as a deal flow tracker and document management system. Services we’ve provided include support, maintenance, security improvements, performance optimization, and mobile integration.

The web application that Tech Coast Angels uses allows its members to view startup companies' applications for funding, discuss each company's application, and collaborate with one another in researching each company, which then helps them make individual decisions on funding.

Key modules/theme/distribution used: Services, PHP Filter Lock, APC - Alternative PHP Cache, Secure Password Hashes, Features, ACL
Organizations involved: Exaltation of Larks
Team members: focal55
Categories: Elsewhere

Code Karate: Multiple Views Part 3

Planet Drupal - Mon, 25/08/2014 - 15:23
Episode Number: 164

In this final installment of the multiple views series, you will learn how to change the look of the view using the two classes you set in the previous video. By using CSS, you will be able to display content in two ways, depending on the viewer's choice. This is a nice way to provide options to your site's visitors.

Tags: Drupal, Views, Drupal 7, Theme Development, Drupal Planet, UI/Design, CSS
Categories: Elsewhere

Acquia: How I learned Drupal 8

Planet Drupal - Mon, 25/08/2014 - 15:13

In this post, I will share my experience on trying to learn Drupal 8 during its alpha stage, talk about some of the challenges of keeping up with the ongoing changes while trying to learn it, and end with some tips and resources which proved useful for me.

Categories: Elsewhere

Lullabot: Module Monday: Office Hours

Planet Drupal - Mon, 25/08/2014 - 15:00

Let's say you are building a site for an institution with multiple locations, each of which has varying hours depending on the time of year or other variables. What is the best way to manage this data? This is a pretty common type of content modeling problem. The easiest thing to do is to just give each location a text field for its hours, but that limits display options and is prone to data entry errors. You could also build out a whole fancy content type with multi-instance date fields, but that could get bloated and difficult to maintain pretty quickly.

Categories: Elsewhere

Steve Kemp: Updates on git-hosting and load-balancing

Planet Debian - Mon, 25/08/2014 - 10:16

To round up the discussion of the Debian Administration site yesterday, I flipped the switch on the load-balancing. Rather than this:

https -> pound \
                \
http  -------------> varnish --> apache

We now have the simpler route for all requests:

http  -> haproxy -> apache
https -> haproxy -> apache

This means we have one less HTTP request in the chain for all incoming secure connections, and these days secure connections are preferred since a Strict-Transport-Security header is set.
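
For the curious, a haproxy setup of that shape might look something like the following minimal sketch - the certificate path, ports, and backend address here are assumptions, not the site's actual configuration:

frontend fe_http
    bind :80
    default_backend be_apache

frontend fe_https
    bind :443 ssl crt /etc/haproxy/site.pem
    # Tell returning browsers to prefer HTTPS.
    http-response set-header Strict-Transport-Security max-age=31536000
    default_backend be_apache

backend be_apache
    server local 127.0.0.1:8080 check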

In other news, I've been juggling git repositories; I've set up an installation of GitBucket on my git-host. My personal git repository used to contain some private repositories and some mirrors.

Now it contains mirrors of most things on github, as well as many more private repositories.

The main reason for the switch was to get a prettier interface and bug-tracker support.

A side-benefit is that I can use "groups" to organize repositories, so for example:

Most of those are mirrors of the github repositories, but some are new. When signed in I see more sources, for example the source to http://steve.org.uk.

I've been pleased with the setup and performance, though I had to add some caching and some other magic at the nginx level to provide /robots.txt, etc, which are not otherwise present.
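
The nginx side of that trick can be as small as a single location block inside the relevant server block - a sketch only, with an assumed document root:

# Serve a static robots.txt in front of GitBucket (the path is an assumption).
location = /robots.txt {
    root /srv/www/static;
}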

I'm not abandoning github, but I will no longer be using it for private repositories (I was gifted a free subscription a year or three ago), nor will I post things there exclusively.

If a single canonical source location is required for a repository it will be one that I control, maintain, and host.

I don't expect I'll give people commit access on this mirror, but it is certainly possible. In the past I've certainly given people access to private repositories for collaboration, etc.

Categories: Elsewhere

Hideki Yamane: Could you try to consider speaking more slowly and clearly at sessions, please?

Planet Debian - Mon, 25/08/2014 - 06:08

Some people (including me :) are not native English speakers, and don't use English in everyday conversation. So it's a bit tough for them to follow what you say if you speak at your usual speed. We want to listen to your presentation, understand it, and discuss it (of course!), but sometimes machine-gun speaking gets in the way.

Calm down, take a deep breath, and give your presentation - it'll be fantastic, and my cat will be pleased with it, as below (meow!).



Thank you for reading. See you at the cheese & wine party.

Categories: Elsewhere

Friendly Machine: Web Performance: A Guide to Building Fast Drupal Websites

Planet Drupal - Mon, 25/08/2014 - 00:50

What follows is part one in a series of posts on web performance that I've wanted to write for quite some time. In this series I'll not only talk about optimizing web performance generally, but also provide specific guidance for speeding up Drupal sites.

Although I'm not a web performance specialist or expert, I have taken a keen interest in the topic in my work as a frontend developer building responsive websites. I love building fast sites and have gained some experience over the years getting Drupal to shed some of its inherent sluggishness.

As a way of systematically tackling what can be a complex subject, we'll use the results of a test from WebPageTest.org, a Google-sponsored tool that provides very in-depth information about the performance of a site in nice, easily digestible chunks.

How Fast Is Fast Enough?

Before we get into the details of web performance we should first stop to ask how fast a site should be in order to qualify as "fast". Here are some research results courtesy of Radware that might help bring things into focus:

  • 64% of smartphone users expect pages to load in less than 4 seconds.
  • The performance poverty line (i.e. the plateau at which your website’s load time ceases to matter because you’ve hit close to rock bottom in terms of business metrics) for most sites is around 8 seconds.

More guidance comes courtesy of Ilya Grigorik of Google. In his recent presentation at the Velocity conference, he cited research indicating that the target page render time should be 1000ms - or one second - to avoid "context switching" among users.

Basically, if it takes a page longer than one second to render, you risk losing the attention of the user. If it takes longer than eight seconds for a page to render, it's similar in terms of business metrics (conversions, sales, etc.) to taking 30 seconds or a minute.

If one second sounds impossibly ambitious, there is further research showing that a load time of three seconds or less is probably OK.

The bottom line: your pages should load in under three seconds on desktop, and under four seconds on mobile.

Pretty harsh reality check, huh? Let's see what can be done to get our sites whipped into shape.

Test Results for this Analysis

In order for us to have a practical example for our discussion, I ran the Friendly Machine site through WebPageTest. Here are the results:

I recently completed a refresh of the design of this site with a lot of attention focused on keeping things fast. My target page load time was one second, so I was happy when the results consistently came in below that.

Let's start our analysis by looking at the first number in the above table - under the heading "Load Time". You'll see the value is 0.662 seconds. That's pretty darn good, but if you scan across the table you may see something on the far right that's a bit confusing - a Fully Loaded Time of 0.761 seconds.

So what's the difference between Load Time and Fully Loaded Time?

Load Time is calculated from the time the user starts navigating to the page until the Document Complete event is fired, which the browser does once the page has completed loading.

The Fully Loaded time, on the other hand, keeps measuring until there has been no network activity for two seconds. Most of the time this means watching for things being loaded by JavaScript in the background.

First Byte Time = Backend Performance

Whenever talk turns to web performance, it seems a lot of folks immediately start thinking of what's happening on the server. Although it's a very important piece of the puzzle, as we walk through this analysis, you'll see that most web performance issues actually reside on the frontend.

Before we get ahead of ourselves, though, let's return to the results from our test and look at the next metric in our table, First Byte Time (highlighted in blue below) which tells us about performance on the server.

This First Byte Time represents the time from when a user began navigating to the page until the first bit of the server response arrived at the browser. The target time for this on WebPageTest is a meager 87 ms!

This metric is also represented as the first in the series of letter grades you see at the top of the test results. You'll notice Friendly Machine got an "A" and I really wish I could take credit for it, but the truth is my host Pantheon - and the awesome backend performance they provide - are responsible for this metric scoring well.

Backend Drupal Performance

Let's pause here and talk specifically about backend Drupal performance. How would we address this metric if it hadn't scored well? This topic can get pretty deep, so we'll only review the most popular options, but they'll still be able to do wonders for improving this key metric if your site is not performing well.

Let's start by discussing server resources (with a brief, tangential mini-rant about shared hosting).

If you want a fast Drupal website, you really shouldn't be on a shared host, period. Although many of them will claim to be Drupal specialists, very few of them actually are. One giveaway is the number for PHP memory limit.

Although this number doesn't directly impact performance, it can break your site if it's too low and is also useful for smoking out hosts that don't know Drupal. You can find this number at admin/reports/status and it will look something like the image below.

You can see on my site this number is at 256 megabytes, and this is most likely where you want it, although if you have a simple site without Views or Panels, 128M might work. If it's set at 64M, it's too low, and this is often what you'll find with shared hosting arrangements.
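
If your host allows it, you can also raise the limit per site from settings.php rather than editing php.ini. A minimal sketch - the 256M value simply mirrors the recommendation above:

<?php
// In sites/default/settings.php: raise PHP's memory limit for this site.
ini_set('memory_limit', '256M');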

Another issue with shared hosting - and one that does impact performance - is that your website is on a server with perhaps hundreds of other sites. If one of those sites gets hit with a large spike in traffic or some other issue, it can affect all the sites on that server as it gobbles up the available resources.

Perhaps the biggest issue with shared hosting, however, is that advanced caching using tools like Memcached and Varnish are rarely, if ever, available. And when it comes to Drupal backend performance, caching is critical. The best you'll probably be able to do with regard to caching on shared hosting is using the Boost module, which we'll talk about in the next section.

To ensure that server resources aren't an issue for your website, consider either a managed VPS or a Drupal host like Pantheon, both of which start at around $25 per month. Pantheon is what I recommend for small to medium sized sites because your site will scale better with them and they offer tremendous value, although they are a great fit for enterprise clients as well. If you have a bigger budget, Acquia or BlackMesh might fit the bill.

Sure, these options cost more than the $7 per month the cheap hosts offer, but they will provide a professional level of service that will more than pay for itself over time.

Caching for Drupal Websites

We said caching was critical, so here are five of the most important caching solutions for a Drupal website:

  1. Drupal's built-in caching
  2. Boost module
  3. Memcached
  4. Varnish
  5. Views caching

There are other options, of course, but these five cover most of the ground. Let's briefly go through them one at a time.

Drupal's Built-in Caching

Most of a Drupal site is stored in the database - nodes, information about blocks, etc. - and enabling the default caching will store the results of these database queries so that they aren't executed every time a page is requested. Enabling these settings alone can have a big impact on performance, particularly if your pages use a lot of views. This one is kind of a no-brainer.
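
For Drupal 7, the same switches you would flip at admin/config/development/performance can also be pinned in settings.php. A sketch with illustrative values:

<?php
// settings.php: enable core caching via variable overrides (Drupal 7).
$conf['cache'] = 1;                      // Page cache for anonymous users.
$conf['block_cache'] = 1;                // Cache block output.
$conf['page_cache_maximum_age'] = 3600;  // Lifetime for external caches, in seconds.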

Boost Module

The Boost module is pretty great. It works very well in tandem with Drupal caching, but it requires some additional configuration. What you end up with after you have the module installed and configured is a caching system that stores the output of your Drupal site as static HTML pages. This takes PHP processing out of the equation, leading to another nice bump in performance.

Memcached

Memcached can speed up dynamic applications (like Drupal) by storing objects in memory. With Boost and Drupal caching, the data being cached is stored on the server's hard drive. With memcached, it's being stored in memory, something that greatly speeds up the response time for a request. Memcached works great in conjunction with both Boost and Drupal caching.
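
Wiring Drupal 7 to memcached is typically done through the contrib Memcache module. A sketch assuming the module lives at its usual path:

<?php
// settings.php: route Drupal's cache bins through memcached.
$conf['cache_backends'][] = 'sites/all/modules/memcache/memcache.inc';
$conf['cache_default_class'] = 'MemCacheDrupal';
// The form cache must stay in the database.
$conf['cache_class_cache_form'] = 'DrupalDatabaseCache';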

Varnish

Varnish is an HTTP accelerator that, similar to memcached, stores data in memory. It's capable of serving pages much faster than Apache (the most common web server for Drupal sites). It can also be used in conjunction with memcached, although it's often the case that they are not used together and other advanced caching methods are instead implemented alongside Varnish.
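
On the Drupal 7 side, running behind Varnish mostly means telling Drupal it is behind a reverse proxy and letting external caches hold pages. A minimal sketch - the proxy address and cache lifetime are assumptions:

<?php
// settings.php: basic reverse proxy settings for Varnish.
$conf['reverse_proxy'] = TRUE;
$conf['reverse_proxy_addresses'] = array('127.0.0.1');
$conf['page_cache_maximum_age'] = 900;  // Let Varnish serve pages for 15 minutes.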

Views Caching

Another type of database caching is Views caching. Views is a very popular, but rather resource-intensive, Drupal module. Implementing Views caching can give your site a nice additional performance boost by possibly removing a few database queries from the build process.

To set Views caching, go to your view. On the right-hand side, under Advanced > Other, you'll see a link for Caching. Just go in and set a value - an hour is usually a good default - for each view on your site.
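
In an exported view, the same setting shows up as a few lines of the export array. An excerpt, with illustrative values:

<?php
// Excerpt of a Views 3 export with time-based caching set to one hour.
$handler->display->display_options['cache']['type'] = 'time';
$handler->display->display_options['cache']['results_lifespan'] = 3600;
$handler->display->display_options['cache']['output_lifespan'] = 3600;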

Wrapping Up Part One

Wow, long post and all we've really covered so far is backend performance and caching! This discussion hasn't been comprehensive by any means, but it does provide a great start.

Next time we'll start digging into frontend performance, the area where most of our performance issues reside. What should be obvious so far is that web performance is a subject that is both deep and wide, but also critically important to building successful websites.

If you have any comments about this post, you may politely leave them below.

Tags: Drupal, Web Performance
Categories: Elsewhere

DebConf team: Full video coverage for DebConf14 talks (Posted by Tiago Bortoletto Vaz)

Planet Debian - Sun, 24/08/2014 - 22:25

We are happy to announce that live video streams will be available for talks and discussion meetings in DebConf14. Recordings will be posted soon after the events. You can also interact with other local and remote attendees by joining the IRC channels which are listed at the streams page.

For people who want to view the streams outside a web browser, the page for each room lists direct links to the streams.

More information on the streams and the various possibilities offered is available at DebConf Videostreams.

The schedule of talks is available at DebConf 14 Schedule.

Thanks to our amazing video volunteers for making it possible. If you like the video coverage, please add a thank-you note to VideoTeam Thanks.

Categories: Elsewhere

Noah Meyerhans: Debconf by train

Planet Debian - Sun, 24/08/2014 - 22:19

Today is the first time I've taken an interstate train trip in something like 15 years. A few things about the trip were pleasantly surprising. Most of these will come as no surprise:

  1. Less time wasted in security theater at the station prior to departure.
  2. On-time departure
  3. More comfortable seats than a plane or bus.
  4. Quiet.
  5. Permissive free wifi

Wifi was the biggest surprise. Not that it existed, since we're living in the future and wifi is expected everywhere. It's IPv4 only and stuck behind a NAT, which isn't a big surprise, but it is reasonably open. There isn't any port filtering of non-web TCP ports, and even non-TCP protocols are allowed out. Even my aiccu IPv6 tunnel worked fine from the train, although I did experience some weird behavior with it.

I haven't used aiccu much in quite a while, since I have a native IPv6 connection at home, but it can be convenient while travelling. I'm still trying to figure out what happened today, though. The first symptoms were that, although I could ping IPv6 hosts, I could not actually log in via IMAP or ssh. Tcpdump showed all the standard symptoms of a PMTU blackhole. Small packets flow fine, large ones are dropped. The interface MTU is set to 1280, which is the minimum MTU for IPv6, and any path on the internet is expected to handle packets of at least that size. Experimentation via ping6 reveals that the largest payload size I can successfully exchange with a peer is 820 bytes. Add 8 bytes for the ICMPv6 header for 828 bytes of payload, plus 40 bytes for the IPv6 header, gives an 868-byte packet, which is well under what should be the MTU for this path.
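
For reference, that kind of probing looks something like this - the host is an arbitrary example:

# Probe the largest ICMPv6 payload that survives the path.
ping6 -c 3 -s 820 www.debian.org   # replies come back
ping6 -c 3 -s 821 www.debian.org   # silently dropped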

I've worked around this problem with an ip6tables rule to rewrite the MSS on outgoing SYN packets to 760 bytes, which should leave 40 for the IPv6 header and 20 for any extension headers:

sudo ip6tables -t mangle -A OUTPUT -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 760

It is working well and will allow me to publish this from the train, which I'd otherwise have been unable to do. But... weird.

Categories: Elsewhere

Vincent Sanders: Without craftsmanship, inspiration is a mere reed shaken in the wind.

Planet Debian - Sun, 24/08/2014 - 21:47
While I imagine Johannes Brahms was referring to music, I think the sentiment applies to other endeavours just as well. The trap of believing an idea is worth something without an implementation occurs all too often; however, this is not such an unhappy tale.

Lars Wirzenius, Steve McIntyre and I were chatting a few weeks ago about several of the ongoing Debian discussions. As is often the case, these discussions had devolved into somewhat unproductive noise, and yet amongst all this was a voice of reason in Russ Allbery.

Lars decided to take the opportunity of the upcoming DebConf 14 to say thank you to Russ for his work. It was decided that a plaque would be a nice gift, and I volunteered to do the physical manufacture. Lars came up with the idea of a DEBCON scale, similar to the DEFCON scale, and put together some text and an initial design idea.

I took the initial design and, as is often the case, what is practically possible forced several changes. The prototype was a steep learning curve in using the Cambridge makespace laser cutter to create all the separate pieces.

The construction is pretty simple and consists of three layers of transparent acrylic plastic. The base layer is a single piece of plastic with the correct outline. The next layer has the DEBCON title, the Debian swirl and the level numbers. The top layer has the text engraved into its back surface, giving the impression that the text floats above the layer behind it.

For the prototype I attempted to glue the pieces together. This was a complete disaster and required discarding the entire piece and starting again with new materials.

For the second version I used four small nylon bolts to hold the sandwich of layers together which worked very well.

Yesterday at the DebConf 14 opening, Steve McIntyre presented it to Russ, and I think he was pleased - certainly he was surprised (photo from Aigars Mahinovs).

The design files are available from my design git repo, though why anyone would want to reproduce it I have no idea ;-)
Categories: Elsewhere

LookAlive: Saving a serialized data Array as a property on a custom Entity (D7)

Planet Drupal - Sun, 24/08/2014 - 21:42

Doing some initial prototyping work on the Comstack module, I hit this question without a clear answer. For clarity, here's a chunk of the schema structure for a Message Type (an exportable entity).

/**
 * Implements hook_schema().
 */
function comstack_schema() {
  $schema = array();

  $schema['comstack_message_type'] = array(
    'description' => 'Stores information about all defined {comstack_message} types.',
    'fields' => array(
      'id' => array(
        'type' => 'serial',
        'not null' => TRUE,
        'description' => 'Primary Key: Unique {comstack_message} type ID.',
      ),
      ...
      'delivery_methods' => array(
        'type' => 'text',
        'not null' => FALSE,
        'size' => 'big',
        'serialize' => TRUE,
        'description' => 'A serialized array of allowed send methods for this type.',
      ),

I followed the instructions on how to create an exportable entity, as documented on d.o at https://www.drupal.org/node/1021526 and https://www.drupal.org/node/1021576.

The second link has the following code, which shows the submit step where the form values are wrapped up and a new entity is put together for you before saving.

/**
 * Form API submit callback for the type form.
 */
function profile2_type_form_submit(&$form, &$form_state) {
  $profile_type = entity_ui_form_submit_build_entity($form, $form_state);
  ...

So how do we construct a form which will allow for arbitrary array structures? Like this (in your form function)!

  $form['delivery_methods'] = array(
    '#title' => t('Delivery methods to allow'),
    '#type' => 'checkboxes',
    '#required' => TRUE,
    '#options' => $delivery_methods,
    '#default_value' => isset($comstack_message_type->delivery_methods) ? $comstack_message_type->delivery_methods : array(),
    '#tree' => TRUE,
  );

It's the #tree bit there that does it. Here's an explanation from the Form API documentation page, which is marked as archived but still useful: https://www.drupal.org/node/48643.

When we set the fieldset's #tree value to TRUE, we create the form:

<?php
$form['colors'] = array(
  '#type' => 'fieldset',
  '#title' => t('Choose a color'),
  '#collapsible' => FALSE,
  '#tree' => TRUE,
);
$form['colors']['green'] = array(
  '#type' => 'checkbox',
  '#title' => t('Green'),
  '#default_value' => $node->green,
  '#required' => FALSE,
);

and this is how they are inserted or updated with db_query():

<?php
function example_insert($node){
  db_query("INSERT INTO {example} (nid, question, green, blue) VALUES (%d,'%s', %d, %d)", $node->nid, $node->title, $node->colors['green'], $node->colors['blue']);
}
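
Back to the entity itself: because the schema marks delivery_methods with 'serialize' => TRUE, drupal_write_record() - which the entity save path uses - serializes the array on write, and the controller unserializes it on load, so no manual serialize() calls are needed. A quick sketch using Entity API helpers; the property values are illustrative:

<?php
// Sketch: saving a message type whose delivery_methods property is an array.
$type = entity_create('comstack_message_type', array(
  'type' => 'private_message',
  'delivery_methods' => array('email' => 'email', 'sms' => 'sms'),
));
entity_save('comstack_message_type', $type);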

Any questions? Leave them in the comments :]

Categories: Elsewhere

Lucas Nussbaum: on the Dark Ages of Free Software: a “Free Service Definition”?

Planet Debian - Sun, 24/08/2014 - 17:39

Stefano Zacchiroli opened DebConf’14 with an insightful talk titled Debian in the Dark Ages of Free Software (slides available, video available soon).

He makes the point (quoting slide 16) that the Free Software community is winning a war that is becoming increasingly pointless: yes, users have 100% Free Software thin client at their fingertips [or are really a few steps from there]. But all their relevant computations happen elsewhere, on remote systems they do not control, in the Cloud.

That giving up of control over computing is a huge and important problem, and probably the largest challenge for everybody who cares about freedom, free speech, or privacy today. Stefano rightfully points out that we must do something about it. The big question is: how can we, as a community, address it?

Towards a Free Service Definition?

I believe that we all feel a bit lost with this issue because we are trying to attack it with our current tools & weapons. However, they are largely irrelevant here: the Free Software Definition is about software, and software is to be understood strictly there, as software programs. Applying it to services, or to computing in general, doesn’t lead anywhere. To increase general awareness of this issue, we should define more precisely what levels of control can be provided, understand what services are not providing to users, and make an informed decision about waiving a particular level of control when choosing to use a particular service.

Benjamin Mako Hill pointed out yesterday during the post-talk chat that services are not black or white: there aren’t impure and pure services. Instead, there’s a gradation of possible levels of control for the computing we do. The Free Software Definition lists four freedoms — how many freedoms, or types of control, should there be in a Free Service Definition, or a Controlled-Computing Definition? Again, this is not only about software: the platform on which a particular piece of software is executed has a huge impact on the available level of control: running your own instance of WordPress, or using an instance on wordpress.com, provides very different control (even if, as Asheesh Laroia pointed out yesterday, WordPress does a pretty good job at providing export and import features to limit data lock-in).

The creation of such a definition is an iterative process. I actually just realized today that (according to Wikipedia) the very first occurrence of an attempt at a Free Software Definition was published in 1986 (GNU’s bulletin Vol 1 No.1, page 8) — I thought it happened a couple of years earlier. Are there existing attempts at defining such freedoms or levels of controls, and at benchmarking such criteria against existing services? Such criteria would not only include control over software modifications and (re)distribution, but also likely include mentions of interoperability and open standards, both to enable the user to move to a compatible service, and to avoid forcing the user to use a particular implementation of a service. A better understanding of network effects is also needed: how much and what type of service lock-in is acceptable on social networks in exchange of functionality?

I think that we should draw inspiration from what was achieved during the last 30 years of Free Software. The tools that were produced are probably irrelevant to addressing this issue, but there’s a lot to learn from the way they were designed. I really look forward to the day when we will have:

  • a Free Software Definition equivalent for services
  • Debian Free Software Guidelines-like tests/checklist to evaluate services
  • an equivalent of The Cathedral and the Bazaar, explaining how one can build successful business models on top of open services

Exciting times!

Categories: Elsewhere

Gregor Herrmann: Debian Perl Group Micro-Sprint

Planet Debian - Sun, 24/08/2014 - 02:25

DebConf 14 has started earlier today with the first two talks in sunny portland, oregon.

this year's edition of DebConf didn't feature a preceding DebCamp, & the attempts to organize a proper pkg-perl sprint were not very successful.

nevertheless, two other members of the Debian Perl Group & I met here in PDX on wednesday for our informal unofficial pkg-perl µ-sprint, & as intended, we've used the last days to work on some pkg-perl QA stuff:

  • upload packages which were waiting for Perl 5.20
  • upload packages which didn't have the Perl Group in Maintainer
  • update OpenTasks wiki page
  • update subscription to Perl packages in Ubuntu/Launchpad
  • start annual git repos cleanup
  • pkg-perl-tools: improve scripts to integrate upstream git repo
  • update alternative (build) dependencies after perl 5.20 upload
  • update Module::Build (build) dependencies

as usual, having someone to poke beside you, & the opportunity to get a second pair of eyes quickly, was very beneficial. – & of course, spending time with my nice team mates is always a pleasure for me!

Categories: Elsewhere

Doug Vann: 10 Useful Ways for Drupal Event Attendees To Be Engaged

Planet Drupal - Sat, 23/08/2014 - 20:09

I am sitting here at DrupalCamp Asheville 2014. I took a break, hung out in the BoF room, and decided to compose this list of ideas on how Drupal event attendees can engage with the event.

I'd love to hear your comments below!

  1. Know The Session
    • What is this session about? Is it a show-n-tell of a module? Is it a case study of a website? If you are consciously aware of what to expect, then you are prepared to take what you hear and frame it within the context of the session topic. This is important for the attendee because not every element of the presentation will relate directly to the session topic. If the speaker needs to lay down some groundwork for a few minutes, it is important for you to remember what the overall topic is so that you don’t get lost in the weeds.
  2. Know The Session Speaker
    • Check out their Drupal.org Profile or their profile on the Event website. Get a sense of their background and perspective. This is also helpful if you ask questions at their session. You can ask questions that you know relate to elements of their background.
  3. Ask Questions At The Sessions
    • The number of questions at any given session will vary, but when there are none it can be a tad awkward. Then after the session, you might still see people walk up and ask questions.
    • I encourage you to fill that silence with some immediate questions that come to mind. The speakers really, really appreciate the questions.
  4. Engage Social Media
    • Tweet about the event. Maybe tweet about each session you attend and provide a link to the session description and invoke the speaker’s twitter name as well. 
    • Take pictures and post them wherever you post your pics.
    • Use the Hashtag if the event has one.
    • Do you blog? Blog about the event and what you liked.
    • The event organizers and speakers REALLY appreciate the media exposure.
    • Don’t forget that many Drupal events publish their videos online so you can catch the ones you missed or revisit the one you liked.
  5. Hang Out
    • Don’t feel like you have to attend a session in every timeslot. Feel free to hang out near the coffee tables or registration tables or in Birds of a Feather rooms. Wherever you see people hanging out, join them!
  6. Join A Stranger For Lunch
    • In general, the Drupal Community is a VERY social bunch. When it’s time to sit down and eat, it is also a good time to make some new friends. To the extent that you are comfortable with it, you can learn a lot by asking people how the event is going and what they do with Drupal.
  7. Get Swag
    • Walk around the sponsor’s booths and look for swag. These sponsors often DO NOT want to take that stuff back to the office. Sometimes you find some pretty useful things like shirts, pens, thumb drives, fold-up cloth flying disks, hackysacks, yo-yos, puzzles, keychains, etc.
  8. Talk To The Sponsors
    • I’ve never seen a sponsor bite or hard-sell a passerby at their booth! :-)
    • You may be amazed at what you will learn by reading the signs, looking at any literature on their table, and actually talking to the representative. 
  9. Fill Out Any Feedback Forms
    • Not every event has feedback forms, but more and more are using them.
    • Forms may be available per-class, and for the event in general.
    • The organizers REALLY appreciate ALL comments.
    • As you might expect, the negative ones get more attention, so don’t hold back about the audio/video comments, or the need for more beginner topics, or how difficult it was to get to the venue from the airport, etc.
    • They really want to hear this!
  10. THANK The Organizers!
    • If you know their faces, be sure to thank them personally for their hard work organizing the speakers, the facilities, the meals, the WiFi, etc.
    • Be sure to tweet and post about it as well when you leave.
Drupal Planet


Categories: Elsewhere

Thorsten Alteholz: Moving WordPress to another server

Planet Debian - Sat, 23/08/2014 - 19:58

Today I moved this blog from a vServer to a dedicated server. The migration went surprisingly smoothly. I just had to apt-get install the Debian packages apache2, mysql-server and wordpress. Afterwards only the following steps were necessary:

  • dumping the old database with basically just one command:

    mysqldump -u$DBUSER -p$DBPASS --lock-tables=false $DBNAME > $DBFILE

  • creating the database on the new host:

    CREATE DATABASE $DBNAME;
    \r $DBNAME
    GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP,ALTER ON $DBNAME.* TO '$DBUSER'@'localhost' IDENTIFIED BY '$DBPASS';
    FLUSH PRIVILEGES;

  • importing the dump with something like:

    mysql --user=$DBUSER --password=$DBPASS $DBNAME < $DBFILE

and almost done …

Finally, some fine-tuning of /etc/wordpress/htaccess and of the access rights of a few directories to allow installation of plugins. As I wanted to clean up my wp-content directory, I manually reinstalled all plugins instead of just copying them. Thankfully all of the important plugins store their data in the database, and all settings survived the migration.

Categories: Elsewhere

Antti-Juhani Kaijanaho: A milestone toward a doctorate

Planet Debian - Sat, 23/08/2014 - 19:44

Yesterday I received my official diploma for the degree of Licentiate of Philosophy. The degree lies between a Master’s degree and a doctorate, and is not required; it consists of the coursework required for a doctorate, and a Licentiate Thesis, “in which the student demonstrates good conversance with the field of research and the capability of independently and critically applying scientific research methods” (official translation of the Government decree on university decrees 794/2004, Section 23 Paragraph 2).

The title and abstract of my Licentiate Thesis follow:

Kaijanaho, Antti-Juhani
The extent of empirical evidence that could inform evidence-based design of programming languages. A systematic mapping study.
Jyväskylä: University of Jyväskylä, 2014, 243 p.
(Jyväskylä Licentiate Theses in Computing,
ISSN 1795-9713; 18)
ISBN 978-951-39-5790-2 (nid.)
ISBN 978-951-39-5791-9 (PDF)
Finnish summary

Background: Programming language design is not usually informed by empirical studies. In other fields similar problems have inspired an evidence-based paradigm of practice. Central to it are secondary studies summarizing and consolidating the research literature. Aims: This systematic mapping study looks for empirical research that could inform evidence-based design of programming languages. Method: Manual and keyword-based searches were performed, as was a single round of snowballing. There were 2056 potentially relevant publications, of which 180 were selected for inclusion, because they reported empirical evidence on the efficacy of potential design decisions and were published on or before 2012. A thematic synthesis was created. Results: Included studies span four decades, but activity has been sparse until the last five years or so. The form of conditional statements and loops, as well as the choice between static and dynamic typing have all been studied empirically for efficacy in at least five studies each. Error proneness, programming comprehension, and human effort are the most common forms of efficacy studied. Experimenting with programmer participants is the most popular method. Conclusions: There clearly are language design decisions for which empirical evidence regarding efficacy exists; they may be of some use to language designers, and several of them may be ripe for systematic reviewing. There is concern that the lack of interest generated by studies in this topic area until the recent surge of activity may indicate serious issues in their research approach.

Keywords: programming languages, programming language design, evidence-based paradigm, efficacy, research methods, systematic mapping study, thematic synthesis

A Licentiate Thesis is assessed by two examiners, usually drawn from outside of the home university; they write (either jointly or separately) a substantiated statement about the thesis, in which they suggest a grade. The final grade is almost always the one suggested by the examiners. I was very fortunate to have such prominent scientists as Dr. Stefan Hanenberg and Prof. Stein Krogdahl as the examiners of my thesis. They recommended, and I received, the grade “very good” (4 on a scale of 1–5).

The thesis has been accepted for publication in our faculty’s licentiate thesis series and will in due course appear in our university’s electronic database (along with a very small number of printed copies). In the mean time, if anyone wants an electronic preprint, send me email at antti-juhani.kaijanaho@jyu.fi.

Figure 1 of the thesis: an overview of the mapping process

As you can imagine, the last couple of months in the spring were very stressful for me, as I pressed on to submit this thesis. After submission, it took me nearly two months to recover (which certain people who emailed me on Planet Haskell business during that period certainly noticed). It represents the fruit of almost four years of work (way more than normally is taken to complete a Licentiate Thesis, but never mind that), as I designed this study in Fall 2010.

Figure 8 of the thesis: Core studies per publication year

Recently, I have been writing in my blog a series of posts in which I have been trying to clear my head about certain foundational issues that irritated me during the writing of the thesis. The thesis contains some of that, but that part of it is not very strong, as my examiners put it, for various reasons. The posts have been a deliberately non-academic attempt to shape the thoughts into words, to see what they look like fixed into a tangible form. (If you go read them, be warned: many of them are deliberately provocative, and many of them are intended as tentative in fact if not in phrasing; the series also is very incomplete at this time.)

I closed my previous post, the latest post in that series, as follows:

In fact, the whole of 20th Century philosophy of science is a big pile of failed attempts to explain science; not one explanation is fully satisfactory. [...] Most scientists enjoy not pondering it, for it’s a bit like being a cartoon character: so long as you don’t look down, you can walk on air.

I wrote my Master’s Thesis (PDF) in 2002. It was about the formal method called “B”; but I took a lot of time and pages to examine the history and content of formal logic. My supervisor was, understandably, exasperated, but I did receive the highest possible grade for it (which I never have fully accepted I deserved). The main reason for that digression: I looked down, and I just had to go poke the bridge I was standing on to make sure I was not, in fact, walking on air. In the many years since, I’ve taken a lot of time to study foundations, first of mathematics, and more recently of science. It is one reason it took me about eight years to come up with a doable doctoral project (and I am still amazed that my department kept employing me; but I suppose they like my teaching, as do I). The other reason was, it took me that long to realize how to study the design of programming languages without going where everyone has gone before.

Debian people, if any are still reading, may find it interesting that I found significant use for the dctrl-tools toolset I have been writing for Debian for about fifteen years: I stored my data collection as a big pile of dctrl-format files. I ended up making some changes to the existing tools (I should upload the new version soon, I suppose), and I wrote another toolset (unfortunately one that is not general purpose, like the dctrl-tools are) in the process.

For the Haskell people, I mainly have an apology for not attending to Planet Haskell duties in the summer; but I am back in business now. I also note, somewhat to my regret, that I found very few studies dealing with Haskell. I just checked; I mention Haskell several times in the background chapter, but it is not mentioned in the results chapter (because there were not studies worthy of special notice).

I am already working on extending this work into a doctoral thesis. I expect, and hope, to complete that one faster.

Categories: Elsewhere

Joachim Breitner: This blog goes static

Planet Debian - Sat, 23/08/2014 - 17:54

After a bit more than 9 years, I am replacing Serendipity, which has been hosting my blog, with a self-made static solution. This means that when you are reading this, my server no longer has to execute some rather large body of untyped code to produce the bytes sent to you. Instead, that happens once in a while on my laptop, and they are stored as static files on the server.

I hope to get a little performance boost from this, so that my site can more easily hold up to being mentioned on hackernews. I also do not want to worry about security issues in Serendipity – static files are not hacked.

Of course there are downsides to having a static blog. The editing is a bit more annoying: I need to use my laptop (previously I could post from anywhere), and I edit text files instead of using a JavaScript-based WYSIWYG editor (but I was slightly annoyed by that as well). But most importantly, your readers cannot comment on static pages. There are cloud-based solutions that integrate commenting via JavaScript on your static pages, but I decided to go for something even more low-level: you can comment by writing an e-mail to me, and I’ll put your comment on the page. This has the nice benefit of solving the blog comment spam problem.

The actual implementation of the blog is rather masochistic, as my web page runs on one of these weird obfuscated languages (XSLT). Previously, it consisted of XSLT stylesheets producing makefiles calling XSLT sheets. Now it is a bit more self-contained, with one XSLT stylesheet writing out all the various html and rss files.

I managed to import all my old posts and comments thanks to this script by Michael Hamann (I had played around with this some months ago and just spent what seemed like an hour finding this script again) and a small Haskell script. Old URLs are rewritten (using mod_rewrite) to the new paths, but feed readers might still be confused by this.
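
The rewriting is along these lines - a sketch only, since Serendipity's actual permalink patterns and my new paths differ:

# Sketch: map old Serendipity-style permalinks onto the new static pages.
RewriteEngine On
RewriteRule ^archives/([0-9]+)-(.+)\.html$ /blog/$2.html [R=301,L]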

This opens the door to a long due re-design of my webpage. But not today...

Categories: Elsewhere
