Feed aggregator

Drupal Association News: What's new on Drupal.org - April 2015

Planet Drupal - Sat, 16/05/2015 - 20:45

Look for links to our Strategic Roadmap highlighting how this work falls into our priorities set by the Drupal Association Board and Drupal.org Working Groups.

Better account creation

Community User Role Expanded

The community user role, which we introduced in March, will now be automatically granted to users who reach a certain level of participation on Drupal.org. While the exact activities that can grant this role will not be explicitly published (as with our other spam prevention measures), they are representative of the activities an engaged community member would take while participating on Drupal.org.

Existing users who have already reached the required level of contribution will receive the role upon their next activity on Drupal.org. As of the end of April, automatic role granting had extended the Community user role to more than 5,000 users.

Content Strategy and Visual Design System for Drupal.org

and

Making Drupal.org Search Usable

During April the Association staff focused on communicating the results and recommendations of our Content Strategy work with the Working Groups and the Drupal Association Board of Directors.

A deep investigation of the current organization of content on Drupal.org, the workflow provided by Drupal.org for our User Personas, and the governance of content on Drupal.org has brought us to a comprehensive proposal for the future state of Drupal.org.

These proposals involve creating new sections on Drupal.org that better match common user activities, and better content types to support those activities. As we begin organizing Drupal.org into new and updated content types, we'll also be rolling in our initiative to improve search on Drupal.org. As we work on each content type, we'll be assessing its search facets.

The next step to move this proposal forward has been to create issues for the specific proposals that have evolved from the content strategy project to date and the feedback from the Working Groups.

This issue and child issues that follow are based on the findings of the Content Strategy project performed by the Drupal Association staff in partnership with Forum One Communications during December 2014 - April 2015.

Community Initiatives (D8 Blockers)

DrupalCI

Drupal Association staff and community volunteers have continued pushing hard to get DrupalCI production ready and integrated with Drupal.org.

The community helped tremendously by providing some formal guidance into the minimum viable and ideal state of the test environments.
Association staff has the primary environment successfully running all tests, and will be working on the additional environments as well as the Drupal.org integration in the run up to DrupalCon Los Angeles.

Again, tremendous thanks to our community volunteers who sprinted with us in Portland: Jeremy Thorson, Nick Schuch, Bastian Widmer, Ricardo Amaro, Paul Mitchum, Mike Prasuhn, and Karoly Negyesi, and to Shyamala Rajaram, Angie Byron, and Jonathan Hedstrom, who helped us from afar!

Localize.Drupal.org

In partnership with the community members who have been working on the port of localize.Drupal.org to Drupal 7, Association staff have been working to get this migration across the finish line.

We focused fire on the issues found in click-testing, and hope to deploy localize.Drupal.org on Drupal 7 in May.

Revenue-related projects (funding our work)

Try Drupal

We’ve created Try Drupal with our Premium Hosting Supporters to make it easier for CMS evaluators and Drupal.org newcomers to test and work with a Drupal demo site. The Program will showcase a selection of Hosting Companies where a new user can quickly (in less than 20 minutes) sign up and have a Drupal demo site up and running for them to use for free.

DrupalCons

It’s almost time for DrupalCon Los Angeles! In the run up to DrupalCon Los Angeles we’ve been fixing bugs on Events.Drupal.org and preparing for the launch of the DrupalCon Barcelona full site.

We’ve also just started planning out our work for the next Cons to be announced at DrupalCon Los Angeles - more to come there after Los Angeles!

Sustaining Support and Maintenance

Pre-Production Infra Rebuild

On April 19th, an issue reported to the Drupal.org infrastructure team uncovered a rootkit installed on our pre-production (dev and staging) environment. We stopped all services on these servers. Access had been gained through an open VNC port in our OpenStack environment, which allowed hijacking of an open console session. The attacker was attempting to mount a distributed denial of service (DDoS) attack on targeted IPs.

There is no evidence that information was taken from our staging database or that user information was compromised.

To ensure site integrity, we rebuilt our staging and development environments. Our infrastructure team took the opportunity during the rebuild to address some best practices and better security configuration options. The majority of these environments are now on Amazon Web Services. Particularly for our development environments, this gives us options for more easily scaling up and down our development needs, and gives us more separation between production and pre-production servers.

---
As always, we’d like to say thanks to all volunteers who are working with us and to the Drupal Association Supporters, who made it possible for us to work on these projects.

Follow us on Twitter for regular updates: @drupal_org, @drupal_infra.

Categories: Elsewhere

Addison Berry: Getting Started as a Board Director

Planet Drupal - Sat, 16/05/2015 - 20:06

A few months ago I ran for, and won, a seat on the Drupal Association (DA) Board as an At-Large Director. I'd like to share my journey with everyone, both to provide another look into the work that the board does, and to understand what it's like to be a new board member. I've now attended two board meetings (April and May) and taken part in my first board retreat, the weekend before DrupalCon LA. There's a lot going on, so I'll break this up into several posts.

On-boarding

Once I was elected, and the board confirmed the election results, Holly contacted me to let me know just before announcing it to the entire community. Shortly after that we scheduled a time to get on the phone, and I started getting access to a bunch of documents. I mean a whole bunch!

That first call with Holly was great for getting me oriented. She walked me through logistical things like board meetings, communication, necessary paperwork, and pointing me in the right direction with the documents to look at for various topics and back story. She also asked if I'd ever served on a board before, which I had not, and took time to explain what that means in terms of expectations for board members (things like publicly representing the board and identifying conflicts of interest). She also gave me a summary of the major topics from the last board retreat, which had occurred in January. She continued from there to summarize the big issues that the board was in the middle of discussing and working on, with an idea of what topics we were looking to tackle during the LA retreat in May. This was incredibly useful to prepare me for my first board meeting. I caught up on details by reading the minutes from the January retreat and this year's monthly board meetings. I didn't have many questions after my on-boarding and I felt prepared to dive into the conversations that were already ongoing.

One thing that I did right after that call was to set up times to chat one-on-one with the DA staff leadership team. I wanted to hear from each of them what they were working on, and understand what they needed to get from the board (and therefore me) to do their jobs better. It was a great introduction to the work that the staff takes on every day, and helped me clarify what I need to keep focused on to help them. It was also just awesome to get to know them a little more as people, which can be hard to do in our crazy, busy schedules.

Board Email

In addition to documents and phone calls, I was also added to the board email list. It is a pretty low traffic list, but I got to see a few conversations run through there prior to my first meeting. We had a thread to help clarify what info we needed to have for the meeting, and that board members should read reports ahead of time so we could get straight to things in the meeting itself. In addition to internal process things like that, this is also a place where members can raise issues they think we need to discuss or vote on in a meeting.

First Board Meeting

I was elected just a few weeks before the April board meeting, and I wasn't required to attend that meeting since I was still getting up and running, but I wanted to dive in. Board members are expected to attend all monthly board meetings, with 10 a year as the minimum. The meeting time is fixed, so one thing I knew before I even nominated myself was that I would need to make space for this 2-hour call every month on a Wednesday night from 9pm to 11pm (since I live in Denmark).

A few days before each board meeting we all receive a meeting packet which has the agenda, phone connection info, links to any presentations or documents we should review, and a list of the DA key performance indicators (KPIs). This board packet is publicly available as well, and you can check it out yourself and even listen in on the board meeting. I spent some time reading everything over and thinking about what I might want to bring up in the conversation during the meeting.

I didn't have a whole lot to say, as I was just trying to absorb as much as I could. We did, however, discuss releasing the election results, which I obviously had some thoughts about, having just come through the election process. This issue was a good example of how the DA works with community feedback. We have never released election data in the past, and we hadn't made that an expectation for candidates, so when people asked for the data, we couldn't just hand it out without considering a few things. I think we came up with a good solution to be able to release the data for this election, and we now have a plan in place to incorporate this in future elections. You can read more about this decision in Holly's post 2015 At-Large Election Data Released.

The first part of every board meeting is public (as mentioned above). After the public section, we drop off the phone and meet on another phone line with just the board, Holly, and needed staff. This is a place for us to discuss things that are still in progress, or to handle internal board matters. On this particular call we discussed things like reviewing the Q1 financials and giving updates on board members' efforts to help raise funds for D8 Accelerate.

In my next post I'll give a rundown of the board retreat and my board experience at DrupalCon LA. A lot of people have asked me how I feel about being on the board after the retreat, and I have to say that I'm very happy. I felt the level and direction of conversation was great. I'll talk more about what that was, and why I'm so pleased, especially compared to my previous DA experience from many years ago.

drupal association, drupal
Categories: Elsewhere

Ian Campbell: A vhosting git setup with gitolite and gitweb

Planet Debian - Sat, 16/05/2015 - 19:52

Since gitorious' shutdown, I decided it was time to start hosting my own git repositories for my own little projects (although the company which took over gitorious has a Free software offering, it seems that their hosted offering is based on the proprietary version; in any case, once bitten, twice shy and all that).

After a bit of investigation I settled on using gitolite and gitweb. I did consider (and even had a vague preference for) cgit, but it wasn't available in Wheezy (not even in backports, and the backport looked tricky) and I haven't upgraded my VPS yet. I may reconsider cgit once I switch to Jessie.

The only wrinkle was that my VPS is shared with a friend and I didn't want to completely take over the gitolite and gitweb namespaces in case he ever wanted to setup git.hisdomain.com, so I needed something which was at least somewhat compatible with vhosting. gitolite doesn't appear to support such things out of the box but I found an interesting/useful post from Julius Plenz which included sufficient inspiration that I thought I knew what to do.

After a bit of trial and error here is what I ended up with:

Install gitolite

The gitolite website has plenty of documentation on configuring gitolite. But since gitolite is in Debian, it's even more trivial than the quick install makes out.

I decided to use the newer gitolite3 package from wheezy-backports instead of the gitolite (v2) package from Wheezy. I already had backports enabled so this was just:

# apt-get install gitolite3/wheezy-backports

I accepted the defaults and gave it the public half of the ssh key which I had created to be used as the gitolite admin key.

By default this added a user gitolite3 with a home directory of /var/lib/gitolite3. Since the username forms part of the URL used to access the repositories, I didn't want it to include the 3, so I edited /etc/passwd, /etc/group, /etc/shadow and /etc/gshadow to say just gitolite, while leaving the home directory as gitolite3.

Now I could clone the gitolite-admin repo and begin to configure things.

Add my user

This was as simple as dropping the public half of my key into the gitolite-admin repo as keydir/ijc.pub, then git add, commit and push.

Setup vhosting

Between the gitolite docs and Julius' blog post I had a pretty good idea what I wanted to do here.

I wasn't too worried about making the vhost transparent from the developer's (ssh:// URL) point of view, just from the gitweb and git clone side. So I decided to adapt things to use a simple $VHOST/$REPO.git schema.

I created /var/lib/gitolite3/local/lib/Gitolite/Triggers/VHost.pm containing:

package Gitolite::Triggers::VHost;

use strict;
use warnings;

use File::Slurp qw(read_file write_file);

sub post_compile {
    my %vhost = ();
    my @projlist = read_file("$ENV{HOME}/projects.list");
    for my $proj (sort @projlist) {
        $proj =~ m,^([^/\.]*\.[^/]*)/(.*)$, or next;
        my ($host, $repo) = ($1, $2);
        $vhost{$host} //= [];
        push @{$vhost{$host}} => $repo;
    }
    for my $v (keys %vhost) {
        write_file("$ENV{HOME}/projects.$v.list",
                   { atomic => 1 },
                   join("\n", @{$vhost{$v}}));
    }
}

1;

I then edited /var/lib/gitolite3/.gitolite.rc and ensured it contained:

LOCAL_CODE => "$ENV{HOME}/local",
POST_COMPILE => [ 'VHost::post_compile', ],

(The first I had to uncomment, the second to add).

All this trigger does is take the global projects.list, in which gitolite will list any repo which is configured to be accessible via gitweb, and split it into several vhost specific lists.
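To make the trigger's behaviour concrete, the same split can be reproduced by hand with awk. This is a sketch only: the projects.list contents below are a made-up example, and the awk pattern only approximates the Perl regex above.

```shell
# Work in a scratch directory with a stand-in projects.list;
# vhost-qualified repos look like "host.tld/repo.git".
cd "$(mktemp -d)"
cat > projects.list <<'EOF'
hellion.org.uk/qcontrol.git
testing.git
EOF

# For each entry whose first path component contains a dot (i.e. looks
# like a vhost), write the repo part to projects.<vhost>.list, which
# is what the Perl trigger's regex and write_file calls accomplish.
awk -F/ 'NF == 2 && $1 ~ /\./ { print $2 > ("projects." $1 ".list") }' projects.list

cat projects.hellion.org.uk.list
# -> qcontrol.git
```

Entries without a vhost prefix (like testing.git here) simply end up in no per-vhost list.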

Create first repository

Now that the basics were in place I could create my first repository (for hosting qcontrol).

In the gitolite-admin repository I edited conf/gitolite.conf and added:

repo hellion.org.uk/qcontrol
    RW+ = ijc

After adding, committing and pushing I now have "/var/lib/gitolite3/projects.list" containing:

hellion.org.uk/qcontrol.git
testing.git

(the testing.git repository is configured by default) and /var/lib/gitolite3/projects.hellion.org.uk.list containing just:

qcontrol.git

For cloning the URL is:

gitolite@${VPSNAME}:hellion.org.uk/qcontrol.git

which is rather verbose (${VPSNAME} is quite long in my case too), so to simplify things I added to my .ssh/config:

Host gitolite
    Hostname ${VPSNAME}
    User gitolite
    IdentityFile ~/.ssh/id_rsa_gitolite

so I can instead use:

gitolite:hellion.org.uk/qcontrol.git

which is a bit less of a mouthful and almost readable.

Configure gitweb (http:// URL browsing)

Following the documentation's advice I edited /var/lib/gitolite3/.gitolite.rc to set:

UMASK => 0027,

and then:

$ chmod -R g+rX /var/lib/gitolite3/repositories/*

This arranges for members of the gitolite group to be able to read anything under /var/lib/gitolite3/repositories/*.

Then:

# adduser www-data gitolite

This adds the user www-data to the gitolite group so it can take advantage of those relaxed permissions. I'm not super happy about this, but since gitweb runs as www-data:www-data, this seems to be the recommended way of doing things. I'm consoling myself with the fact that I don't plan on hosting anything sensitive... I also arranged things such that members of the group can only list the contents of directories from the vhost directory down, by setting g=x rather than g=rx on the higher-level directories. Potentially sensitive files do not have group permissions at all.
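The effect of those permission tweaks can be sketched on a scratch tree (the live paths are under /var/lib/gitolite3; everything here is purely illustrative):

```shell
# Scratch directory standing in for /var/lib/gitolite3.
root=$(mktemp -d)    # mktemp creates this with mode 0700
mkdir -p "$root/repositories/hellion.org.uk/qcontrol.git"

# Group members may traverse (x) but not list (r) the upper levels...
chmod g=x "$root" "$root/repositories"
# ...while everything from the vhost directory down is group-readable:
chmod -R g+rX "$root/repositories/hellion.org.uk"

stat -c %A "$root"
# -> drwx--x--- (the group can enter, but not list, the top level)
```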

Next I created /etc/apache2/gitolite-gitweb.conf:

die unless $ENV{GIT_PROJECT_ROOT};
$ENV{GIT_PROJECT_ROOT} =~ m,^.*/([^/]+)$,;
our $gitolite_vhost = $1;
our $projectroot = $ENV{GIT_PROJECT_ROOT};
our $projects_list = "/var/lib/gitolite3/projects.${gitolite_vhost}.list";
our @git_base_url_list = ("http://git.${gitolite_vhost}");

This extracts the vhost name from ${GIT_PROJECT_ROOT} (it must be the last path element) and uses it to select the appropriate vhost-specific projects list.

Then I added a new vhost to my apache2 configuration:

<VirtualHost 212.110.190.137:80 [2001:41c8:1:628a::89]:80>
    ServerName git.hellion.org.uk
    SetEnv GIT_PROJECT_ROOT /var/lib/gitolite3/repositories/hellion.org.uk
    SetEnv GITWEB_CONFIG /etc/apache2/gitolite-gitweb.conf
    Alias /static /usr/share/gitweb/static
    ScriptAlias / /usr/share/gitweb/gitweb.cgi/
</VirtualHost>

This configures git.hellion.org.uk (don't forget to update DNS too) and sets the appropriate environment variables to find the custom gitolite-gitweb.conf and the project root.

Next I edited /var/lib/gitolite3/.gitolite.rc again to set:

GIT_CONFIG_KEYS => 'gitweb\.(owner|description|category)',

Now I can edit the repo configuration to be:

repo hellion.org.uk/qcontrol
    owner = Ian Campbell
    desc = qcontrol
    RW+ = ijc
    R = gitweb

That R permission for the gitweb pseudo-user causes the repo to be listed in the global projects.list and the trigger which we've added causes it to be listed in projects.hellion.org.uk.list, which is where our custom gitolite-gitweb.conf will look.

Setting GIT_CONFIG_KEYS allows those options (owner and desc are syntactic sugar for two of them) to be set here and propagated to the actual repo.

Configure git-http-backend (http:// URL cloning)

After all that this was pretty simple. I just added this to my vhost before the ScriptAlias / /usr/share/gitweb/gitweb.cgi/ line:

ScriptAliasMatch \
        "(?x)^/(.*/(HEAD | \
                info/refs | \
                objects/(info/[^/]+ | \
                         [0-9a-f]{2}/[0-9a-f]{38} | \
                         pack/pack-[0-9a-f]{40}\.(pack|idx)) | \
                git-(upload|receive)-pack))$" \
        /usr/lib/git-core/git-http-backend/$1

This (which I stole straight from the git-http-backend(1) manpage) causes anything which git-http-backend should deal with to be sent there, and everything else to be sent to gitweb.

Having done that access is enabled by editing the repo configuration one last time to be:

repo hellion.org.uk/qcontrol
    owner = Ian Campbell
    desc = qcontrol
    RW+ = ijc
    R = gitweb daemon

Adding R permissions for daemon causes gitolite to drop a stamp file in the repository which tells git-http-backend that it should export it.
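For what it's worth, the stamp file in question is git's standard git-daemon-export-ok marker, which git-http-backend checks before serving a repository (unless GIT_HTTP_EXPORT_ALL is set). A sketch of the mechanism on a scratch repo rather than the live one:

```shell
# Create a scratch bare repository and mark it exported the way
# gitolite's "R = ... daemon" rule does:
repo=$(mktemp -d)/qcontrol.git
git init --bare -q "$repo"
touch "$repo/git-daemon-export-ok"

ls "$repo/git-daemon-export-ok"
```

Removing the daemon permission again makes gitolite delete the stamp, at which point git-http-backend refuses to export the repository.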

Configure git daemon (git:// URL cloning)

I actually didn't bother with this: git-http-backend supports the smart HTTP mode, which should be as efficient as the git protocol, and given that, I couldn't see any reason to run another network-facing daemon on my VPS.

FWIW it looks like vhosting could have been achieved by using the --interpolated-path option.

Conclusion

There are quite a few moving parts, but they all seem to fit together quite nicely. In the end, apart from adding www-data to the gitolite group, I'm pretty happy with how things ended up.

Categories: Elsewhere

Another Drop in the Drupal Sea: DrupalCon LA Friday Recap

Planet Drupal - Sat, 16/05/2015 - 19:06

From my vantage point the sprint day was extremely well attended. I spent my day working on a patch I had submitted to the Flag module and working on OG Forum and OG Forum D7.

We had the traditional live commit in the afternoon.

There wasn't any announcement as to whether any more critical bugs were squashed for core.

How many of you participated in the sprints? When did you head home? Are you participating on Saturday?


Categories: Elsewhere

Holger Levsen: 20150516-lts-march-and-april

Planet Debian - Sat, 16/05/2015 - 18:56
My LTS March and April

In March and April 2015 I sadly didn't get much LTS work done, for a variety of reasons. Most of these reasons make me happy, while at the same time I'm sad I had to reduce my LTS involvement, and actually I even failed to use those few hours which were assigned to me. So I'll keep this blog post short too, as time is my most precious resource atm.

In March I only sponsored the axis upload and wrote DLA-169-1. Besides that, I spent some hours implementing JSON output for the security-tracker, which was more difficult than anticipated, because a.) different people had different (and at first unspoken) assumptions about what output they wanted, and b.) since the security-tracker's database schema has grown over years, getting the data out in a logically structured fashion ain't as easy as one would imagine...

In April I sponsored the openldap upload and wrote DLA-203-1 and then prepared debian-security-support 2015.04.04~~deb6u1 for squeeze-lts and triaged some of d-s-s's bugs. Adding support for oldoldstable (and thus keeping support for squeeze-lts) to the security-tracker was my joyful contribution for the very joyful Jessie release day.

So in total I only spent 7.5 (paid) hours on LTS in these two months, despite the fact that I should have spent 10. The only thing I can say in my defense is that I've spent more time on LTS (supporting both users and contributors, on the list as well as on IRC), but this time ain't billable. Which I think is right, but it still eats into my "LTS time", and so sometimes I wish I could more easily ignore people and just concentrate on technical fixes...

Categories: Elsewhere

Craig Small: Debian, WordPress and Multi-site

Planet Debian - Sat, 16/05/2015 - 10:07

For quite some time, the Debian version of WordPress has had a configuration tweak that made it possible to run multiple websites on the same server. This came from a while ago when multi-site wasn’t available. While a useful feature, it does make the initial setup of WordPress for simple sites more complicated.

I’m looking at changing the Debian package slightly so that for single-site use it Just Works. I have also looked into the way WordPress handles the content, especially themes and plugins, to see if there is a way of updating them through the website itself. This probably won’t suit everyone, but I think it’s a better default.

The idea is to set up the Debian packages something like this by default; if you want fancier stuff, it’s all still there, just not set up. The package isn’t configured this way at the moment, and the current default is a little confusing, which I hope to change.

Multisite

The first step was to get my pair of websites into one. So first it was backup time, and then the removal of my config-websitename.php files in /etc/wordpress. I created a single /etc/wordpress/config-default.php file that used a new database. This initial setup worked OK and I had the primary site going reasonably quickly.

The second site was a little trickier. The problem is that multisite does things like foo.example.com and bar.example.com, while I wanted example.com and somethingelse.com. There is a plugin, wordpress-mu-domain-mapping, that almost sorta-kinda works. While it let me make the second site with a different name, it didn’t like aliases, especially if the alias was the first site.

Some evil SQL fixed that nicely:

UPDATE wp_domain_mapping SET blog_id=1 WHERE id=2;

So now I had:

  • enc.com.au as my primary site
  • rnms.org as a second site
  • dropbear.xyz as an alias for my primary site
Files and Permissions

We really have three separate sets of files in WordPress. These files come from three different sources and are updated in three different ways, each on a different release cycle.

The first is the WordPress code itself, which is shipped in the Debian package. All of this code lives in /usr/share/wordpress and only changes if you update the Debian package, or you fiddle around with it. It needs to be readable by the webserver but not writable. The config files in /etc/wordpress are in this lot too.

Second, we have the user-generated data: things like the pictures that you add to the blog. As they are uploaded through the webserver, the directory needs to be writable by it. These files are located in /var/lib/wordpress/wp-content/uploads.

Third are the plugins and themes. These can either be unzipped and placed into a directory, or loaded directly through the webserver. I used to do it the first way but am now trying the second. These files are located in /var/lib/wordpress/wp-content.

Ever tried to update your plugins and hit the FTP prompt? That happens because the wp-content directory is not writable. I adjusted the permissions, and now when a plugin wants to update, I click yes and it magically happens!

You will have to reference the /var/lib/wordpress/wp-content subdirectory in two places:

  • In your /etc/wordpress/config-default.php: the WP_CONTENT_DIR definition
  • In apache or htaccess: either a symlink out of /usr/share/wordpress with FollowSymLinks turned on, or an apache Alias with access permitted.
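For the symlink variant in the second bullet, something like the following (scratch paths stand in for /usr/share/wordpress and /var/lib/wordpress; the real vhost also needs Options FollowSymLinks):

```shell
# Stand-ins for the packaged tree and the writable content tree:
share=$(mktemp -d)   # plays the role of /usr/share/wordpress
var=$(mktemp -d)     # plays the role of /var/lib/wordpress
mkdir -p "$var/wp-content/uploads"

# Point the packaged location's wp-content at the writable copy, so
# Apache serves one path while uploads and plugin updates land under
# the /var/lib side:
ln -s "$var/wp-content" "$share/wp-content"

readlink "$share/wp-content"
```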
What broke

Images did, in a strange way. My media library is empty, but my images are still there. Something in the export and re-import did not work. For me it's a minor inconvenience, and due to moving from one system to another, but it is still there.

Categories: Elsewhere
