Took a while to get here, but Propellor 0.4.0 can deploy DNS servers and I just had it deploy mine. Including generating DNS zone files.
Configuration is dead simple, as far as DNS goes:

```haskell
& Dns.secondary hosts "joeyh.name"
& Dns.primary hosts "example.com"
    ( Dns.mkSOA "ns1.example.com" 100
        [ NS (AbsDomain "ns1.example.com")
        , NS (AbsDomain "ns2.example.com")
        ]
    )
```
The awesome thing is that propellor fills in all the other information in the zone file by looking at the properties of the hosts it knows about.

```haskell
, host "blue.example.com"
    & ipv4 "192.168.1.1"
    & ipv6 "fe80::26fd:52ff:feea:2294"
    & alias "example.com"
    & alias "www.example.com"
    & alias "example.museum"
    & Docker.docked hosts "webserver"
        `requires` backedup "/var/www"
```
When it sees this host, Propellor adds its IP addresses to the example.com DNS zone file, for both its main hostname ("blue.example.com"), and also its relevant aliases. (The .museum alias would go into a different zone file.)
Multiple hosts can define the same alias, and then you automatically get round-robin DNS.
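As a sketch of the idea (this is illustrative Python, not Propellor's actual Haskell code, and the second host and its address are made up): each host that declares an alias contributes its own A record to the generated zone, which is what produces the round-robin behaviour.

```python
# Illustrative sketch only: hosts and addresses are made up, and this
# is not Propellor's actual (Haskell) implementation.
hosts = [
    {"name": "blue.example.com", "ipv4": "192.168.1.1",
     "aliases": ["www.example.com"]},
    {"name": "green.example.com", "ipv4": "192.168.1.2",
     "aliases": ["www.example.com"]},
]

def zone_records(hosts):
    """Emit one A record per hostname, plus one per alias per host."""
    records = []
    for h in hosts:
        records.append((h["name"] + ".", "A", h["ipv4"]))
        for alias in h["aliases"]:
            # Two hosts claiming "www.example.com" yield two A records
            # for the same name, which resolvers serve round-robin.
            records.append((alias + ".", "A", h["ipv4"]))
    return records

for name, rtype, value in zone_records(hosts):
    print(f"{name}\tIN {rtype}\t{value}")
```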
The web server part of the blue.example.com config can be cut and pasted to another host in order to move its web server there, including updating the DNS. That's really all there is to it: just cut, paste, and commit!
I'm quite happy with how that worked out. And curious if Puppet etc have anything similar.
One tricky part of this was how to ensure that the serial number automatically updates when changes are made. The way this is handled is that Propellor starts with a base serial number (100 in the example above) and adds to it the number of commits in its git repository. The zone file is only updated when something in it besides the serial number needs to change.
The result is nice small serial numbers that don't risk overflowing the (so 90's) 32 bit limit, and that remain consistent even if the configuration has Propellor setting up multiple independent master DNS servers for the same domain.
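The serial-number scheme described above is simple arithmetic; here is a minimal sketch (illustrative Python, since Propellor itself is Haskell, and the commit count is hard-coded for illustration):

```python
# Sketch of the serial-number scheme: a fixed base plus the repository's
# commit count. Not Propellor code; the commit count is hard-coded here.
def zone_serial(base, commit_count):
    serial = base + commit_count
    # DNS zone serials are 32-bit unsigned integers, so check the limit.
    assert serial < 2**32, "serial overflowed the 32-bit limit"
    return serial

# With the base of 100 from the example and, say, 5000 commits:
print(zone_serial(100, 5000))
```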
Another recent feature in Propellor is that it can use Obnam to back up a directory. With the awesome feature that if the backed up directory is empty/missing, Propellor will automatically restore it from the backup.
Here's how the backedup property used in the example above might be implemented:

```haskell
backedup :: FilePath -> Property
backedup dir = Obnam.backup dir daily
    [ "--repository=sftp://rsync.example.com/~/webserver.obnam" ]
    Obnam.OnlyClient
    `requires` Ssh.keyImported SshRsa "root"
    `requires` Ssh.knownHost hosts "rsync.example.com" "root"
    `requires` Gpg.keyImported "1B169BE1" "root"
```
Notice that the Ssh.knownHost makes root trust the ssh host key belonging to rsync.example.com. So Propellor needs to be told what that host key is, like so:

```haskell
, host "rsync.example.com"
    & ipv4 "192.168.1.4"
    & sshPubKey "ssh-rsa blahblahblah"
```
Which of course ties back into the DNS and gets this hostname set in it. But also, the ssh public key is available for this host and visible to the DNS zone file generator, and that could also be set in the DNS, in an SSHFP record. I haven't gotten around to implementing that, but hope at some point to make Propellor support DNSSEC, and then this will all combine even more nicely.
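For reference, an SSHFP record (RFC 4255) is just a fingerprint of a host's public key published in DNS. A rough sketch of how one could be derived from the sshPubKey property follows; this is a hypothetical helper, not Propellor code, and the key string in the example is a meaningless placeholder.

```python
import base64
import hashlib

# Hypothetical helper (not Propellor code): derive an SSHFP record
# (RFC 4255 / RFC 6594) from an "ssh-rsa ..." public key line.
def sshfp_record(hostname, pubkey_line):
    algorithms = {"ssh-rsa": 1, "ssh-dss": 2,
                  "ecdsa-sha2-nistp256": 3, "ssh-ed25519": 4}
    keytype, b64blob = pubkey_line.split()[:2]
    blob = base64.b64decode(b64blob)
    # Fingerprint type 2 = SHA-256 over the raw key blob (RFC 6594).
    fingerprint = hashlib.sha256(blob).hexdigest()
    return f"{hostname}. IN SSHFP {algorithms[keytype]} 2 {fingerprint}"

# Placeholder key material, just to show the record shape:
print(sshfp_record("rsync.example.com", "ssh-rsa ZHVtbXk="))
```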
By the way, Propellor is now up to 3 thousand lines of code (not including the Utility library). In 20 days, as a 10% time side project.
In 2007 I wrote a blog post about swap space. The main point of that article was to debunk the claim that Linux needs a swap space twice as large as main memory (in summary, such advice is based on BSD Unix systems, has never applied to Linux, and most storage devices aren’t fast enough for large swap). That post was picked up by Barrapunto (Spanish Slashdot) and became one of the most popular posts I’ve written.
In the past 7 years things have changed. Back then 2G of RAM was still a reasonable amount and 4G was a lot for a desktop system or laptop. Now there are even phones with 3G of RAM, 4G is about the minimum for any new desktop or laptop, and desktop/laptop systems with 16G aren’t that uncommon. Another significant development is the use of SSDs which dramatically improve speed for some operations (mainly seeks).
As SATA SSDs for desktop use start at about $110, I think it’s safe to assume that everyone who wants a fast desktop system has one. As a major limiting factor in swap use is the seek performance of the storage, the use of SSDs should allow greater swap use. My main desktop system has 4G of RAM (it’s an older Intel 64bit system and doesn’t support more) and has 4G of swap space on an Intel SSD. My work flow involves having dozens of Chromium tabs open at the same time; performance usually starts to drop when I get to about 3.5G of swap in use.
While SSDs generally have excellent random IO performance, their contiguous IO performance often isn’t much better than hard drives. My Intel SSDSC2CT12 300i 128G can do over 5000 random seeks per second but for sustained contiguous filesystem IO can only do 225M/s for writes and 274M/s for reads. The contiguous IO performance is less than twice as good as a cheap 3TB SATA disk. It also seems that the performance of SSDs isn’t as consistent as that of hard drives: a hard drive that delivers a certain level of performance can generally do so 24*7, but an SSD will sometimes reduce performance to move blocks around (the erase block size is usually a lot larger than the filesystem block size).
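To make the comparison concrete, here is the arithmetic. The SSD figures come from the text; the hard-drive figures (roughly 150M/s sustained reads and about 100 seeks per second for a cheap 3TB SATA disk) are my own assumptions:

```python
# SSD figures are from the text; the hard-drive figures are rough
# assumptions (~150M/s sustained reads, ~100 seeks/s).
ssd_read_mb, hdd_read_mb = 274, 150
ssd_seeks, hdd_seeks = 5000, 100

print(f"sequential read advantage: {ssd_read_mb / hdd_read_mb:.1f}x")
print(f"random seek advantage: {ssd_seeks / hdd_seeks:.0f}x")
```

Under those assumptions the SSD is not even 2x faster for contiguous IO, but around 50x faster for random access, which is why it helps swap (seek-bound) far more than bulk transfers.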
It’s obvious that SSDs allow significantly better swap performance and therefore make it viable to run a system with more swap in use but that doesn’t allow unlimited swap. Even when using programs like Chromium (which seems to allocate huge amounts of RAM that aren’t used much) it doesn’t seem viable to have swap be much bigger than 4G on a system with 4G of RAM. Now I could buy another SSD and use two swap spaces for double the overall throughput (which would still be cheaper than buying a PC that supports 8G of RAM), but that still wouldn’t solve all problems.
One issue I have been having on occasion is BTRFS failing to allocate kernel memory when managing snapshots. I’m not sure if this would be solved by adding more RAM as it could be an issue of RAM fragmentation – I won’t file a bug report about this until some of the other BTRFS bugs are fixed. Another problem I have had is when running Minecraft the driver for my ATI video card fails to allocate contiguous kernel memory; this is one that almost certainly wouldn’t be solved by just adding more swap – but might be solved if I tweaked the kernel to be more aggressive about swapping out data.
In 2007 when using hard drives for swap I found that the maximum space that could be used with reasonable performance for typical desktop operations was something less than 2G. Now with an SSD the limit for usable swap seems to be something like 4G on a system with 4G of RAM. On a system with only 2G of RAM that might allow the system to be usable with swap twice as large as RAM, but with the amounts of RAM in modern PCs it seems that even an SSD doesn’t allow using a swap space larger than RAM for typical use unless it’s being used for hibernation.

Conclusion
It seems that nothing has significantly changed in the last 7 years. We have more RAM, faster storage, and applications that are more memory hungry. The end result is that swap still isn’t very usable for anything other than hibernation if it’s larger than RAM.
It would be nice if application developers could stop increasing the use of RAM. Currently it seems that the RAM requirements for Linux desktop use are about 3 years behind the RAM requirements for Windows. This is convenient as a PC is fully depreciated according to the tax office after 3 years. This makes it easy to get 3 year old PCs cheaply (or sometimes for free as rubbish) which work really well for Linux. But it would be nice if we could be 4 or 5 years behind Windows in terms of hardware requirements to reduce the hardware requirements for Linux users even further.
-  http://etbe.coker.com.au/2007/09/28/swap-space/
-  http://barrapunto.com/articles/07/09/28/0947220.shtml
Early this month at a LUV meeting I gave a talk with only my mobile phone to store notes. I used Google Keep to write the notes as it’s one of the easiest ways of writing a note on a PC and quickly transferring it to a phone – if I keep doing this I will find some suitable free software for this task. Owncloud seems promising, but at the moment I’m more concerned with people issues than software.
Over the years I’ve experimented with different ways of presenting lectures. I’m now working with the theory that presenting the same data twice (by speaking and text on a projector) distracts the audience and decreases learning.

Editing and Viewing Notes
Google Keep is adequate for maintaining notes; it’s based on notes that are a list of items (like a shopping list), which is fine for lecture notes. It probably has lots of other functionality but I don’t care much about that. Keep is really fast at updating notes: I can commit a change on my laptop and have it visible on my phone in a few seconds over 3G.
Most of the lectures that I’ve given have involved notes on a laptop. My first laptop was a Thinkpad 385XD with a 12.1″ display and all my subsequent laptops have had a bigger screen. When a laptop with a 12″ or larger screen is on a lectern I can see the notes at a glance without having to lean forward when 15 or fewer lines of text are displayed on the screen. 15 lines of text is about the maximum that can be displayed on a slide for the audience to read, and the width of a computer display or projector allows a reasonable quantity of text per line.
When I run Keep on my Galaxy Note 2 it displays about 20 rather short lines of text in a “portrait” orientation (5 points for a lecture) and 11 slightly longer lines in a “landscape” orientation (4 points). In both cases the amount of text displayed on a screen is less than that with a laptop while the font is a lot smaller. My aim is to use free software for everything, so when I replace Keep with Owncloud (or something similar) I will probably have some options for changing the font size. But that means having less than 5 points displayed on screen at a time and thus a change in the way I present my talks (I generally change the order of points based on how well the audience seem to get the concepts so seeing multiple points on screen at the same time is a benefit).
The Samsung Galaxy Note 2 has a 5.5″ display which is one of the largest displays available in a phone. The Sony Xperia Z Ultra is one of the few larger phones with a 6.44″ display – that’s a large phone but still not nearly large enough to have more than a few points on screen with a font readable by someone with average vision while it rests on a lectern.
The most obvious solution to the problem of text size is to use a tablet. Modern 10″ tablets have resolutions ranging from 1920*1080 to 2560*1600 and should be more readable than the Thinkpad I used in 1998 which had a 12″ 800*600 display. Another possibility that I’m considering is using an old phone, a Samsung Galaxy S weighs 118 to 155 grams and is easier to hold up than a Galaxy Note 2 which weighs 180g. While 60g doesn’t seem like much difference if I’m going to hold a phone in front of me for most of an hour the smaller and lighter phone will be easier and maybe less distracting for the audience.

Distributing URLs
When I give a talk I often want to share the addresses of relevant web sites with the audience. When I give a talk with the traditional style lecture notes I just put the URLs on the final page (sometimes using tinyurl.com) for people to copy during question time. When I use a phone I have to find another way.
I did a test with QR code recognition and found that a code that takes up most of the width of the screen of my Galaxy Note 2 can be recognised by a Galaxy S at a distance of 50cm. If I ran the same software on a 10″ tablet then it would probably be readable at a distance of a meter, if I had the QR code take up the entire screen on a tablet it might be readable at 1.5m away, so it doesn’t seem plausible to hold up a tablet and allow even the first few rows of the audience to decode a QR code. Even if newer phones have better photographic capabilities than the Galaxy S that I had available for testing there are still lots of people using old phones who I want to support. I think that if QR codes are to be used they have to be usable by at least the first three rows of the audience for a small audience of maybe 50 people as that would allow everyone who’s interested to quickly get in range and scan the code at the end.
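A rough way to sanity-check these distances: a common rule of thumb (my assumption, not something measured in the post) is that a QR code can be scanned at up to about 10 times its displayed width. Plugging in approximate screen widths gives numbers close to those observed:

```python
# Rule of thumb (assumed, not measured): maximum scan distance is about
# 10x the displayed width of the code. Screen widths are approximate.
SCAN_RATIO = 10

for device, width_cm in [("Galaxy Note 2, full-width code", 7.0),
                         ('10" tablet, full-screen code', 13.5)]:
    distance_m = width_cm * SCAN_RATIO / 100
    print(f"{device}: readable out to roughly {distance_m:.1f} m")
```

That puts a phone-sized code at well under a metre and even a full-screen tablet code at under 1.5m, consistent with the conclusion that holding up a tablet can't reach more than the first row or two.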
Chris Samuel has a photo (taken at the same meeting) showing how a QR code from a phone could be distributed to a room. But that won’t work for all rooms.
One option is to just have the QR code on my phone and allow audience members to scan it after the lecture. As most members of the audience won’t want the URLs it should be possible for the interested people to queue up to scan the QR code(s).
Another possibility I’m considering is to use a temporary post on my documents blog (which isn’t syndicated) for URLs. The WordPress client for Android works reasonably well so I could edit the URL list at any time. That would work reasonably well for talks that have lots of URLs – which is quite rare for me.
A final option is to use Twitter: at the end of a talk I could just tweet the URLs with suitable descriptions. A good portion of the tweets that I have written are URLs for web sites that I find interesting, so this isn’t a change. This is probably the easiest option, but with the usual caveat of using a proprietary service as an interim measure until I get a free software alternative working.

Any suggestions?
Please comment if you have any ideas about ways of addressing these issues.
Also please let me know if anyone is working on a distributed Twitter replacement. Please note that anything which doesn’t support followers on multiple servers and re-tweets and tweeting to users on other servers isn’t useful in this regard.
The new version of OpenStack is out, and I have just finished uploading it all into Debian Sid. I uploaded a total of 38 packages yesterday (which was exhausting!); most, if not all, were only moving from Experimental to Sid with tiny updates, and this represents the achievement of 6 months of packaging work. The new feature list is impressive, and I would like to highlight some of it:
- New Ironic bare metal service.
- New Designate DNS as a Service project.
- Trove (DB as a Service) graduated from incubation and should work well now.
- TripleO (OpenStack On OpenStack) is now fully in Debian, together with Tuskar and Tuskar-UI.
- OpenStack now has VXLAN support through the new version of OVS and kernel >= 3.13. This solves the scalability issues with GRE tunnels.
For the moment, I haven’t packaged Sahara (i.e. Hadoop as a Service), but it might come later as one of our customers might require it.
There are a lot fewer unit test issues in the packages I uploaded to Sid: all the SQLAlchemy issues have been dealt with. I wasn’t confident with the Havana release that Sid / Testing would be a good environment for OpenStack, but this time with Icehouse, I think it should be much better. Please test this brand new release and report issues on the BTS. As always, the packages are also available as Wheezy backports through the usual channels (see the official install guide).
Debian 7.5 will include an update to the Linux kernel, based on Linux 3.2.57. Package version 3.2.57-2 is currently available in the wheezy-proposed-updates suite. I would appreciate any testing people can do to find regressions in the next few days.
In addition to bug fixes, this version updates the e1000e and igb drivers. The drivers are now based on the versions found in Linux 3.13, which support several newer chips (i210, i211, i217, i218, i354). Please consider testing this new kernel if you have an Intel gigabit Ethernet controller, even if it was already supported in Linux 3.2.
I gave a talk this year at PyCon 2014 about one of my favorite subjects: Hy. Many of my regular readers will no doubt have explored Hy's thriving GitHub org, played with try-hy, or even installed it locally with pip. I was lucky enough to attend PyCon on behalf of Sunlight, with a solid contingent of my colleagues. We put together a writeup on the Sunlight blog if anyone is interested in our favorite talks.
Tons of really amazing questions, and such an amazingly warm reception from so many of my peers throughout this year's PyCon. Thank you so much to everyone that attended the talk. As always, you should Fork Hy on GitHub, follow @hylang on the twitters, and send in any bugs you find!
I was out, seeing something that wasn’t there yet when I was at school (the “web” was not ubiquitous, back then), and decided to have a look:

pageok
Ugh. Oh well, PocketIE doesn’t provide a “View Source” thingy, so I asked Natureshadow (who got the same result on his Android, and apparently had no “View Source” either, so he used cURL to see it). We saw (here, re-enacted using ftp(1)):

```
tg@blau:~ $ ftp -Vo - http://www.draitschbrunnen.de/
<!-- pageok -->
<!-- managed by puppet -->
<html>
<pre>pageok</pre>
</html>
```
This is the final straw… after puppet managed to trash a sudoers(5) at work (I warned people to not introduce it) now it breaks websites. ☺
(Of course, tools are useful, but at best to the skill of their users. Merely dumbly copying recipes from “the ’net” without any understanding just makes debugging harder for those of us with skills.)
ObQuestion: Does anyone have ⓐ a transcript (into UTF-8) and ⓑ a translation for the other half of the OpenBSD 2.8 poster? (I get asked this regularly.)
ObTip: I can install a few hundred Debian VMs at work manually before the effort needed to automate d-i would amortise. So I decided not to. Coworkers are shocked. I keep flexibility (can decide to have machines differ), and the boss accepts my explanations. Think before doing automation just for the sake of automation!
This screencast introduces you to another of the most important OSF for Drupal connectors: the OSF FieldStorage module. What this module does is create a new FieldStorage type for Drupal7. It enables Drupal7 to save the values of its Content Type fields into a storage system other than the default one (i.e. MySQL in most cases).
Because of the way that the Field system has been designed in Drupal7, it is possible to save the values of different fields that compose the same Content Type bundle into different field storage system. For example, if your Content Type bundle is composed of 10 fields, then 4 of them could be saved into MySQL and 6 of them into OSF.
The main purpose of the OSF FieldStorage module is to save Drupal local Content Type information into OSF. What that means is that all your local Drupal7 content becomes accessible, manageable and manipulable using the 27 Open Semantic Framework (OSF) web services endpoints. Your local Drupal content can then be shared with other Drupal instances that use OSF for Drupal to connect to the same OSF instance and seamlessly republish/re-purpose that local content from the other Drupal portal.
Here is the documentation of the architecture of this connector module.
This is the power of the OSF FieldStorage connector module. It supports the following Drupal features:
- Full FieldStorage API
- Entities caching
- 29 field widgets
- Export feature in 6 formats
In this screencast, you will be introduced to Drupal7's Field system. Then you will see how the OSF FieldStorage module creates a new FieldStorage type for Drupal7 and how it can be used. Then you will see how to configure the OSF FieldStorage module: how to create new Content Type fields that use this osf_fieldstorage type, how to map these fields to RDF, how to use one of the 29 supported field widgets, etc.
Finally, you will see how you can synchronize existing Content Type pages (that were created before OSF for Drupal was installed on your Drupal instance) into an OSF instance.
If you have worked with the Field UI in Drupal 7 you will know that you are able to prevent fields from being displayed when viewing entities (e.g. content, users etc). It was fairly simple, you would go to the Manage Display tab of an entity and move the field to the ‘Hidden’ region as shown in the screenshot below.
So you could hide a field's output from being displayed when viewing that entity. But what about when editing that entity? There was no way in the Drupal 7 Field UI to hide a field on a form. You would have to write some form of hook_form_alter() in a custom module and manually force the field to be hidden, as shown in this example.
One spring day in 2012 brought us a new project. One of our regular customers recommended our company to a very ambitious and engaged Moscow businessman. After reading the specifications we were at a loss… after 5 years of active web development! The HiConversion project demanded the following:
For your own pleasure:

```
openssl s_client -connect www.walton.com.tw:443 -showcerts
```
or just run

```
echo '
-----BEGIN CERTIFICATE-----
MIIDNjCCAp+gAwIBAgIBATANBgkqhkiG9w0BAQQFADCBqTELMAkGA1UEBhMCWFkx
FTATBgNVBAgTDFNuYWtlIERlc2VydDETMBEGA1UEBxMKU25ha2UgVG93bjEXMBUG
A1UEChMOU25ha2UgT2lsLCBMdGQxHjAcBgNVBAsTFUNlcnRpZmljYXRlIEF1dGhv
cml0eTEVMBMGA1UEAxMMU25ha2UgT2lsIENBMR4wHAYJKoZIhvcNAQkBFg9jYUBz
bmFrZW9pbC5kb20wHhcNOTkxMDIxMTgyMTUxWhcNMDExMDIwMTgyMTUxWjCBpzEL
MAkGA1UEBhMCWFkxFTATBgNVBAgTDFNuYWtlIERlc2VydDETMBEGA1UEBxMKU25h
a2UgVG93bjEXMBUGA1UEChMOU25ha2UgT2lsLCBMdGQxFzAVBgNVBAsTDldlYnNl
cnZlciBUZWFtMRkwFwYDVQQDExB3d3cuc25ha2VvaWwuZG9tMR8wHQYJKoZIhvcN
AQkBFhB3d3dAc25ha2VvaWwuZG9tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKB
gQC554Ro+VH0dJONqljPBW+C72MDNGNy9eXnzejXrczsHs3Pc92Vaat6CpIEEGue
yG29xagb1o7Gj2KRgpVYcmdx6tHd2JkFW5BcFVfWXL42PV4rf9ziYon8jWsbK2aE
+L6hCtcbxdbHOGZdSIWZJwc/1Vs70S/7ImW+Zds8YEFiAwIDAQABo24wbDAbBgNV
HREEFDASgRB3d3dAc25ha2VvaWwuZG9tMDoGCWCGSAGG+EIBDQQtFittb2Rfc3Ns
IGdlbmVyYXRlZCBjdXN0b20gc2VydmVyIGNlcnRpZmljYXRlMBEGCWCGSAGG+EIB
AQQEAwIGQDANBgkqhkiG9w0BAQQFAAOBgQB6MRsYGTXUR53/nTkRDQlBdgCcnhy3
hErfmPNl/Or5jWOmuufeIXqCvM6dK7kW/KBboui4pffIKUVafLUMdARVV6BpIGMI
5LmVFK3sgwuJ01v/90hCt4kTWoT8YHbBLtQh7PzWgJoBAY7MJmjSguYCRt91sU4K
s0dfWsdItkw4uQ==
-----END CERTIFICATE-----
' | openssl x509 -noout -text
```
At least they're secure against heartbleed.
Let's face it, the web has evolved. But what about content editors? Did they evolve with the web?
Most of them still like to paste everything they can find into a big WYSIWYG textarea. That's not a viable option when you want to build a responsive website. It gets even harder when you want to use cool technologies like responsive images. Wouldn't it be awesome if you had all those pictures in image fields, or maybe even better, in Scald? Behold: Paragraphs!
We developed Paragraphs to be a full blown replacement of the default body field. It's comparable with Inline Entity Form, except that you can use different types of bundles in the same field. It also comes with content editing features like having your paragraphs collapsed by default. More features are on the way!
Some examples of Paragraph Bundles that we have created in the past:
- Slideshow - A simple slideshow in your content:
- Create a slideshow Paragraph bundle.
- Add a multi-value image field.
- Use a slideshow formatter.
- Youtube embed - A Youtube embed between your text blocks:
- Create a Youtube Paragraph bundle.
- Add a simple embed field.
- Tweak the formatter settings to fit your needs.
- Customer Quote - Add a personal quote/review from your customer in your content:
- Add an author, text and optional email field.
- Theme it to your style, with bundles-specific templates.
The bundles above are use-case-specific; we often start with the following bundles:
- Text: a simple WYSIWYG textfield with the basic buttons enabled, like bold and italic.
- Text Left, Image Right: same as above, but with an image on the right of the text.
- Text Right, Image Left: same as above, but with the image on the left.
- Fullwidth image: an image that takes the full width of the content.
All paragraph bundles have their own display settings and view modes, just like nodes! Because of that, every paragraph item also has its own theme suggestion.
Because of the separation of content, it's great for a responsive site. For example: just add a wrapper around your slider paragraphs that hides the slider on mobile, or makes it smaller on tablets.
Note: Paragraphs is also useful if you want to build a Drupal site with Parallax scrolling. More on that later!
Last Saturday afternoon, we were very fortunate to have Atefeh Riazi, UN CITO and Assistant Secretary General (ASG), deliver a keynote presentation to the 500+ Drupalists in attendance.
Salem Avan delivered the introduction to the keynote, and spoke about "we the people" and our inherited collective responsibility to help advance the UN's goals of furthering peace and security, international development, and human rights.
Ms. Riazi then delivered a riveting keynote that was a call to action for the Drupal community to help use technology to better the world. She emphasized the importance of leveraging innovation, collaboration and partnerships in order to solve the global challenges we face, and to respond to this call to action in a coordinated manner through partnerships that bring all of our best resources to bear.
Her exciting keynote address was followed up with a stirring panel on Women & Technology Leadership that featured Ms. Riazi, Holly Ross (Executive Director of the Drupal Association), and Angie Byron (webchick). The panelists explored the pivotal importance of furthering female leadership in technology circles, and particularly in the Drupal community.
You can watch the full keynote here on UN WebTV.
Thanks again to the UN Office of Information Communications Technology (OICT) for their generous support of NYC Camp and the Drupal NYC Community. You can find out more about the UN OICT at their website, Facebook page and by following them on Twitter.
Unleashed Technologies developed an enterprise platform that easily scales to accommodate The Woodhouse Day Spa’s explosive growth, as they take the company from 30 to more than 200 franchises. The Woodhouse Day Spa can now instantly create franchise sites that are consistent in branding and content, yet managed and updated by the franchisee. All sites for The Woodhouse Day Spa are fully integrated into spa management systems to provide a seamless experience to visitors. The Drupal platform developed for The Woodhouse Day Spa brings usability and control to its franchisees in order to increase engagement and improve ROI.
The Woodhouse Day Spa website won the 2013 Blue Drop Award for Drupal Site of the Year.

Key modules/theme/distribution used: Drupal Services, JS, Field Permissions, Taxonomy Access Control, Ubercart, Features, Ultimate Cron, Views Bulk Operations (D8), Nodequeue
Preparing the materials for the monthly board meeting is a lot of work, but it's a great chance to reflect each month on the momentum of the Association and the community. Looking back, March 2014 was particularly exciting as the community and staff are pushing forward in several directions at once with considerable momentum. So let's get down to it and share some of the highlights:

Program Updates
Each month we review the metrics outlined in our 2014 Leadership Plan and share updates from the teams. We're pleased to say that most of our metrics are in the green (within 95% of goal). Particularly exciting is the news about DrupalCon Austin. Numbers looked very solid at the end of March (the end of our reporting period for this meeting), but we are also able to share that early-bird pricing ended just a few days after this dashboard closed and we beat our estimates, meaning that we are more than on-track to have a 4,000 person event this June - another biggest DrupalCon ever!
We are also really pleased with the momentum around the Drupal.org metrics. This is still our area of greatest concern - we have more red metrics here than anywhere else. However, March brought some tremendous gains that, if sustained, will move our metrics quickly towards green. In particular, we focused discussion on:
- Page Response Time: Our goal is 3.07 seconds. Our current average for the year is 3.93 seconds. Part of the reason that we're so far from goal is that we had some serious issues in January that pushed the numbers way up. Our hardware improvements (thanks to the DIWG and Rudy) have helped speed this up, and the upcoming CDN deployment will bring this number down even further, especially for individuals accessing the site from outside the US.
- Testbot performance: Goal is 70 minutes, but the actual average for the year is about 138 minutes. This actual is also very inflated by lots of issues we had in January that pushed the total testbot time much higher. Thanks to work done at Drupal Dev Days in Szeged by Jeremy Thorson and Ricardo Amaro, along with some changes to D8 core, the actual testbot run time average in March was just 47 minutes!
- Home Page Bounce Rate: This metric is one of the central motivations for the User Research that the DCWG has begun as part of a larger Drupal.org reinvention. We have also begun to put tools like Optimizely in place that will allow us to run tests and experiments based on our research, which should help us address bounce rate, time on site, and other engagement metrics for our various audiences. We likely won't see shifts here for some months, but we are definitely thinking about these metrics and working to put the foundation for a solution in place.
At the Association, we work hard to ensure that our actions are in line with the Drupal community values. This is, of course, particularly important when money is part of the equation. To that end, the Association has a Financial Policies document that is reviewed annually by the board Finance Committee and sets rules for transparently and openly making decisions for how Association money gets spent. Until now, one element that was lacking was a Procurement Policy to govern when we pay for a service (vs. work with a volunteer) and how and when we can take in-kind donations. Back in February, we looked for feedback from the community, and incorporated a lot of the suggestions into a final policy, which was approved by the board in the meeting.
I would like to add that this policy, though approved by the board, is just a starting place. There is so much nuance that we will encounter as we put the policy into practice. During our annual review of policies, we will have the opportunity to revisit and refine this language. In particular, we want to ensure that we are supporting and growing the volunteers who contribute to the project and not hiring contractors at the expense of the health of our community.

At-Large Elections/Terms
In the March Board meeting, we updated the At-Large term length and shifted the election cycle. The goal is to give our At-Large Directors a better board experience by giving them time to integrate into the board and really work on their agendas. With this change, the community will elect one At-Large Director each year to a two-year term. For this to work, we need to stagger the terms of our existing At-Large Directors, Morten DK and Matthew Saunders. Since Morten is currently serving his second one-year term, the board voted in this meeting to extend Matthew Saunders' term by one year. So, in our next election (in February 2015), we will elect a new At-Large Director to fill Morten's seat for two years, and Matthew will have one year left in his term.

First Quarter Financials and Annual Audit
In Executive Session, the board reviewed the financials for the first quarter of 2014 and received a presentation of the 2013 Audit report from the Association's auditor. All materials were previously reviewed by the Finance Committee (which meets monthly to review the most recent financial reports), and the Finance Committee recommended approving both the financials and the audit. The audit documents are now being produced as final versions (we presented draft documents to the board) and will be shared with the community at the June public board meeting at DrupalCon Austin. If you're ready to dive into some numbers before then, you can review the first quarter financials now:
Since the MiniDebConf, Jonas and I have been travelling in Spain and France, and finally staying in Belgium for a week, getting some work done. It's been harder than imagined to work during travel. I haven't exercised either, and have regained at least three of the four kilos I spent much time and effort getting rid of in the preceding year. I thrive in my home and find it hard to keep my own time and focus when I am deprived of my own space.
It was challenging to give a talk, "Why aren't more designers using Debian or working for Debian" — my first public talk. I've been working to recapture my points in writing, to make a stronger statement, but I seem to blur my own views with conflicting ones, and I'm losing momentum every day.
One of my reasons for speaking up was to do it even though I'm not a trained speaker and have "nothing" to contribute but my opinions from the angle of a user who happens to be a designer. Not claiming to be a superior designer, but one who would like to contribute if it were easier to figure out how. And since the community wants to encourage designers to contribute to the Debian project, I figured it would be a good idea to talk about how this has been challenging for me as a dedicated user, and completely out of the question for any other designer I know - or knew before the MiniDebConf. No research, no scientific proofs, just my view from my "dumb user" and designer's perspective.
I saw one single attendee rolling his eyes during my talk. I didn't care at the time, but I've since given that look more consideration than the people who approached me after the talk to say thank you for voicing their opinions and thoughts. I think that's absolutely astonishing, and at the same time it's just typically me. It makes me angry, first with myself for not speaking to this man's perception of things, then with myself for not just letting go of that image. I'm really glad that so many seemed to listen with curiosity and interest. What if one more - or half of the auditorium - had rolled their eyes? I don't like to feel that vulnerable.
The truth is, though, that I'm really not. I gave the talk against my fear of failure and public humiliation and I'm convinced that my thoughts and actions matter, just as anybody's does, if we dare to say what's on our minds and to take action. I believe it's in anybody's power to "make a difference" and even "change the world" - at least in a small way. I guess that's one of the underlying reasons to be a designer in the first place. That is quite a strong position to take.
I've created the wiki page http://wiki.debian.org/Design - knowing full well that design is a word with many meanings. Everything is design. Since the talk I've been in doubt about that page: about the project, my aim with it, what to do about it, how to move on with just a tiny baby step. And I realise that I'm simply afraid of disturbing someone's peace, making people angry, or having them roll their eyes at my fumbling attempts to figure out in public what can be done to make a thriving community of designers collaborating with coders to make better, more usable and attractive software in the free, wide world. I'm starting a design process, not presenting a perfect, finished solution.
Now, having put these thoughts into words, perhaps my mind will be somewhat appeased and let me move on with my intended tasks of cultivating that acclaimed space in the Debian information jungle into a friendly and welcoming place, with info that makes it easier to be a contributing designer in Debian.
Agile development processes can greatly help your Drupal client projects. Agile, in a nutshell, is a highly collaborative process that uses feedback to make constant adjustments to the project. People often equate Scrum with Agile, but that would be a mistake. You can have Scrum teams that never truly embrace Agile ideas, and you can have Agile teams that don't follow Scrum.
At the heart of any Agile team is the Agile Manifesto:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
That is, while there is value in the items on the right, we value the items on the left more.
Individuals and interactions over processes and tools
With Drupal projects, you have many choices of great tools and processes to use. Don't let these things become your focus. Focus on building great teams, including your clients. Tools and processes are important, but not as important as the interactions and shared understanding that develop within the team.
Working software over comprehensive documentation
Many Drupal projects consist of configuration and site-building tasks using existing modules. This raises the question: how much documentation do you need to write? You should produce "just enough" for the team to be clear, and no more. This can range from a full Software Requirements Specification (SRS) to a small collection of user stories. It really depends on the needs of the team.
Customer collaboration over contract negotiation
Don't forget that when we speak of the team in Agile development, we include the customer and everyone involved in the project. This attitude creates strong groups of developers, testers, project managers, and clients working together to create the best website possible. Creating a collaborative environment should be the priority.
Responding to change over following a plan
All of this collaboration creates increased visibility for everyone involved in the project. This gives new innovations and ideas the opportunity to emerge, as everyone develops a shared understanding of the full scope of the project. If you blindly follow the plan, you will not be able to capitalize on the new knowledge that emerges over the course of the project.
Drupal developers can greatly benefit from embracing Agile development ideas. Think about how to structure your projects around shared knowledge, learning, collaboration, and your clients.