Elsewhere

Junichi Uekawa: Opposite to strong typing.

Planet Debian - Thu, 15/01/2015 - 22:18
Opposite to strong typing. Maybe "weak typing" is too discriminatory; we should call it "typing challenged". Like: SQLite is a typing-challenged language.

Categories: Elsewhere

Daniel Pocock: Disk expansion

Planet Debian - Thu, 15/01/2015 - 21:29

A persistent problem that I encounter with hard disks is the capacity limit. If only hard disks could expand like the Tardis.

My current setup at home involves an HP Microserver. It has four drive bays carrying two SSDs (for home directories) and two Western Digital RE4 2TB drives for bulk data storage (photos, source tarballs and other things that don't change often). Each pair of drives is mirrored. I chose the RE4 because I use RAID1 and they offer good performance and error recovery control, which is useful in any RAID scenario.

When I put in the 2TB drives, I created a 1TB partition on each for Linux md RAID1 and another 1TB partition on each for BtrFs.

Later I added the SSDs and I chose BtrFs again as it had been working well for me.

Where to from here?

Since getting a 36 megapixel DSLR that produces 100MB raw images and 20MB JPEGs, I've been filling up that 2TB faster than I could have ever imagined.

I've also noticed that vendors are offering much bigger NAS and archive disks so I'm tempted to upgrade.

First I looked at the Seagate Archive 8TB drives, 2TB bigger than the nearest competition. Discussion on Reddit suggests they don't have Error Recovery Control (TLER), however, and that leaves me feeling they are not the right solution for me.

Then I had a look at the WD Red. They are slightly less performant than the RE4 drives I run now, but offer up to 6TB per drive and are a little cheaper. Apparently they have TLER, though, just like the RE4 and other enterprise drives.
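
For what it's worth, TLER/ERC support is something you can query directly with smartmontools before committing to a drive. A minimal sketch (assuming smartctl is installed and run as root; the device paths are examples, and the output format varies between models):

```python
#!/usr/bin/env python3
"""Quick check of SCT Error Recovery Control (TLER) support on a drive.

Sketch using smartmontools' `smartctl -l scterc`; it only looks for the
obvious "not supported" case and otherwise prints the reported values.
"""
import subprocess
import sys

def check_scterc(device: str) -> None:
    # Requires root privileges and the smartmontools package.
    result = subprocess.run(
        ["smartctl", "-l", "scterc", device],
        capture_output=True, text=True,
    )
    output = result.stdout
    if "not supported" in output.lower():
        print(f"{device}: no SCT ERC / TLER support reported")
    else:
        # Print the relevant lines for manual inspection.
        for line in output.splitlines():
            if "SCT Error Recovery Control" in line or "Read:" in line or "Write:" in line:
                print(f"{device}: {line.strip()}")

if __name__ == "__main__":
    for dev in sys.argv[1:] or ["/dev/sda"]:
        check_scterc(dev)
```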

Will 6 or 8TB create new problems?

This all leaves me scratching my head and wondering about a couple of things though:

  • Will I run into trouble with the firmware in my HP Microserver if I try to use such a big disk?
  • Should I run the whole thing with BtrFs and how well will it work at this scale?
  • Should I avoid the WD Red and stick with RE4 or similar drives from Seagate or elsewhere?

If anybody can share any feedback it would be really welcome.

Categories: Elsewhere

Acquia: Contributing to Drupal 8 - Drupal Global Sprint Weekend 2015

Planet Drupal - Thu, 15/01/2015 - 19:30

You can still make an important contribution to Drupal 8. Drupal Global Sprint 2015-New England takes place this Saturday, January 17, from 10 AM to 5 PM at Genuine in Boston. Acquia is co-sponsoring the event and we invite you to RSVP and jump into the community.

Categories: Elsewhere

vilepickle.com: Repeating blocks of template code in Drupal 8

Planet Drupal - Thu, 15/01/2015 - 19:06

I've had some trouble using Twig's include statements in Drupal 8 theming. I'm not sure if this is a bug since it's at Beta 4, but it's sort of annoying. As I did in Drupal 6, I keep my content areas in a separate include file and insert it into the area I need in page.html.twig. For example, if I have a 3 column layout, I change the Bootstrap classes from "col-md-12" to "col-md-9" and "col-md-3" (for a sidebar) if the sidebars have content in them. Inclu

read more
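
The column-switching logic described in the excerpt is straightforward to express outside of Twig. A minimal sketch of the same decision in Python (the helper name and return shape are purely illustrative, not part of any Drupal API):

```python
def column_classes(sidebar_has_content: bool) -> dict:
    """Pick Bootstrap grid classes for a 3-column page layout, mirroring
    the Twig logic described above (illustrative helper, not Drupal API)."""
    if sidebar_has_content:
        return {"content": "col-md-9", "sidebar": "col-md-3"}
    return {"content": "col-md-12", "sidebar": None}

print(column_classes(True))   # {'content': 'col-md-9', 'sidebar': 'col-md-3'}
print(column_classes(False))  # {'content': 'col-md-12', 'sidebar': None}
```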

Categories: Elsewhere

Michal Čihař: Weblate UI polishing

Planet Debian - Thu, 15/01/2015 - 18:00

After releasing Weblate 2.0 with its Bootstrap-based UI, there were still a lot of things to improve. Weblate 2.1 brought more consistency in using buttons with colors and icons. Weblate 2.2 will bring some improvements in other graphical elements.

One thing that had been in our issue tracker for quite a long time was providing our own renderer for the SVG status badge. So far Weblate has offered either a PNG badge or an external SVG rendered by shields.io. Relying on an external service was not good in the long term and also caused requests to a third-party server on many pages, which could be considered bad privacy-wise.

Since this week, Weblate can render SVG badges on its own, and they also match the current style used by other services (e.g. Travis CI).
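
Since Weblate is a Django application, rendering such a badge server-side boils down to producing a small SVG document. A rough sketch of the idea (not Weblate's actual implementation; the text-width estimate is deliberately crude):

```python
def render_badge(label: str, value: str, color: str = "#4c1") -> str:
    """Return a minimal flat-style SVG badge as a string.

    Hypothetical sketch only; a real renderer would use a template and
    proper text-width measurement rather than a per-character estimate.
    """
    # Rough width estimate: ~7px per character plus some padding.
    lw = 7 * len(label) + 10
    vw = 7 * len(value) + 10
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" width="{lw + vw}" height="20">'
        f'<rect width="{lw}" height="20" fill="#555"/>'
        f'<rect x="{lw}" width="{vw}" height="20" fill="{color}"/>'
        f'<g fill="#fff" font-family="DejaVu Sans,sans-serif" font-size="11" text-anchor="middle">'
        f'<text x="{lw / 2}" y="14">{label}</text>'
        f'<text x="{lw + vw / 2}" y="14">{value}</text>'
        f'</g></svg>'
    )

print(render_badge("translated", "80%"))
```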

One last thing that really did not fit into the new UI was the activity charts. In the past they were rendered as PNGs on the server side, but for upcoming releases we have switched to the Chartist JavaScript library and render them as SVG on the client side. This way we can style them nicely to fit the page, they scale properly, and they also reduce server load. You can see them in action on the Hosted Weblate server.

Filed under: English phpMyAdmin SUSE Weblate

Categories: Elsewhere

Noah Meyerhans: Spamassassin updates

Planet Debian - Thu, 15/01/2015 - 16:44

If you're running Spamassassin on Debian or Ubuntu, have you enabled automatic rule updates? If not, why not? If possible, you should enable this feature. It should be as simple as setting "CRON=1" in /etc/default/spamassassin. If you choose not to enable this feature, I'd really like to hear why. In particular, I'm thinking about changing the default behavior of the Spamassassin packages such that automatic rule updates are enabled, and I'd like to know if (and why) anybody opposes this.

Spamassassin hasn't been providing rules as part of the upstream package for some time. In Debian, we include a snapshot of the ruleset from an essentially arbitrary point in time in our packages. We do this so Spamassassin will work "out of the box" on Debian systems. People who install Spamassassin from source must download rules using Spamassassin's updates channel. The typical way to use this service is to have cron or something similar periodically check for rule changes. This allows the anti-spam community to quickly adapt to changes in spammer tactics, and for you to actually benefit from their work by taking advantage of their newer, presumably more accurate, rules. It also allows for quick reaction to issues such as the ones described in bugs 738872 and 774768.

If we do change the default, there are a couple of possible approaches we could take. The simplest would be to change the default value of the CRON variable in /etc/default/spamassassin. Perhaps a cleaner approach would be to provide a "spamassassin-autoupdates" package that would simply provide the cron job and a small wrapper program to perform the updates. The Spamassassin package would then specify a Recommends relationship with this package, thus providing the default enabled behavior while still offering a clear and simple mechanism to disable it.
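
For illustration, the wrapper such a package might ship could be as small as the sketch below (the reload command and the interpretation of sa-update's exit codes reflect common usage, but treat the details as assumptions):

```python
#!/usr/bin/env python3
"""Sketch of a rule-update wrapper a hypothetical spamassassin-autoupdates
package could ship: run sa-update and reload the daemon only when new
rules were actually installed (sa-update conventionally exits 0 in that
case, 1 when there is nothing new, and higher values on error)."""
import subprocess
import sys

def update_rules() -> int:
    rc = subprocess.run(["sa-update"]).returncode
    if rc == 0:
        # New rules were installed; reload spamd so they take effect.
        # The exact reload command depends on how the daemon is managed.
        subprocess.run(["service", "spamassassin", "reload"], check=False)
        return 0
    if rc == 1:
        return 0  # No updates available; nothing to do.
    print(f"sa-update failed with exit code {rc}", file=sys.stderr)
    return rc

if __name__ == "__main__":
    sys.exit(update_rules())
```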

Categories: Elsewhere

3C Web Services: Drupal Website Launch Checklist

Planet Drupal - Thu, 15/01/2015 - 16:00
A list of items to check and test for when launching a Drupal website.
Categories: Elsewhere

Lunar: 80%

Planet Debian - Thu, 15/01/2015 - 15:43

Unfortunately I could not go on stage at the 31st Chaos Communication Congress to present reproducible builds in Debian alongside Mike Perry from the Tor Project and Seth Schoen from the Electronic Frontier Foundation. I've tried to make up for it, though… and we have made amazing progress.

Wiki reorganization

What was a massive and frightening wiki page now looks much more welcoming.

Depending on what one is looking for, it should be much easier to find. There's now a high-level status overview given on the landing page, maintainers can learn how to make their packages reproducible, enthusiasts can more easily find what can help the project, and we have even started writing some history.

.buildinfo for all packages

New Year's Eve saw me hacking Perl to write dpkg-genbuildinfo. Similar to dpkg-genchanges, it's run by dpkg-buildpackage to produce .buildinfo control files. This is where the build environment and the hashes of source and binary packages are recorded. This script, integrated with dpkg, replaces the previous debhelper interim solution written by Niko Tyni.
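
To give an idea of what gets recorded, here is a sketch of how the per-file checksum entries of a .buildinfo could be computed (plain Python for illustration only; the real dpkg-genbuildinfo is written in Perl, and the file name below is hypothetical):

```python
import hashlib
import os

def checksum_entries(paths):
    """Yield (sha256, size, filename) tuples for built artifacts, the kind
    of data a .buildinfo file records for source and binary packages
    (sketch only, not dpkg-genbuildinfo's actual implementation)."""
    for path in paths:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        yield h.hexdigest(), os.path.getsize(path), os.path.basename(path)

# Hypothetical artifact path, for illustration only.
for digest, size, name in checksum_entries(["../hello_2.10-1_amd64.deb"]):
    print(f" {digest} {size} {name}")
```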

We used to fix mtimes in control.tar and data.tar using a specific addition to debhelper named dh_fixmtimes. To better support the ALWAYS_EXCLUDE environment variable, and for pragmatic reasons, we moved the process into dh_builddeb.
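
The mtime fix itself is easy to demonstrate: rewrite the archive with every member clamped to a single timestamp. A sketch using Python's tarfile module (the real code lives in debhelper and operates on the members of a .deb; the file names and timestamp are examples):

```python
import tarfile

def normalize_mtimes(src: str, dst: str, timestamp: int) -> None:
    """Rewrite a tar archive with every member's mtime set to a fixed
    timestamp, the basic idea behind dh_fixmtimes (sketch only; the real
    debhelper code works on the control.tar/data.tar members of a .deb)."""
    with tarfile.open(src, "r:*") as tin, tarfile.open(dst, "w:gz") as tout:
        for member in tin.getmembers():
            member.mtime = timestamp
            fileobj = tin.extractfile(member) if member.isfile() else None
            tout.addfile(member, fileobj)

# Example usage: clamp everything to 2015-01-01 00:00:00 UTC.
normalize_mtimes("data.tar.gz", "data.normalized.tar.gz", 1420070400)
```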

Both changes were quickly pushed to our continuous integration platform. Before, only packages using dh would create a .buildinfo and thus eventually be considered reproducible. With these modifications, many more packages had their chance… and this shows:

Yes, with our experimental toolchain we are now at more than eighty percent! That's more than 17200 source packages!

srebuild

Another big item on the todo list was crossed off by Johannes Schauer. srebuild is a wrapper around sbuild:

Given a .buildinfo file, it first finds a timestamp of Debian Sid from snapshot.debian.org which contains the requested packages in their exact versions. It then runs sbuild with the right architecture as given by the .buildinfo file and the right base system to upgrade from, as given by the version of the base-files package in the .buildinfo file. Using two hooks it will install the right package versions and verify that the installed packages are in the right versions before the build starts.
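
The snapshot.debian.org side of this amounts to pointing apt at an archive frozen at a given point in time. A minimal sketch of building such a sources line (srebuild itself derives the timestamp from the package versions listed in the .buildinfo; the date here is only an example):

```python
from datetime import datetime, timezone

def snapshot_sources_line(when: datetime, suite: str = "sid") -> str:
    """Build an apt sources.list line pointing at snapshot.debian.org for a
    given point in time, the kind of archive srebuild pins the build
    environment to (sketch only)."""
    stamp = when.astimezone(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    return f"deb http://snapshot.debian.org/archive/debian/{stamp}/ {suite} main"

# Example timestamp, for illustration only.
print(snapshot_sources_line(datetime(2015, 1, 15, tzinfo=timezone.utc)))
```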

Understanding problems

Over 1700 packages have now been reviewed to understand why build results could not be reproduced on our experimental platform. The variations between the two builds are currently limited to time and file ordering, but this has still uncovered many problems. There are still toolchain fixes to be made (more than 180 packages for the PHP registry) which can make many packages reproducible at once, but other issues, like C preprocessor macros, will require many individual changes.

debbindiff, the main tool used to understand differences, has gained support for .udeb, TrueType and OpenType fonts, PNG and PDF files. It's less likely to crash on problems with encoding or external tools. But most importantly for large packages, it has been made a lot faster, thanks to Reiner Herrmann and Helmut Grohne. Helmut has also been able to spot cross-compilation issues by using debbindiff!

Targeting our efforts

It gives warm fuzzy feelings to hit the 80% mark, but it would be a bit irrelevant if it did not concern packages that matter. Thankfully, Holger worked on producing statistics for more specific package sets. Mattia Rizzolo has also done great work to improve the scripts generating the various pages visible on reproducible.debian.net.

All essential and build-essential packages, except gcc and bash, are considered reproducible or have patches ready. After some lengthy builds, I also managed to come up with a patch to make linux build reproducibly.

Miscellaneous

After my initial attempt to modify r-base to remove a timestamp in R packages, Dirk Eddelbuettel discussed the issue with upstream and came up with a better patch. The latter has already been merged upstream!

Dirk's solution is to allow timestamps to be set using an external environment variable. This is also how I modified FontForge to make it possible to reproduce fonts.
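
The pattern is simple: if the packaging exports a timestamp, embed that; otherwise fall back to the current time. A sketch of the idea (the variable name is illustrative; this convention was later standardised as SOURCE_DATE_EPOCH):

```python
import os
import time

def build_timestamp() -> int:
    """Return the timestamp to embed in generated files: honour an external
    environment variable when set, otherwise fall back to the current time.
    The variable name here is illustrative; the actual name differs per tool."""
    value = os.environ.get("BUILD_TIMESTAMP")
    if value is not None:
        return int(value)
    return int(time.time())

# With BUILD_TIMESTAMP exported by the packaging, two builds of the same
# source embed the same timestamp and the output can stay bit-for-bit identical.
print(build_timestamp())
```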

Identifiers generated by xsltproc have also been an issue. After reviewing my initial patch, Andrew Ayer came up with a much nicer solution. Its potential performance implications need to be evaluated before submission, though.

Chris West has been working on packages built with Maven amongst other things.

PDFs generated by Ghostscript, another painful source of trouble, are being worked on by Peter De Wachter.

Holger got X.509 certificates signed by the CA cartel for jenkins.debian.net and reproducible.debian.net. No more scary security messages now. Let's hope next year we will be able to get certificates through Let's Encrypt!

Let's make a difference together

As you can imagine, with all that has happened in the past weeks, the #debian-reproducible IRC channel has been a cool place to hang out. It's very energizing to get together to share contributions, exchange tips and discuss the hardest points. Mandatory quote:

* h01ger is very happy to see again and again how this is a nice learning circle...! i've learned a whole lot here too... in just 3 months... and its going on...!

Reproducible builds are not going to change anything for most of our users. They simply don't care how they get software on their computer. But they do care about getting the right software without having to worry about it. That's our responsibility, as developers. Enabling users to trust their software is important, and a major contribution that we, as Debian, can make to the wider free software movement. Once Jessie is released, we should make a collective effort to make reproducible builds a highlight of our next release.

Categories: Elsewhere

Modules Unraveled: 129 Automation Tools with Solomon Gifford - Modules Unraveled Podcast

Planet Drupal - Thu, 15/01/2015 - 14:44
Published: Thu, 01/15/15
Download this episode

Use Cases
  • Okay, I can already see huge benefits of utilizing these tools. But, I’d love to get your opinion on what the benefits are for Developers/Site-builders/Themers?
    • There are two big benefits as I see them, and another not so apparent. First, a lot of these tasks are repetitive. And things like copying a database may take a bit of time. Or merging code. Or running tests. Etc. Anything that you can automate means time you can spend on other things. Second, not everyone is as experienced - or maybe they don’t have the permissions - to execute all the tasks. You don’t want mistakes, you don’t want to give everyone permissions. So you figure out the best way to do it and then you automate it. The last reason is not as obvious. I think a lot of times we hack things together in our development environments to get it working - but then we may run into issues later on. We don’t spend the extra time because it’s temporary. By spending a little extra time getting it right, we have created a reusable pattern that we can use on our next project. By encapsulating our best practices, we not only have a quicker setup, but we have a better one too.
  • Perfect. So, save time by automating tasks like copying a database. Prevent mistakes by limiting who has permissions to execute tasks, and automating them so that even those who do have permission can’t introduce user error. And by setting up a process that uses best practices, creating new environments is faster, and better than if I had to try to remember all of the steps myself.
    • Exactly. And I’ll add, Ansible can be used for installation, configuration, and orchestration. The examples we’ve talked about so far are orchestration - moving databases, code, etc. It can also be used to install Apache, MySQL, MongoDB, etc. Any set of system commands that are repeatable.
  • Oh... So if you’ve got a server that you have full access to, you could actually wipe and rebuild the entire server AND Drupal site? We’re not limited to just configuring the Drupal site?
    • Exactly. And throw Vagrant into the mix and now you can do that on your local machines using virtual machines. Imagine spinning up a brand new VM and within a few clicks you have your entire development environment with a fresh Drupal install all ready for you on a VM.
  • Now, I do wonder who this is more geared toward. Developers, Site-builders or Themers. I understand that each of them can use these, and would probably help them all with their daily tasks, but who do you see benefiting the most from these tools. Or, do you have examples of people in each category that you know of that are using them?
    • I think all three benefit from automation. For example, in a previous life where I didn’t use Ansible, my themer was insanely good at theming, but when it came to running commands remotely on a server to check out his work, he was a fish out of water. I wish I had written an Ansible playbook so that he could check his code out onto staging. Or even better, if I had set up Jenkins to run an Ansible playbook to automatically check out his work each time he committed. He wouldn’t have had to wait on me, sometimes a few days if I was not around. That said, he would not have been able to create the Ansible playbook.
    • As for who is using Ansible, well, Twitter does - they use it to roll out updates across their hundreds of machines. And of course BlackMesh, the hosting company I work for, also does. The product Cascade I mentioned uses Ansible and Jenkins to do a lot of the things we talked about today, only we set it up so you don’t have to.
Episode Links: Solomon on drupal.org, Solomon on Twitter, Solomon Gifford GitHub, BlackMesh GitHub, Jenkins, Ansible
Tags: Automation, Workflow, Jenkins, Ansible, planet-drupal
Categories: Elsewhere

Code Karate: Drupal Auto Assign Role Module

Planet Drupal - Thu, 15/01/2015 - 14:11
Episode Number: 190

The Drupal 7 Auto Assign Role module gives you a lot of flexibility in deciding what roles users receive on your Drupal 7 website. If you have ever needed to allow a user to select their own role, or if you have ever needed to automatically assign a role to every user on your Drupal site, this is the module for you.

Tags: Drupal, Users, Drupal 7, Drupal Planet
Categories: Elsewhere

Drupal Association News: A Few Things to Unwrap on Drupal's Birthday

Planet Drupal - Thu, 15/01/2015 - 14:07

Happy birthday to Drupal! On this day in 2001, Drupal 1.0 was released.

This milestone is the perfect time to talk about some of the findings of our recent community survey. The survey findings offer a window into what community members are thinking as the project matures and evolves. It also gives us at the Drupal Association a way to better understand what we're doing right and what we could be doing better. There aren't many surprises (and that's a good thing), but all of the findings are educational. Here are three results we thought were particularly interesting and insightful.

Drupal 8 Will Be Broadly Adopted


In the survey, about 80% of respondents said they either plan to start using Drupal 8 as soon as it is released, or plan to adopt it at some point after release. Another 8% said they did not have specific plans to adopt, but do plan to evaluate Drupal 8.

Drupal.org Remains an Important and Heavily-Used Tool


The overwhelming majority of respondents said they use Drupal.org more than once per week. Most also say they are satisfied or somewhat satisfied with the site. While that result is encouraging, it does not change the important mission to improve the experience of the site and make it a better tool for everyone from first time visitors to those who spend the majority of their working time on the site.

We Need to Create Broader Awareness of Drupal Association Programs


Community members who took the survey have great awareness of DrupalCons. Awareness of the work we are doing on Drupal.org seems to be steadily growing. But awareness is relatively low for Community Grants and our Supporter Programs that provide a way for organizations to give back to the Project. That awareness is clearly something we need to improve to promote transparency.

If you would like to read the full results, you can access them here (2.8M PDF). Thanks for reading, and thanks for being a part of this amazing community.

Categories: Elsewhere

Tim Retout: Docker London Meetup - January 2015

Planet Debian - Thu, 15/01/2015 - 08:45

Last week, I visited London for the January Docker meetup, which was the first time I'd attended this group.

It was a talk-oriented format, with around 200 attendees packed into Shoreditch Village Hall; free pizza and beer were provided thanks to the sponsors, which was awesome (and makes logistics easier when you're travelling there from work).

There were three talks.

First, Andrew Martin from British Gas spoke about how they use Docker for testing and continuous deployment of their Node.js microservices - buzzword bingo! But it's helpful to see how companies approach these things.

Second, Johan Euphrosine from Google gave a short demo of Google Cloud Platform for running Docker containers (mostly around Container Engine, but also briefly App Engine). This was relevant to my interests, but I'd already seen this sort of talk online.

Third, Dan Williams presented his holiday photos featuring a journey on a container ship, which wins points from me for liberal interpretation of the meetup topic, and was genuinely very entertaining/interesting - I just regret having to leave to catch a train halfway through.

In summary, this was worth attending, but as someone just getting started with containers I'd love some sort of smaller meetings with opportunities for interaction/activity. There's such a variety of people/use cases for Docker that I'm not sure how much everyone had in common with each other; it would be interesting to find out.

Categories: Elsewhere
