Elsewhere

Antoine Beaupré: bup vs attic silly benchmark

Planet Debian - Tue, 18/11/2014 - 06:39

after seeing attic introduced in a discussion about bup, i figured i could give it a try. it answered two of my biggest concerns with bup:

  • backup removal
  • encryption

and it seemed to appear magically out of nowhere, basically doing everything i need, with an inline manual on top of it.

disclaimer

Note: this is not a real benchmark! i would probably need to port bup and attic to liw's seivot software to report on this properly (and that would be amazing and really interesting, but it's late now). even worse, this was done on a production server with other stuff going on, so take the results with a grain of salt.

procedure and results

Here's what I did. I set up backups of my ridiculously huge ~/src directory on the external hard drive where I usually make my backups. I ran a clean backup with attic, then redid it, then I ran a similar backup with bup, then redid it. Here are the results:

anarcat@marcos:~$ sudo apt-get install attic # this installed 0.13 on debian jessie amd64
[...]
anarcat@marcos:~$ attic init /mnt/attic-test
Initializing repository at "/media/anarcat/calyx/attic-test"
Encryption NOT enabled. Use the "--encryption=passphrase|keyfile" to enable encryption.
anarcat@marcos:~$ time attic create --stats /mnt/attic-test::src ~/src/
Initializing cache...
------------------------------------------------------------------------------
Archive name: src
Archive fingerprint: 7bdcea8a101dc233d7c122e3f69e67e5b03dbb62596d0b70f5b0759d446d9ed0
Start time: Tue Nov 18 00:42:52 2014
End time: Tue Nov 18 00:54:00 2014
Duration: 11 minutes 8.26 seconds
Number of files: 283910

               Original size    Compressed size    Deduplicated size
This archive:        6.74 GB            4.27 GB              2.99 GB
All archives:        6.74 GB            4.27 GB              2.99 GB
------------------------------------------------------------------------------
311.60user 68.28system 11:08.49elapsed 56%CPU (0avgtext+0avgdata 122824maxresident)k
15279400inputs+6788816outputs (0major+3258848minor)pagefaults 0swaps

anarcat@marcos:~$ time attic create --stats /mnt/attic-test::src-2014-11-18 ~/src/
------------------------------------------------------------------------------
Archive name: src-2014-11-18
Archive fingerprint: be840f1a49b1deb76aea1cb667d812511943cfb7fee67f0dddc57368bd61c4bf
Start time: Tue Nov 18 00:05:57 2014
End time: Tue Nov 18 00:06:35 2014
Duration: 38.15 seconds
Number of files: 283910

               Original size    Compressed size    Deduplicated size
This archive:        6.74 GB            4.27 GB            116.63 kB
All archives:       13.47 GB            8.54 GB              3.00 GB
------------------------------------------------------------------------------
30.60user 4.66system 0:38.38elapsed 91%CPU (0avgtext+0avgdata 104688maxresident)k
18264inputs+258696outputs (0major+36892minor)pagefaults 0swaps

anarcat@marcos:~$ sudo apt-get install bup # this installed bup 0.25
anarcat@marcos:~$ free && sync && echo 3 | sudo tee /proc/sys/vm/drop_caches && free # flush caches
anarcat@marcos:~$ export BUP_DIR=/mnt/bup-test
anarcat@marcos:~$ bup init
Dépôt Git vide initialisé dans /mnt/bup-test/
anarcat@marcos:~$ time bup index ~/src
Indexing: 345249, done.
56.57user 14.37system 1:45.29elapsed 67%CPU (0avgtext+0avgdata 85236maxresident)k
699920inputs+104624outputs (4major+25970minor)pagefaults 0swaps
anarcat@marcos:~$ time bup save -n src ~/src
Reading index: 345249, done.
bloom: creating from 1 file (200000 objects).
bloom: adding 1 file (200000 objects).
bloom: creating from 3 files (600000 objects).
Saving: 100.00% (6749592/6749592k, 345249/345249 files), done.
bloom: adding 1 file (126005 objects).
383.08user 61.37system 10:52.68elapsed 68%CPU (0avgtext+0avgdata 194256maxresident)k
14638104inputs+5944384outputs (50major+299868minor)pagefaults 0swaps
anarcat@marcos:attic$ time bup index ~/src
Indexing: 345249, done.
56.13user 13.08system 1:38.65elapsed 70%CPU (0avgtext+0avgdata 133848maxresident)k
806144inputs+104824outputs (137major+38463minor)pagefaults 0swaps
anarcat@marcos:attic$ time bup save -n src2 ~/src
Reading index: 1, done.
Saving: 100.00% (0/0k, 1/1 files), done.
bloom: adding 1 file (1 object).
0.22user 0.05system 0:00.66elapsed 42%CPU (0avgtext+0avgdata 17088maxresident)k
10088inputs+88outputs (39major+15194minor)pagefaults 0swaps

Disk usage is comparable:

anarcat@marcos:attic$ du -sc /mnt/*attic*
2943532K /mnt/attic-test
2969544K /mnt/bup-test

People are encouraged to try and reproduce those results, which should be fairly trivial.
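For convenience, here is the procedure condensed into a rough script. This is a sketch under assumptions: attic and bup are already installed, /mnt/backup-test is a scratch directory on the target drive, and the cache flushing between runs is left out.

#!/bin/sh
# sketch of the benchmark procedure: two attic runs, then two bup runs
SRC=~/src
attic init /mnt/backup-test/attic
time attic create --stats /mnt/backup-test/attic::run1 "$SRC"
time attic create --stats /mnt/backup-test/attic::run2 "$SRC"
export BUP_DIR=/mnt/backup-test/bup
bup init
time bup index "$SRC" && time bup save -n run1 "$SRC"
time bup index "$SRC" && time bup save -n run2 "$SRC"
du -sc /mnt/backup-test/*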

Observations

Here are interesting things I noted while working with both tools:

  • attic is Python 3: i could compile it, with dependencies, by doing apt-get build-dep attic and running setup.py - i could also install it with pip if i needed to (but i didn't)
  • bup is Python 2, and has a scary makefile
  • both have an init command that basically does almost nothing and takes little enough time that i'm ignoring it in the benchmarks
  • attic backups are a single command, bup requires me to know that i first want to index and then save, which is a little confusing
  • bup has nice progress information, especially during save (because when it loaded the index, it knew how much was remaining) - just because of that, bup "feels" faster
  • bup, however, lets me know about its deep internals (like now i know it uses a bloom filter) which is probably barely understandable by most people
  • on the contrary, attic gives me useful information about the size of my backups, including the size of the current increment
  • it is not possible to get that information from bup, even after the fact - you need to run du before and after the backup
  • attic modifies the files' access times when backing up, while bup is more careful (there's a pull request to fix this in attic, which is how i found out about this)
  • both backup systems seem to produce roughly the same data size from the same input
Summary

attic and bup are about equally fast. bup took 30 seconds less than attic to save the files, but that's not counting the 1m45s it spent indexing them, so on total run time bup was actually slower. attic is also (almost) two times faster on the second run. but this could be within the margin of error of this very quick experiment, so my provisional verdict for now is that they are about as fast.

bup may be more robust (for example it doesn't modify the atimes), but this has not been extensively tested and is based more on my familiarity with the "conservatism" of the bup team than on actual tests.

considering all the features promised by attic, it makes for a really serious contender to the already amazing bup.

Next steps

To do this properly, we would need to:

  • include other software (thinking of Zbackup, Burp, ddar, obnam, rdiff-backup and duplicity)
  • bench attic with the noatime patch
  • bench dev attic vs dev bup
  • bench data removal
  • bench encryption
  • test data recovery
  • run multiple backup runs, on different datasets, on a cleaner environment
  • ideally, extend seivot to do all of that
Categories: Elsewhere

NEWMEDIA: Drush Make: Evaluating the Benefits and Pain Points of Each Approach

Planet Drupal - Tue, 18/11/2014 - 04:54
Drush make is a popular solution for Drupal developers wishing to represent an entire application codebase in a single make file (or collection of make files), but does it always make sense to use? And is it a one-size-fits-all solution? This article reviews several advantages and disadvantages of the more common approaches used within the Drupal community.

A Brief History of the Makefile

Technically, my very first computer was a Tandy 1000. In reality, it was a glorified game console used by myself, my mom, and my brother to play Tetris off a 3.5" floppy disk that I had copied from a friend at school. My first real computer came 5 years later—a Packard Bell Pentium 60Mhz with a CD-ROM drive. I was in heaven. It wasn't long until I ditched Windows 95 in order to take my first tiptoe into geekdom by installing the first Linux distribution I came into contact with: Caldera.

Back in those days, Linux was a labor of love and really took DIY to the extreme. Most of the time packages were not readily available, which meant you had to compile software yourself. Enter the makefile. This single document represented all the dependencies, parameters, and commands necessary to configure, compile, and install an application across a diverse set of Linux distributions. It was a thing of beauty... when it worked. And when it didn't, it was a time-consuming nightmare that resulted in a substantial amount of cursing. Needless to say, I had a love/hate relationship with makefiles.

Drush Make

Drush make files, in comparison to compiling software applications, are a much more straightforward solution. In reality they are nothing more than a compiled shopping list of the specific Drupal modules, patches, and 3rd party libraries necessary to describe a fully functional Drupal application. The key difference between the drush make file and the resulting build is that the make file is a single-file representation of what will ultimately generate the full file and folder structure of the Drupal application.
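As an illustrative sketch (the file name, versions, and module list are invented for the example), a minimal Drush make file looks something like this:

; example.make - a hypothetical minimal make file
core = 7.x
api = 2
projects[drupal][version] = 7.30
projects[views][version] = 3.9
projects[ctools][subdir] = contrib

Running drush make example.make ./build would then download Drupal core and the listed modules into a complete, deployable file and folder structure.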

In comparison to Linux makefiles, drush make files are much simpler and as a result do not suffer from the compilation nightmares that I used to experience. However, there are also some commonalities: a single file representation and the time necessary to generate versus using a predefined package (or binary).

Comparison to Ruby Gems and Chef

The makefile mindset is not unique to the Linux OS or Drupal. We see the same pattern in the Ruby language and (by extension) configuration management applications like Chef. Ruby users can leverage gemfiles, which provide a similar shopping list style of gem dependencies. Chef leverages the concept of a Berksfile to specify cookbook dependencies. In each case, we have a list of items that the application then uses to generate a desired state. This single file representation is very efficient because it doesn't require one to lug a large set of files and folders around. Rather, they are represented and then generated as needed.

The pattern we see again and again is simple. We can either have the full package (binaries, full Drupal application, gems, cookbooks, etc) or a representation of that state in a single file. This leads us to...

The Key Question

To make a determination of whether or not Drush makefiles are the appropriate method to use in the full software development life cycle (i.e. development, launch, ongoing support, and then deprecation), we need to ask ourselves which method makes sense from at least 3 perspectives: development, operations, and product owners. The answer is not necessarily black and white because there are multiple considerations, multiple stakeholders, and competing optimizations. To that end, I have tried to highlight as many pros and cons of the drush makefile approach as possible, and I leave it to you to decide what is appropriate for you and your use case.

Drush Make Advantages

Simplest Possible Representation

A single 1 kilobyte file containing 10 lines could fully represent a very simple drush site install containing a few contrib modules. By comparison, Drupal 7.30 is 3.5 megabytes (zipped) and consists of several hundred folders and several thousand files. If we extend the makefile as a component of an installation profile then the only code that is necessary within the profile is that which is specific and unique to that project. In short, a makefile can allow a repo to contain the simplest possible representation of a Drupal application.

Easier Code Reviews of Diffs

Piggybacking off of the previous item, comparing a diff between makefiles is trivial even when dozens of modules have been updated. A module upgrade might simply show views going from version 7.x-3.8 to 7.x-3.9. By contrast, viewing a commit difference between the actual modules would require sifting through a much longer list of changes that don't really mean much beyond the version change.
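For instance, that Views upgrade is a one-line change in the make file (a sketch, using the hypothetical example.make from above):

-projects[views][version] = 3.8
+projects[views][version] = 3.9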

Security and Hacked Code

With a specific module version against a public repo, there is no question what is being committed. It's an exact copy of what is contained in the public repo. By contrast, it's much less obvious when upgrading the module itself whether or not the person committing it added any additional tweaks or modifications to it. The person reviewing the code would have to run an MD5 checksum against the code to ensure it was unmodified (or re-run drush make against the makefile).
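One way to perform that check (a sketch; the module paths depend on your make file's subdir settings) is to rebuild from the make file into a scratch directory and compare:

drush make example.make /tmp/rebuild
diff -r --brief /tmp/rebuild/sites/all/modules/views sites/all/modules/views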

Inheritance and Re-usability

Drush make file inheritance can be used as a way to standardize particular modules, themes, libraries, and patches across many projects. One can then layer on additional and more specific sub-make files to get more granular. This can be particularly powerful in situations where one is managing dozens to hundreds of sites that are derived from a common base installation profile.

Maintain Patches

Let's face it: while Drupal core should never be forked, there are situations where legitimate patches sit within RTBC purgatory for years (I'm looking at you, Secure Pages). Managing and tracking those patches while staying up to date with Drupal core and contrib upgrades can be challenging and easy to forget. With a drush make file, however, applying those patches becomes a trivial secondary step performed in conjunction with the appropriate upgrades.
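In a make file, carrying such a patch is a single declarative line per patch (the version and URL below are placeholders):

projects[securepages][version] = 1.0
projects[securepages][patch][] = https://www.drupal.org/files/issues/example-rtbc-fix.patch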

Drush Make Disadvantages

Deployment Overhead

If we use the simplest possible representation (i.e. a makefile or a makefile within an installation profile) for a deployment, then every deployment will require a complete rebuild of the entire Drupal application even in the case of a single line change. Compare this with a more straightforward strategy of a git pull followed by an rsync, which would dramatically cut down the time to deploy as well as the server resources required. Building Drupal from scratch every time can add up, particularly on a shared server with multiple Drupal instances.

In addition to the build process, there is also the time involved with priming drush's cached copy of all the modules and libraries specified by the makefile.

External Dependencies

Using a makefile as the simplest possible representation also introduces the challenge of external dependencies during a production deployment. For example, an updated makefile may require new modules from drupal.org, new libraries from github, and patches hosted somewhere else. If any one of those dependencies fails, the deployment cannot proceed. Contrast this with the situation where a full clone of the git repo already lives on the server being deployed to; there the risk of failure is much smaller.

It's important to note that this consideration isn't as much of a concern during the development phase because a short outage doesn't directly impact a production level service. However, once the application is in production and uptime is a more important factor, these external dependencies may no longer be appropriate.

Git Bisect, History, and Merges

Git bisect is a powerful strategy for efficiently stepping through a long commit log to determine when a bug or regression was introduced. If the full codebase is present, then one can quickly jump through the commit log without doing anything else (except perhaps a feature revert, a registry rebuild, and/or a cache clear). With the pure makefile approach, a rebuild is necessary after each change to address changes within the dependencies. While there are certainly work-arounds, this isn't as clean a process.
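One such work-around is to let git bisect drive the rebuild for you (a sketch; the tag, make file, and test script names are placeholders):

git bisect start HEAD v1.0
git bisect run sh -c 'drush make project.make /tmp/build && sh /path/to/test-for-regression.sh'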

Comparing code across branches is also made more difficult. If one is trying to see whether a function changed between versions of a module, one cannot simply compare two commits within the git history or compare two branches.

Version Control Status

One quick and easy way to check for changes within a git repository is through the use of commands like "git status". The use of submodules can make this slightly less effective because it requires one to traverse to each submodule to verify if any changes are present. And a build from a makefile is even more difficult because any components added into the profile will show up as false positives and any components that are above the profile directory will not be tracked in version control at all. This can make it very difficult to verify that there are no changes across the entire code base.

Hosting Options

Unfortunately, not all hosting providers will play nice with a drush make representation of a Drupal site. Shared hosting solutions (such as entry-level GoDaddy plans) will not allow one to install drush on the server. And until very recently, Pantheon and Acquia did not provide support for drush make files out of the box.

Complexity

One additional consideration is with respect to how the site might be handed off to the client if they decide to manage the site themselves. If they are not sophisticated enough to use a makefile approach, then they might have a difficult time deploying or maintaining it. Worse, if they need to be provided a full zip of the site while receiving additional code changes from a repo using just a make file manifest, then providing code changes might be more difficult than necessary.

Discussion

There is no question that drush make files are powerful, but they do introduce some potential limitations for certain use cases, particularly as one enters the launch and post-launch phase of a project. Once the site is live and out of heavy development, it may make more sense to switch from a manifest to a full code repository.

A Hybrid Solution

There is a hybrid solution that can retain some of the best features of both approaches. If a drush make file is included within the root of the Drupal application, then it can still be used to control and enforce all the resulting code that is used to generate a full Drupal codebase (modules, libraries, themes, and patches). This is exactly the approach that Drupal 8 uses with its composer.json file and the resulting components that are stored within core/vendor.

A Drupal 7 example of this approach is the RhymesSite project. One can completely rebuild the codebase from the makefile while retaining all the advantages of a complete codebase when ready to deploy to production.

Closing Remarks

I hope you found this helpful. If you have specific experiences or insights that could make this analysis better and/or more accurate, we’d love to hear from you!

Categories: Elsewhere

Stauffer: The Most Important Thing I learned while writing my first Facebook app

Planet Drupal - Tue, 18/11/2014 - 02:51

When I learned I was going to be writing my first Facebook web app, I was pretty excited. Much like the feeling of writing a Hello World! in a new programming language, I knew I was going to expand my skillset. I really enjoy new opportunities to learn and solve problems. In fact, I feel this is a fundamental attribute of being a developer. We code because we want to solve problems. So without further ado, let me share the most important thing I learned while writing my first Facebook app. Hopefully my experience will give someone else a head start in their own future project.

You need to ask Facebook for permission to ask permission from the User.

When you integrate Facebook login into your website, you can include the default permissions, which cover the public profile and email. After the user logs in, they can approve giving your site this information. If you need more than that (e.g., you want to be able to post to a user’s wall), you will need an extended permission. However, you can’t just request an extended permission through the Facebook API: you first have to ask Facebook for permission to include that extended permission in your login.
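For context, the permissions a login requests are carried in the scope parameter of the OAuth dialog; a request including an extended permission might look like the following (the app ID and redirect URI are placeholders, and publish_actions was the extended permission for posting at the time):

https://www.facebook.com/dialog/oauth?client_id=YOUR_APP_ID&redirect_uri=https://yourapp.example/fb-callback&scope=email,publish_actions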

My assumption is that Facebook checks out your website or app in order to determine why you need an extended permission. If this is in fact the case, I believe it is a good thing, because at least we can then assume that Facebook is checking the legitimacy of your app. However, let’s wear a black hat for a second. If you really think about it, this process does not stop a malicious programmer from pretending they have a legit app. Once Facebook has approved the extended permissions, malicious programmers can still change their code and do bad things.

Time is one of the most important parts of web development, so once you finish your website or mobile app, don't celebrate too early! There will be a delay after making your project “live.” If you need an extended permission and Facebook is one of your core functionalities, you will have to wait for Facebook’s approval.

Note: You do not need to do this during development because you will have all necessary extended permissions in your dev app, but once you are ready for production, you will have to submit your application to Facebook for review.

Tags: facebook app , Drupal , Planet Drupal
Categories: Elsewhere

Forum One: The Drupal 8 Decision

Planet Drupal - Tue, 18/11/2014 - 01:20

From time to time, we all face big life choices. Should I attend this college? Should I take this job? Should I marry this person?

Yet few life choices loom larger for you and your organization in the next year than “When should I upgrade to Drupal 8?”

Well…perhaps the Drupal 8 decision doesn’t quite rank with the others, but for mission-driven organizations, the decision to adopt this major new release is a significant one, with implications for your digital communications for years to come. For most organizations, the upgrade represents a substantial investment that must be planned, scheduled, and budgeted.

In this article, I’ll provide a rapid overview of the promise and challenge of Drupal 8. Then, I’ll lay out the choices you are facing as a current user, or potential Drupal adopter.

The Promise of Drupal 8

Drupal 8, the first major new release in four years, represents a substantial technological departure from previous versions. (More on what constitutes a major Drupal upgrade.)

For marketers and communications professionals, there’s a lot to like. There are over 200 new features and improvements, including a mobile-first approach, in-place editing, and improved accessibility.

For technologists, Drupal 8 offers improved development techniques, as well as improved APIs and built-in Web services. But D8 also comes with a learning curve. It has an entirely new architecture and methodology. It will take time for your developers to become comfortable with the new Symfony2 components. Drupal 8’s Object-Oriented Programming approach brings increased flexibility for those with the hardcore computer science skills necessary to exploit it.

This infographic (PDF) from the Drupal Association summarizes Drupal 8’s key features.

The Challenge of Drupal 8

One challenge for Drupal users is that the release timeline is still unknown. Certainly, we’re getting close. The beta version was released in October 2014, and it’s anticipated that the final release will arrive sometime in 2015.

Historically, migrations from one major version to another are rarely straightforward, and never trivial. Every Drupal website is a conglomeration of the “core” Drupal software as well as typically dozens of add-on “modules” that extend or improve the software’s functionality. For example, the module that runs your fancy homepage carousel in one version may not be updated or tuned for the next version. And while Drupal 8’s upgrade process is improved, timelines and costs for Drupal upgrade projects are as varied as the websites themselves.

It’s important to realize that once Drupal 8 is released, support for previous versions will flag. If you are currently on Drupal 7, with no immediate plans for a major redesign, the issue is less pressing. But for those on earlier versions, you must be planning and budgeting for an upgrade now.

The challenge for digital communication planners depends on your existing situation. Let’s look at the possible approaches — one situation at a time.

Your Website is on Drupal 5

If you are using Drupal 5, your plan is simple. You must upgrade to Drupal 6 as soon as you possibly can.

Drupal 5 was released in 2007 and superseded by Drupal 6 a year later. If your website is running Drupal 5, it hasn’t received any security patches in four years, and is likely already compromised. It’s probably serving as a jumping-off point for spamming and other nefarious activities. Improving a Drupal 5 site now is difficult, and few reputable consultancies would agree to improve a Drupal 5 site without first upgrading it.

Once you are on Drupal 6, you then must consider upgrading to Drupal 7 or 8 within the next year, as described in the sections that follow.

Your Website is on Drupal 6

If your website is on Drupal 6, you need to plan for an upgrade to Drupal 7 or 8 within the next year. Once Drupal 8 is released in 2015, official support for Drupal 6 core and modules will cease within three months. This means that as security vulnerabilities are discovered, hackers are highly likely to compromise your website for their evil ends, and you will be powerless to plug the holes.

This means that Drupal 6 sites should be planning and budgeting NOW to upgrade to at least Drupal 7 in 2015. You want to be ready to move quickly once Drupal 8 is released. Three months of support is not a long time.

Your Website is on Drupal 7 (Or you are considering Drupal for Your Next Project)

If your site is currently on Drupal 7, you can breathe easier. Drupal 7 has been out for four years, and nearly a million sites are running Drupal 7 core — far more than all previous versions combined.

It will soon be time for these sites to transition to Drupal 8, but the community will continue to support D7 until Drupal 9 is released, which is surely at least two years away.

Therefore, if you are happy with your existing website and are planning only minor improvements to the design or functionality in the near term, you can sit back and do nothing for now. You should, of course, start planning for an upgrade to Drupal 8 within the next two years.

If you are currently considering building a new site on Drupal — or contemplating a major redesign in the next six months — the decision is more complicated.  You have two options.

The first option is to redesign on Drupal 7 now. That’s what thousands of projects are doing as we speak. At Forum One, every new major Drupal project currently starts with Drupal 7, and will likely continue to do so for several months following Drupal 8’s release. The software is mature, stable, widely-supported, and well understood by our staff.

The second option is to postpone your Drupal project until after Drupal 8 is released. Early adopters of D8 will get the maximum value from the new software, as their finished solution will live on Drupal for the longest period. They likely won’t need to consider a new Drupal upgrade until sometime in 2017 at the earliest. And given that the community will continue to support Drupal 8 even once Drupal 9 is released, you would be able to sleep easy knowing that your site will be able to stay patched and secure for the next four to five years.

However, at this writing, there are distinct trade-offs with waiting for D8. You are tying your project timeline to the D8 release schedule, which is community-driven and not guaranteed. Even once D8 is released, it will be a few months before skilled developers are ready to start new projects on 8. While savvy technologists are already experimenting with the D8 beta, it will take some time for modules, processes, training materials, and hosting environments to become tuned for this substantially-new platform.

Here’s another important consideration: Are you the type of product owner who can afford to be on the cutting edge? Early technology adopters typically pay more for the technology than those who follow. Later adopters benefit from the lessons of early adopters. Still, this may be an acceptable premium for your organization if Drupal 8 improves your efficiency, reduces long-term costs, or gives you a competitive advantage in achieving your goals.

Decide to Plan

Like all software, the lifespan of every major Drupal version is limited. Drupal 5 is end of life, Drupal 6 is very near end of life, and — after a good run — Drupal 7 will enter its golden years in the next twelve months.

While the Drupal 8 decision may not have the gravity of other life choices, you have an obligation to ensure that the core software for your site is secure, stable, and well-supported.

Your most important decision is to decide to plan. Drupal 8 is coming. Are you prepared?

 

Categories: Elsewhere

Metal Toad: Drupal 8: First Impressions for the Back-End Developer

Planet Drupal - Tue, 18/11/2014 - 00:54
Mon, 11/17/2014 - 15:54 by keithdechant

Drupal 8 is in beta now, and recently I’ve had a chance to start working with it. While much of the admin interface is comparable to Drupal 7, there have been some important changes for site builders and back-end developers. In this post, I will be looking at file system and database structure changes, Drush setup, and the new configuration entity type.

Disclaimer: Drupal 8 beta 2 is not ready for production yet. If you start working with it, be warned that future beta releases might break backwards compatibility. You might need to write some code to upgrade. If this is not an option for you, you would be best served sticking with Drupal 7 until the Drupal 8 release candidate is available.

Content Types

One of the most noticeable changes to the Drupal 8 admin UI is the revised Content Type creation pages.

The “Manage Fields” page no longer contains a “widget” column. There is a new tab, “Form Display,” which allows more flexible configuration of the node add/edit form.

“Manage Fields” also no longer allows custom sorting of the fields. “Body” is always listed first, followed by the custom fields in alphabetical order. The fields can be reordered  on the “Manage Form Display” and “Manage Display” tabs.

Comments

In Drupal 8, comments are set up as a field, rather than a setting in the node type. To turn on comments for a content type, add a field of type "Comments" on the "Manage Fields" page. This is a more flexible system than in Drupal 7, allowing more than one type of comment for a single node.

This change mostly affects the admin UI. Comments are still entities in Drupal 8, and their underlying data structure is similar to Drupal 7.

Database structure

The notable changes in the database structure relate to the user profiles and the table names for field tables.

The “users” table (a single table in Drupal 7) has been split into “users” and “users_field_data” which contains the data from the built-in fields like name and password. Several other tables, including "node", "comment", and "taxonomy_term" have undergone similar structure changes. This restructuring allows for easier translations of data in the core fields.

Drupal 8 field tables now are prefixed with the type of entity they belong to. They have names like “node__field_image” or “user__field_first_name”. The data structure of these tables is similar to the Drupal 7 “field_data_*” tables, with only a few changes (e.g., the "language" column is now named "langcode").

Drush

Drush 6, commonly used with Drupal 7, is not compatible with Drupal 8. You will need to install Drush 7, which is still in development. Not to worry, it’s easy to install Drush 7 alongside Drush 6. Here is an excellent article about setting up Drush 7 with Drupal 8 using Composer: https://www.acquia.com/blog/leverage-drush-7-drupal-8
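In short, the Composer-based setup looks like this (a sketch; at the time of writing, Drush 7 lives on the dev-master branch):

composer global require drush/drush:dev-master
# make sure Composer's global bin directory is on your PATH
export PATH="$HOME/.composer/vendor/bin:$PATH"
drush --version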

Cache and Registry

Cache and registry handling in Drupal 8 has undergone some major changes from Drupal 7. Notably, the “cache clear” command has been replaced by “cache rebuild.”

Drupal 7:
“drush cache-clear all”
(a.k.a. “drush cc all”)

Drupal 8:
"drush cache-rebuild”
(a.k.a., “drush cr”).

Note: In Drupal 8, you don't need to specify "drush cache-rebuild all." The "cache-rebuild" command appears to always clear all the caches, and any additional arguments are ignored.

Note: “drush cache-clear drush” is still used to update the list of Drush commands.

As of this writing, the “drush registry-rebuild” command does not appear to be supported for Drupal 8. This may change in the near future.

Package Management

Drupal 8 does not allow disabling of modules. The "drush pm-disable" command has been removed. To turn off a module, you need to uninstall it with "drush pm-uninstall" or by using the "Uninstall" tab on the "Extend" page in the admin UI.

As of Drupal 8 beta 2, uninstalling a module does not automatically delete the module's configuration data. However, some modules may have uninstall hooks which delete their configuration when they are uninstalled. Modules like these can no longer be uninstalled without deleting their configuration.

Package Manager bugs

As of this writing, there is a Drush 7 bug that causes an infinite loop when trying to use “drush pm-enable” (a.k.a., “drush en”) to simultaneously download and install a module:

https://github.com/drush-ops/drush/issues/5

The workaround is to download the module ("drush dl somemodule") and enable it ("drush en somemodule") in two separate steps, or to enable the module through the admin UI. This does not affect modules which you have already downloaded and wish to enable.

Compiled CSS, JS, and Twig files

In sites/default/files are several new directories:

  • sites/default/files/php - Contains compiled Twig templates
  • sites/default/files/css - Contains compiled and gzipped CSS files
  • sites/default/files/js - Contains compiled JS files

These files do not need to be added to your Git repository. Drupal will generate them automatically on page load, and they will have different file names on each machine (local dev machine, dev server, production server, etc.). Similarly, you shouldn’t try to edit these files manually because Drupal will automatically overwrite your changes at the next cache rebuild.

The Drush “cache-rebuild” command will erase these files and rebuild them.

Common errors - File permissions

Incorrect file permissions can cause PHP errors, or missing CSS or JS. Consider the following error:

Fatal error: Class '__TwigTemplate_09ab09ab7c23bd1ffe135ac9872354bdeca182f' not found in /path/to/your/site/drupal/core/lib/Drupal/Core/Template/TwigEnvironment.php on line 152

This error occurs when the web server doesn’t have permissions to write to the directory sites/default/files. Change the permissions or ownership on that directory and this error should go away.
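A typical fix, assuming the web server runs as www-data (the Debian/Ubuntu default), looks like this:

# give the web server ownership of the files directory so it can write compiled assets
sudo chown -R www-data:www-data sites/default/files
sudo chmod -R u+rwX sites/default/files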

If you encounter pages missing their CSS files, try checking the same file permissions.

Configuration Entities

Drupal 8 introduces a new type of entity, the “configuration entity.” These are represented as YAML files in the “config/install” subdirectory within a module. When the module is installed, the data from these YAML files is loaded into entries in the “config” table in the database.
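As a hedged illustration, such a file is plain YAML; for a hypothetical custom module it might look like this (the file name and keys are invented for the example):

# modules/custom/mymodule/config/install/mymodule.settings.yml
langcode: en
items_per_page: 10
enable_feature: true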

The "config" table contains many of the settings that were formerly in the "system" table in Drupal 7. It also contains definitions for a number of things that were formerly in separate tables. Notable examples are Taxonomy vocabularies and text filter formats.

Config entities are only updated when the module is installed or uninstalled. Drupal does not rebuild them when you rebuild the cache. During module development, you either need to uninstall and reinstall your module, or use the config_devel contrib module to make managing your config entities easier.

Config entities are particularly useful when writing content migrations, because the migration definitions are config entities.

As of Drupal 8 beta 2, configuration entities are no longer removed when a module is uninstalled. If your custom module uses configuration entities, and you don’t want these to persist during a reinstall of the module, it might be a good idea to write an uninstall hook to remove the entities.

Example uninstall hook to remove configuration entities:

/**
 * Implements hook_uninstall().
 *
 * Cleans up config entities installed by this module.
 */
function yourmodule_uninstall() {
  db_query("DELETE FROM {config} WHERE name = 'your.config.entity.name'");
  drupal_flush_all_caches();
}
Categories: Elsewhere

Károly Négyesi: Adding comma separated username autocomplete to a D7 form

Planet Drupal - Mon, 17/11/2014 - 22:20

Today I needed a field that could contain comma separated usernames, and obviously the requirements included autocomplete. I thought this problem must already be solved somewhere in the ecosystem and sure enough, Views has it already. So I added '#autocomplete_path' => 'admin/views/ajax/autocomplete/user' and '#element_validate' => array('my_module_usernames_validate') to the form element, and then:

<?php
function my_module_usernames_validate(&$element) {
  if ($values = drupal_explode_tags($element['#value'])) {
    // Good thing Views doesn't use the native constructor.
    $handler = new views_handler_filter_user_name();
    // And this function doesn't use the object at all.
    $handler->validate_user_strings($element, $values);
  }
}
?>

PS: This has been confirmed as working (with a plugin instance) in D8 too.

Categories: Elsewhere

Vincent Sanders: NetSurf Developer workshop IV

Planet Debian - Mon, 17/11/2014 - 21:54
Over the weekend the NetSurf developers met to make a concentrated effort on improving the browser. This time we were kindly hosted by Codethink in their Manchester office in a pleasant environment with plenty of refreshments.

Five developers managed to attend in person from around the UK: Michael Drake, John-Mark Bell, Daniel Silverstone, Rob Kendrick and Vincent Sanders. We also had Chris Young providing some bug fixes remotely.

We started the weekend by discussing all the thorny core issues that had been put on the agenda and ensuring the outcomes were properly noted. We also held the society AGM which was minuted by Daniel.

The emphasis of this weekend was very much on planning and doing the disruptive changes we had been putting off until we were all together.

John-Mark and I managed to change the core build system, as used by all the libraries, to use standard triplets to identify systems and the GNU autoconf style of naming for parameters (i.e. HOST, BUILD and CC being used correctly).

This was accompanied by improvements and configuration changes to the CI system to accommodate the new usage.

Several issues from the bug tracker were addressed and we put ourselves in a stronger position to address numerous other usability problems in the future.

We managed to pack a great deal into the 20 hours of work on Saturday and Sunday, although, because we were concentrating much more on planning and infrastructure than on a release, the metrics of commits and files changed were lower than at previous events.

Categories: Elsewhere

Niels Thykier: The first 12 days and 408 unblock requests into the Jessie freeze

Planet Debian - Mon, 17/11/2014 - 21:17

The release team is receiving an extreme number of unblock requests right now.  Over the past 22 days[1], we have received no fewer than 408 unblock/ageing requests.  That is an average of ~18.5/day.  In the same period, the release team has closed 350 unblock requests, averaging 15.9/day.

This number does not account for the unblocks we add without a request, when we happen to spot something while looking at the list of RC bugs[2]. Nor does it account for unblock requests currently tagged “moreinfo”, of which there are 25 at the moment.

All in all, it has been 3 intensive weeks for the release team.  I am truly proud of my fellow team members for keeping up with this for so long!  Also a thanks to the non-RT members, who help us by triaging and reviewing the unblock requests!  It is much appreciated. :)

 

Random bonus info:

  • d (our diffing tool) finally got colordiff support during the Release Sprint last week.  Prior to that, we got black’n’white diffs!
    • ssh coccia.debian.org -t /srv/release.debian.org/tools/scripts/d <srcpkg>
    • Though coccia.debian.org does not have colordiff installed right now.  I have filed a request to have it installed.
  • The release team have about 132 (active) unblock hints deployed right now in our hint files.

 

[1] We started receiving some in the 10 days before the freeze as people realised that their uploads would need an unblock to make it into Jessie.

[2] Related topics: “what is adsb?” (the answer being: Our top hinter for Wheezy)

 


Categories: Elsewhere

Creative Juices: Building REST web services with Drupal 7

Planet Drupal - Mon, 17/11/2014 - 19:48
Mon, 11/17/2014 - 13:48 by matt
Categories: Elsewhere

Daniel Leidert: Rsync files between two machines over SSH and limit read access

Planet Debian - Mon, 17/11/2014 - 17:32

From time to time I need to get contents from a remote machine to my local workstation. The data is sometimes big and I don't want to start all over again if something fails. Further, the transmission should be secure and the connection should be limited to syncing only this path and its sub-directories. So I've set up a way to do this using rsync and ssh, and I'm going to describe this setup.

Consider you have already created an SSH key, say ~/.ssh/key_rsa together with ~/.ssh/key_rsa.pub, and that on the remote machine there is an SSH server running that allows login with a public key and has rsync available. Let's further assume the following:

  • the remote machine is rsync.domain.tld
  • the path on the remote machine that holds the data is /path/mydata
  • the user on the remote machine being able to read /path/mydata and to login via SSH is remote_user
  • the path on the local machine to put the data is /path/mydest
  • the user on the local machine being able to write /path/mydest is local_user
  • the user on the local machine has the private key ~local_user/.ssh/key_rsa and the public key ~local_user/.ssh/key_rsa.pub

Now the public key ~local_user/.ssh/key_rsa.pub is added to the remote user's ~remote_user/.ssh/authorized_keys file. The file will then probably look like this (there is just one very long line with the key, here abbreviated as [..]):

ssh-rsa [..]= user@domain.tld

Now I would like to limit a user logging in with this key to only rsyncing the special directory /path/mydata. I therefore precede the key with a command prefix, which is explained in the manual page sshd(8). The file then looks like this:

command="/usr/bin/rsync --server --sender -vlogDtprze . /path/mydata",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa [..]= user@domain.tld
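If you are unsure which exact --server command line your client invocation produces, one trick (a sketch, not from the original setup) is to temporarily log the requested command via sshd's SSH_ORIGINAL_COMMAND variable, then copy the logged line into the command= prefix above:

command="/bin/sh -c 'echo $SSH_ORIGINAL_COMMAND >> ~/ssh-command.log; exec $SSH_ORIGINAL_COMMAND'",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa [..]= user@domain.tld

With the exact command pinned, logins with this key can do nothing else.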

I then can rsync the remote directory to a local location over SSH by running:

rsync -avz -P --delete -e 'ssh -i ~/.ssh/key_rsa' remote_user@rsync.domain.tld:/path/mydata/ /path/mydest

That's it.

Categories: Elsewhere

Thorsten Alteholz: Manage own CA with Debian

Planet Debian - Mon, 17/11/2014 - 14:57

Self signed SSL certificates are nice, but only provide encryption of retrieved data. Nobody knows who is really sending the data.

If one buys an SSL certificate for a website, the browser doesn’t complain as much as with a self signed certificate. But can you really trust the other side? Almost every commercial CA has some kind of “fast validation” or “domain validation, issued in minutes”, which is done by email or phone. So if required, within minutes anybody might become you. Even after putting money on the table, your users cannot be sure whether this server really belongs to the right guy.

Well, why waste time and money? Just create your own Root CA and tell users that they need to add something in order to avoid some error messages. In Debian we basically have five packages that claim to be able to manage some kind of CA.

easy-rsa is mainly needed to manage certificates used by openVPN. Within this use case it works like a charm, but I don’t want to manage a more complex CA with it.

gnomint is dead upstream and only uses SHA1 as signature algorithm. This will cause lots of problems, as Microsoft and Google want to deprecate SHA1 in their products by 2017. Besides, this package is already orphaned, so maybe it can disappear now.

tinyCA uses more signature algorithms; unfortunately SHA1 seems to be the “best” it can do. There are some patches to support up to SHA512, but they don’t work for all parts of the software yet. For example, Sub-CAs still use SHA1 despite choosing something different in the GUI. So nice, but not (yet) usable in Jessie.

FreeIPA seems to be great, but didn’t make it into Jessie in time. Unfortunately the Release Team has reasons to not unblock it. So nice, but not usable in Jessie.

xca is based on QT4. As announced in the 15th DPN of 2014 the deprecated QT4 will be removed from Debian Stretch (= Jessie+1). Apart from this, the software meets all my requirements.
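For completeness, the bare-bones alternative to all of these is plain openssl; a minimal sketch of a Root CA with SHA-256 signatures (the key size and validity periods are examples):

# create the root key and a self-signed root certificate
openssl genrsa -out rootCA.key 4096
openssl req -x509 -new -key rootCA.key -sha256 -days 3650 -out rootCA.pem
# sign a server certificate request with the root CA
openssl x509 -req -in server.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -sha256 -days 825 -out server.crt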

Categories: Elsewhere

Jonathan Wiltshire: Getting things into Jessie (#2)

Planet Debian - Mon, 17/11/2014 - 11:45
If your request doesn’t appear on the list, probably the diff was too big

There’s a size limit for mails to debian-release@lists.debian.org, and in general that’s how we read unblock requests. If your mail doesn’t appear on the list, but you do get a bug number back, your mail is likely too large..

In this case, that’s probably a sign that we won’t accept your request without exceptional circumstances. Making so many changes in a package is almost certainly not appropriate for this phase of the cycle.

Do use filterdiff to remove automatically-generated files from your diff before sending it, which increases the chances of acceptance.
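For example (a sketch; the exclude patterns depend on your package's build system):

# strip autogenerated build-system files from a debdiff before mailing it
debdiff old_1.0-1.dsc new_1.0-2.dsc | filterdiff -x '*/configure' -x '*/Makefile.in' > trimmed.diff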

Do follow up to the bug if it doesn’t appear on the list; send a short message explaining that happened and why your giant diff should be considered.

Don’t be surprised if thousands and thousands of lines changed are rejected.

Getting things into Jessie (#2) is a post from: jwiltshire.org.uk | Flattr

Categories: Elsewhere

Chris Lamb: Calculating the number of pedal turns on a bike ride

Planet Debian - Mon, 17/11/2014 - 10:34

If you have a cadence sensor on your bike such as the Garmin GSC-10, you can approximate the number of pedal turns you made on the bike ride using the following script (requires GPSBabel):

#!/bin/sh

STYLE="$(mktemp)"

cat >${STYLE} <<EOF
FIELD_DELIMITER COMMA
RECORD_DELIMITER NEWLINE
OFIELD CADENCE,"","%d"
EOF

exec gpsbabel -i garmin_fit -f "${1}" -o xcsv,style=${STYLE} -F- | awk '{x += $1} END {print int(x / 60)}'

... then call with:

$ sh cadence.sh ~/path/to/2014-11-16-14-46-05.fit
24344

Unfortunately the Garmin .fit format doesn't store the actual number of pedal turns, only the average cadence for each particular second. However, the approximation should be reasonably accurate provided one keeps a fairly steady cadence.

As a bonus, using a small amount of shell plumbing you can then sum an entire year's worth of riding like so:

$ for X in ~/path/to/2014-*.fit; do sh cadence.sh ${X}; done | awk '{x += $1} END { print x }'
749943
Categories: Elsewhere

Drupal core announcements: Drupal core security release window on Wednesday, November 19

Planet Drupal - Mon, 17/11/2014 - 05:54
Start: 2014-11-19 (All day) America/New_York
User group meeting
Organizers: David_Rothstein

The monthly security release window for Drupal 6 and Drupal 7 core will take place on Wednesday, November 19.

This does not mean that a Drupal core security release will necessarily take place on that date for either the Drupal 6 or Drupal 7 branches, only that you should prepare to look out for one (and be ready to update your Drupal sites in the event that the Drupal security team decides to make a release).

There will be no bug fix release on this date; the next window for a Drupal core bug fix release is Wednesday, December 3.

For more information on Drupal core release windows, see the documentation on release timing and security releases, and the discussion that led to this policy being implemented.

Categories: Elsewhere

Paul Tagliamonte: BOS -> DC

Planet Debian - Mon, 17/11/2014 - 03:06

Hello, World

Been a while since my last blog post - things have been a bit hectic lately, and I’ve not really had the time.

Now that things have settled down a bit — I’m in DC! I’ve moved down south to join the rest of my colleagues at Sunlight to head up our State & Local team.

Leaving behind the brilliant Free Software community in Boston won’t be easy, but I’m hoping to find a similar community here in DC.

Categories: Elsewhere

Daniel Leidert: Install automatically starting XBMC to N54L microserver under Debian Wheezy 7.7

Planet Debian - Mon, 17/11/2014 - 00:12

This is a followup to my previous post about getting sound output from the Sapphire Radeon HD 6450 card in my HP N54L microserver via HDMI. This post will describe how to install XBMC from Wheezy backports and how to automatically start it. Again, there are various ways and I'll only describe mine. Further, this is what I did so far: enable the audio output for the Radeon card and install X.org together with lightdm.

Step 3 - Install XBMC

This is a pretty easy task. I've chosen to install XBMC 13.2 from the Wheezy backports repository.

# apt-get install -t wheezy-backports xbmc

Step 4 - Automatically start XBMC

There are various ways; some involve starting it as a service using init scripts for sysvinit, upstart or systemd. You'll easily find them. I've chosen to create a user, automatically log him into X and start XBMC. The user is called xbmc.

# adduser --home /home/xbmc --add_extra_groups xbmc

I chose to set a password, but I wonder if using --disabled-password would work too. Next I adjusted /etc/lightdm/lightdm.conf. Below are only the differences from the stock version of this file; I haven't touched other lines.

[SeatDefaults]
greeter-session=lightdm-gtk-greeter
user-session=XBMC
autologin-guest=false
autologin-user=xbmc
autologin-user-timeout=0

The file /usr/share/xsessions/XBMC.desktop is the stock one, no changes made. After restarting lightdm:

# service lightdm restart

XBMC is started automatically. If anything goes wrong or doesn't work, I suggest checking /var/log/auth.log, /home/xbmc/.xsession-errors and /var/log/lightdm/*.log. In a few cases it seems necessary to log in the user xbmc manually once, although it wasn't necessary here.
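A quick way to check all three places at once (as root):

tail -n 50 /var/log/auth.log /home/xbmc/.xsession-errors /var/log/lightdm/*.log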

JFTR: When I checked /var/log/auth.log I saw a few errors and installed gnome-keyring too:

apt-get install --install-recommends gnome-keyring

Step 5 - Useful packages

There are some packages which might be useful when running XBMC, e.g.

Conclusion

I'm now running XBMC on top of Debian Wheezy on the N54L microserver without a bloated desktop environment. Video and sound are working fine, though it was necessary to install recent firmware and a recent kernel from Wheezy backports to get it done. The system automatically starts the XBMC session on start/reboot. I'm now examining the XBMC features :)

Categories: Elsewhere

Tollef Fog Heen: Resigning as a Debian systemd maintainer

Planet Debian - Sun, 16/11/2014 - 23:55

Apparently, people care when you, as a privileged person (white, male, long-time Debian Developer), throw in the towel because the amount of crap thrown your way just becomes too much. I guess that's good, both because it gives me a soap box for a short while, and because if enough people talk about how poisonous the well that is Debian has become, we can fix it.

This morning, I resigned as a member of the systemd maintainer team. I then proceeded to leave the relevant IRC channels and announced this on twitter. The responses I've gotten have almost all been heartwarming. People have generally been offering hugs, saying thanks for the work put into systemd in Debian and so on. I've greatly appreciated those (and I've been getting those before I resigned too, so this isn't just a response to that). I feel bad about leaving the rest of the team; they're a great bunch: competent, caring, funny, wonderful people. On the other hand, at some point I had to draw a line and say "no further".

Debian and its various maintainer teams are a bunch of tribes (with possibly Debian itself being a supertribe). Unlike many other situations, you can be part of multiple tribes. I'm still a member of the DSA tribe for instance. Leaving pkg-systemd means leaving one of my tribes. That hurts. It hurts even more because it feels like a forced exit rather than because I've lost interest or been distracted by other shiny things for long enough that you don't really feel like part of a tribe. That happened with me with debian-installer. It was my baby for a while (with a then quite small team), then a bunch of real life things interfered and other people picked it up and ran with it and made it greater and more fantastic than before. I kinda lost touch, and while it's still dear to me, I no longer identify as part of the debian-boot tribe.

Now, how did I, standing stout and tall, get forced out of my tribe? I've been a DD for almost 14 years, I should be able to weather any storm, shouldn't I? It turns out that no, the mountain does get worn down by the rain. It's not a single hurtful comment here and there. There's a constant drum about this all being some sort of conspiracy, and there are sometimes flares where people wish those involved in systemd would be run over by a bus, or just accusations of incompetence.

Our code of conduct says, "assume good faith". If you ever find yourself not doing that, step back, breathe. See if there's a reasonable explanation for why somebody is saying something or behaving in a way that doesn't make sense to you. It might be as simple as your native tongue being English and theirs being something else.

If you do genuinely disagree with somebody (something which is entirely fine), try not to escalate, even if the stakes are high. Examples from the last year include talking about this as a war and talking about "increasingly bitter rear-guard battles". By using and accepting this terminology, we, as a project, poison ourselves. Sam Hartman puts this better than me:

I'm hoping that we can all take a few minutes to gain empathy for those who disagree with us. Then I'm hoping we can use that understanding to reassure them that they are valued and respected and their concerns considered even when we end up strongly disagreeing with them or valuing different things.

I'd be lying if I said I didn't ever feel the urge to demonise my opponents in discussions. That they're worse, as people, than I am. However, it is imperative to never give in to this, since doing that will diminish us as humans and make the entire project poorer. Civil disagreements with reasonable discussions lead to better technical outcomes, happier humans and a healthier projects.

Categories: Elsewhere

John Goerzen: Contemplative Weather

Planet Debian - Sun, 16/11/2014 - 23:30

Sometimes I look out the window and can’t help but feel “this weather is deep.” Deep with meaning, with import. Almost as if the weather is confident of itself, and is challenging me to find some meaning within it.

This weekend brought the first blast of winter to the plains of Kansas. Saturday was chilly and windy, and overnight a little snow fell. Just enough to cover up the ground and let the tops of the blades of grass poke through. Just enough to make the landscape look totally different, without completely hiding what lies beneath. Laura and I stood silently at the window for a few minutes this morning, gazing out over the untouched snow, extending out as far as we can see.

Yesterday, I spent some time with my great uncle and aunt. My great uncle isn’t doing so well. He’s been battling cancer and other health issues for some time, and can’t get out of the house very well. We talked for an hour and a half – about news of the family, struggles in life now and in the past, and joys. There were times when all three of us had tears in our eyes, and times when all of us were laughing so loudly. My great uncle managed to stand up twice while I was there — this took quite some effort — once to give me a huge hug when I arrived, and another to give me an even bigger hug when I left. He has always been a person to give the most loving hugs.

He hadn’t been able to taste food for awhile, due to treatment for cancer. When I realized he could taste again, I asked, “When should I bring you some borscht?” He looked surprised, then got a huge grin, glanced at his watch, and said, “Can you be back by 3:00?”

His brother, my grandpa, was known for his beef borscht. I also found out my great uncle’s favorite kind of bread, and thought that maybe I would do some cooking for him sometime soon.

Today on my way home from church, I did some shopping. I picked up the ingredients for borscht and for bread. I came home, said hi to the cats that showed up to greet me, and went inside. I turned on the radio – Prairie Home Companion was on – and started cooking.

It takes a long time to prepare what I was working on – I spent a solid two hours in the kitchen. As I was chopping up a head of cabbage, I remembered coming to what is now my house as a child, when my grandpa lived here. I remembered his borscht, zwiebach, monster cookies; his dusty but warm wood stove; his closet with toys in it. I remembered two years ago, having nearly 20 Goerzens here for Christmas, hosted by the boys and me, and the 3 gallons of borscht I made for the occasion.

I poured in some tomato sauce, added some water. The radio was talking about being kind to people, remembering that others don’t always have the advantages we do. Garrison Keillor’s fictional boy in a small town, when asked what advantages he had, mentioned “belonging.” Yes, that is an advantage. We all deal with death, our own and that of loved ones, but I am so blessed by belonging – to a loving family, two loving churches, a wonderful community.

Out came three pounds of stew beef. Chop, chop, slice, plunk into the cast iron Dutch oven. It’s my borscht pot. It looks as if it would be more at home over a campfire than a stovetop, but it works anywhere.

Outside, the sun came up. The snow melts a little, and the cats start running around even though it’s still below freezing. They look like they’re having fun playing.

I’m chopping up parsley and an onion, then wrapping them up in a cheesecloth to make the spice ball for the borscht. I add the basil and dill, some salt, and plonk them in, too. My 6-quart pot is nearly overflowing as I carefully stir the hearty stew.

On the radio, a woman who plays piano in a hospital and had dreamed of being on that particular radio program for 13 years finally was. She played with passion and delight I could hear through the radio.

Then it’s time to make bread. I pour in some warm water, add some brown sugar, and my thoughts turn to Home On The Range. I am reminded of this verse:

How often at night when the heavens are bright
With the light from the glittering stars
Have I stood here amazed and asked as I gazed
If their glory exceeds that of ours.

There’s something about a beautiful landscape out the window to remind a person of all the blessings in life. This has been a quite busy weekend — actually, a busy month — but despite the fact I have a relative that is sick in the midst of it all, I am so blessed in so many ways.

I finish off the bread, adding some yeast, and I remember my great uncle thanking me so much for visiting him yesterday. He commented that “a lot of younger people have no use for visiting an old geezer like me.” I told him, “I’ve never been like that. I am so glad I could come and visit you today. The best gifts are those that give in both directions, and this surely is that.”

Then I clean up the kitchen. I wipe down the counters from all the bits of cabbage that went flying. I put away all the herbs and spices I used, and finally go to sit down and reflect. From the kitchen, the smells of borscht and bread start to seep out, sweeping up the rest of the house. It takes at least 4 hours for the borscht to cook, and several hours for the bread, so this will be an afternoon of waiting with delicious smells. Soon my family will be home from all their activities of the day, and I will be able to greet them with a warm house and the same smells I stepped into when I was a boy.

I remember this other verse from Home On the Range:

Where the air is so pure, the zephyrs so free,
The breezes so balmy and light,
That I would not exchange my home on the range
For all of the cities so bright.

Today’s breeze is an icy blast from the north – maybe not balmy in the conventional sense. But it is the breeze of home, the breeze of belonging. Even today, as I gaze out at the frozen landscape, I realize how balmy it really is, for I know I wouldn’t exchange my life on the range for anything.

Categories: Elsewhere

Gregor Herrmann: RC bugs 2014/45-46

Planet Debian - Sun, 16/11/2014 - 23:07

I was not much at home during the last two weeks, so not much to report about RC bug activities. – some general observations:

  1. the RC bug count is still relatively low, even after lucas' archive rebuild.
  2. the release team is extremely fast in handling unblock requests - kudos! – pro tip: file them quickly after uploading, or some{thing,one} else might be faster :)

my small contributions:

  • #765327 – libnet-dns-perl: "lintian fails if the machine has a link-local IPv6 nameserver configured"
    discuss possible fix with upstream (pkg-perl)
  • #768683 – src:libmoosex-storage-perl: "libmoosex-storage-perl: missing runtime dependencies cause (build) failures in other packages"
    move packages from Recommends to Depends (pkg-perl)
  • #768692 – src:libaudio-mpd-perl: "libaudio-mpd-perl: FTBFS in jessie: Failed test '10 seconds worth of music in the db': got: '9'"
    add patch from Simon McVittie (pkg-perl)
  • #768712 – src:libpoe-component-client-mpd-perl: "libpoe-component-client-mpd-perl: FTBFS in jessie: Failed test '10 seconds worth of music in the db': got: '9'"
    add patch from Simon McVittie (pkg-perl)
  • #769003 – libgluegen2-build-java: "libjogl2-java: FTBFS on arm64, ppc64el, s390x"
    add patch from Colin Watson (pkg-java)
Categories: Elsewhere

Gizra.com: Behat - The Right Way

Planet Drupal - Sun, 16/11/2014 - 23:00

Behat is a wonderful tool for automatic testing. It allows you to write your user stories and scenarios in proper English, which is then parsed by Behat and transformed to a set of clicks or other operations that mimic a real user.

If you don't have automated tests on your project, I would argue that you're doing it wrong (I explain why on The Gizra Way presentation). Even having a single test is much better than none.

With that said, it's super easy to abuse Behat. We are developers and we think sort of like machines (not really, but you get my point). If you would like to test logging in to your site, you could easily do:

Given I visit "/user/login"
# fill the username and password input fields, and click submit
When I fill "username" with "foo"
And I fill "password" with "bar"
And I press "Login"
Then I should get a "200" HTTP response

Your test will return green, but it could be improved:
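One direction such an improvement might take (a sketch, not necessarily the author's suggestion) is to phrase the scenario in terms of what the user actually sees, rather than HTTP plumbing:

Given I am logged in as a user with username "foo" and password "bar"
When I visit the homepage
Then I should see the text "Log out"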

Continue reading…

Categories: Elsewhere
