Feed aggregator

Drupalize.Me: Twig Filters: Modifying Variables in Drupal 8 Template Files

Planet Drupal - Tue, 18/11/2014 - 16:00
Something that's super fun about my job is that occasionally I get tasked with things like, "Learn how Twig works so you can tell us how it fits into our curriculum plans." And I get to spend some time exploring various new features in Drupal 8, with an eye towards being able to help explain them.
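To give a flavour of the topic, Twig filters are applied to variables with the pipe character and can be chained; a minimal sketch (the variable names here are made up for illustration, but trim, upper, and striptags are standard Twig filters):

```twig
{# 'title' and 'summary' are hypothetical template variables #}
<h2>{{ title|trim|upper }}</h2>
<p>{{ summary|striptags }}</p>
```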
Categories: Elsewhere

Drupal.org Featured Case Studies: KSS Architects

Planet Drupal - Tue, 18/11/2014 - 15:42
Completed Drupal site or project URL: http://kssarchitects.com/

KSS Architects is a nationally recognized architecture firm with offices in Princeton, New Jersey and Philadelphia, Pennsylvania. KSS works with clients in the fields of education, culture, land development, urban development, and corporate environments.

After completing brand strategy for KSS, TOKY Branding + Design created a website that would better showcase the architecture firm’s personality, process, and projects. TOKY specializes in Web and print work for clients in architecture, building, and design, as well as the arts, education, and premium consumer products.

Key modules/theme/distribution used: Breakpoints, Campaign Monitor, Date Popup Authored, Entity reference, Field collection, Google Site Search, jQuery Update, Maxlength, Media, MediaElement, Menu block, Metatag, Picture, Taxonomy display, Typogrify, view_unpublished, Vocabulary Permissions Per Role
Organizations involved: TOKY Branding + Design
Team members: Daniel Korte
Categories: Elsewhere

Joachim's blog: Building Fast and Flexible Application UIs with Entity Operations

Planet Drupal - Tue, 18/11/2014 - 14:38

Now I've finished the Big Monster Project of Doom that I've been on the last two years, I can talk more about some of the code that I wrote for it. I can also say what it was: it was a web application for activists to canvass the public for a certain recent national referendum (I'll let you guess which one).

One of the major modules I wrote was Entity Operations module. What began as a means to avoid repeating the same code each time I needed a new entity type soon became the workhorse for the whole application UI.

The initial idea was this: if you want a custom entity type, and you want a UI for adding, editing, and deleting entities (much like with nodes), then you have to build this all yourself: hook_menu() items, various entity callbacks, form builders (and validation and submit handlers) for the entity form and the delete confirmation form. (The Model module demonstrates this well.)

That's a lot of boilerplate code, where the only difference is the entity type's name, the base path where the entity UI sits, and the entity form builder itself (but even that can be generalized, as will be seen).

Faced with this and a project on which I knew from the start I was going to need a good handful of custom entities (for use with Microsoft Dynamics CRM, accessed with another custom module of mine, Remote Entity API), I undertook to build a framework that would take away all the repetition.

An Entity UI is thus built by declaring:

  • A base path (for nodes, this would be 'node'; we'll ignore the fact that in core, this path itself is a listing of content).
  • A list of subpaths to form the tabs, and the operation handler class for each one

With this in hand, why stop at just the entity view and edit tabs? The operation handlers can output static content or forms: they can output anything. One of the most powerful enhancements I made to this early on was to write an operations handler that outputs a view. It's the same idea as the EVA module.

So for the referendum canvassing application, I had a custom Campaign entity that functioned as an Organic Group and had as UI tabs several different views of members, views of contacts in the Campaign's geographic area, views of Campaign group content (such as tasks and contact lists), and so on.

This approach proved very flexible and quick. The group content entities were themselves also built with this, so that, for example, Contact List entities had operations for a user to book the entity, input data, and release it when done working on it. These were built with custom operation handlers specific to the Contact List entity, subclassing the generic form operation handler.

An unexpected bonus to all this was how easy it was to expose form-based operations to Views Bulk Operations and Services (as 'targeted actions' on the entity). This allowed the booking and release operations to work in bulk on views, and also to be done via a mobile app over Services.

A final piece of icing on the cake was the addition of alternative operation handlers for entity forms that provide just a generic bare bones form that invokes Field API to attach field widgets. With these, the amount of code needed to build a custom entity type is reduced to just three functions:

  • hook_entity_info(), to declare the entity type to Drupal core
  • hook_entity_operations_info(), to declare the operations that make up the UI
  • callback_entity_access(), which controls the access to the operations
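To give a flavour of how little code that is, here is a hedged sketch (the hook names are those listed above, but the entity type, array structure, and handler class names are invented for illustration and are not the module's actual API):

```php
<?php
// Implements hook_entity_info(): declare the entity type to Drupal core.
function mymodule_entity_info() {
  return array(
    'contact_list' => array(
      'label' => t('Contact List'),
      'base table' => 'contact_list',
      'entity keys' => array('id' => 'clid', 'label' => 'title'),
    ),
  );
}

// Implements hook_entity_operations_info(): declare the operations
// (tabs and forms) that make up the UI. Handler class names are
// hypothetical stand-ins, not the module's real classes.
function mymodule_entity_operations_info() {
  return array(
    'contact_list' => array(
      'view' => array('handler' => 'MymoduleOperationView'),
      'edit' => array('handler' => 'MymoduleOperationEditForm'),
      'book' => array('handler' => 'MymoduleOperationBookForm'),
    ),
  );
}
```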

The module has a few further tricks up its sleeve. If you're using user permissions for your entities, there's a helper function to use in your hook_permission(), which creates permissions out of all the operations (so: 'edit foobar entities', 'book foobar entities', 'discombobulate foobar entities' and so on). The entity URI callback that Drupal core requires you to have can be taken care of by a helper callback which uses the entity's base path definition. There's a form builder that lets you easily embed form-based operations into the entity build, so that you can put the sort of operations that are single buttons ('publish', 'book', etc) on the entity rather than in a tab. And finally, the links to operation tabs can be added to a view as fields, allowing a list of entities with links to view, edit, book, discombobulate, and so on.

So what started as a way to simplify and remove repetitive code became a system for building a whole entity-based UI, which ended up powering the whole of the application.

Categories: Elsewhere

Dirk Eddelbuettel: RcppAnnoy 0.0.3

Planet Debian - Tue, 18/11/2014 - 12:48

Hours after the initial blog post announcing the first release of the new package RcppAnnoy, Qiang Kou sent us a very nice pull request adding mmap support in Windows.

So a new release with Windows support is now on CRAN, and Windows binaries should be available by this evening as usual.

To recap, RcppAnnoy wraps the small, fast, and lightweight C++ template header library Annoy written by Erik Bernhardsson for use at Spotify. RcppAnnoy uses Rcpp Modules to offer the exact same functionality as the Python module wrapped around Annoy.

Courtesy of CRANberries, there is also a diffstat report for this release. More detailed information is on the RcppAnnoy page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: Elsewhere

Jonathan Wiltshire: Getting things into Jessie (#3)

Planet Debian - Tue, 18/11/2014 - 12:16
Make sure everything you’ve changed is in the changelog

We do read the diffs in detail, and if there’s no explanation for something that’s changed we’ll ask. We also expect it to be in the changelog.

Do save some round-trips by making sure your changelog is in order. One round-trip about your package is an inconvenience; when it’s scaled up to the number of requests we receive, it’s a serious time-sink for us.

Getting things into Jessie (#3) is a post from: jwiltshire.org.uk | Flattr

Categories: Elsewhere

Josselin Mouette: Introspection (not the GObject one)

Planet Debian - Tue, 18/11/2014 - 11:00
Disclaimer: I’m not used to writing personal stuff on Debian channels. However, there is nothing new here for those who know me from other public channels.


Yesterday, I received the weirdest email from well-known troll MikeeUSA. He thought I shared his views of a horrible world full of bloodthirsty feminists using systemd in their quest for domination over poor white male heterosexuals. The most nauseating paragraph was probably the one where he showed signs of the mentality of a pedocriminal.

At first, I shrugged it off and sent him an email explaining I didn’t want anything with his stinky white male supremacist theories, assorted with a bit of taunting. But after discovering all that stuff was actually sent to public mailing lists, I took the time for a second look and started a bit of introspection.

MikeeUSA thought I was a white male supremacist because of the so-called SmellyWerewolf incident, 6 years ago.
Oh boy, people change in six years. Upon re-reading that, I had trouble admitting I was the one to write it. Memory is selective, and with time, you tend not to remember some gruesome details, especially the ones that conflict most with your moral values.

I can assure every reader that the only people I intended to mock then were those who mistook Debian mailing lists for advertising channels; but I understand now that my message must have caused pain to a lot more people than that. So, it may come late, but let me take this opportunity to offer my sincerest apologies to anyone I may have hurt at that time.


It may seem strange for someone with deeply-rooted values of equality to have written that. To have considered that it was okay to stereotype people. And I think I found this okay because to me, those people were given equal rights, and were therefore equal. But the fight for equality is not over when everyone is given the same rights. Not until they are given the same opportunities to exert those rights. Which does not happen when they live in a society that likes to fit them in little archetypal peg holes, never giving them the chance to question where those stereotypes come from.

For me, that chance came from an unusual direction: the fight against prostitution. This goes way back for me. Ever since I was a teenager, I have been ticked off by the idea of nonconsensual sex that somehow evades criminal responsibility because of monetary compensation. I never understood why it wasn't considered rape. Yet it sounded weird that a male heterosexual would hold such opinions; after all, male heterosexuals are supposed to go to prostitutes as a kind of social ritual, right?

It was only three years ago that an organization of men against prostitution was founded in France. Not only did I find out that I was not alone with my progressive ideas, I was given the opportunity to exchange with many men and women who had studied prostitution: its effects on victims, its relationship to rape culture and more generally to the place men and women hold in society. Because eventually, it all boils down to little peg holes in which we expect people to fit: the virile man or the faggot, the whore or the mother. For me, it was liberating. I could finally get rid of the discomfort of being a white male heterosexual that didn’t enter the little peg holes that were made for me.

And now, after Sweden 15 years ago, a new group of countries are finally adopting laws to criminalize the act of paying for sex. Including France. That’s too bad for MikeeUSA, but this country is no longer the eldorado for white male supremacists. And I’m proud that our lobbying made a contribution, however small, to that change.
Categories: Elsewhere

Oliver Davies: Include CSS Fonts by Using a SASS each Loop

Planet Drupal - Tue, 18/11/2014 - 10:39

Using a file structure similar to this, organise your font files into directories, using the font name for both the directory name and the file names.
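The idea can be sketched roughly like this (a hedged sketch: the font names and the fonts/ path are placeholders, and the original post's exact code may differ):

```scss
$fonts: 'OpenSans', 'Lato', 'Merriweather';

// One @font-face rule per font, derived from the directory layout above.
@each $font in $fonts {
  @font-face {
    font-family: $font;
    src: url('fonts/#{$font}/#{$font}.woff') format('woff'),
         url('fonts/#{$font}/#{$font}.ttf') format('truetype');
  }
}
```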

Categories: Elsewhere

Promo Kids: Reinstall Drupal 8 without pain

Planet Drupal - Tue, 18/11/2014 - 10:09

While developing Drupal Promo Kit we reinstall Drupal very frequently.
First of all, you don't have to delete everything and repeat the installation process from scratch. Only three things need to be done:

Categories: Elsewhere

Erich Schubert: Generate iptables rules via pyroman

Planet Debian - Tue, 18/11/2014 - 09:46
Vincent Bernat blogged on using Netfilter rulesets, pointing out that inserting the rules one-by-one using iptables calls may leave your firewall temporarily incomplete, eventually half-working, and that this approach can be slow. He's right about that, but there are tools that do this properly. ;-) Some years ago, for a multi-homed firewall, I wrote a tool called Pyroman. Using rules specified either in Python or XML syntax, it generates a firewall ruleset for you. But it also addresses the points Vincent raised:
  • It uses iptables-restore to load the firewall more efficiently than by calling iptables a hundred times
  • It will backup the previous firewall, and roll-back on errors (or lack of confirmation, if you are remote and use --safe)
It also has a nice feature for use in staging: it can generate firewall rule sets offline, allowing you to review them before use, or to transfer them to a different host. Not all functionality is supported though (e.g. the Firewall.hostname constant usable in Python conditionals will still be the name of the host you generate the rules on; you may want to add a --hostname parameter to pyroman).

pyroman --print-verbose will generate a script readable by iptables-restore, except for one problem: it contains both the rules for IPv4 and for IPv6, separated by #### IPv6 rules. It will also annotate the origin of each rule, for example:

# /etc/pyroman/02_icmpv6.py:82
-A rfc4890f -p icmpv6 --icmpv6-type 255 -j DROP

indicates that this particular line was produced due to line 82 in file /etc/pyroman/02_icmpv6.py. This makes debugging easier. In particular, it allows pyroman to produce a meaningful error message if the rules are rejected by the kernel: it will tell you which line caused the rejected rule. For the next version, I will probably add --output-ipv4 and --output-ipv6 options to make this more convenient to use. So far, pyroman is meant to be used on the firewall itself.

Note: if you have configured a firewall that you are happy with, you can always use iptables-save to dump the current firewall. But it will not preserve comments, obviously.
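For reference, the file format consumed by iptables-restore looks like the following; the whole ruleset is committed atomically, which is what makes this approach safer than one-by-one iptables calls. This is a hand-written sketch of a minimal IPv4 filter table, not actual pyroman output:

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# origin comments (like pyroman's) can be interspersed freely
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
COMMIT
```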
Categories: Elsewhere

Jaldhar Vyas: And The Papers Want To Know Whose Shirts You Wear

Planet Debian - Tue, 18/11/2014 - 07:52

Today I was walking past the Courant Institute at NYU when I saw a man wearing a t-shirt with a picture of a cow diagramming all the various cuts of beef.

Now I've lost all interest in science. Thanks a lot jerks.

Categories: Elsewhere

Antoine Beaupré: bup vs attic silly benchmark

Planet Debian - Tue, 18/11/2014 - 06:39

after seeing attic introduced in a discussion about bup, i figured i could give it a try. it addressed two of my biggest concerns with bup:

  • backup removal
  • encryption

and it seemed to magically appear out of nowhere and basically do everything i need, with an inline manual on top of it.

disclaimer

Note: this is not a real benchmark! i would probably need to port bup and attic to liw's seivot software to report on this properly (and that would be amazing and really interesting, but it's late now). even worse, this was done on a production server with other stuff going on, so take the results with a grain of salt.

procedure and results

Here's what I did. I set up backups of my ridiculously huge ~/src directory on the external hard drive where I usually make my backups. I ran a clean backup with attic, then redid it; then I ran a similar backup with bup and redid that as well. Here are the results:

anarcat@marcos:~$ sudo apt-get install attic # this installed 0.13 on debian jessie amd64
[...]
anarcat@marcos:~$ attic init /mnt/attic-test
Initializing repository at "/media/anarcat/calyx/attic-test"
Encryption NOT enabled. Use the "--encryption=passphrase|keyfile" to enable encryption.
anarcat@marcos:~$ time attic create --stats /mnt/attic-test::src ~/src/
Initializing cache...
------------------------------------------------------------------------------
Archive name: src
Archive fingerprint: 7bdcea8a101dc233d7c122e3f69e67e5b03dbb62596d0b70f5b0759d446d9ed0
Start time: Tue Nov 18 00:42:52 2014
End time: Tue Nov 18 00:54:00 2014
Duration: 11 minutes 8.26 seconds
Number of files: 283910

               Original size    Compressed size    Deduplicated size
This archive:        6.74 GB            4.27 GB              2.99 GB
All archives:        6.74 GB            4.27 GB              2.99 GB
------------------------------------------------------------------------------
311.60user 68.28system 11:08.49elapsed 56%CPU (0avgtext+0avgdata 122824maxresident)k
15279400inputs+6788816outputs (0major+3258848minor)pagefaults 0swaps

anarcat@marcos:~$ time attic create --stats /mnt/attic-test::src-2014-11-18 ~/src/
------------------------------------------------------------------------------
Archive name: src-2014-11-18
Archive fingerprint: be840f1a49b1deb76aea1cb667d812511943cfb7fee67f0dddc57368bd61c4bf
Start time: Tue Nov 18 00:05:57 2014
End time: Tue Nov 18 00:06:35 2014
Duration: 38.15 seconds
Number of files: 283910

               Original size    Compressed size    Deduplicated size
This archive:        6.74 GB            4.27 GB            116.63 kB
All archives:       13.47 GB            8.54 GB              3.00 GB
------------------------------------------------------------------------------
30.60user 4.66system 0:38.38elapsed 91%CPU (0avgtext+0avgdata 104688maxresident)k
18264inputs+258696outputs (0major+36892minor)pagefaults 0swaps

anarcat@marcos:~$ sudo apt-get install bup # this installed bup 0.25
anarcat@marcos:~$ free && sync && echo 3 | sudo tee /proc/sys/vm/drop_caches && free # flush caches
anarcat@marcos:~$ export BUP_DIR=/mnt/bup-test
anarcat@marcos:~$ bup init
Dépôt Git vide initialisé dans /mnt/bup-test/
anarcat@marcos:~$ time bup index ~/src
Indexing: 345249, done.
56.57user 14.37system 1:45.29elapsed 67%CPU (0avgtext+0avgdata 85236maxresident)k
699920inputs+104624outputs (4major+25970minor)pagefaults 0swaps
anarcat@marcos:~$ time bup save -n src ~/src
Reading index: 345249, done.
bloom: creating from 1 file (200000 objects).
bloom: adding 1 file (200000 objects).
bloom: creating from 3 files (600000 objects).
Saving: 100.00% (6749592/6749592k, 345249/345249 files), done.
bloom: adding 1 file (126005 objects).
383.08user 61.37system 10:52.68elapsed 68%CPU (0avgtext+0avgdata 194256maxresident)k
14638104inputs+5944384outputs (50major+299868minor)pagefaults 0swaps
anarcat@marcos:attic$ time bup index ~/src
Indexing: 345249, done.
56.13user 13.08system 1:38.65elapsed 70%CPU (0avgtext+0avgdata 133848maxresident)k
806144inputs+104824outputs (137major+38463minor)pagefaults 0swaps
anarcat@marcos:attic$ time bup save -n src2 ~/src
Reading index: 1, done.
Saving: 100.00% (0/0k, 1/1 files), done.
bloom: adding 1 file (1 object).
0.22user 0.05system 0:00.66elapsed 42%CPU (0avgtext+0avgdata 17088maxresident)k
10088inputs+88outputs (39major+15194minor)pagefaults 0swaps

Disk usage is comparable:

anarcat@marcos:attic$ du -sc /mnt/*attic*
2943532K  /mnt/attic-test
2969544K  /mnt/bup-test

People are encouraged to try and reproduce those results, which should be fairly trivial.

Observations

Here are interesting things I noted while working with both tools:

  • attic is Python3: i could compile it, with dependencies, by doing apt-get build-dep attic and running setup.py - i could also install it with pip if i needed to (but i didn't)
  • bup is Python 2, and has a scary makefile
  • both have an init command that basically does almost nothing and takes little enough time that i'm ignoring it in the benchmarks
  • attic backups are a single command, bup requires me to know that i first want to index and then save, which is a little confusing
  • bup has nice progress information, especially during save (because when it loaded the index, it knew how much was remaining) - just because of that, bup "feels" faster
  • bup, however, lets me know about its deep internals (like now i know it uses a bloom filter) which is probably barely understandable by most people
  • on the contrary, attic gives me useful information about the size of my backups, including the size of the current increment
  • it is not possible to get that information from bup, even after the fact - you need to du before and after the backup
  • attic modifies the files access times when backing up, while bup is more careful (there's a pull request to fix this in attic, which is how i found out about this)
  • both backup systems seem to produce roughly the same data size from the same input
Summary

attic and bup are about equally fast. bup took 30 seconds less than attic to save the files, but that's not counting the 1m45s it took indexing them, so on the total run time, bup was actually slower. attic is also (almost) two times faster on the second run. but this could be within the margin of error of this very quick experiment, so my provisional verdict for now is that they are about as fast.

bup may be more robust (for example, it doesn't modify the atimes), but this has not been extensively tested and is based more on my familiarity with the "conservatism" of the bup team than on actual tests.

considering all the features promised by attic, it makes for a really serious contender to the already amazing bup.

Next steps

To do this properly, we would need to:

  • include other software (thinking of Zbackup, Burp, ddar, obnam, rdiff-backup and duplicity)
  • bench attic with the noatime patch
  • bench dev attic vs dev bup
  • bench data removal
  • bench encryption
  • test data recovery
  • run multiple backup runs, on different datasets, on a cleaner environment
  • ideally, extend seivot to do all of that
Categories: Elsewhere

NEWMEDIA: Drush Make: Evaluating the Benefits and Pain Points of Each Approach

Planet Drupal - Tue, 18/11/2014 - 04:54
Drush make is a popular solution for Drupal developers wishing to represent an entire application codebase in a single make file (or collection of make files), but does it always make sense to use? And is it a one size fits all solution? This article reviews several advantages and disadvantages of the more common approaches used within the Drupal community.

A Brief History of the Makefile

Technically, my very first computer was a Tandy 1000. In reality, it was a glorified game console used by me, my mom, and my brother to play Tetris off a 3.5" floppy disk that I had copied from a friend at school. My first real computer came 5 years later: a Packard Bell Pentium 60MHz with a CD-ROM drive. I was in heaven. It wasn't long until I ditched Windows 95 in order to take my first tiptoe into geekdom by installing the first Linux distribution I came into contact with: Caldera.

Back in those days, Linux was a labor of love and really took DIY to the extreme. Most of the time packages were not readily available, which meant you had to compile software yourself. Enter the makefile. This single document represented all the dependencies, parameters, and commands necessary to configure, compile, and install an application across a diverse set of Linux distributions. It was a thing of beauty... when it worked. And when it didn't, it was a time-consuming nightmare that resulted in a substantial amount of cursing. Needless to say, I had a love/hate relationship with makefiles.

Drush Make

Drush make files, in comparison to compiling software applications, are a much more straightforward solution. In reality they are nothing more than a compiled shopping list of the specific Drupal modules, patches, and 3rd party libraries necessary to describe a fully functional Drupal application. The key difference between the drush make file and the resulting build is that the make file is a single-file representation of what will ultimately generate the full file and folder structure of the Drupal application.

In comparison to Linux makefiles, drush make files are much simpler and as a result do not suffer from the compilation nightmares that I used to experience. However, there are also some commonalities: a single file representation and the time necessary to generate versus using a predefined package (or binary).

Comparison to Ruby Gems and Chef

The makefile mindset is not unique to the Linux OS or Drupal. We see the same pattern in the Ruby language and (by extension) configuration management applications like Chef. Ruby users can leverage gemfiles, which provide a similar shopping list style of gem dependencies. Chef leverages the concept of a Berksfile to specify cookbook dependencies. In each case, we have a list of items that the application then uses to generate a desired state. This single file representation is very efficient because it doesn't require one to lug a large set of files and folders around. Rather, they are represented and then generated as needed.
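A Gemfile, for example, is exactly such a shopping list (a sketch; the gems and version constraints are arbitrary examples):

```ruby
# Gemfile: a declarative dependency list, resolved by `bundle install`
source 'https://rubygems.org'

gem 'rails', '4.1.6'        # exact version pin
gem 'nokogiri', '~> 1.6'    # pessimistic constraint: >= 1.6, < 2.0
```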

The pattern we see again and again is simple. We can either have the full package (binaries, full Drupal application, gems, cookbooks, etc) or a representation of that state in a single file. This leads us to...

The Key Question

To make a determination of whether or not Drush makefiles are the appropriate method to use in the full software development life cycle (i.e. development, launch, ongoing support, and then deprecation), we need to ask ourselves which method makes sense from at least 3 perspectives: development, operations, and product owners. The answer is not necessarily black and white because there are multiple considerations, multiple stakeholders, and competing optimizations. To that end, I tried to highlight as many pros and cons for a drush makefile approach and leave it to you to decide what is appropriate for you and your use case.

Drush Make Advantages

Simplest Possible Representation

A single 1 kilobyte file containing 10 lines could fully represent a very simple drush site install containing a few contrib modules. By comparison, Drupal 7.30 is 3.5 megabytes (zipped) and consists of several hundred folders and several thousand files. If we extend the makefile as a component of an installation profile then the only code that is necessary within the profile is that which is specific and unique to that project. In short, a makefile can allow a repo to contain the simplest possible representation of a Drupal application.
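For illustration, such a ten-line makefile might look like this (a hypothetical sketch; the project names and versions are arbitrary examples):

```
; example.make (hypothetical)
core = 7.x
api = 2

projects[drupal][version] = 7.30

projects[views][version] = 3.9
projects[ctools][version] = 1.4
```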

Easier Code Reviews of Diffs

Piggybacking off of the previous item, comparing a diff between makefiles is trivial even when dozens of modules have been updated. A module upgrade might simply show views going from version 7.x-3.8 to 7.x-3.9. By contrast, viewing a commit difference between the actual modules would require sifting through a much longer list of changes that don't really mean much beyond the version change.

Security and Hacked Code

With a specific module version against a public repo, there is no question what is being committed. It's an exact copy of what is contained in the public repo. By contrast, it's much less obvious when upgrading the module itself whether or not the person committing it added any additional tweaks or modifications to it. The person reviewing the code would have to run an MD5 checksum against the code to ensure it was unmodified (or re-run drush make against the makefile).
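One way to automate that review is to reduce each directory tree to a single checksum and compare; a rough sketch (the helper name and the paths in the comment are made up):

```shell
#!/bin/sh
# tree_md5: one stable checksum for an entire directory tree.
# The file list is sorted so the result does not depend on
# filesystem enumeration order.
tree_md5() {
  (cd "$1" && find . -type f -print0 | sort -z | xargs -0 md5sum) \
    | md5sum | cut -d' ' -f1
}

# Hypothetical usage: compare a vendored module against a pristine
# copy rebuilt by drush make.
#   [ "$(tree_md5 sites/all/modules/views)" = "$(tree_md5 /tmp/pristine/views)" ] \
#     && echo "unmodified" || echo "locally patched!"
```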

Inheritance and Re-usability

Drush make file inheritance can be used as a way to standardize particular modules, themes, libraries, and patches across many projects. One can then layer on additional and more specific sub-make files to get more granular. This can be particularly powerful in situations where one is managing dozens to hundreds of sites derived from a common base installation profile.

Maintain Patches

Let's face it: while Drupal core should never be forked, there are situations where legitimate patches sit in RTBC purgatory for years (I'm looking at you, Secure Pages). Managing and tracking those patches while staying up to date with Drupal core and contrib upgrades can be challenging and easy to forget. With a drush make file, however, applying the patches in conjunction with the appropriate upgrades becomes a trivial secondary step.

Drush Make Disadvantages

Deployment Overhead

If we use the simplest possible representation (i.e. a makefile or a makefile within an installation profile) for a deployment, then every deployment will require a complete rebuild of the entire Drupal application even in the case of a single line change. Compare this with a more straightforward strategy of a git pull followed by an rsync, which would dramatically cut down the time to deploy as well as the server resources required. Building Drupal from scratch every time can add up, particularly on a shared server with multiple Drupal instances.

In addition to the build process, there is also the time involved with priming drush's cached copy of all the modules and libraries specified by the makefile.

External Dependencies

Using a makefile as the simplest possible representation also introduces the challenge of external dependencies during a production deployment. For example, an updated makefile may require new modules from drupal.org, new libraries from github, and patches hosted somewhere else. If any one of those dependencies fail, the deployment cannot proceed. Contrast this with the situation where a full clone of the git repo already lives on the server it's deploying against and there is a much smaller risk of failure.

It's important to note that this consideration isn't as much of a concern during the development phase because a short outage doesn't directly impact a production level service. However, once the application is in production and uptime is a more important factor, these external dependencies may no longer be appropriate.

Git Bisect, History, and Merges

Git bisect is a powerful strategy for efficiently stepping through a long commit log to determine when a bug or regression was introduced. If the full codebase is present, then one can quickly jump through the commit log without doing anything else (except perhaps a feature revert, a registry rebuild, and/or a cache clear). With the pure makefile approach, a rebuild is necessary after each change to address changes within the dependencies. While there are certainly work-arounds, this isn't as clean a process.

Comparing code across branches is also made more difficult. If one is trying to see whether a function changed between versions of a module, one cannot simply compare two commits within the git history or compare between two branches.

Version Control Status

One quick and easy way to check for changes within a git repository is through the use of commands like "git status". The use of submodules can make this slightly less effective because it requires one to traverse to each submodule to verify if any changes are present. And a build from a makefile is even more difficult because any components added into the profile will show up as false positives and any components that are above the profile directory will not be tracked in version control at all. This can make it very difficult to verify that there are no changes across the entire code base.

Hosting Options

Unfortunately, not all hosting providers will play nice with a drush make representation of a Drupal site. Shared hosting solutions (such as entry-level GoDaddy plans) will not allow one to install drush on the server. And until very recently, Pantheon and Acquia did not provide support for drush make files out of the box.

Complexity

One additional consideration is how the site might be handed off to the client if they decide to manage it themselves. If they are not sophisticated enough to use a makefile approach, they might have a difficult time deploying or maintaining it. Worse, if they need to be provided a full zip of the site while code changes are still being delivered from a repository containing only a make file manifest, then providing those changes might be more difficult than necessary.

Discussion

There is no question that drush make files are powerful, but they do introduce some potential limitations for certain use cases, particularly as one enters the launch and post-launch phases of a project. Once the site is live and out of heavy development, it may make more sense to switch from a manifest to a full code repository.

A Hybrid Solution

There is a hybrid solution that retains some of the best features of both approaches. If a drush make file is included within the root of the Drupal application, it can still be used to control and enforce all the code that makes up the full Drupal codebase (modules, libraries, themes, and patches). This is exactly the approach Drupal 8 takes with its composer.json file and the resulting components stored within core/vendor.
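The hybrid layout can be sketched with a minimal make file committed at the root of the full codebase. The file name, version pins, and patch URL below are purely illustrative, not taken from any real project:

```ini
; build-mysite.make -- illustrative manifest kept in the repository root.
; The full codebase it generates is also committed, so the manifest acts
; as a record of (and a tool to enforce) the site's components.
api = 2
core = 7.x
projects[drupal][version] = 7.34

; Contributed modules (version pins are examples only).
projects[views][version] = 3.8
projects[ctools][version] = 1.4

; An example patch applied on top of a project (URL is a placeholder).
projects[ctools][patch][] = https://www.drupal.org/files/issues/example.patch
```

The codebase can then be rebuilt from scratch with "drush make build-mysite.make <build-path>" when needed, while day-to-day deployment continues to use the full repository.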

A Drupal 7 example of this approach is the RhymesSite project. One can completely rebuild the codebase from the makefile while retaining all the advantages of a complete codebase when ready to deploy to production.

Closing Remarks

I hope you found this helpful. If you have specific experiences or insights that could make this analysis better and/or more accurate, we’d love to hear from you!

Categories: Elsewhere

Stauffer: The Most Important Thing I learned while writing my first Facebook app

Planet Drupal - Tue, 18/11/2014 - 02:51

When I learned I was going to be writing my first Facebook web app, I was pretty excited. Much like the feeling of writing a Hello World! in a new programming language, I knew I was going to expand my skillset. I really enjoy new opportunities to learn and solve problems. In fact, I feel this is a fundamental attribute of being a developer. We code because we want to solve problems. So without further ado, let me share the most important thing I learned while writing my first Facebook app. Hopefully my experience will give someone else a head start in their own future project.

You need to ask Facebook for permission to ask permission from the User.

When you integrate Facebook login into your website, you can include the default permissions, which cover public profile and email. After the user logs in, they can approve giving your site this information. If you need more than that, e.g., you want to be able to post to a user’s wall, you will need an extended permission. However, you can’t just request an extended permission via the Facebook API; you will have to ask Facebook for permission to include that extended permission in your login.

My assumption is that Facebook checks out your website or app to determine why you need an extended permission. If this is in fact the case, I believe it is a good thing, because at least we can assume that Facebook is checking the legitimacy of your app. However, let’s wear a black hat for a second. If you really think about it, this process does not stop a malicious programmer from pretending they have a legit app. Once Facebook has approved the extended permissions, a malicious programmer can still change their code and do bad things.

Time is one of the most important parts of web development, so once you finish your website or mobile app, don’t celebrate too early! There will be a delay after making your project “live.” If you need an extended permission and Facebook is one of your core functionalities, you will have to wait for Facebook’s approval.

Note: You do not need to do this during development, because you will have all necessary extended permissions in your dev app. But once you are ready for production, you will have to submit your application to Facebook for review.

Tags: facebook app , Drupal , Planet Drupal
Categories: Elsewhere

Forum One: The Drupal 8 Decision

Planet Drupal - Tue, 18/11/2014 - 01:20

From time to time, we all face big life choices. Should I attend this college? Should I take this job? Should I marry this person?

Yet few life choices loom larger for you and your organization in the next year than “When should I upgrade to Drupal 8?”

Well…perhaps the Drupal 8 decision doesn’t quite rank with the others, but for mission-driven organizations, the decision to adopt this major new release is a significant one, with implications for your digital communications for years to come. For most organizations, the upgrade represents a substantial investment that must be planned, scheduled, and budgeted.

In this article, I’ll provide a rapid overview of the promise and challenge of Drupal 8. Then, I’ll lay out the choices you are facing as a current user, or potential Drupal adopter.

The Promise of Drupal 8

Drupal 8, the first major new release in four years, represents a substantial technological departure from previous versions. (More on what constitutes a major Drupal upgrade.)

For marketers and communications professionals, there’s a lot to like. There are over 200 new features and improvements, including a mobile-first approach, in-place editing, and improved accessibility.

For technologists, Drupal 8 offers improved development techniques, including improved APIs and built-in Web services. But D8 also comes with a learning curve: it has an entirely new architecture and methodology, and it will take time for your developers to become comfortable with the new Symfony2 components. Drupal 8’s Object-Oriented Programming approach brings increased flexibility for those with the hardcore computer science skills necessary to exploit it.

This infographic (PDF) from the Drupal Association summarizes Drupal 8’s key features.

The Challenge of Drupal 8

One challenge for Drupal users is that the release timeline is still unknown. Certainly, we’re getting close: the beta version was released in October 2014, and the final release is anticipated sometime in 2015.

Historically, migrations from one major version to another are rarely straightforward and often non-trivial. Every Drupal website is a conglomeration of the “core” Drupal software as well as typically dozens of add-on “modules” that extend or improve the software’s functionality. For example, the module that runs your fancy homepage carousel in one version may not be updated or tuned for the next version. And while Drupal 8’s upgrade process is improved, timelines and costs for Drupal upgrade projects are as varied as the websites themselves.

It’s important to realize that once Drupal 8 is released, support for previous versions will flag. If you are currently on Drupal 7, with no immediate plans for a major redesign, the issue is less pressing. But for those on earlier versions, you must be planning and budgeting for an upgrade now.

The challenge for digital communication planners depends on your existing situation. Let’s look at the possible approaches — one situation at a time.

Your Website is on Drupal 5

If you are using Drupal 5, your plan is simple. You must upgrade to Drupal 6 as soon as you possibly can.

Drupal 5 was released in 2007 and superseded by Drupal 6 a year later. If your website is running Drupal 5, it hasn’t received any security patches in four years, and is likely already compromised. It’s probably serving as a jumping-off point for spamming and other nefarious activities. Improving a Drupal 5 site now is difficult, and few reputable consultancies would agree to improve a Drupal 5 site without first upgrading it.

Once you are on Drupal 6, you then must consider upgrading to Drupal 7 or 8 within the next year, as described in the sections that follow.

Your Website is on Drupal 6

If your website is on Drupal 6, you need to plan for an upgrade to Drupal 7 or 8 within the next year. Once Drupal 8 is released in 2015, official support for Drupal 6 core and modules will cease within three months. This means that as security vulnerabilities are discovered, hackers are highly likely to compromise your website for their evil ends, and you will be powerless to plug the holes.

This means that Drupal 6 sites should be planning and budgeting NOW to upgrade to at least Drupal 7 in 2015. You want to be ready to move quickly once Drupal 8 is released. Three months of support is not a long time.

Your Website is on Drupal 7 (Or you are considering Drupal for Your Next Project)

If your site is currently on Drupal 7, you can breathe easier. Drupal 7 has been out for four years, and nearly a million sites are running Drupal 7 core — far more than all previous versions combined.

It will soon be time for these sites to transition to Drupal 8, but the community will continue to support D7 until Drupal 9 is released, which is surely at least two years away.

Therefore, if you are happy with your existing website and are planning only minor improvements to the design or functionality in the near term, you can sit back and do nothing for now. You should, of course, start planning for an upgrade to Drupal 8 in the next two years.

If you are currently considering building a new site on Drupal — or contemplating a major redesign in the next six months — the decision is more complicated.  You have two options.

The first option is to redesign on Drupal 7 now. That’s what thousands of projects are doing as we speak. At Forum One, every new major Drupal project currently starts with Drupal 7, and will likely continue to do so for several months following Drupal 8’s release. The software is mature, stable, widely-supported, and well understood by our staff.

The second option is to postpone your Drupal project until after Drupal 8 is released. Early adopters of D8 will get the maximum value from the new software, as their finished solution will live on Drupal for the longest period. They likely won’t need to consider a new Drupal upgrade until sometime in 2017 at the earliest. And given that the community will continue to support Drupal 8 even once Drupal 9 is released, you would be able to sleep easy knowing that your site will be able to stay patched and secure for the next four to five years.

However, at this writing, there are distinct trade-offs with waiting for D8. You are tying your project timeline to the D8 release schedule, which is community-driven and not guaranteed. Even once D8 is released, it will be a few months before skilled developers are ready to start new projects on 8. While savvy technologists are already experimenting with the D8 beta, it will take some time for modules, processes, training materials, and hosting environments to become tuned for this substantially-new platform.

Here’s another important consideration: Are you the type of product owner who can afford to be on the cutting edge? Early technology adopters typically pay more for the technology than those who follow. Later adopters benefit from the lessons of early adopters. Still, this may be an acceptable premium for your organization if Drupal 8 improves your efficiency, reduces long-term costs, or gives you a competitive advantage in achieving your goals.

Decide to Plan

Like all software, the lifespan of every major Drupal version is limited. Drupal 5 is end of life, Drupal 6 is very near end of life, and — after a good run — Drupal 7 will enter its golden years in the next twelve months.

While the Drupal 8 decision may not have the gravity of other life choices, you have an obligation to ensure that the core software for your site is secure, stable, and well-supported.

Your most important decision is to decide to plan. Drupal 8 is coming. Are you prepared?

 

Categories: Elsewhere

Metal Toad: Drupal 8: First Impressions for the Back-End Developer

Planet Drupal - Tue, 18/11/2014 - 00:54
Drupal 8: First Impressions for the Back-End Developer Mon, 11/17/2014 - 15:54 keithdechant

Drupal 8 is in beta now, and recently I’ve had a chance to start working with it. While much of the admin interface is comparable to Drupal 7, there have been some important changes for site builders and back-end developers. In this post, I will be looking at file system and database structure changes, Drush setup, and the new configuration entity type.

Disclaimer: Drupal 8 beta 2 is not ready for production yet. If you start working with it, be warned that future beta releases might break backwards compatibility. You might need to write some code to upgrade. If this is not an option for you, you would be best served sticking with Drupal 7 until the Drupal 8 release candidate is available.

Content Types

One of the most noticeable changes to the Drupal 8 admin UI is the revised Content Type creation pages.

The “Manage Fields” page no longer contains a “widget” column. There is a new tab, “Form Display,” which allows more flexible configuration of the node add/edit form.

“Manage Fields” also no longer allows custom sorting of the fields. “Body” is always listed first, followed by the custom fields in alphabetical order. The fields can be reordered on the “Manage Form Display” and “Manage Display” tabs.

Comments

In Drupal 8, comments are set up as a field, rather than a setting in the node type. To turn on comments for a content type, add a field of type "Comments" on the "Manage Fields" page. This is a more flexible system than in Drupal 7, allowing more than one type of comments for a single node.

This change mostly affects the admin UI. Comments are still entities in Drupal 8, and their underlying data structure is similar to Drupal 7.

Database structure

The notable changes in the database structure relate to the user profiles and the table names for field tables.

The “users” table (a single table in Drupal 7) has been split into “users” and “users_field_data” which contains the data from the built-in fields like name and password. Several other tables, including "node", "comment", and "taxonomy_term" have undergone similar structure changes. This restructuring allows for easier translations of data in the core fields.

Drupal 8 field tables now are prefixed with the type of entity they belong to. They have names like “node__field_image” or “user__field_first_name”. The data structure of these tables is similar to the Drupal 7 “field_data_*” tables, with only a few changes (e.g., the "language" column is now named "langcode").

Drush

Drush 6, commonly used with Drupal 7, is not compatible with Drupal 8. You will need to install Drush 7, which is still in development. Not to worry, it’s easy to install Drush 7 alongside Drush 6. Here is an excellent article about setting up Drush 7 with Drupal 8 using Composer: https://www.acquia.com/blog/leverage-drush-7-drupal-8

Cache and Registry

Cache and registry handling in Drupal 8 has undergone some major changes from Drupal 7. Notably, the “cache clear” command has been replaced by “cache rebuild.”

Drupal 7:
“drush cache-clear all”
(a.k.a. “drush cc all”)

Drupal 8:
“drush cache-rebuild”
(a.k.a. “drush cr”)

Note: In Drupal 8, you don't need to specify "drush cache-rebuild all." The "cache-rebuild" command appears to always clear all the caches, and any additional arguments are ignored.

Note: “drush cache-clear drush” is still used to update the list of Drush commands.

As of this writing, the “drush registry-rebuild” command does not appear to be supported for Drupal 8. This may change in the near future.

Package Management

Drupal 8 does not allow disabling of modules. The "drush pm-disable" command has been removed. To turn off a module, you need to uninstall it with "drush pm-uninstall" or by using the "Uninstall" tab on the "Extend" page in the admin UI.

As of Drupal 8 beta 2, uninstalling a module does not automatically delete the module's configuration data. However, some modules may have uninstall hooks which delete their configuration when they are uninstalled. Modules like these can no longer be uninstalled without deleting their configuration.

Package Manager bugs

As of this writing, there is a Drush 7 bug that causes an infinite loop when trying to use “drush pm-enable” (a.k.a., “drush en”) to simultaneously download and install a module:

https://github.com/drush-ops/drush/issues/5

The workaround is to download the module ("drush dl somemodule") and enable it ("drush en somemodule") in two separate steps, or to enable the module through the admin UI. This does not affect modules which you have already downloaded and wish to enable.

Compiled CSS, JS, and Twig files

In sites/default/files are several new directories:

  • sites/default/files/php - Contains compiled Twig templates
  • sites/default/files/css - Contains compiled and gzipped CSS files
  • sites/default/files/js - Contains compiled JS files

These files do not need to be added to your Git repository. Drupal will generate them automatically on page load, and they will have different file names on each machine (local dev machine, dev server, production server, etc.). Similarly, you shouldn’t try to edit these files manually because Drupal will automatically overwrite your changes at the next cache rebuild.

The Drush “cache-rebuild” command will erase these files and rebuild them.

Common errors - File permissions

Incorrect file permissions can cause PHP errors, or missing CSS or JS. Consider the following error:

Fatal error: Class '__TwigTemplate_09ab09ab7c23bd1ffe135ac9872354bdeca182f' not found in /path/to/your/site/drupal/core/lib/Drupal/Core/Template/TwigEnvironment.php on line 152

This error occurs when the web server doesn’t have permissions to write to the directory sites/default/files. Change the permissions or ownership on that directory and this error should go away.

If you encounter pages missing their CSS files, try checking the same file permissions.
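A possible fix can be sketched as follows. The demo below operates on a throwaway directory; on a real site you would target the actual docroot, would likely need sudo, and the www-data user mentioned in the comments is a Debian/Ubuntu Apache assumption:

```shell
# Demo: granting write access on sites/default/files (scratch directory
# stands in for the real docroot, e.g. /var/www/drupal).
set -e
DOCROOT=$(mktemp -d)
mkdir -p "$DOCROOT/sites/default/files"

# Grant owner and group read/write, plus execute on directories only:
chmod -R u+rwX,g+rwX "$DOCROOT/sites/default/files"

# On a real site you would also hand the directory to the web server user:
#   sudo chown -R www-data:www-data "$DOCROOT/sites/default/files"
# and then run "drush cache-rebuild" so the compiled Twig/CSS/JS files
# are regenerated with the corrected permissions.
```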

Configuration Entities

Drupal 8 introduces a new type of entity, the “configuration entity.” These are represented as YAML files in the “config/install” subdirectory within a module. When the module is installed, the data from these YAML files is loaded into entries in the “config” table in the database.
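As a sketch, a module could ship a taxonomy vocabulary this way. The file below is a hypothetical example; the exact keys may differ between Drupal 8 beta releases:

```yaml
# config/install/taxonomy.vocabulary.tags.yml
# Placed in the module's config/install directory; imported into the
# "config" table when the module is installed.
vid: tags
name: Tags
description: 'A free-tagging vocabulary installed with the module.'
hierarchy: 0
weight: 0
```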

The "config" table contains many of the settings that were formerly in the "system" table in Drupal 7. It also contains definitions for a number of things that were formerly in separate tables. Notable examples are Taxonomy vocabularies and text filter formats.

Config entities are only updated when the module is installed or uninstalled. Drupal does not rebuild them when you rebuild the cache. During module development, you either need to uninstall and reinstall your module, or use the config_devel contrib module to make managing your config entities easier.

Config entities are particularly useful when writing content migrations, because the migration definitions are config entities.

As of Drupal 8 beta 2, configuration entities are no longer removed when a module is uninstalled. If your custom module uses configuration entities, and you don’t want these to persist during a reinstall of the module, it might be a good idea to write an uninstall hook to remove the entities.

Example uninstall hook to remove configuration entities:

/**
 * Implements hook_uninstall().
 *
 * Cleans up config entities installed by this module.
 */
function yourmodule_uninstall() {
  db_query("DELETE FROM {config} WHERE name = 'your.config.entity.name'");
  drupal_flush_all_caches();
}
Categories: Elsewhere

Károly Négyesi: Adding comma separated username autocomplete to a D7 form

Planet Drupal - Mon, 17/11/2014 - 22:20

Today I needed a field that could contain comma separated usernames, and obviously the requirements included autocomplete. I thought this problem must be solved already in the ecosystem, and sure enough, Views has it. So I added '#autocomplete_path' => 'admin/views/ajax/autocomplete/user', '#element_validate' => array('my_module_usernames_validate') and then

<?php
function my_module_usernames_validate(&$element) {
  if ($values = drupal_explode_tags($element['#value'])) {
    // Good thing Views doesn't use the native constructor.
    $handler = new views_handler_filter_user_name();
    // And this function doesn't use the object at all.
    $handler->validate_user_strings($element, $values);
  }
}
?>

Ps. This has been confirmed as working (with a plugin instance) in D8 too.

Categories: Elsewhere

Vincent Sanders: NetSurf Developer workshop IV

Planet Debian - Mon, 17/11/2014 - 21:54
Over the weekend the NetSurf developers met to make a concentrated effort on improving the browser. This time we were kindly hosted by Codethink in their Manchester office in a pleasant environment with plenty of refreshments.

Five developers managed to attend in person from around the UK: Michael Drake, John-Mark Bell, Daniel Silverstone, Rob Kendrick and Vincent Sanders. We also had Chris Young providing some bug fixes remotely.

We started the weekend by discussing all the thorny core issues that had been put on the agenda and ensuring the outcomes were properly noted. We also held the society AGM which was minuted by Daniel.

The emphasis of this weekend was very much on planning and doing the disruptive changes we had been putting off until we were all together.

John-Mark and I managed to change the core build system, as used by all the libraries, to use standard triplets to identify systems and the gnu autoconf style of naming for parameters (i.e. HOST, BUILD and CC being used correctly).

This was accompanied by improvements and configuration changes to the CI system to accommodate the new usage.

Several issues from the bug tracker were addressed and we put ourselves in a stronger position to address numerous other usability problems in the future.

We managed to pack a great deal into the 20 hours of work on Saturday and Sunday, although, because we concentrated much more on planning and infrastructure than on a release, the metrics of commits and files changed were lower than at previous events.

Categories: Elsewhere

Niels Thykier: The first 12 days and 408 unblock requests into the Jessie freeze

Planet Debian - Mon, 17/11/2014 - 21:17

The release team is receiving an extreme number of unblock requests right now. For the past 22 days[1], we have received no less than 408 unblock/ageing requests. That is an average of ~18.5/day. In the same period, the release team has closed 350 unblock requests, averaging 15.9/day.

This number does not account for the unblocks we add without a request, when we happen to spot something while looking at the list of RC bugs[2]. Nor does it account for unblock requests currently tagged “moreinfo”, of which there are currently 25.

All in all, it has been 3 intensive weeks for the release team.  I am truly proud of my fellow team members for keeping up with this for so long!  Also a thanks to the non-RT members, who help us by triaging and reviewing the unblock requests!  It is much appreciated. :)

 

Random bonus info:

  • d (our diffing tool) finally got colordiff support during the Release Sprint last week.  Prior to that, we got black’n’white diffs!
    • ssh coccia.debian.org -t /srv/release.debian.org/tools/scripts/d <srcpkg>
    • Though coccia.debian.org does not have colordiff installed right now. I have filed a request to have it installed.
  • The release team have about 132 (active) unblock hints deployed right now in our hint files.

 

[1] We started receiving some in the 10 days before the freeze as people realised that their uploads would need an unblock to make it into Jessie.

[2] Related topics: “what is adsb?” (the answer being: Our top hinter for Wheezy)

 


Categories: Elsewhere

Creative Juices: Building REST web services with Drupal 7

Planet Drupal - Mon, 17/11/2014 - 19:48
Building REST web services with Drupal 7 Mon, 11/17/2014 - 13:48 matt
Categories: Elsewhere

Daniel Leidert: Rsync files between two machines over SSH and limit read access

Planet Debian - Mon, 17/11/2014 - 17:32

From time to time I need to get contents from a remote machine to my local workstation. The data is sometimes big and I don't want to start all over again if something fails. Furthermore, the transmission should be secure and the connection should be limited to syncing only this path and its sub-directories. So I've set up a way to do this using rsync and ssh, and I'm going to describe that setup here.

Consider you have already created an SSH key, say ~/.ssh/key_rsa together with ~/.ssh/key_rsa.pub, that on the remote machine there is an SSH server running which allows login by public key, and that rsync is available there. Let's further assume the following:

  • the remote machine is rsync.domain.tld
  • the path on the remote machine that holds the data is /path/mydata
  • the user on the remote machine being able to read /path/mydata and to login via SSH is remote_user
  • the path on the local machine to put the data is /path/mydest
  • the user on the local machine being able to write /path/mydest is local_user
  • the user on the local machine has the private key ~local_user/.ssh/key_rsa and the public key ~local_user/.ssh/key_rsa.pub

Now the public key ~local_user/.ssh/key_rsa.pub is added to the remote users ~remote_user/.ssh/authorized_keys file. The file will then probably look like this (there is just one very long line with the key, here cut out by [..]):

ssh-rsa [..]= user@domain.tld

Now I would like to limit the abilities of a user logging in with this key to only rsyncing the special directory /path/mydata. I therefore precede the key with a forced command, which is explained in the manual page sshd(8). The file then looks like this:

command="/usr/bin/rsync --server --sender -vlogDtprze . /path/mydata",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa [..]= user@domain.tld

I then can rsync the remote directory to a local location over SSH by running:

rsync -avz -P --delete -e ssh remote_user@rsync.domain.tld:/path/mydata/ /path/mydest

That's it.

Categories: Elsewhere

Pages

Subscribe to jfhovinne aggregator