Feed aggregator

Appnovation Technologies: Straight from the Source: Achieving Your Goals with osCaddie

Planet Drupal - Fri, 19/09/2014 - 20:02
See why global non-profit organization Teach For All chose our osCaddie solution.
Categories: Elsewhere

Drupal core announcements: Today there are zero Drupal 8 beta blockers! Here's what's next.

Planet Drupal - Fri, 19/09/2014 - 19:17

As of 06:58 UTC today, September 19, there are zero Drupal 8 beta blockers. This means that, after more than nine months of focused effort, we are almost ready to release the first Drupal 8 beta!

When will Drupal 8.0.0-beta 1 be released?

Today (September 19), we have released one more Drupal 8 alpha, Drupal 8.0.0-alpha15. This alpha can be treated as a "beta release candidate". If no additional beta blockers are identified in the next 10-14 days, we will then tag the first beta! (If we do discover additional beta blockers, then we will evaluate them and adjust our timeline.)

What does beta mean?

Betas are good testing targets for developers and site builders who are comfortable reporting (and where possible, fixing) their own bugs, and who are prepared to rebuild their test sites from scratch if necessary. Beta releases are not recommended for non-technical users, nor for production websites.

See Dries' original announcement about the beta for more information on the beta and the criteria for beta blockers. The explanation of the Drupal 8 release management tags explains the differences between critical beta blockers and other issues impacted by the beta phase.

How can I help? Help stabilize the beta

The beta is an important milestone for Drupal 8. Help test the final alpha for critical and potentially beta-blocking bugs, and take extra care to avoid introducing regressions during this pre-beta window.

Beta deadline issues (complete by September 28)

This pre-beta window is our final chance to complete beta deadline issues. As a reminder, changes to the following have a beta deadline:

  1. Non-critical changes to the core data model. (See the beta-to-beta upgrade path and data model stability policy and the beta-to-beta-upgrade path critical task for ongoing discussion of what is included in the Drupal 8 data model, how we will handle small data model additions, and when we will support a beta-to-beta upgrade path).
  2. Non-critical, backward-compatibility-breaking changes to the public APIs of the following critical subsystems:
    • The Configuration system
    • The Entity Field API
    • The Plugin API
    • The Menu and Routing APIs
  3. Other broad, non-critical changes that significantly break backward compatibility, at core maintainer discretion.

Beta deadline issues can be committed up until Sunday, September 28, after which there will be a freeze to ensure stability. If you have questions or concerns about completing a particular change, speak to a core maintainer about it soon.

If you know of issues that would introduce any of these changes, add the "beta deadline" issue tag so that contributors can find and help complete them before the beta. The following issues are particular priorities:

(Also see the full queue of known beta deadline issues.)

Keep in mind that API and schema additions may still be made during the beta phase, at core maintainer discretion. Limited API and data model changes will also happen during the beta phase, though core maintainers will try to isolate these changes to non-fundamental APIs or critical bug fixes. (See the ongoing beta-to-beta-upgrade path discussion.)

Beta target issues

"Beta target" issues are issues that we hope to complete early during the beta phase, but can still be added to Beta 2 or later. These are the next priority after important beta deadline issues. We especially need to work on:

(Also see the full queue of known beta target issues).

Thank you!

Many thanks to the 234 contributors who have helped resolve our 177 beta blockers in Drupal 8, and to the incredibly dedicated Drupal 8 branch maintainers. Your focus and effort is helping us build a solid Drupal 8 beta and, going forward, a better release.

Categories: Elsewhere

Bluespark Labs: Cleaning our repository history

Planet Drupal - Fri, 19/09/2014 - 17:49

In our daily work we all make mistakes in our git commits. Sometimes these errors can easily be repaired just by reverting our commits. But if we are working in a public repository and we have accidentally pushed some sensitive information, we now have a problem.

That sensitive information is in our repository history, and anybody with enough time to explore it can gain access to it. Our clients, or even we ourselves, are now dealing with a privacy issue.

We can always try to repair that commit in our local environment and push our code again using the --force parameter. But we know that when you do that, a kitten dies. And if your team members have already pushed something on top, the whole repository ends up in a mess.

So the best option is to fix this in a more elegant way, one that allows us to erase all traces of our mistake while preserving repository integrity.

Git provides the filter-branch command, but this powerful tool can become too complicated and slow. While trying to find an easier way to do it, I finally came across the BFG Repo-Cleaner.

This tool is an alternative to git filter-branch that provides a faster and easier way to clean git repositories. It is written in Java, so you need to make sure you have JRE 6.0 or above installed. To clean your repository you only have to follow the steps below:

Clone your repository using the --mirror option. (Beforehand, you should manually repair your mistakes in the repository.)

$ git clone --mirror git://example.com/my-repo.git

Now, download BFG and execute it against your cloned repository.
$ java -jar bfg.jar --strip-blobs-bigger-than 1M my-repo.git
This step will remove all the blobs bigger than 1MB from your repository.

Once the index has been cleaned, examine your repository's history and then use the standard git gc command to strip out the unwanted dirty data, which Git will now recognise as surplus to requirements:
$ cd my-repo.git
$ git reflog expire --expire=now --all
$ git gc --prune=now --aggressive

Finally, once you're happy with the updated state of your repo, push it back up:
$ git push

If everything went well, your repository won't include any of the accidentally committed files.

Here you have some common examples to use with Drupal:
Delete all files named 'id_rsa' or 'id_dsa' :
$ java -jar bfg.jar --delete-files id_{dsa,rsa} my-repo.git

Delete database dumps:
$ java -jar bfg.jar --delete-files *{mysql,mysql.gz}

Delete files folder:
$ java -jar bfg.jar --delete-folders files
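
Replace leaked secrets: BFG also has a --replace-text option that rewrites matching strings throughout history. A rough example, assuming a passwords.txt file listing one secret per line (the file name is only illustrative):
$ java -jar bfg.jar --replace-text passwords.txt my-repo.git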

Note that BFG assumes you have repaired your repository before executing it, so make sure your current commits are clean. This protects your current work and gives you peace of mind: BFG only rewrites your repository history, it does not meddle with the current files of your project.

Finally, here you have some useful related links:

Tags: Drupal Planet
Categories: Elsewhere

Code Karate: Drupal 7 Honeypot Module

Planet Drupal - Fri, 19/09/2014 - 14:48
Episode Number: 169

In this tutorial you will learn about the Honeypot module. The Honeypot module is a spam-prevention module that uses a hidden form field to stop spam bots from posting to your site. This tutorial shows you how to configure the module to work on various forms on your site.

Tags: Drupal, Forms, Webform, Drupal 7, Drupal Planet, Spam Prevention
Categories: Elsewhere

HackMonkey: Configuring CSS Source Maps & Compass

Planet Drupal - Fri, 19/09/2014 - 10:04

After hours of searching Google, lots of trial and error, and a bunch of grumbling, I had a breakthrough and finally figured out how to get Source Maps to work under Chrome and Compass. The problem is that this functionality has been around for over a year in various forms in the pre-release versions of Sass and Chrome. As such, many of the posts I found were outdated and didn't work with the current, stable versions. So this post is partly to document the process for myself (and a small victory lap!), but hopefully someone else will get something out of it.

Categories: Elsewhere

Dariusz Dwornikowski: getting real "done date" of a bug from Debian BTS

Planet Debian - Fri, 19/09/2014 - 09:17

As I wrote in my last post, currently neither the SOAP interface nor the Ultimate Debian Database provides the date when a given bug was closed (the done date). It is quite hard to calculate statistics on a bug tracker when you do not know when a bug was closed.

The done date of a bug can be found in its log. The log itself can be downloaded with the SOAP method get_bug_log, but processing it is quite complicated. The same goes for scraping the BTS's web interface. Fortunately, the web interface makes it possible to download a log in mbox format.

Below is a script that extracts the done date of a bug from its log in mbox format. It uses requests to download the mbox and caches the result in ~/.cache/rfs_bugs, which you need to create yourself. It performs the following checks:

  1. Check existence of a header e.g. Received: (at 657783-done) by bugs.debian.org; 29 Jan 2012 13:27:42 +0000
  2. Check for header CC: NUMBER-close|done
  3. Check for header TO: NUMBER-close|done
  4. Check for Close: NUMBER in body.

The code is below:

# Extract the "done date" of a Debian bug from its log, fetched in mbox format.
import requests
from datetime import datetime
import mailbox
import re
import os
import tempfile


def get_done_date(bug_num):
    CACHE_DIR = os.path.expanduser("~") + "/.cache/rfs_bugs/"

    def get_from_cache():
        if os.path.exists("{}{}".format(CACHE_DIR, bug_num)):
            with open("{}{}".format(CACHE_DIR, bug_num)) as f:
                return datetime.strptime(f.readlines()[0].rstrip(), "%Y-%m-%d").date()
        else:
            return None

    done_date = get_from_cache()
    if done_date is not None:
        return done_date
    else:
        # Download the bug log in mbox format from the BTS web interface.
        r = requests.get("https://bugs.debian.org/cgi-bin/bugreport.cgi"
                         "?mbox=yes;bug={};mboxstatus=yes".format(bug_num))
        d = try_header(r.text)
        if d is None:
            d = try_cc(r.text)
        if d is None:
            d = try_body(r.text)
        if d is not None:
            with open("{}{}".format(CACHE_DIR, bug_num), "w") as f:
                f.write("{}".format(d.date()))
        else:
            return None
        return d.date()


def try_body(text):
    # Look for a close/done message and read the date from its Received header.
    reg = "\(at\s.+\)\s+by\sbugs\.debian\.org;\s(\d{1,2}\s\w\w\w\s\d\d\d\d)"
    handle, name = tempfile.mkstemp()
    with open(name, "w") as f:
        f.write(text.encode('latin-1'))
    mbox = mailbox.mbox(name)
    for i in mbox.items():
        if i[1].is_multipart():
            for m in i[1].get_payload():
                if "close" in str(m) or "done" in str(m):
                    try:
                        result = re.search(reg, i[1]['Received'])
                        return datetime.strptime(result.group(1), "%d %b %Y")
                    except:
                        return None
        else:
            if "close" in i[1].get_payload() or "done" in i[1].get_payload():
                try:
                    result = re.search(reg, i[1]['Received'])
                    return datetime.strptime(result.group(1), "%d %b %Y")
                except:
                    return None
    return None


def try_header(text):
    # Check for a "Received: (at NNNNNN-close|done) by ..." header.
    reg = "Received:\s\(at\s\d\d\d\d\d\d-(close|done)\)\s+by.+"
    try:
        result = re.search(reg, text)
        line = result.group(0)
        reg2 = "\d{1,2}\s\w\w\w\s\d\d\d\d"
        result = re.search(reg2, line)
        d = datetime.strptime(result.group(0), "%d %b %Y")
        return d
    except:
        return None


def try_cc(text):
    # Check the CC: or To: headers for a NUMBER-done address.
    reg = "\(at\s.+\)\s+by\sbugs\.debian\.org;\s(\d{1,2}\s\w\w\w\s\d\d\d\d)"
    handle, name = tempfile.mkstemp()
    with open(name, "w") as f:
        f.write(text.encode('latin-1'))
    mbox = mailbox.mbox(name)
    for i in mbox.items():
        if ('CC' in i[1] and "done" in i[1]['CC']) or ('To' in i[1] and "done" in i[1]['To']):
            try:
                result = re.search(reg, i[1]['Received'])
                return datetime.strptime(result.group(1), "%d %b %Y")
            except:
                return None
    return None


if __name__ == "__main__":
    print get_done_date(752210)

PS: I hope that the script will not be needed in the near future, as Don Armstrong plans a new BTS database; a DebConf14 video about it is here.

Categories: Elsewhere

Daniel Pocock: reSIProcate migration from SVN to Git completed

Planet Debian - Fri, 19/09/2014 - 08:47

This week, the reSIProcate project completed the move from SVN to Git.

With many people using the SIP stack in both open source and commercial projects, the migration was carefully planned and tested over an extended period of time. Hopefully some of the experience from this migration can help other projects too.

Previous SVN committers were tracked down using my script for matching emails to Github accounts. This also allowed us to see their recent commits on other projects and see how they want their name and email address represented when their previous commits in SVN were mapped to Git commits.

For about a year, the sync2git script had been run hourly from cron to maintain an official mirror of the project in Github. This allowed people to test it and it also allowed us to start using some Github features like travis-CI.org before officially moving to Git.

At the cut-over, the SVN directories were made read-only, sync2git was run one last time and then people were advised they could commit in Git.

Documentation has also been created to help people get started quickly sharing patches as Github pull requests if they haven't used this facility before.

Categories: Elsewhere

Glassdimly tech Blog: Drupal 7 Administration Toolbar Roundup

Planet Drupal - Fri, 19/09/2014 - 04:52

The admin toolbar is a user's first look at Drupal. A complex, cluttered toolbar gives the n00b a sense that things in this site are too much to handle. A clean, well-curated interface that presents content tasks first gives the n00b a sense that Drupal is easy to use. Drupal 8 promises a simplified toolbar that gives the user this cozy sense of belonging.

Categories: Elsewhere

Paul Tagliamonte: Docker PostgreSQL Foreign Data Wrapper

Planet Debian - Fri, 19/09/2014 - 03:49

For the tl;dr: Docker FDW is a thing. Star it, hack it, try it out. File bugs, be happy. If you want to see what it's like to read, there's some example SQL down below.

The first question is: what the heck is a PostgreSQL Foreign Data Wrapper? PostgreSQL Foreign Data Wrappers are plugins that allow C libraries to provide an adaptor for PostgreSQL to talk to an external database.

Some folks have used this to wrap stuff like MongoDB, which I always found to be hilarious (and an epic hack).

Enter Multicorn

During my time at PyGotham, I saw a talk from Wes Chow about something called Multicorn. He was showing off some really neat plugins, such as the git revision history of CPython, and parsed logfiles from some stuff over at Chartbeat. This basically blew my mind.

If you're interested in some of these, there are a bunch in the Multicorn VCS repo, such as the gitfdw example.

All throughout the talk I was coming up with all sorts of things that I wanted to do -- this whole library is basically exactly what I've been dreaming about for years. I've always wanted to provide a SQL-like interface into querying API data, joining data cross-API using common crosswalks, such as using Capitol Words to query for Legislators, and use the bioguide ids to JOIN against the congress api to get their Twitter account names.

My first shot was to Multicorn the new Open Civic Data API I was working on; I chuckled and put it aside as a really awesome hack.

Enter Docker

It wasn't until tianon connected the dots for me and suggested a Docker FDW that I got really excited. Cue a few hours of hacking, and I'm proud to say -- here's Docker FDW.

Currently it only implements reading from the API, but extending this to allow for SQL DELETE operations isn't out of the question, and is likely to be implemented soon. This lets us ask all sorts of really interesting questions of the API, and might even help folks writing webapps avoid adding too much Docker-aware logic.
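
To give a feel for what a Multicorn wrapper looks like, here is a minimal, hypothetical sketch (not the actual Docker FDW code): a read-only wrapper subclasses multicorn.ForeignDataWrapper and yields one dict per row from execute(). The Docker API call is stubbed out with canned data; a real implementation would query the daemon at the configured host.

# Hypothetical sketch of a read-only Multicorn wrapper -- not the dockerfdw code.
from multicorn import ForeignDataWrapper


class ContainerFdw(ForeignDataWrapper):
    def __init__(self, options, columns):
        super(ContainerFdw, self).__init__(options, columns)
        # 'host' comes from the options (...) clause of CREATE FOREIGN TABLE.
        self.host = options.get('host', 'unix:///run/docker.sock')
        self.columns = columns

    def _fetch_containers(self):
        # Stub: a real wrapper would hit the Docker API at self.host here.
        return [{'id': 'abc123', 'image': 'debian:unstable',
                 'names': ['/foo'], 'running': True}]

    def execute(self, quals, columns):
        # Multicorn calls execute() on every scan of the foreign table;
        # yield one dict, keyed by column name, per container.
        for container in self._fetch_containers():
            yield dict((col, container.get(col)) for col in columns)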

Setting it up

The only stumbling block you might find (at least on Debian and Ubuntu) is that you'll need a Multicorn `.deb`. It's currently undergoing an official Debianization from the Postgres team, but in the meantime I put the source and binary up on my people.debian.org. Feel free to use that while the Debian PostgreSQL team prepares the upload to unstable.

I'm going to assume you have a working Multicorn, PostgreSQL and Docker setup (including adding the postgres user to the docker group).

So, now let's pop open a psql session. Create a database (I called mine dockerfdw, but it can be anything), and let's create some tables.

Before we create the tables, we need to let PostgreSQL know where our objects are. This takes a name for the server, and the Python importable path to our FDW.

CREATE SERVER docker_containers FOREIGN DATA WRAPPER multicorn options (
    wrapper 'dockerfdw.wrappers.containers.ContainerFdw');

CREATE SERVER docker_image FOREIGN DATA WRAPPER multicorn options (
    wrapper 'dockerfdw.wrappers.images.ImageFdw');

Now that we have the server in place, we can tell PostgreSQL to create a table backed by the FDW by creating a foreign table. I won't go too much into the syntax here, but you might also note that we pass in some options - these are passed to the constructor of the FDW, letting us set stuff like the Docker host.

CREATE FOREIGN TABLE docker_containers (
    "id" TEXT,
    "image" TEXT,
    "name" TEXT,
    "names" TEXT[],
    "privileged" BOOLEAN,
    "ip" TEXT,
    "bridge" TEXT,
    "running" BOOLEAN,
    "pid" INT,
    "exit_code" INT,
    "command" TEXT[]
) server docker_containers options (
    host 'unix:///run/docker.sock'
);

CREATE FOREIGN TABLE docker_images (
    "id" TEXT,
    "architecture" TEXT,
    "author" TEXT,
    "comment" TEXT,
    "parent" TEXT,
    "tags" TEXT[]
) server docker_image options (
    host 'unix:///run/docker.sock'
);

And, now that we have tables in place, we can try to learn something about the Docker containers. Let's start with something fun - a join from containers to images, showing all image tag names, the container names and the ip of the container (if it has one!).

SELECT docker_containers.ip, docker_containers.names, docker_images.tags
  FROM docker_containers
  RIGHT JOIN docker_images
    ON docker_containers.image = docker_images.id;

     ip      |            names            |                  tags
-------------+-----------------------------+-----------------------------------------
             |                             | {ruby:latest}
             |                             | {paultag/vcs-mirror:latest}
             | {/de-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/ny-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/ar-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
 172.17.0.47 | {/ms-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
 172.17.0.46 | {/nc-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/ia-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/az-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/oh-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/va-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
 172.17.0.41 | {/wa-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/jovial_poincare}          | {<none>:<none>}
             | {/jolly_goldstine}          | {<none>:<none>}
             | {/cranky_torvalds}          | {<none>:<none>}
             | {/backstabbing_wilson}      | {<none>:<none>}
             | {/desperate_hoover}         | {<none>:<none>}
             | {/backstabbing_ardinghelli} | {<none>:<none>}
             | {/cocky_feynman}            | {<none>:<none>}
             |                             | {paultag/postgres:latest}
             |                             | {debian:testing}
             |                             | {paultag/crank:latest}
             |                             | {<none>:<none>}
             |                             | {<none>:<none>}
             | {/stupefied_fermat}         | {hackerschool/doorbot:latest}
             | {/focused_euclid}           | {debian:unstable}
             | {/focused_babbage}          | {debian:unstable}
             | {/clever_torvalds}          | {debian:unstable}
             | {/stoic_tesla}              | {debian:unstable}
             | {/evil_torvalds}            | {debian:unstable}
             | {/foo}                      | {debian:unstable}
(31 rows)

Success! This is just a taste of what's to come, so please feel free to hack on Docker FDW, tweet me @paultag, file bugs / feature requests. It's currently a bit of a hack, and it's something that I think has long-term potential after some work goes into making sure that this is a rock solid interface to the Docker API.

Categories: Elsewhere

Jaldhar Vyas: Scotland: Vote A DINNAE KEN

Planet Debian - Fri, 19/09/2014 - 01:39

From the crack journalists at CNN.

Interesting fact: anyone who wore a kilt at debconf is allowed to vote in the referendum.

Categories: Elsewhere

Drupal core announcements: Drupal core updates for September 18th, 2014

Planet Drupal - Thu, 18/09/2014 - 23:43
What's new with Drupal 8?

The big news this week is that we're down to just one beta blocker. Patches for the remaining beta blocker are coming rapidly, with @effulgentsia, @plach and @fago working hard to get it over the line. Could we have zero beta blockers by DrupalCon?

Other key issues to land this week include Remove ArrayAccess from FormState - never again deal with random arrays - rejoice, $form_state is now a first-class object! Thanks to @timplunkett and others who helped get this through. If you have any contrib projects accessing $form_state in an array fashion, e.g. $form_state['values']['fooey'], then you need to familiarize yourself with the change record.

In a further sign that Drupal 8 is maturing into a modern HTTP framework, we now have support for stack-php based middleware. This will allow us to clean up how page caching, content negotiation, ban.module's functionality, OPTIONS requests and various other elements of the request processing pipeline work. For more information on middlewares see Stackphp.com and this article, or see the list of existing middlewares supported by stack-php, and therefore likely to be compatible with Drupal.

In the same vein, Modularize kernel exception handling brought some much-needed cleanup to the way we handle exceptions and enables contrib modules to easily add their own exception handling, particularly for custom REST formats.

Over in Convert UnitTestBase to PHPUnit and Remove UnitTestBase, @sun, @Berdir and @tim.plunkett have been working towards removing Simpletest-based Unit tests. There are plenty of sessions around the future of testing at Drupalcon Amsterdam so be sure to check these out if testing is your thing.

The Consensus Banana is moving full steam ahead, with loads of issues that move classes out of preprocess and into templates landing this week. Meanwhile, in Split Seven's style.css into SMACSS categories, @LewisNyman has been making great strides towards bringing sanity to Seven's CSS structure.

@WimLeers, @alexpott and @chx worked tirelessly in Add cacheability metadata to access checks to harmonize our access-checking systems and add cacheability to the access results in the form of an AccessResultInterface, great work!

Over in Remove text_processing option from text fields, expose existing string field types as plain text in UI @Berdir, @Wim Leers, @dawehner consolidated our text field types, an important change for Site Builders.

Finally, PHPStorm 8 has been released with lots of support for Drupal 8 APIs!

Where's Drupal 8 at in terms of release?

Since the last Drupal Core Update on Sept. 4, we've fixed 19 critical issues and 24 major issues, and added 12 criticals and 19 majors. That puts us overall at 97 release-blocking critical issues and 644 major issues.

Where can I help? Top criticals to hit this week

Each week, we check with core maintainers and contributors for the "extra critical" criticals that are blocking other work. These issues are often tough problems with a long history. If you're familiar with the problem space of one of these issues and have the time to dig in, we could use your help! Drive it forward by reviewing, improving, and testing its patch, and by making sure the issue's summary is up to date and any API changes are documented with a draft change record.

There are also several beta deadline issues that, while not quite critical, will need to be done before the beta if they're to be done at all. The following beta deadline issues are especially important:

More ways to help
  • Now that we're nearing beta, it's time to turn our attention to release-blocking criticals.
  • Beta target issues are issues that can be added to Beta 1, Beta 2, or later, but would be best done sooner rather than later for solid beta releases.
  • With the beta looming, we can now ramp up our efforts on contrib modules - there's a sprint at Amsterdam just for that - so put your name on the list if this is your thing.

As always, if you're new to contributing to core, check out Core contribution mentoring hours. Twice per week, you can log into IRC and helpful Drupal core mentors will get you set up with answers to any of your questions, plus provide some useful issues to work on.

You can also help by sponsoring independent Drupal core development.

Notable Commits

The best of git log --since "2014-09-04" --pretty=oneline (200 commits in total):

  • Issue 2333113 by effulgentsia, plach: Add an EntityDefinitionUpdateManager so that entity handlers can respond (e.g., by updating db schema) to code updates in a controlled way (e.g., from update.php).
  • Issue 1857256 by dawehner, xjm, tim.plunkett, jibran, ParisLiakos, hussainweb, pcambra, ekes, InternetDevels, rhabbachi, rdrh555, tstoeckler, oadaeh, Gábor Hojtsy, vijaycs85: Fixed Convert the taxonomy listing and feed at /taxonomy/term/%term to Views.
  • Issue 2333501 by swentel | marcvangend: Implement ThirdPartySettingsInterface in EntityView|FormDisplay.
  • Issue 1740492 by dawehner, damiankloip, dasjo, xjm: Implement a default entity views data handler.
  • Issue 2331019 by slashrsm: Implement ThirdPartySettingsInterface in Vocabulary.
  • Issue 2320157 by moshe weitzman, Wim Leers, penyaskito, tim.plunkett: Generate placeholder content for Field types - essentially devel generate in core.
  • Issue 2329485 by damiankloip, dawehner: Allow permissions.yml files to declare 'permission_callbacks' for dynamic permissions.
  • Issue 1898478 by joelpittet, Cottser, lokapujya, m1r1k, jstoller, er.pushpinderrana, duellj, organicwire, jessebeach, idflood, Jalandhar, Risse, derheap, galooph, mike.roberts, tlattimore, nadavoid, LinL, steveoliver, chakrapani, likin, killerpoke, EVIIILJ, vlad.dancer, podarok, m86 | c4rl: Menu.inc - Convert theme_ functions to Twig.
  • Issue 1915056 by Arla, Berdir, amateescu | catch: Use entity reference for taxonomy parents.
  • Issue 2321745 by larowlan, tim.plunkett: Add #type => 'path' that accepts path but optionally stores URL object or route name and parameters.
  • Issue 474004 by mdrummond, kim.pepper, Wim Leers, jibran, tim.plunkett, joachim | JohnAlbin: Add options to system menu block so primary and secondary menus can be blocks rather than variables - essentially menu block module in core.
  • Issue 2068331 by roderik, slashrsm, pcambra, Sharique, piyuesh23, vijaycs85 | plach: Convert comment SQL queries to the Entity Query API.
  • Issue 2226493 by Berdir, Wim Leers, m1r1k, mr.baileys, andypost, scor, cbr, joelpittet: Apply formatters and widgets to Node base fields.
  • Issue 2302563 by chx, dawehner: Fixed Access check Url objects.

You can also always check the Change records for Drupal core for the full list of Drupal 8 API changes from Drupal 7.

Drupal 8 Around the Interwebs

Drupal 8 in "Real Life"

Whew! That's a wrap!

Do you follow Drupal Planet with devotion, or keep a close eye on the Drupal event calendar, or git pull origin 8.0.x every morning without fail before your coffee? We're looking for more contributors to help compile these posts. You could either take a few hours once every six weeks or so to put together a whole post, or help with one section more regularly. Read more about how you can volunteer to help with these posts!

Finally special thanks to KatteKrab for assisting with compiling this edition.

Categories: Elsewhere

Drupal Watchdog: The Automagic Speed-Up Cache

Planet Drupal - Thu, 18/09/2014 - 19:05
Feature Motivation

The granularity of cache expiration in Drupal has been a long-standing problem.

One can have the most effective cache in the world, but if it clears entirely on any content change, it is not really workable. A “page” in Drupal can have blocks, listings, entities, regions, and many other objects. When one contained item changes, the container of that item needs to be fully rebuilt; often, that is the whole page, a problem requiring a much-needed solution.

Why can't we just rebuild the parts that have actually changed?

Consider what would be the best case scenario here. Assume that every item listed above can be cached separately. Now if one single entity changes, the following would be our "perfect" page request:

  1. Drupal bootstraps.
  2. Drupal builds the page.
  3. Drupal notices that only the “content” region has changed and retrieves the remaining regions from cache.
  4. Drupal re-builds the content region.
  5. Drupal notices only one listing in the content region has changed and retrieves the remaining blocks from cache.
  6. Drupal builds the “missing” block.
  7. The block contains a listing of entities.
  8. Drupal re-builds the listing, and entity_view() is called on these entities.
  9. Drupal retrieves all entities except the changed one from cache.

We would have a bootstrap, then we would see just one region call, one block call, one listing call, and one entity building call. Is this really possible?

Yes and no.

There are certain implementation limitations – especially around page assets – and a unified caching strategy needs to take them into account.

State of the Art

Render caching is the saving of rendered HTML content in a storage cache, while retaining assets like CSS and JS files and other “out-of-band” data. It can be used to reconstruct the page content without changing the state the page would have had without render caching active. The render-cached HTML markup needs to be removed from, or updated in, the cache when the objects used to generate the markup change.
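
As a language-agnostic illustration (a rough Python sketch of the idea, not Drupal's actual implementation), the point is that each object's markup is cached under its own key, a change invalidates only that one entry, and containers are reassembled from the per-object fragments:

# Hypothetical sketch of per-object render caching -- not Drupal code.
cache = {}

def render_object(key, build):
    # build() is the expensive render step; run it only on a cache miss.
    if key not in cache:
        cache[key] = build(key)
    return cache[key]

def invalidate(key):
    # Called when the object changes: drop only its cached markup.
    cache.pop(key, None)

def render_listing(keys, build):
    # The listing is reassembled from per-object fragments; unchanged
    # objects come straight from the cache.
    return "\n".join(render_object(k, build) for k in keys)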

Categories: Elsewhere

Acquia: Drupal 8 developer experience wins, the PHP Renaissance and more with Angie Byron

Planet Drupal - Thu, 18/09/2014 - 18:06

Part 2 of a 2-part conversation with Angie Byron in front of the cameras at NYC Camp 2014, held at United Nations headquarters in New York. In this part of our conversation, we talk about improvements in the Drupal developer- and learning-experience thanks to the major changes under the hood in Drupal 8; the "PHP Renaissance"; and about being welcomed "back into the fold" of the greater PHP world thanks to the nature of Drupal 8 being a sort of "meta project" (my words) that includes parts of many others.

Categories: Elsewhere

ThinkShout: The Small Business of Open Source

Planet Drupal - Thu, 18/09/2014 - 16:00

This summer, ThinkShout was named the 9th Fastest Growing Private Company in Portland, Oregon. Admittedly, this came as sort of a shock to me and Lev. Over three and a half years, we’ve grown the company from two dudes renting desks in an incubator space to a full-time team of 17 professionals averaging 10 years of experience each. But most of the time, it doesn’t feel like we’ve come up with any secret sauce for running a successful business. We try to listen to our employees and our peers in the industry. We partner with nonprofit clients trying to make the world a better place, and we do our best to treat them with integrity in all aspects of our work - from our design and engineering practices to our approach to project management and our billing process.

Perhaps there are a few things that we do particularly well. We win our fair share of work in coopetition with our peers in the nonprofit tech industry. But then again, in talking with our friends at ZivTech, Gorton Studios, Aten Design Group, Jackson River, Forum One Communications, and others, what we consistently hear is that there is more than enough work to go around in the world of technology for good.

How is that? What are the mechanics of "the small business of open source" that work for all these firms and, more importantly, for their customers?

You could tackle this question from a number of angles. Conventional wisdom suggests that the business value of open source, both in the for-profit and nonprofit sectors, is simply: you get a lot of stuff for free. This is true. With open source, you avoid licensing fees. With an open source platform such as Drupal, you can pick and choose among literally tens of thousands of free tools and extensions.

But this perspective speaks only to open source’s initial customer appeal. It speaks to the coarse outlines of the sales cycle of open source. You can’t just build a sustainable business off the idea that a platform like Drupal provides a lot of stuff for free.

So, what is the guiding principle of growing a small business around open source? For us, it boils down to:

Our customers benefit tremendously from their own contributions to open source.

Put a bit more bluntly:

Our customers benefit from paying us to give away their code.

To illustrate this point, I’d like to run through some of our numbers as a small business of open source. The following statistics are relative to the publication date of this blog post:

  • ThinkShout’s team includes 8 engineers with a total of 4,269 commits to contributed modules hosted on Drupal.org.

  • We actively maintain 24 contributed modules that represent over 100,000 lines of code.

  • The combined installation base of these modules (i.e., the aggregated number of websites using each of these modules) is over 42,000.

Interestingly, our staff has grown 750% from the end of our first quarter. The growth in adoption of our contributed tools by the community (1,400% since the end of Q1 2011) tracks closely with the growth of our quarterly revenues (1,300%).

Obviously, these statistics are only related anecdotally. On their own, they don’t prove that our open source contributions drive our success.

Still, it is interesting to look at some of the statistics around our client work:

  • Averaging the last four $80-$150K Drupal website projects we’ve completed, a typical project in this price range is made up of 36,289 lines of custom code and exported site configuration (excluding the theme, or the implementation of the graphic design).

  • In addition, most of these projects include the release of 1,000-5,000 lines of contributed code. In other words, clients who engage with us on these sorts of projects pay for us either to contribute back major features to a contributed module, or to release one or two stand-alone modules.

  • But what’s most interesting is that each of these projects leverages at least 4 to 5 of the contributed modules that other clients have paid for on previous projects. Each leverages around 20,000 lines of contributed code that our team has written for similar use cases.

Of course, these statistics reflect the significant cost savings to our clients in going with an open source solution. They get a lot of free stuff…

But more importantly, our clients benefit from becoming financially invested in the direction and cultivation of these open source projects. By becoming committers, they are ensured that their requirements will continue to be prioritized in the future development of the modules their websites depend on. They also benefit from the tens, sometimes hundreds, of open source developers reviewing their contributed code for bug fixes and improvements.

Moreover, by releasing the code that powers their websites, our clients connect with other organizations with similar requirements who, in turn, will often contribute additional features to these projects over time. As a case in point, we released the Leaflet module, a web-based mapping solution, on behalf of the Intertwine Alliance in the summer of 2011. The Alliance paid for the initial release, which included a modest yet highly-innovative feature set. Since then, 36 different Drupal developers have contributed code to improve the module, and over 4,500 websites have adopted it.

Similarly, we initially released RedHen CRM, a native CRM solution built entirely within Drupal, for a small cohort of nonprofits with similar needs. In this case, we actually got these nonprofits to work together to brainstorm and prioritize their CRM requirements. Pooling funding, this group helped us launch RedHen publicly in the spring of 2012. Since then, over a dozen of our clients have continued to invest in RedHen, and it has been adopted on over 850 websites around the world. Most impressively, those initial clients that funded our early work on RedHen continue to upgrade their sites with each new release, benefitting from hundreds of thousands of dollars of free development.

Client investments in open source present many less tangible benefits as well. While our developers are inspired to support our mission-driven nonprofit customers, they are particularly excited (and therefore, keenly focused) when their work leads to open source contributions. Not only does this lead to high-quality engineering, it helps reduce turnover among our engineering team, which can be costly for clients. Among our team’s contributors to Drupal, our average turnover is less than half of the industry standard in information technology.

For readers of this post who represent agencies that also contribute to open source, I’d be curious if your data and experience track to ours. Business and nonprofit readers whose teams have paid for open source contributions, I’d be curious to hear your stories as well. After all, open source is all about transparency, which we believe makes us a more effective business.

Categories: Elsewhere

Code Karate: Drupal 7 Entity Registration Views, Access and Wait List

Planet Drupal - Thu, 18/09/2014 - 15:53
Episode Number: 168

Following up on the previous Daily Dose of Drupal episode on the Entity Registration module, this episode looks at some of the additional Entity Registration add-on modules.

In this episode you will learn:

Tags: Drupal, Drupal 7, Drupal Planet
Categories: Elsewhere

Jonathan McDowell: Automatic inline signing for mutt with RT

Planet Debian - Thu, 18/09/2014 - 12:00

I spend a surprising amount of my time as part of keyring-maint telling people their requests are badly formed and asking them to fix them up so I can actually process them. The one that's hardest to fault anyone on is that we require requests to be inline PGP signed (i.e. the same sort of output as you get with "gpg --clearsign"). That's because RT does various pieces of unpacking[0] of MIME messages that mean that PGP/MIME signatures that have passed through it are no longer verifiable. Daniel has pointed out that inline PGP is a bad idea and got as far as filing a request that RT handle PGP/MIME correctly (you need a login for that but there's a generic read-only one that's easy to figure out), but until that happens the requirement stands when dealing with Debian's RT instance. So today I finally added the following lines to my .muttrc rather than having to remember to switch Mutt to inline signing for this one special case:

send-hook . "unset pgp_autoinline; unset pgp_autosign"
send-hook rt.debian.org "set pgp_autosign; set pgp_autoinline"

i.e. by default turn off auto inlined PGP signatures, but when emailing anything at rt.debian.org turn them on.

(Most of the other things I tell people to fix are covered by the replacing keys page; I advise anyone requesting a key replacement to read that page. There's even a helpful example request template at the bottom.)

[0] RT sticks a header on the plain text portion of the mail, rather than adding a new plain text part for the header if there are multiple parts (this is something Mailman handles better). It will also re-encode received mail into UTF-8 which I can understand, but Mutt will by default try to find an 8 bit encoding that can handle the mail, because that's more efficient, which tends to mean it picks latin1.

Categories: Elsewhere

Dariusz Dwornikowski: RFS health in Debian

Planet Debian - Thu, 18/09/2014 - 10:50

I am working on a small project to create WNPP-like statistics for open RFS bugs. I think this could improve the effectiveness of sponsoring new packages a little by giving insight into bugs that are on their way to being starved (i.e. never sponsored, or rotting in a queue).

The script attached to this post is written in Python and uses the Debbugs SOAP interface to get the currently open RFS bugs and calculate their dust and age.

The dust factor is the number of days since the bug was last modified (log_modified); for bugs already marked done, it is the number of days between the bug's submission date and its last modification.

Later I would like to create full-blown statistics for the RFS queue, taking into account the whole history (i.e. from 2012-01-01 until now), check its health, and calculate the MTTGS (mean time to get sponsored).
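
As a rough illustration of the MTTGS idea (a sketch with made-up sample data, not part of the script below), the mean is simply the average number of days between a bug being opened and being marked done:

# Hypothetical sketch: mean time to get sponsored, given (opened, done) dates.
from datetime import date

def mttgs(bugs):
    """bugs: iterable of (opened, done) date pairs for sponsored packages."""
    durations = [(done - opened).days for opened, done in bugs]
    if not durations:
        return None
    return float(sum(durations)) / len(durations)

if __name__ == "__main__":
    sample = [(date(2014, 1, 10), date(2014, 2, 1)),   # 22 days
              (date(2014, 3, 5), date(2014, 3, 9))]    # 4 days
    print(mttgs(sample))  # prints 13.0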

The list looks more or less like this:

Age  Dust  Number  Title
37   0     757966  RFS: lutris/0.3.5-1 [ITP]
1    0     762015  RFS: s3fs-fuse/1.78-1 [ITP #601789] -- FUSE-based file system backed by Amazon S3
81   0     753110  RFS: mrrescue/1.02c-1 [ITP]
456  0     712787  RFS: distkeys/1.0-1 [ITP] -- distribute SSH keys
120  1     748878  RFS: mwc/1.7.2-1 [ITP] -- Powerful website-tracking tool
1    1     762012  RFS: fadecut/0.1.4-1
3    1     761687  RFS: abraca/0.8.0+dfsg-1 -- Simple and powerful graphical client for XMMS2
35   2     758163  RFS: kcm-ufw/0.4.3-1 ITP
3    2     761636  RFS: raceintospace/1.1+dfsg1-1 [ITP]
....

The script rfs_health.py can be found below; it uses SOAPpy (only Python < 3, unfortunately).

#!/usr/bin/python
# List open RFS bugs from the Debian BTS, sorted by dust (days since last touch).
import SOAPpy
import time
from datetime import date, timedelta, datetime

url = 'http://bugs.debian.org/cgi-bin/soap.cgi'
namespace = 'Debbugs/SOAP'
server = SOAPpy.SOAPProxy(url, namespace)


class RFS(object):
    def __init__(self, obj):
        self._obj = obj
        self._last_modified = date.fromtimestamp(obj.log_modified)
        self._date = date.fromtimestamp(obj.date)
        if self._obj.pending != 'done':
            self._pending = "pending"
            self._dust = abs(date.today() - self._last_modified).days
        else:
            self._pending = "done"
            self._dust = abs(self._date - self._last_modified).days
        today = date.today()
        self._age = abs(today - self._date).days

    @property
    def status(self):
        return self._pending

    @property
    def date(self):
        return self._date

    @property
    def last_modified(self):
        return self._last_modified

    @property
    def subject(self):
        return self._obj.subject

    @property
    def bug_number(self):
        return self._obj.bug_num

    @property
    def age(self):
        return self._age

    @property
    def dust(self):
        return self._dust

    def __str__(self):
        return "{} subject: {} age:{} dust:{}".format(
            self._obj.bug_num, self._obj.subject, self._age, self._dust)


if __name__ == "__main__":
    bugi = server.get_bugs("package", "sponsorship-requests", "status", "open")
    buglist = [RFS(b.value) for b in server.get_status(bugi).item]
    buglist_sorted_by_dust = sorted(buglist, key=lambda x: x.dust, reverse=False)
    print("Age Dust Number Title")
    for i in buglist_sorted_by_dust:
        print("{:<4} {:<4} {:<7} {}".format(i.age, i.dust, i.bug_number, i.subject))
Categories: Elsewhere

Unimity Solutions Drupal Blog: Modify Apache Solr Queries in Drupal

Planet Drupal - Thu, 18/09/2014 - 07:27

In a recent project I got the opportunity to tweak Drupal’s Apache Solr queries. In this blog p…

Categories: Elsewhere

Jaldhar Vyas: Scotland: Vote NO

Planet Debian - Thu, 18/09/2014 - 06:21
[ASCII-art map with an arrow pointing to Perth]

If you don't, the UK will have to rename itself the K. And that's just silly.

Also vote yes on whether Alex Trebek should keep his mustache.

Categories: Elsewhere
