Feed aggregator

Mediacurrent: Code Review for Non-Technical Folks

Planet Drupal - Thu, 24/07/2014 - 23:07

Andrew Riley leads us through a high-level walkthrough of what code reviews are and why we need them. In this talk he covers what we check for at Mediacurrent (syntax, security, efficiency, etc.) and why code reviews are important for our customers and for any company that writes its own code.

Additional Resources

Drupal 8 In Pictures (for Users) | Mediacurrent Blog

Categories: Elsewhere

Forum One: Drupal 8 Code Sprint at the Jersey Shore

Planet Drupal - Thu, 24/07/2014 - 20:52

On the heels of our own Drupal 8 code sprint in DC, I spent the last weekend with the New Jersey Drupal group who organized a Drupal 8 code sprint in Asbury Park, NJ – and although I was never greeted by The Boss, I was pleased to participate thanks to the generosity of the event organizers.

Issue-Focused

I worked extensively on the MenuLink NG part 1 issue with Peter Wolanin and YesCT. This issue is part of the [Meta] New plan, Phase 2 issue, which proposed performance and UI improvements for Drupal 8. The issue originally had a 600KB patch, but to make it more reviewable and committable it was split into five child issues.

Three of us spent a solid two days and more than 30 hours addressing every single point that had been raised by reviewers – and which had been holding up the process of adding this to Core.

Image courtesy of Blink Reaction

About this Issue (a high level overview)

A site builder or developer can create menu links in Drupal in several ways: via configuration (changing a link's weight or position, or creating links in a menu) or in code. All these different types of menu links need to be rendered together in menus, and they need to be presented uniformly in the API to developers. The developer experience here needs to be good, as almost everything depends on menu items.

While we toiled on this issue, other sprinters worked on DrupalCon Austin’s Consensus Banana, testing the migration path from Drupal 6 to Drupal 7, along with some other issues.

Results and Commits

The sprint was a very productive one, resulting in Menu part 1 and Menu part 2 being committed to Core – the parent Menu issue was a beta blocker. Going into this sprint there were seven beta-blocker issues, so resolving the Menu issue puts us one step closer to the Drupal 8 beta release!

For those interested, here are the commits for part 1 and part 2. And for those needing a chuckle, Alex “The Situation” Pott – who thankfully preferred DCC (Drupal Core Commits) over GTL (Gym, Tan, Laundry) – drew this goaty-looking llama to celebrate his commit.

Image courtesy of Blink Reaction

It was very rewarding to work for a weekend with this group of talented developers, and I think we all left the shore content in the knowledge that we’d made strides toward bringing Drupal 8 that much closer to completion.

Check out these other pictures taken by Peter Wolanin and DrupalCamp NJ at the event »

Categories: Elsewhere

Dries Buytaert: The business behind Open Source

Planet Drupal - Thu, 24/07/2014 - 16:38
Topic: Drupal, Acquia, Business, The future

A few days ago, I sat down with Quentin Hardy of The New York Times to talk Open Source. We spoke mostly about the Drupal ecosystem and how Acquia makes money. As someone who has spent almost his entire career in Open Source, I'm a firm believer that you can build a high-growth, high-margin business and help the community flourish. It's not an either-or proposition, and Acquia and Drupal are proof of that.

Rather than the utopian alternate reality Quentin outlines, I believe Open Source is both a better way to build software and a good foundation for an ecosystem of for-profit companies. Open Source software itself is very successful, and is capable of running some of the most complex enterprise systems. And a failure to commercialize Open Source doesn't necessarily make it bad.

I mentioned to Quentin that I thought Open Source was Darwinian; a proprietary software company can't afford to experiment with creating 10 different implementations of an online photo album, only to pick the best one. In Open Source we can, and do. We often have competing implementations, and eventually the best implementation(s) will win. One could say that Open Source is a more "wasteful" way of developing software. In a pure capitalist reading of On the Origin of Species, there is only one winner, but business – like Darwin's theory itself – is far more complex. Beyond "only the strongest survive", Darwin tells a story of interconnectedness: the way an ecosystem can dictate how an entire species adapts.

While it's true that the Open Source "business model" has produced few large businesses (Red Hat being one notable example), we're also evolving the different Open Source business models. In the case of Acquia, we're selling a number of "as-a-service" products for Drupal, which is vastly different than just selling support like the first generation of Open Source companies did.

As a private company, Acquia doesn't disclose financial information, but I can say that we've been very busy operating a high-growth business. Acquia is North America's fastest growing private company on the Deloitte Fast 500 list. Our Q1 2014 bookings increased 55 percent year-over-year, and the majority of that is recurring subscription revenue. We've experienced 21 consecutive quarters of revenue growth, with no signs of slowing down. Acquia's business model has been both disruptive and transformative in our industry. Other Open Source companies like Hortonworks, Cloudera and MongoDB seem to be building thriving businesses too.

Society is undergoing tremendous change right now – the sharing and collaboration practices of the internet are extending to transportation (Uber), hotels (Airbnb), financing (Kickstarter, LendingClub) and music services (Spotify). The rise of the collaborative economy, of which the Open Source community is a part, should be a powerful message for the business community. It is the established, proprietary vendors whose business models are at risk, and not the other way around.

Hundreds of other companies, including several venture backed startups, have been born out of the Drupal community. Like Acquia, they have grown their businesses while supporting the ecosystem from which they came. That is more than a feel-good story, it's just good business.

Categories: Elsewhere

drunken monkey: Updating the Search API to D8 – Part 3: Creating your own service

Planet Drupal - Thu, 24/07/2014 - 14:28

Even though there has been somewhat of a delay since my last post in this series, it seems no one else has really covered any of the advanced use cases of Drupal 8 in tutorials yet. So, here is the next installment in my series. I initially wanted to cover creating a new plugin type, but since that already requires creating a new service, I thought I'd cover that smaller part first and then move on to plugin types in the next post.
I realize that by now a lot more people have started on their Drupal 8 modules, but perhaps that will make this series all the more useful.

Services in Drupal 8

First, a short overview of what a service even is. Basically, it is a component (represented as a class) providing a certain, limited range of functionality. The database is a service, the entity manager (which is what you now use for loading entities) is a service – translation, configuration, everything is handled by services. Getting the current user is also a service now, ridding us of the highly unclean global variable.
In general, a lot of what was previously a file in includes/ containing some functions with a common prefix is now a service (or split into multiple services).
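For illustration, here are a few of those services reached through the static convenience wrappers on the \Drupal class (a quick sketch; the comments note the rough Drupal 7 equivalents):

<?php
// The database connection, formerly reached through the global database layer:
$connection = \Drupal::database();
// The current user, formerly the global $user variable:
$account = \Drupal::currentUser();
// The translation service, formerly the global t() function:
$translated = \Drupal::translation()->translate('example');
?>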

The upsides of this are that the implementation and logic are cleanly bundled and properly encapsulated, that all these components can easily be swapped out by contrib modules or later core updates, and that these systems can be covered very well by unit tests. Even better, since services can be used with dependency injection, it also becomes much easier to test all the other classes that use any of these services (if they use dependency injection and do it properly).

(For reference, here is the official documentation on services.)

Dependency injection

This has been covered already in a lot of other blog posts, probably since it is both a rather central concept in Drupal 8, and a bit complicated when you first encounter it. However, before using it, I should still at least skim over the topic. Feel free to skip to the next heading if you feel you already know what dependency injection is and how it roughly works in Drupal 8.

Dependency injection is a programming technique where a class with external dependencies (e.g., a mechanism for translating) explicitly defines these dependencies (in some form) and makes the class which constructs it responsible for supplying those dependencies. That way, the class itself can be self-contained and doesn't need to know about where it can get those dependencies, or use any global functions or anything to achieve that.

Consider for example the following class:

<?php
class ExampleClass {

  public function getDefinition() {
    return array(
      'label' => t('example class'),
      'type' => 'foo',
    );
  }

}
?>

For translating the definition label, this explicitly uses the global t() function. Now, what's bad about this, I hear you ask – it worked well enough in Drupal 7, right?
The problem is that it becomes almost impossible to properly unit-test that method without bootstrapping Drupal to the point where the t() function becomes available and functional. It's also more or less impossible to switch out Drupal's translation mechanism without hacking core, since there is no way to redirect the call to t().

But if translation is done by a class with a defined interface (in other words, a service), it's possible to do this much more cleanly:

<?php
class ExampleClass {

  protected $translation;

  public function __construct(TranslationServiceInterface $translation) {
    $this->translation = $translation;
  }

  public function getDefinition() {
    return array(
      'label' => $this->translation->translate('example class'),
      'type' => 'foo',
    );
  }

}
?>
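With that in place, a unit test can hand the class a trivial stub instead of bootstrapping Drupal – a minimal sketch, reusing the hypothetical TranslationServiceInterface from the example:

<?php
class NoopTranslation implements TranslationServiceInterface {
  public function translate($string) {
    // A no-op "translation" is good enough for testing.
    return $string;
  }
}

$example = new ExampleClass(new NoopTranslation());
// getDefinition() now works without any global t() function being available.
$definition = $example->getDefinition();
?>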

Our example class then just has to make it easy for code that wants to instantiate it to know how to pass its dependencies. In Drupal, there are two ways to do this, depending on what you are creating:

  • Services, which themselves use dependency injection to get their dependencies (as you will see in a minute) have a definition in a YAML file that exactly states which services need to be passed to the service's constructor.
  • Almost anything else (I think) uses a static create() method which just receives a container of all available services and is then responsible for passing the correct ones to the constructor (see the sketch below this list).

In either case, the idea is that subclasses/replacements of ExampleClass can easily use other dependencies without any changes being necessary to code elsewhere instantiating the class.
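For the second variant, here is a minimal sketch of what such a create() method might look like, continuing the hypothetical ExampleClass from above ('string_translation' is the ID of Drupal core's translation service):

<?php
use Symfony\Component\DependencyInjection\ContainerInterface;

class ExampleClass {

  protected $translation;

  // The container hands this factory method all available services; it picks
  // out the ones this class needs and passes them to the constructor.
  public static function create(ContainerInterface $container) {
    return new static($container->get('string_translation'));
  }

  public function __construct($translation) {
    $this->translation = $translation;
  }

}
?>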

Creating a custom service

So, when would you want to create your own service in a module? Generally, the .module file should now more or less only contain hook implementations; any general helper functions for the module should live in classes (so they can be easily grouped by functionality, and the code can be lazy-loaded when needed). The decision whether to make such a class into a service then depends on the following questions:

  • Is there any possibility someone would want to swap out the implementation of the class?
  • Do you want to unit-test the class?
  • Relatedly, do you want dependency injection in the class?

I'm not completely sure myself about how to make these decisions, though. We're still thinking about what should and shouldn't be a service in the Search API; currently there is (apart from the ones for plugins) only one service there:

The "Server task manager" service

The "server tasks" system, which already existed in D7, basically just ensures that when any operations on a server (e.g., removing or adding an index, deleting items, …) fails for some reason (e.g., Solr is temporarily unreachable) it is regularly retried to always ensure a consistent server state. While in D7 the system consisted of just a few functions, in D8 it was decided to encapsulate the functionality in a dedicated service, the "Server task manager".

Defining an interface and a class for the service

The first thing you need, so the service can be properly swapped out later, is an interface specifying exactly what the service should be able to do. This depends completely on your use case for the service – there's nothing special to keep in mind here (and also no special namespace or anything). In our case, for server tasks:

<?php
namespace Drupal\search_api\Task;

interface ServerTaskManagerInterface {

  public function execute(ServerInterface $server = NULL);

  public function add(ServerInterface $server, $type, IndexInterface $index = NULL, $data = NULL);

  public function delete(array $ids = NULL, ServerInterface $server = NULL, $index = NULL);

}
?>

(Of course, proper PhpDocs are essential here, I just skipped them for brevity's sake.)

Then, just create a class implementing the interface. Again, namespace and everything else is completely up to you. In the Search API, we opted to put interface and class (they usually should be in the same namespace) into the namespace \Drupal\search_api\Task. See here for their complete code.
For this post, the only relevant part of the class code is the constructor (the rest just implements the interface's methods):

<?php
class ServerTaskManager implements ServerTaskManagerInterface {

  protected $database;

  protected $entity_manager;

  public function __construct(Connection $database, EntityManagerInterface $entity_manager) {
    $this->database = $database;
    $this->entity_manager = $entity_manager;
  }

}
?>

As you can see, we require the database connection and the entity manager as dependencies, simply by listing them in the constructor. We then save them to properties to be able to use them later in the other methods.

Now we just need to tell Drupal about our service and its dependencies.

The services.yml file

As mentioned earlier, services need a YAML definition to work, where they also specify their dependencies. For this, each module can have a MODULE.services.yml file listing services it wants to publish.

In our case, search_api.services.yml looks like this (with the plugin services removed):

services:
  search_api.server_task_manager:
    class: Drupal\search_api\Task\ServerTaskManager
    arguments: ['@database', '@entity.manager']

As you can see, it's pretty simple: we assign an ID for the service (search_api.server_task_manager – properly namespaced by having the module name as its first part), specify which class the service uses by default (which, like the other definition keys, can then be altered by other modules) and specify the arguments for its constructor (i.e., its dependencies). database and entity.manager in this example are just the IDs of other services defined elsewhere (in Drupal core's core.services.yml, in this case).
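To illustrate the "altered by other modules" part: a hypothetical my_module could swap in its own implementation via a service provider. The class name must follow the <CamelCaseModuleName>ServiceProvider convention; BetterServerTaskManager is made up here:

<?php
namespace Drupal\my_module;

use Drupal\Core\DependencyInjection\ContainerBuilder;
use Drupal\Core\DependencyInjection\ServiceProviderBase;

class MyModuleServiceProvider extends ServiceProviderBase {

  public function alter(ContainerBuilder $container) {
    // Replace the default class while keeping the service ID the same.
    $container->getDefinition('search_api.server_task_manager')
      ->setClass('Drupal\my_module\BetterServerTaskManager');
  }

}
?>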

There are more definition keys available here, and also more features that services support, but that's more or less the gist of it. Once you have its definition in the MODULE.services.yml file, you are ready to use your new service.

Using a service

You already know one way of using a service: you can specify it as an argument for another service (or any other dependency injection-enabled component). But what if you want to use it in a hook, or any other place where dependency injection is not available (like entities, annoyingly)?

You simply do this:

<?php
/** @var \Drupal\search_api\Task\ServerTaskManagerInterface $server_task_manager */
$server_task_manager = \Drupal::service('search_api.server_task_manager');
$server_task_manager->execute();
?>

That's it – now all our code needing server tasks functionality benefits from dependency injection and all the other Drupal 8 service goodness.
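And for completeness, adding a task through the same service might look like this (the task type string is made up for illustration – see the interface above for add()'s signature):

<?php
/** @var \Drupal\search_api\Task\ServerTaskManagerInterface $server_task_manager */
$server_task_manager = \Drupal::service('search_api.server_task_manager');
// Remember a pending "remove index" operation for $server, to be retried by
// execute() until the backend server is reachable again.
$server_task_manager->add($server, 'removeIndex', $index);
?>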

Categories: Elsewhere

Craig Small: PHP uniqid() not always a unique ID

Planet Debian - Thu, 24/07/2014 - 14:17

For quite some time modern versions of JFFNMS have had a problem. In large installations hosts would randomly appear as down with the reachability interface going red. All other interface types worked, just this one.

Reachability interfaces are odd, because they call fping or fping6 to do the work. The reason is that to run a ping program you need root access to a socket, and getting that is far too difficult and scary in PHP, which is what JFFNMS is written in.

To capture the output of fping, the program is executed and the output captured to a temporary file. For my tiny setup this worked fine; for a lot of small setups it was also fine. For larger setups, it was not fine at all: randomly failing interfaces and, most bizarrely of all, a file seemingly disappearing. The code checked that the file existed and then ran stat in a loop to see if data was there. The existence check passed, but the stat said the file was not found.

At first I thought it was some odd load related problem, perhaps the filesystem not being happy and having a file there but not really there. That was, until someone said “Are these numbers supposed to be the same?”

The numbers he was referring to were the IDs in the temporary files' names. They were most DEFINITELY not supposed to be the same; they were supposed to be unique. Why were they always unique for me and not for large setups?

The problem is with the uniqid() function. It is basically a hex representation of the time. Large setups often have large numbers of child processes for polling devices, and as the number of poller children increases, so does the chance that two children start the reachability poll in the same microsecond and get the same uniqid. That's why the problem happened, but not all the time.
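A quick sketch shows why collisions happen (the format string mirrors roughly what PHP does internally: eight hex digits of seconds plus five of microseconds):

<?php
// Rebuild uniqid()-style output by hand from the current time:
$now = microtime(TRUE);
printf("%08x%05x\n", (int) $now, (int) (($now - (int) $now) * 1000000));
echo uniqid(), "\n"; // prints a nearly identical hex timestamp

// Two separate poller processes reaching this line in the same microsecond
// get the same "unique" ID. The optional $more_entropy flag at least
// appends extra random digits:
echo uniqid('', TRUE), "\n";
?>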

The stat error was another symptom of this bug, what would happen was:

  • Child 1 starts the poll, temp filename abc123
  • Child 2 starts the poll in the same microsecond, temp filename is also abc123
  • The wait pollers for children 1 and 2 start, see that the temp file exists, and go into a loop of stat-and-wait until there is a result
  • Child 1 finishes, grabs the details, deletes the temporary file
  • Child 2 loops, tries to run stat but finds no file

Who finishes first depends entirely on how quickly the fping returns, and that depends on how quickly the remote host responds to pings, so it's kind of random.

A minor patch to use tempnam() instead of uniqid(), adding the interface ID into the mix for good measure (no two children will poll the same interface; the parent's scheduler makes sure of that), fixes the problem. The initial responses are that it is looking good.
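In outline, the fix might look something like this (hypothetical names, not the actual JFFNMS code):

<?php
// tempnam() creates the file atomically and guarantees a unique name, and
// the interface ID in the prefix means two pollers can never end up racing
// for the same path.
function reachability_temp_file($interface_id) {
  return tempnam(sys_get_temp_dir(), 'fping_' . $interface_id . '_');
}

$interface_id = 42; // whichever interface this child is polling
$tmpfile = reachability_temp_file($interface_id);
// ... run fping, write its output to $tmpfile, parse the results ...
unlink($tmpfile);
?>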


Categories: Elsewhere

Martin Pitt: vim config for Markdown+LaTeX pandoc editing

Planet Debian - Thu, 24/07/2014 - 11:38

I have used LaTeX and latex-beamer for pretty much my entire life of document and presentation production, i. e. since about my 9th school grade. I’ve always found the LaTeX syntax a bit clumsy, but with good enough editor shortcuts to insert e. g. \begin{itemize} \item...\end{itemize} with just two keystrokes, it has been good enough for me.

A few months ago a friend of mine pointed out pandoc to me, which is just simply awesome. It can convert between a million document formats, but most importantly it can take Markdown and spit out LaTeX, or directly PDF (through an intermediate step of building a LaTeX document and calling pdftex). It also has a template for beamer. Documents now look so much more readable and are easier to write! And you can always directly write LaTeX commands without any fuss, so you can use Markdown for the structure/headings/enumerations/etc., and LaTeX for formulas, XYTeX and the other goodies. That's how it always should have been! ☺

So last night I finally sat down and created a vim config for it:

"-- pandoc Markdown+LaTeX -------------------------------------------
function s:MDSettings()
    inoremap <buffer> <Leader>n \note[item]{}<Esc>i
    noremap <buffer> <Leader>b :! pandoc -t beamer % -o %<.pdf<CR><CR>
    noremap <buffer> <Leader>l :! pandoc -t latex % -o %<.pdf<CR>
    noremap <buffer> <Leader>v :! evince %<.pdf 2>&1 >/dev/null &<CR><CR>

    " adjust syntax highlighting for LaTeX parts
    " inline formulas:
    syntax region Statement oneline matchgroup=Delimiter start="\$" end="\$"
    " environments:
    syntax region Statement matchgroup=Delimiter start="\\begin{.*}" end="\\end{.*}" contains=Statement
    " commands:
    syntax region Statement matchgroup=Delimiter start="{" end="}" contains=Statement
endfunction

autocmd BufRead,BufNewFile *.md setfiletype markdown
autocmd FileType markdown :call <SID>MDSettings()

That gives me “good enough” (with some quirks) highlighting without trying to interpret TeX stuff as Markdown, and shortcuts for calling pandoc and evince. Improvements appreciated!

Categories: Elsewhere

NEWMEDIA: Avoiding the "API Integration Blues" on a Drupal Project

Planet Drupal - Thu, 24/07/2014 - 05:31
As Drupal continues to mature as a platform and gain adoption in the enterprise space, integration with one or more 3rd-party systems is becoming common for medium- to large-scale projects. Unfortunately, it is easy to underestimate the time and effort required to make these integrations work seamlessly. Here are lessons we've learned...

Mailchimp, Recurly, Mollom, Stripe, and on and on – it's easy to get spoiled by Drupal's extensive library of contributed modules that allow for quick, easy, and robust integration with 3rd-party systems. Furthermore, if a particular integration does not yet exist, Drupal is extensible enough that it can be built, given the usual caveats (i.e. an appropriate amount of time, resources, and effort). However, these caveats should not be taken lightly. Our own experiences have unearthed many of the same pain points again and again, and they almost always result in waste. By applying this hard-won wisdom to all subsequent projects involving integrations, we've been much more successful in identifying and addressing these issues head on. We hope that by sharing what we've learned, you can avoid some of the more common traps.

API and Integration Gotchas

Vaporware

Shocking as it may seem, there are situations where a client will assume an API exists when there isn't one to be found. Example: a client may be paying for an expensive enterprise software license that can connect to other programs within the same ecosystem, but there may not be an endpoint that can be accessed by Drupal. The key here is to ensure you have documentation up front, along with a working example of a read and/or write operation written in PHP or through a web services call. Doing this as early as possible within the project will help prevent a nasty surprise when it's too late to change course or stop the project altogether.

Hidden Add-on Fees

An alternative to the scenario above is when an endpoint can be made available for an additional one-time or recurring fee. This can be quite an expensive surprise. It can also result in a difficult conversation with the client, particularly if it wasn't factored into the budget and now each side must determine who eats the cost. The key to preventing this is to verify (up front) if the API endpoint is included with the client's current license(s) or if it will be extra.

Limited Feature Sets

One can never assume that an entire feature set is available. Example: an enterprise resource planning (ERP) software solution may provide a significant amount of data and reporting to its end users, but it may only expose particular records (e.g. users, products, and inventory) through its API. The result: a Drupal site's scope document might include functionality that simply cannot be provided. To avoid this issue, you'll want to get your hands on any and all documentation as soon as possible. You'll also want to create an inventory of every feature that requires a read/write operation, so that you can verify the documentation covers each and every item.

Documentation

Transcending the "Drupal learning cliff" was and continues to be a difficult journey for many members of the community, despite the abundance of ebooks, videos, and articles on the subject matter. Consider how much more difficult building Drupal sites would be if these resources didn't exist. Now imagine trying to integrate with a system you've never heard of, using a language you're unfamiliar with, and with no user guide to point you in the right direction.

Sounds scary, doesn't it?

Integrating with a 3rd-party application without documentation is akin to flying blind. Sure, you might eventually get to the appropriate destination, but you will likely spend a significant amount of time on trial and error. Worse yet, you may simply miss certain pieces of functionality altogether.

The key here, as always, is to get documentation as soon as you can. Also, pay attention to certain red flags, such as the client not having the documentation readily available or requiring time for one of their team members to write it up. This is particularly important if the integration is a one-off that is specific to the customer versus an integration with a widely known platform (e.g. Salesforce or PayPal).

Business Logic

One of Drupal's strengths is the ability for modules to hook into common events. For example, a module could extend the functionality of a user saving his or her password to email a notification that the password was changed. When integrating with another system, it's equally important to understand what events may be triggered as a result of reading or writing a record. Otherwise, you may be in for a surprise to find the external system firing off emails or trying to charge credit card payments.
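In Drupal 7 terms, that first example might look like the following sketch (the module name mymodule is hypothetical, the exact condition for detecting a password change varies per site, and a matching hook_mail() implementation is assumed):

<?php
/**
 * Implements hook_user_update().
 */
function mymodule_user_update(&$edit, $account, $category) {
  // If the password was part of this save, notify the account owner.
  if (!empty($edit['pass'])) {
    drupal_mail('mymodule', 'password_changed', $account->mail,
      user_preferred_language($account));
  }
}
?>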

Documentation is invaluable for preventing these types of gaffes. However, in our experience it has also been important to have access to a support resource that can provide warnings up front.

Support

What happens when the documentation is wrong or the software doesn't work? If support for the API is slow or non-existent, the project may grind to a halt until the blocker is removed. For enterprise-level solutions, there is usually some level of support that can be accessed via phone, forums, or support tickets. However, there can sometimes be a sizable fee for this service, and your particular questions might not be in scope with respect to what the service provides. In those instances, it might be helpful to contract with a 3rd-party vendor or contractor that has performed a similar integration in the past. This can be costly up front while saving a tremendous amount of time over the course of the project.

Domain Knowledge

As consultants, one of our primary objectives is to merge our expertise with the customer's domain knowledge in order to best achieve their goals. Therefore, it's important that we understand why the integration should work the way it does, instead of just how we read and write data back and forth. A great example of this involves integrating Drupal Commerce with QuickBooks through the Web Connector application. It's imperative to understand how the client's accounting department configures the QuickBooks application and how it manages the financial records. Otherwise a developer may make an assumption that results in inefficient or (worse) incorrect functionality.

Similar to having a resource available for support on the API itself, it's invaluable to have access to team members on the client side who use the software on a daily basis, so that nothing is missed.

Stability

Medium to large sized companies are becoming increasingly reliant on their websites to sustain and grow their businesses. Therefore, uptime is critical. And if the site depends on the uptime of a 3rd-party integration to function properly, it may be useful to consider some form of redundancy or fallback solution. It is also important to make sure that support tickets can be filed, with a maximum response time specified in any service level agreement (SLA) with the client.

Communication and Coordination

The rule of thumb here is simple: more moving parts in a project means more communication time spent keeping everyone in sync. Additionally, it's usually wise to develop against an API endpoint specifically populated with test data so that development does not impact the client's production data. At some point, the test data will need to be cleared out and production data imported. This transition could be as simple as swapping a URL, or it could involve a significant amount of QA time testing and retesting full imports before making the final switch.

The best way to address these issues is simply to budget in more communication time than for a normal Drupal project.
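On the test-versus-production switch, it also helps to keep the endpoint URL configurable so the transition is a settings change rather than a code change. A Drupal 7 style sketch, with a hypothetical variable name and URL:

<?php
// Defaults to the sandbox; point this at the production URL when going live.
$endpoint = variable_get('mymodule_api_endpoint', 'https://sandbox.example.com/api');
$response = drupal_http_request($endpoint . '/products');
?>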

SDKs

One gotcha that can be particularly difficult to work around is an API that requires you to use its specific software development kit (SDK) instead of a native PHP library. This may require the server to run a different OS (Windows instead of Linux) and web server (IIS instead of Apache). If you're not used to developing on these platforms, development time may slow down by a significant percentage. For example, a back-end developer may not be able to use the IDE they are accustomed to (with all of their optimized configurations and memorized shortcuts). This requirement may be unavoidable in some circumstances, so the best way to deal with these situations is to add a simple percentage to the budgeted hours.

VMs

When possible, it is ideal for developers to work on their own machines locally with a fully replicated instance of the API they are interacting with. Example: QuickBooks connecting through the Web Connector application to read and write records from a Drupal Commerce site. To test this connection, it is extremely helpful to have a local virtual machine (VM) with Windows and QuickBooks, which a developer can then use to trigger the process. If a project involves multiple developers, they can each have their own copy to use as a sandbox.

Setting up a local VM definitely adds an upfront cost. However, for larger projects this investment can generally be recouped many times over through increased development speed and the ability to start from a consistent target.

Final Advice

By now, we hope we've made the case that it's important to do your due diligence when taking on a project involving integrations. And while this entire list of potential pain points may seem like overkill, we've personally experienced the effects of every one of them at some point in our company's history. Ultimately, both you and the client want to avoid the uncomfortable conversation of a project's timeline slipping and its budget overrunning. Therefore, it's critical to address these issues thoroughly and as early in the project as possible. If uncertainty is especially high, it's usually beneficial to include a line item within the project statement of work to evaluate this piece separately. Finally, if you're able to effectively negotiate the terms of a contract, the budget for the integration shouldn't be set until an evaluation (even a partial one) has been completed.

Thoughts? Story to share? We'd love to get your feedback on how to improve upon this article.

Categories: Elsewhere

Matthew Palmer: First Step with Clojure: Terror

Planet Debian - Thu, 24/07/2014 - 02:30
$ sudo apt-get install -y leiningen
[...]
$ lein new scratch
[...]
$ cd scratch
$ lein repl
Downloading: org/clojure/clojure/1.3.0/clojure-1.3.0.pom from repository central at http://repo1.maven.org/maven2
Transferring 5K from central
Downloading: org/sonatype/oss/oss-parent/5/oss-parent-5.pom from repository central at http://repo1.maven.org/maven2
Transferring 4K from central
Downloading: org/clojure/clojure/1.3.0/clojure-1.3.0.jar from repository central at http://repo1.maven.org/maven2
Transferring 3311K from central
[...]

Wait… what? lein downloads some random JARs from a website over HTTP1, with, as far as I can tell, no verification that what I'm asking for is what I'm getting (has nobody ever heard of man-in-the-middle attacks in Maven land?). It downloads a .sha1 file to (presumably) do integrity checking, but that's no safety net – if I can serve you a dodgy .jar, I can serve you an equally dodgy .sha1 file, too (also, SHA256 is where all the cool kids are at these days). Finally, jarsigner tells me that there's no signature on the .jar itself, either.

It gets better, though. The repo1.maven.org site is served by the fastly.net2 pseudo-CDN3, which adds another set of points in the chain which can be subverted to hijack and spoof traffic. More routers, more DNS zones, and more servers.

I’ve seen Debian take a kicking more than once because packages aren’t individually signed, or because packages aren’t served over HTTPS. But at least Debian’s packages can be verified by chaining to a signature made by a well-known, widely-distributed key, signed by two Debian Developers with very well-connected keys.

This repository, on the other hand… oy gevalt. There are OpenPGP (GPG) signatures available for each package (tack .asc onto the end of the .jar URL), but no attempt was made to download the signatures for the .jar I downloaded. Even if the signature had been downloaded and checked, there's no way for me (or anyone) to trust it – the signature was made by a key that's signed by one other key, which itself has no signatures. If I were an attacker, it wouldn't be hard to replace that key chain with one of my own devising.
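For the record, fetching and checking one of those signatures by hand would look like this – assuming you had already obtained and imported the signer's key through some trustworthy channel, which is exactly the part that's missing:

$ wget http://repo1.maven.org/maven2/org/clojure/clojure/1.3.0/clojure-1.3.0.jar.asc
$ gpg --verify clojure-1.3.0.jar.asc clojure-1.3.0.jar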

Even ignoring everyone living behind a government- or company-run intercepting proxy, and everyone using public wifi, it’s pretty well common knowledge by now (thanks to Edward Snowden) that playing silly-buggers with Internet traffic isn’t hard to do, and there’s no shortage of evidence that it is, in fact, done on a routine basis by all manner of people. Serving up executable code to a large number of people, in that threat environment, with no way for them to have any reasonable assurance that code is trustworthy, is very disappointing.

Please, for the good of the Internet, improve your act, Maven. Putting HTTPS on your distribution would be a bare minimum. There are attacks on SSL, sure, but they’re a lot harder to pull off than sitting on public wifi hijacking TCP connections. Far better would be to start mandating signatures, requiring signature checks to pass, and having all signatures chain to a well-known, widely-trusted, and properly secured trust root. Signing all keys that are allowed to upload to maven.org with a “maven.org distribution root” key (itself kept in hardware and only used offline), and then verifying that all signatures chain to that key, wouldn’t be insanely difficult, and would greatly improve the security of the software supply chain. Sure, it wouldn’t be perfect, but don’t make the perfect the enemy of the good. Cost-effective improvements are possible here.

Yes, security is hard. But you don’t get to ignore it just because of that, when you’re creating an attractive nuisance for anyone who wants to own up a whole passel of machines by slipping some dodgy code into a widely-used package.

  1. To add insult to injury, it appears to ignore my http_proxy environment variable, and the repo1.maven.org server returns plain-text error responses with Content-Type: text/xml. But at this point, that’s just icing on the shit cake.

  2. At one point in the past, my then-employer (a hosting provider) blocked Fastly’s caching servers from their network because they took down a customer site with a massive number of requests to a single resource, and the incoming request traffic was indistinguishable from a botnet-sourced DDoS attack. The requests were coming from IP space registered to a number of different ISPs, with no distinguishing rDNS (184-106-82-243.static.cloud-ips.com doesn’t help me to distinguish between “I’m a professionally-run distributed proxy” and “I’m a pwned box here to hammer your site into the ground”).

  3. Pretty much all of the new breed of so-called CDNs aren’t actually pro-actively distributing content, they’re just proxies. That isn’t a bad thing, per se, but I rather dislike the far-too-common practice of installing varnish (and perhaps mod_pagespeed, if they’re providing “advanced” capabilities) on a couple of AWS instances, and hanging out your shingle as a CDN. I prefer a bit of truth in my advertising.

Categories: Elsewhere

Russ Allbery: WebAuth 4.6.1

Planet Debian - Thu, 24/07/2014 - 00:59

This is a bug-fix release of the WebAuth site-wide web authentication system. As is typical, I accumulated a variety of minor bug fixes and improvements that I wanted to get into a release before starting larger work (in this case, adding JSON support for the user information service protocol).

The most severe bug fix is something that only folks at Stanford would notice: support for AuthType StanfordAuth was broken in the 4.6.0 release. This is for legacy compatibility with WebAuth 2.5. It has been fixed in this release.

In other, more minor bug fixes, build issues when remctl support is disabled have been fixed, expiring password warnings are shown in WebLogin after any POST-based authentication, the confirmation page is forced if authorization identity switching is available, the username field is verified before multifactor authentication to avoid subsequent warnings, newlines and tabs are allowed in the XML sent from the WebKDC for user messages, empty RT and ST parameters are correctly diagnosed, and there are some documentation improvements.

The main new feature in this release is support for using FAST armor during password authentication in mod_webkdc. A new WebKdcFastArmorCache directive can be set to point at a Kerberos ticket cache to use for FAST armor. If set, FAST is required, so the KDC must support it as well. This provides better wire security for the initial password authentication to protect against brute-force dictionary attacks against the password by a passive eavesdropper.

This release also adds a couple of new factor types, mp (mobile push) and v (voice), that Stanford will use as part of its Duo Security integration.

Note that, for the FAST armor feature, there is also an SONAME bump in the shared library in this release. Normally, I wouldn't bump the SONAME in a minor release, but in this case the feature was fairly minor and most people will not notice the change, so it didn't feel like it warranted a major release. I'm still of two minds about that, but oh well, it's done and built now. (At least I noticed that the SONAME bump was required prior to the release.)

You can get the latest release from the official WebAuth distribution site or from my WebAuth distribution pages.

Categories: Elsewhere

Metal Toad: Drupal Solr Search with Domain Access filtering

Planet Drupal - Wed, 23/07/2014 - 23:34

Metal Toad has had the privilege of working over the past two years with DC Comics. What makes this partnership even more exciting is that the main dccomics.com site also includes sites for Vertigo Comics and Mad Magazine. Most recently, Metal Toad was given the task of building the new search feature for all three sites. However, while it's an awesome privilege to work with such a well-known brand as DC, this does not come without a complex set of issues across the three sites when working with Apache Solr search and Drupal.

Categories: Elsewhere

Lior Kaplan: Testing PHPNG on Debian/Ubuntu

Planet Debian - Wed, 23/07/2014 - 23:01

We (at Zend) want to help people get more involved in testing PHPNG (PHP next generation), so we've started to provide binaries for it, although it's still a branch on top of PHP's master branch. See more details about PHPNG in Zeev Suraski's blog post.

The binaries (64bit) are compatible with Debian testing/unstable and Ubuntu Trusty (14.04) and up. The mod_php is built for Apache 2.4 which all three flavors have.

The repository is at http://repos.zend.com/zend-server/early-access/phpng/

Installation instructions:

# wget http://repos.zend.com/zend.key -O- 2> /dev/null | apt-key add -
# echo "deb http://repos.zend.com/zend-server/early-access/phpng/ trusty zend" > /etc/apt/sources.list.d/phpng.list
# apt-get update
# apt-get install php5

For the task of providing these binaries, I had the pleasure of combining my experience as a member of the Debian PHP team and a Debian Developer with work more internal to the PHP development process. Using the already existing Debian packaging enabled me to test more build scenarios easily (and report problems accordingly). Hopefully this can also be translated back into providing more experimental packages for Debian and making sure Debian packages are ready for the PHP release after PHP 5.6.


Filed under: Debian GNU/Linux, PHP
Categories: Elsewhere

Gizra.com: Headless Drupal - Form API, Drupal 9

Planet Drupal - Wed, 23/07/2014 - 23:00
Defining moment

A few months ago my DrupalCon Austin session was rejected. I was a bit upset, since presenting plays a big part in my trips to the States, and also surprised, as I had mistakenly assumed my presentation repertoire would almost guarantee that my session would be accepted. But the committee decided differently.

This was an important moment for me. Two days later I told myself I didn't care. I mean, I cared about the presentation; I just stopped caring that it was not selected, since I had decided I was going to do it anyway – as an "unplugged" BoF.

The result was The Gizra Way. I think it is probably the best presentation I've given so far, and quite ironically my rejected session is second only to Dries's keynote on YouTube.

You see – I had a "there is no spoon" moment. The second I realized it could be done differently, I was on my own track, perhaps even setting the path for others.

Form API, Drupal 9

“I use Drupal because Form API is so great” – No one, ever

Continue reading…

Categories: Elsewhere

Petter Reinholdtsen: 98.6 percent done with the Norwegian draft translation of Free Culture

Planet Debian - Wed, 23/07/2014 - 22:40

This summer I finally had time to continue working on the Norwegian docbook version of the 2004 book Free Culture by Lawrence Lessig, to get a Norwegian text explaining the problems with today's copyright law. Yesterday, I finally finished translating the book text. There are still some footnotes and endnotes left to translate, the colophon page needs to be rewritten, and a few words and phrases still need to be translated, but the Norwegian text is ready for the first proofreading. :) More spell checking is needed, and several illustrations need to be cleaned up. The work stalled because I had to give priority to other projects over the last year, and the progress graph of the translation shows this very well:

If you want to read the result, check out the github project pages and the PDF, EPUB and HTML version available in the archive directory.

Please report typos, bugs and improvements to the github project if you find any.

Categories: Elsewhere

Michael Prokop: Book Review: The Docker Book

Planet Debian - Wed, 23/07/2014 - 22:16

Docker is an open-source project that automates the deployment of applications inside software containers. I'm responsible for a Docker setup with Jenkins integration and a private docker-registry setup at a customer, and I pre-ordered James Turnbull's "The Docker Book" a few months ago.

Recently James – he’s working for Docker Inc – released the first version of the book and thanks to being on holidays I already had a few hours to read it AND blog about it. (Note: I’ve read the Kindle version 1.0.0 and all the issues I found and reported to James have been fixed in the current version already, jey.)

The book is very well written and covers all the basics to get familiar with Docker and in my opinion it does a better job at that than the official user guide because of the way the book is structured. The book is also a more approachable way for learning some best practices and commonly used command lines than going through the official reference (but reading the reference after reading the book is still worth it).

I like James’ approach with “ENV REFRESHED_AT $TIMESTAMP” for better controlling the cache behaviour and definitely consider using this in my own setups as well. What I wasn’t aware is that you can directly invoke “docker build $git_repos_url” and further noted a few command line switches I should get more comfortable with. I also plan to check out the Automated Builds on Docker Hub.

There are some references to further online resources, which is relevant especially for the more advanced use cases, so I’d recommend to have network access available while reading the book.

What I’m missing in the book are best practices for running a private docker-registry in a production environment (high availability, scaling options,…). The provided Jenkins use cases are also very basic and nothing I personally would use. I’d also love to see how other folks are using the Docker plugin, the Docker build step plugin or the Docker build publish plugin in production (the plugins aren’t covered in the book at all). But I’m aware that this are fast moving parts and specialised used cases – upcoming versions of the book are already supposed to cover orchestration with libswarm, developing Docker plugins and more advanced topics, so I’m looking forward to further updates of the book (which you get for free as existing customer, being another plus).

Conclusion: I enjoyed reading the Docker book and can recommend it, especially if you’re either new to Docker or want to get further ideas and inspirations what folks from Docker Inc consider best practices.

Categories: Elsewhere

Acquia: Search in Drupal 8 - Thomas Seidl & Nick Veenhof

Planet Drupal - Wed, 23/07/2014 - 17:57

Thomas Seidl and Nick Veenhof took a few minutes out of the Drupal 8 Search API code sprint at the Drupal DevDays in Szeged, Hungary to talk with me about the state-of-play and what's coming in terms of search in Drupal: one flexible, pluggable solution for search functionality with the whole community behind it.

Categories: Elsewhere

Clemens Tolboom: Interested in ReST and HAL?

Planet Drupal - Wed, 23/07/2014 - 16:42
  1. Check out the issue queues for HAL and ReST.
  2. Use the quickstart tool: https://github.com/build2be/drupal-rest-test.
Need HAL?
  1. Install the HAL Browser on your site to see what we've got so far.
  2. cd drupal-root
Categories: Elsewhere

Acquia: Migration Tips and Tricks

Planet Drupal - Wed, 23/07/2014 - 15:18

Cross-posted with permission from nerdstein

The Migrate module is, hands down, the de facto way to migrate content into Drupal. The only knock against it is the learning curve. All good things come to those who take the time to learn it.

Categories: Elsewhere

Tanguy Ortolo: GNU/Linux graphic sessions: suspending your computer

Planet Debian - Wed, 23/07/2014 - 14:45

Major desktop environments such as Xfce or KDE have a built-in computer suspend feature, but when you use a lighter alternative, things are a bit more complicated, because basically only root can suspend the computer. There used to be a standard solution to that, using a D-Bus call to a running daemon, upowerd. With recent updates, that solution first stopped working for obscure reasons, but it could still be configured back to a usable state. With newer updates, it stopped working again, and this time it seems to be gone for good:

$ dbus-send --system --print-reply \
    --dest='org.freedesktop.UPower' \
    /org/freedesktop/UPower org.freedesktop.UPower.Suspend
Error org.freedesktop.DBus.Error.UnknownMethod: Method "Suspend" with signature "" on interface "org.freedesktop.UPower" doesn't exist

The reason seems to be that upowerd is not running, because it no longer provides an init script, only a systemd service. So, if you do not use systemd, you are left with one simple and stable solution: defining a sudo rule to start the suspend or hibernation process as root. In /etc/sudoers.d/power:

%powerdev ALL=NOPASSWD: /usr/sbin/pm-suspend, \
    /usr/sbin/pm-suspend-hybrid, \
    /usr/sbin/pm-hibernate

That allows members of the powerdev group to run sudo pm-suspend, sudo pm-suspend-hybrid and sudo pm-hibernate, which can be bound to keys with a key-binding manager such as your window manager's or xbindkeys. Simple, efficient, and contrary to all that ever-changing GizmoKit and whatsitd stuff, it has worked and will keep working for years.

Categories: Elsewhere

Code Karate: Drupal 7 Splashify

Planet Drupal - Wed, 23/07/2014 - 13:36
Episode Number: 158

In this episode we cover the Splashify module. This module is used to display splash pages or popups, and there are multiple configuration options available to fit your site's needs.

In this episode you will learn:

  • How to set up Splashify
  • How to configure Splashify
  • How to get Splashify to use the Mobile Detect plugin
  • How Splashify displays to the end user
  • How to be awesome
Tags: Drupal, Drupal 7, Drupal Planet, UI/Design, Javascript, Responsive Design
Categories: Elsewhere

Francesca Ciceri: Adventures in Mozillaland #3

Planet Debian - Wed, 23/07/2014 - 13:04

Yet another update from my internship at Mozilla, as part of the OPW.

A brief one, this time, sorry.

Bugs, Bugs, Bugs, Bacon and Bugs

I've continued with my triaging/verifying work and now feel pretty confident when working on a bug.
On the other hand, I think I've learned more or less what was to be learned here, so I must think (and ask my mentor) about where to go from here.
Maybe focus on a specific Component?
Or steadily work on a specific channel for both triaging/poking and verifying?
Or try my hand at patches?
Not sure, yet.

Also, I'd like to point out that, while working on bug triaging, the developer's answers on the bug report are really important.
Comments like this help me as a triager to learn something new, and be a better triager for that component.
I do realize that developers cannot always take the time to put basic information in comments on how to better debug their component/product, but trust me: this will make you happy in the long run.
A wiki page with basic information on how debug problems for your component is also a good idea, as long as that page is easy to find ;).

So, big shout-out for MattN for a very useful comment!

Community

After much delay, we finally managed to pick a date for the Bug Triage Workshop: it will be on July 25th. The workshop will be an online session focused on what triaging is, why it is important, how to reproduce bugs, and what information to ask of the reporter to make a bug report as complete and useful as possible.
We will run it in two different time slots, to accommodate various timezones, and it will be held in #testday on irc.mozilla.org.
Take a look at the official announcement and subscribe on the event's etherpad!

See you on Friday! :)

Categories: Elsewhere
