Feed aggregator

Enrico Zini: Monthly link collections with staticsite

Planet Debian - Sat, 09/07/2016 - 19:23

A year ago, I wrote:

Instead of keeping a substantial number of tabs open until I have read all of them, or losing them in the jungle of browser bookmarks, I have written a script that collects them into a file per month and turns them into markdown files for my blog.

That script turned out to be quirky and overengineered, so much so that I stopped using it myself.

I've now rethought my approach, and downscaled it: instead of saving a copy of each page locally, I can blog a reference to https://archive.org or https://archive.is. I do not need to autogenerate a description from the site itself.

The result has been a nicely minimal set of changes to staticsite that resulted in a new version where adding a link to a monthly collection is as easy as typing ssite new -a links.

As long as I remember to rebuild the site 3 weeks from now, a new post should automagically appear on my blog.

Categories: Elsewhere

Charles Plessy: Congratulations, Marga!

Planet Debian - Sat, 09/07/2016 - 14:43

For the first time in our history, a woman joins the Technical Committee. Congratulations, Marga, and thanks for volunteering.

Categories: Elsewhere

Freelock : Git Branch Strategy meets Continuous Deployment

Planet Drupal - Sat, 09/07/2016 - 01:14

Our branch strategy based on Git Flow did not survive. It was getting a bit long in the tooth, but the final blow was automation.

At Freelock, we've been hard at work building out automation so we can handle the maintenance on hundreds of websites with better test coverage and more confidence than ever before. Exciting news! It's all coming together, and we have it working across the board on ALL of our projects, now.

Drupal, DevOps, Continuous Integration, Continuous Deployment, Quality Assurance, Drupal Planet, Bot, git flow, git
Categories: Elsewhere

Fuse Interactive: Why you and your clients should be excited to build your next project with Drupal 8

Planet Drupal - Fri, 08/07/2016 - 23:39

Over 8 months after release, and with my first D8 site under my belt, I can now say I am excited for the future of working with Drupal’s freshest release. That being said, at this stage in the game the decision to go with D8 should be approached with caution. It does what it does well, but many of those shiny contrib modules you’re used to using just aren’t there yet. Unless your team and client are willing to spend the time and money needed to develop or port the missing functionality, it might not be a fit for that particular project.

Categories: Elsewhere

ImageX Media: Higher Education Notes and Trends for the Week of July 4, 2016

Planet Drupal - Fri, 08/07/2016 - 21:22

The landscape of higher education continues to shift toward changing student demographics, evolving learning approaches and what seems like a perpetual shortfall of funding for post-secondary institutions. These trends mirror those of our clients' website aspirations, which now more than ever focus on engagement with key audiences such as prospective students and alumni, due to greater competition in the marketplace with fewer dollars to spend.

Categories: Elsewhere

Chromatic: Digging In To Drupal 8: Code Snippets for Site Builders

Planet Drupal - Fri, 08/07/2016 - 19:36

The more I work with Drupal 8, the more I realize how much has changed for developers in the Drupal community. While the transition to a modern, object-oriented system is what's best for the longevity of the platform, it certainly doesn't come without challenges. As someone who doesn't come from an OOP background, I've found the transition difficult at times. In many cases, I know exactly what I want to do, just not how to do it the "Drupal 8 way". On top of this, tutorials and blog posts on D8 are all over the map in terms of accuracy. Many posts written during D8's development cycle are no longer applicable because of API changes, etc.

Below is a list of snippets that might be helpful to site builders or developers more familiar with D7 hooks and procedural code. It might also be useful to OOP folks who are new to Drupal in general. My goal is to add to and update these snippets over time.

Routes & Links

Determine the Current Drupal Route

Need to know what the current Drupal route is or need to run some logic against the current route? You can get the current route like so:

$route = \Drupal::routeMatch()->getRouteName();

To some, the \Drupal::routeMatch() syntax might look foreign (it did to me). Here's a rundown of what's happening here:

First, \Drupal. This is calling the global Drupal class, which, in Drupal 8, is a bridge between procedural and OO methods of writing Drupal code. The following comes from the documentation:

This class acts as a unified global accessor to arbitrary services within the system in order to ease the transition from procedural code to injected OO code.

Right. Moving on to ::routeMatch(). Here we're using the routeMatch() method, which "Retrieves the currently active route match object." Simple enough. But what is "::" all about? It's PHP's scope resolution operator, used here to call a static method; this StackOverflow answer helped me understand it.

From there, the getRouteName() method returns the current route name as a string. Here are some example routes: entity.node.canonical, view.frontpage and node.type_add.
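As a quick sketch of putting that to use (the route name check and the variable handling below are illustrative assumptions, not from the original post):

// Run some logic only on full node pages (the entity.node.canonical route).
$route_name = \Drupal::routeMatch()->getRouteName();
if ($route_name === 'entity.node.canonical') {
  // On this route the node is available as an upcasted route parameter.
  $node = \Drupal::routeMatch()->getParameter('node');
}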

Is this the Front Page Route?

Need to check if the current route is the front page route? There's a service and method for that:

// Is the current route/path the front page?
if ($is_front = \Drupal::service('path.matcher')->isFrontPage()) {}

Here we're calling the path.matcher service (defined in /core/core.services.yml) and using the isFrontPage() method. For more on services, check out the "Services and Dependency Injection Container" documentation on api.drupal.org which helped me understand how all of these bits work together and the why of their structure.
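As a usage sketch, here's roughly how that check might look inside a theme preprocess hook (the theme name and CSS class are made up):

// Hypothetical theme preprocess hook; "mytheme" and the class name are illustrative.
function mytheme_preprocess_html(&$variables) {
  if (\Drupal::service('path.matcher')->isFrontPage()) {
    // Add a marker class the theme can target on the front page.
    $variables['attributes']['class'][] = 'is-front';
  }
}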

Get the Requested Path

Need to know what the current page's requested path was, as opposed to the route? You can do this:

$current_uri = \Drupal::request()->getRequestUri();
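A related aside, as a small sketch: if you want the processed internal path (e.g. /node/1) rather than the raw request URI, the path.current service provides it:

// The current internal path, which can differ from the requested URI
// when path aliases or query strings are involved.
$current_path = \Drupal::service('path.current')->getPath();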

Redirect to a Specific Route

Need to redirect to a specific page? In Drupal 7, you would likely handle this with drupal_goto() in your page callback function. In Drupal 8, you can use RedirectResponse() for that. Here is the relevant changelog.

Here are some examples, borrowed heavily from said changelog. First, in procedural PHP:

use Symfony\Component\HttpFoundation\RedirectResponse;

function my_redirect() {
  return new RedirectResponse(\Drupal::url('user.page'));
}

Here is how you would use a Drupal 8 controller to accomplish the same thing:

use Drupal\Core\Controller\ControllerBase;

class MyControllerClass extends ControllerBase {
  public function foo() {
    //...
    return $this->redirect('user.page');
  }
}

Links on the Fly

Drupal 7 and prior relied heavily on the l() function. (In fact, I would wager this was my most used function over the years.) In Drupal 8, if you need to create links on the fly, utilize the Link class:

$link = \Drupal\Core\Link::fromTextAndUrl($text, $url);
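If you don't already have a $url object handy, here is a minimal sketch of building one from a route (the route name, node ID and link text are just example values):

use Drupal\Core\Link;
use Drupal\Core\Url;

// Build a Url from a route, then wrap it in a Link.
$url = Url::fromRoute('entity.node.canonical', ['node' => 1]);
$link = Link::fromTextAndUrl('Read the full article', $url);
// Convert to a render array if you need to print it.
$build = $link->toRenderable();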

Working with Entities

Query Database for Entities

If you need to query the database for some nodes (or any other entity) you should use the entityQuery service. The syntax should be pretty familiar to most D7 developers who have used EntityFieldQuery:

// Query for some entities with the entity query service.
$query = \Drupal::entityQuery('node')
  ->condition('status', 1)
  ->condition('type', 'article')
  ->range(0, 10)
  ->sort('created', 'DESC');
$nids = $query->execute();

Loading Entities

If you need to load the actual entities, you can do so a number of ways:

While the following will technically work in Drupal 8:

$nodes = entity_load_multiple('node', $nids);

This function has been deprecated in Drupal 8 and will be removed before Drupal 9, in favor of methods overriding Entity::loadMultiple(). To future-proof your code, you would do something like the following:

$nodes = \Drupal::entityTypeManager()->getStorage('node')->loadMultiple($nids);

Here's how you would do the same for a single node:

$node = \Drupal::entityTypeManager()->getStorage('node')->load($nid);
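If you'd rather skip the explicit query for simple lookups, the storage handler also has loadByProperties(); a small sketch (the bundle name is just an example):

// Load all published articles in one call; "article" is a hypothetical bundle.
$nodes = \Drupal::entityTypeManager()
  ->getStorage('node')
  ->loadByProperties([
    'type' => 'article',
    'status' => 1,
  ]);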

Here are a few other entity snippets that might be useful:

// Link to an entity using the entity's link method.
$author_link = $user->toLink();
// Do the same thing, but customize the link text.
$author_link = $user->toLink('Some Custom Text');
// Given a node object, here's how to determine its type:
$type = $node->getType();
// To get the full user entity of the node's author:
$author = $node->getOwner();
// To get the raw ID of the author of a node:
$author_id = $node->getOwnerId();
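Two more snippets along the same lines, for reading and writing field values (the field name and new value are hypothetical):

// Read a simple field value; "field_subtitle" is a made-up field.
$subtitle = $node->get('field_subtitle')->value;
// Update the value and save the entity.
$node->set('field_subtitle', 'A new subtitle');
$node->save();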

Image Styles

Need to whip up an image using a particular image style on the fly? This will work for that:

// Create an instance of an image using a specific image style, given a path to a file.
$style = \Drupal\image\Entity\ImageStyle::load('yourStyle_image');
$img_path = $user->field_profile_some_image->entity->getFileUri();
$img_style_url = $style->buildUrl($img_path);
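And if you need the styled image as a render array rather than a bare URL, something like this should do it (reusing the style name and URI from the sketch above):

// Let Drupal render the derivative via the image_style theme hook.
$build = [
  '#theme' => 'image_style',
  '#style_name' => 'yourStyle_image',
  '#uri' => $img_path,
];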

That's it for now. I intend to keep this post updated as we learn more and more about the new world of Drupal 8. If you have a snippet worth sharing, drop us a line via Twitter and we’ll add it to this post (with credit of course).

Categories: Elsewhere

Chromatic: Be Promiscuous with Drush's core-quick-drupal

Planet Drupal - Fri, 08/07/2016 - 19:36
Aren't you a cutie?

Here at Chromatic HQ, the team is encouraged to give back to the open-source community. (And on company time!) One way to do this is by reviewing and contributing Drupal patches. For me, this can be both rewarding and frustrating. When things go well, I feel good about contributing and I might even get a commit credit! But there are times when patches don't apply, I have no clue what's wrong and I need to start fresh. First, I curse mightily at the time wasted, then I create a new db, and then re-install a fresh copy of Drupal, and then configure it etc. etc. Using drush site-install makes this process relatively easy, but what if it could be easier? (Hint: It is!)

Hooray for promiscuity!

I recently had a fling with Drush's core-quick-drupal command. I had known about it for years, but I hadn't realized what it could really do for me. This has now changed, and together we're having an open affair!

For the uninitiated, drush core-quick-drupal takes advantage of PHP's built-in web server (PHP >= 5.4) and uses a sqlite database to get a fresh, stand-alone copy of Drupal up and running, all in about a minute. It has two aliases: drush qd and, my personal preference, drush cutie.

Out-of-the-box overview
  • In about a minute it installs a full instance of Drupal.
  • Runs a web server at http://127.0.0.1:8888 (no apache config).
  • Uses a self-contained sqlite file as the db (no mysql db to create and configure).

It's so much fun, you may want to follow along. From the command line, just cd to a folder of your choosing and run drush cutie --yes. (You'll need to have drush installed.)

Behind the scenes, a folder is created called quick-drupal with a timestamp appended to the end. (One of my older cutie folders is quick-drupal-20160214193640... a timestamp from a Valentine's evening with Drush that my wife won't soon forget!) Inside the new quick-drupal folder are subfolders with the latest D8 files and the sqlite db file. (There are lots of options to customize the Drupal version and environment, but the default nowadays is Drupal 8.)

Running it looks something like this:

drush cutie --yes
Project drupal (8.0.3) downloaded to ...
Installation complete.  User name: admin  User password: EawsYkGg4Y
Congratulations, you installed Drupal!
Listening on http://127.0.0.1:8888

(The output above has been edited to highlight the tastier bits!)

And with that I have the latest version of D8 running at http://127.0.0.1:8888. As you can see from the shell output above, the superuser is admin with a password of EawsYkGg4Y.

Okay, okay, very cool, but what can I do with it? Here's a breakdown:
  1. Review patches with minimal fuss, thereby giving back to the Drupal community.
  2. Investigate new modules without sullying your main dev environment.
  3. Test that new Feature you created to see if it really works.
  4. NOT RECOMMENDED! When that friend asks you how long it will take to build him a website, respond with "about a minute" and fire it up.
You thought I was done?

Let's run through the steps to review a patch. This is where drush core-quick-drupal really shines because it's best to have a clean install of Drupal to work with; this minimizes the number of externalities that can interfere with testing. Having a single-command, throwaway copy of vanilla Drupal is the way to go.

You could call this a blog version of a live demo; I have chosen a patch out in the wild to review. I found one for the core taxonomy module that had a status of "Needs Review" on D.O.

The patch file itself is here: https://www.drupal.org/files/issues/taxonomy-term-twig-cs.patch

Here are the steps I took on the command line:

# Install a temporary copy of D8 into a folder I named "test2644718"
drush cutie test2644718 --yes

With the above command I got my environment running. The patch itself simply fixes the formatting in taxonomy-term.html.twig, which is a default template file for taxonomy terms, provided by the core taxonomy module.

I first tested to see the original template in action. Satisfied with the way it was working, I took steps to apply the patch.

# Move into the root folder of the new site
cd test2644718/drupal/
# Use wget to grab the patch from D.O.
wget https://www.drupal.org/files/issues/taxonomy-term-twig-cs.patch
# Apply the patch
patch -p1 < taxonomy-term-twig-cs.patch
patching file core/modules/taxonomy/templates/taxonomy-term.html.twig

The patch was applied successfully and a minor change in taxonomy-term.html.twig was made. I quickly tested to ensure nothing had blown up and was satisfied that the patch works as expected.

Back in D.O., I added my two cents and marked the issue as Reviewed & tested by the community. And that's that.

Update

Though the patch originally sat awaiting review for 2 months, I'm happy to claim that my review got things moving again! After I posted RTBC, a flurry of activity took place with the scope increasing and new patches being created. I reviewed those too! A day later the patches were committed to 8.1.x. Nice.

Categories: Elsewhere

Joey Hess: twenty years of free software -- part 11 concurrent-output

Planet Debian - Fri, 08/07/2016 - 19:06

concurrent-output is a more meaty Haskell library than the ones I've covered so far. Its interface is simple, but there's a lot of complexity under the hood. Things like optimised console updates, ANSI escape sequence parsing, and transparent paging of buffers to disk.

It developed out of needing to display multiple progress bars on the console in git-annex, and also turned out to be useful in propellor. And since it solves a general problem, other haskell programs are moving toward using it, like shake and stack.

Next: twenty years of free software -- part 12 propellor

Categories: Elsewhere

Mediacurrent: Friday 5: 5 Quick Ways to Check Your Site's Health

Planet Drupal - Fri, 08/07/2016 - 18:39

TGIF and welcome back to another exciting episode of The Mediacurrent Friday 5!

Categories: Elsewhere

Jeff Geerling's Blog: Getting Emoji and multibyte characters on your Drupal 7 site with 7.50

Planet Drupal - Fri, 08/07/2016 - 17:43

Almost exactly a year ago, I wrote a blog post titled Solving the Emoji/character encoding problem in Drupal 7.

Since writing that post, Drupal 7 bugfixes and improvements have started to pick up steam as (a) many members of the community who were focused on launching Drupal 8 had time to take a breather and fix up some long-standing Drupal 7 bugs and improvements that hadn't yet been backported, and (b) there are two new D7 core maintainers. One of the patches I've been applying to many sites and hoping would get pulled into core for a long time was adding support for full UTF-8, which allows the entry of emojis, Asian symbols, and mathematical symbols that would break Drupal 7 sites running on MySQL previously.

My old blog post had a few steps that you could follow to make your Drupal 7 site 'mostly' support UTF-8, but there were some rough edges. Now that support is in core, the process for converting your existing site's database is more straightforward:
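At a high level, the new core support hinges on the database connection settings in settings.php; here is a hedged sketch of that configuration (the credentials are placeholders, and existing tables still need a separate conversion step):

// settings.php: opt the default connection into 4-byte UTF-8 (Drupal 7.50+).
// This only affects how Drupal talks to MySQL; it does not convert old tables.
$databases['default']['default'] = array(
  'driver' => 'mysql',
  'database' => 'example_db',   // hypothetical values
  'username' => 'example_user',
  'password' => 'example_pass',
  'host' => 'localhost',
  'charset' => 'utf8mb4',
  'collation' => 'utf8mb4_general_ci',
);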

Categories: Elsewhere

Reproducible builds folks: Managing container and environment state

Planet Debian - Fri, 08/07/2016 - 15:58

Author: ceridwen

With some more help from Martin Pitt, it became clear to me that my previous mental model of how autopkgtest worked is very different from how it does work. I'll illustrate by borrowing my previous example. I know that schroot has the following behavior:

The default behaviour is as follows (all directory paths are inside the chroot). A login shell is run in the current working directory. If this is not available, it will try $HOME (when --preserve-environment is used), then the user's home directory, and / inside the chroot in turn. A command is always run in the current working directory inside the chroot. If none of the directories are available, schroot will exit with an error status.

I was naively thinking that the way autopkgtest would work is that it would set the current working directory of the schroot call and the ensuing subprocess call would thus take place in that directory inside the schroot. That is not how it works. If you want to change directories inside the virtual server, you have to use cd. The same is true of, at least, environment variables, which have their own specific handling in the adt_testbed.Testbed methods but have to be passed as strings, and umask. I'm assuming this is because the direct methods with qemu images or LXC containers don't work.

What this means is that I was thinking about the problem the wrong way: what reprotest needs to do is generate shell scripts. This is how autopkgtest works. If this goes beyond laying out commands linearly one after another, for instance if it demands conditionals or other nested constructs, the right way to do it is to build an abstract syntax tree representation of the shell script and then convert it to code.

Whether I need more complicated shell scripts depends on my approach to handling state in the containers. I need to know what state persists across separate command executions: if I call adt_testbed.Testbed.execute() twice, what, if any, changes I make to the container will carry forward from the first to the second? There are three categories here. First, some properties of a system aren't preserved even from one command execution to the next, like working directory and environment variables. (I thought working directory would be preserved, but it's not). The second is state that persists while the testbed is open and is then automatically reverted when it's closed, like files copied into temporary directories on the testbed. The third is state that persists across different sessions on the same container and must be cleaned up by reprotest. It's worth noting that which state falls into which category may vary by the container in question, though for the most part I can either do unnecessary cleanup or issue unnecessary commands to handle the differences. autopkgtest itself has a very different approach to cleanup, as it relies almost entirely on the builtin reversion capabilities from some of its containers. I would prefer to avoid doing the same, partly because I know that some of the modifications I need to make, for instance creating new users or mounting disorderfs, can't be reverted by the faster, simpler containers like schroot.

From discussions with Lunar, I think that the variations that correspond to environment variables (captures_environment, home, locales, path, and timezone) fall into the first category, but because of the special handling for them they don't require sending a separate command. The shell (bash foo) accepts a script or command as an argument, so it also doesn't need a separate command. Setting the working directory and umask require separate commands that have to be joined. On Linux, setarch also accepts a command/script as an argument so it can be handled like the shell, but there's no unified POSIX protocol for mocking uname, so other OSes will require different approaches. Users, groups, file ordering, host, and domain will require cleanup in all containers except for (maybe) qemu. If I want to handle the cleanup in the shell scripts themselves, I need conditionals so that, for instance, the shell script only tries to unmount disorderfs if disorderfs was successfully mounted. This approach would simplify the error handling problems I've had before, where, when a build crashes, cleanup code run from Python doesn't get run until after the testbed stops accepting commands.

Lunar suggested the Plumbum library to me, and I think I can use it to avoid writing my own shell AST library. It has a method that converts the Python representation of a shell script into a string that can be passed to Testbed.command(). Integrating Plumbum to generate the necessary scripts is where I'm going in the next week.

Any feedback on any of this is welcome. I'm also curious what other projects are using autopkgtest code. Holger brought to my attention piuparts, which brings the list up to four that I'm aware of: autopkgtest itself, sbuild, piuparts, and now reprotest.

Categories: Elsewhere

Zivtech: How to Grow Your Own Team

Planet Drupal - Fri, 08/07/2016 - 15:00
Lack of available talent is a common refrain of business owners. Give up on looking and complaining! Learn how to create a sustainable business.

Growing your own means hiring smart, motivated people with all the right soft skills and investing in them for the long haul. In return, they'll reward you with loyalty, teach your newer staff, and work in unison with a cohesive vision.

Where is the Talent?

It’s not realistic to imagine that you live in a world where there are people you can just hire for a decent price who already have all the skills you need, who just come in, hit the ground running, and make you a bunch of money. You wouldn't have any problems with them, and you wouldn't have to do much for them other than feed them some pizza and pay them.

So when managers can't find those people, they get upset, and they say, "There's not enough talent. People are not getting educated properly. We don't have the right people and the right programs out there."

The world is full of talent! No, they haven't learned the specific skills that you need, but there are so many intelligent people out there who would thrive with a little help.


What Are You Farming?

When I started working in software development, I saw myself as someone who made websites. That was my output: I was making websites, or I was making code. Over the years I have come to see that my product is people. I'm selling their time, expertise, knowledge, and human capacity.

In web development, who cares about the code when you have the coder? It's like the egg and the chicken. You have to take care of the chicken, and not each little egg, because the chickens just keep making more.

Being a great website maker isn’t really that valuable. What is really valuable is being able to grow more people who can do the work. Then you really scale up. You're only going to do so well as a solo practitioner. If you're able to grow more and more skilled people, not only is your business doing better, but you start to realize that the task of training people is more important than building websites.


Download the full Grow Your Own white paper for free.
Categories: Elsewhere

OSTraining: How to Display PDFs on a Drupal Site

Planet Drupal - Fri, 08/07/2016 - 14:10

An OSTraining member asked us about attaching PDFs to a Drupal site.

It is possible to use the default File field and allow people to download the PDF. However, this member wanted visitors to read the PDF directly on the site.

Categories: Elsewhere

Mike Hommey: Are all integer overflows equal?

Planet Debian - Fri, 08/07/2016 - 13:15

Background: I’ve been relearning Rust (more about that in a separate post, some time later), and in doing so, I chose to implement the low-level parts of git (I’ll touch the why in that separate post I just promised).

Disclaimer: It’s friday. This is not entirely(?) a serious post.

So, I was looking at Documentation/technical/index-format.txt, and saw:

32-bit number of index entries.

What? The index/staging area can’t handle more than ~4.3 billion files?

There I was, writing Rust code to write out the index.

try!(out.write_u32::<NetworkOrder>(self.entries.len()));

(For people familiar with the byteorder crate and wondering what NetworkOrder is, I have a use byteorder::BigEndian as NetworkOrder)

And the Rust compiler rightfully barfed:

error: mismatched types: expected `u32`, found `usize` [E0308]

And there I was, wondering: “mmmm should I just add as u32 and silently truncate or … hey what does git do?”

And it turns out, git uses an unsigned int to track the number of entries in the first place, so there is no truncation happening.

Then I thought “but what happens when cache_nr reaches the max?”

Well, it turns out there’s only one obvious place where the field is incremented.

What? Holy coffin nails, Batman! No overflow check?

Wait a second, look 3 lines above that:

ALLOC_GROW(istate->cache, istate->cache_nr + 1, istate->cache_alloc);

Yeah, obviously, if you’re incrementing cache_nr, you already have that many entries in memory. So, how big would that array be?

struct cache_entry **cache;

So it’s an array of pointers, assuming 64-bits pointers, that’s … ~34.3 GB. But, all those cache_nr entries are in memory too. How big is a cache entry?

struct cache_entry { struct hashmap_entry ent; struct stat_data ce_stat_data; unsigned int ce_mode; unsigned int ce_flags; unsigned int ce_namelen; unsigned int index; /* for link extension */ unsigned char sha1[20]; char name[FLEX_ARRAY]; /* more */ };

So, 4 ints, 20 bytes, and as many bytes as necessary to hold a path. And two inline structs. How big are they?

struct hashmap_entry { struct hashmap_entry *next; unsigned int hash; }; struct stat_data { struct cache_time sd_ctime; struct cache_time sd_mtime; unsigned int sd_dev; unsigned int sd_ino; unsigned int sd_uid; unsigned int sd_gid; unsigned int sd_size; };

Woohoo, nested structs.

struct cache_time { uint32_t sec; uint32_t nsec; };

So all in all, we’re looking at 1 + 2 + 2 + 5 + 4 32-bit integers, 1 64-bits pointer, 2 32-bits padding, 20 bytes of sha1, for a total of 92 bytes, not counting the variable size for file paths.

The average path length in mozilla-central, which only has slightly over 140 thousand of them, is 59 (including the terminal NUL character).

Let’s conservatively assume our crazy repository would have the same average, making the average cache entry 151 bytes.

But memory allocators usually allocate more than requested. In this particular case, with the default allocator on GNU/Linux, it’s 156 (weirdly enough, it’s 152 on my machine).

156 times 4.3 billion… 670 GB. Plus the 34.3 from the array of pointers: 704.3 GB. Of RAM. Not counting the memory allocator overhead of handling that. Or all the other things git might have in memory as well (which apparently involves a hashmap, too, but I won’t look at that, I promise).

I think one would have run out of memory before hitting that integer overflow.

Interestingly, looking at Documentation/technical/index-format.txt again, the on-disk format appears smaller, with 62 bytes per file instead of 92, so the corresponding index file would be smaller. (And in version 4, paths are prefix-compressed, so paths would be smaller too).

But having an index that large supposes those files are checked out. So let’s say I have an empty ext4 file system as large as possible (which I’m told is 2^60 bytes (1.15 billion gigabytes)). Creating a small empty ext4 tells me at least 10 inodes are allocated by default. I seem to remember there’s at least one reserved for the journal, and there’s lost+found ; there apparently are more. Obviously, on that very large file system, We’d have a git repository. git init with an empty template creates 9 files and directories, so that’s 19 more inodes taken. But git init doesn’t create an index, and doesn’t have any objects. We’d thus have at least one file for our hundreds of gigabyte index, and at least 2 who-knows-how-big files for the objects (a pack and its index). How many inodes does that leave us with?

The Linux kernel source tells us the number of inodes in an ext4 file system is stored in a 32-bits integer.

So all in all, if we had an empty very large file system, we’d only be able to store, at best, 2^32 – 22 files… And we wouldn’t even be able to get cache_nr to overflow.

… while following the rules. Because the index can keep files that have been removed, it is actually possible to fill the index without filling the file system. After hours (days? months? years? decades?*) of running

seq 0 4294967296 | while read i; do touch $i; git update-index --add $i; rm $i; done

One should be able to reach the integer overflow. But that’d still require hundreds of gigabytes of disk space and even more RAM.

* At the rate it was possible to add files to the index when I tried (yeah, I tried), for a few minutes, and assuming a constant rate, the estimate is close to 2 years. But the time spent reading and writing the index increases linearly with its size, so the longer it’d run, the longer it’d take.

Ok, it’s actually much faster to do it hundreds of thousand files at a time, with something like:

seq 0 100000 4294967296 | while read i; do j=$(seq $i $(($i + 99999))); touch $j; git update-index --add $j; rm $j; done

At the rate the first million files were added, still assuming a constant rate, it would take about a month on my machine. Considering reading/writing a list of a million files is a thousand times faster than reading a list of a billion files, assuming linear increase, we’re still talking about decades, and plentiful RAM. Fun fact: after leaving it run for 5 times as much as it had run for the first million files, it hasn’t even done half more…

One could generate the necessary hundreds-of-gigabytes index manually, that wouldn’t be too hard, and assuming it could be done at about 1 GB/s on a good machine with a good SSD, we’d be able to craft a close-to-explosion index within a few minutes. But we’d still lack the RAM to load it.

So, here is the open question: should I report that integer overflow?

Wow, that was some serious procrastination.

Categories: Elsewhere

Drop Guard: Why we don’t (continuously) update our Drupal websites

Planet Drupal - Fri, 08/07/2016 - 13:00

 

Two weeks ago we decided to run a little survey asking Drupal folks one simple but provocative question: “Why I don’t update my website continuously”. I decided to present the results to you - and I can tell you that some serious voices came out!

First, I want to speak highly of at least 38 of the 78 participants, who actually update their websites continuously and seem to know exactly what would happen if they didn't.

Drupal, Drupal Planet, Security, Survey
Categories: Elsewhere

Leopathu: 6 Essential Drupal Interview Questions*

Planet Drupal - Fri, 08/07/2016 - 12:06
1. Name and describe the five conceptual layers in a Drupal system.
Categories: Elsewhere

Michal Čihař: wlc 0.4

Planet Debian - Fri, 08/07/2016 - 12:00

wlc 0.4, a command line utility for Weblate, has just been released. This release doesn't bring many changes, but it is still worth announcing.

The most important change is that the development repository has been moved under the WeblateOrg organization on GitHub; you can now find it at https://github.com/WeblateOrg/wlc. Another important piece of news is that the Debian package is currently waiting in the NEW queue and will hopefully soon hit unstable.

wlc is built on the API introduced in Weblate 2.6, which is still in development. Several commands from wlc will not work properly if executed against Weblate 2.6; the first fully supported version will be 2.7 (current git is okay as well, and it is now running on both the demo and hosting servers). You can find usage examples in the wlc documentation.

Filed under: Debian, English, SUSE, Weblate | 0 comments

Categories: Elsewhere

Russell Coker: Nexus 6P and Galaxy S5 Mini

Planet Debian - Fri, 08/07/2016 - 07:47

Just over a month ago I ordered a new Nexus 6P [1]. I’ve had it for over a month now and it’s time to review it and the Samsung Galaxy S5 Mini I also bought.

Security

The first noteworthy thing about this phone is the fingerprint scanner on the back. The recommended configuration is to use your fingerprint for unlocking the phone which allows a single touch on the scanner to unlock the screen without the need to press any other buttons. To unlock with a pattern or password you need to first press the “power” button to get the phone’s attention.

I have been considering registering a fingerprint from my non-dominant hand to reduce the incidence of accidentally unlocking it when carrying it or fiddling with it.

The phone won’t complete the boot process before being unlocked. This is a good security feature.

Android version 6 doesn't assign permissions to apps at install time; they have to be enabled at run time (at least for apps that support Android 6). So you get lots of questions while running apps about what they are permitted to do. Unfortunately there's no "allow for the duration of this session" option.

A new Android feature prevents changing security settings when there is an “overlay running”. The phone instructs you to disable overlay access for the app in question but that’s not necessary. All that is necessary is for the app to stop using the overlay feature. I use the Twilight app [2] to dim the screen and use redder colors at night. When I want to change settings at night I just have to pause that app and there’s no need to remove the access from it – note that all the web pages and online documentation saying otherwise is wrong.

Another new feature is to not require unlocking while at home. This can be a convenience feature but fingerprint unlocking is so easy that it doesn’t provide much benefit. The downside of enabling this is that if someone stole your phone they could visit your home to get it unlocked. Also police who didn’t have a warrant permitting search of a phone could do so anyway without needing to compel the owner to give up the password.

Design

This is one of the 2 most attractive phones I’ve owned (the other being the sparkly Nexus 4). I think that the general impression of the appearance is positive as there are transparent cases on sale. My phone is white and reminds me of EVE from the movie Wall-E.

Cables

This phone uses the USB Type-C connector, which isn’t news to anyone. What I didn’t realise is that full USB-C requires that connector at both ends, as it’s not permitted to have a data cable with USB-C at the device end and USB-A at the host end. The Nexus 6P ships with a 1M long charging cable that has USB-C at both ends and a ~10cm charging cable with USB-C at one end and type A at the other (for the old batteries and the PCs that don’t have USB-C). I bought some 2M long USB-C to USB-A cables for charging my new phone with my old chargers, but I haven’t yet got a 1M long cable. Sometimes I need a cable that’s longer than 10cm but shorter than 2M.

The USB-C cables are all significantly thicker than older USB cables. Part of that would be due to having many more wires but presumably part of it would be due to having thicker power wires for delivering 3A. I haven’t measured power draw but it does seem to charge faster than older phones.

Overall the process of converting to USB-C is going to be a lot more inconvenient than USB SuperSpeed (which I could basically ignore as non-SuperSpeed connectors worked).

It will be good when laptops with USB-C support become common, it should allow thinner laptops with more ports.

One problem I initially had with my Samsung Galaxy Note 3 was the Micro-USB SuperSpeed socket on the phone being more fiddly for the Micro-USB charging plug I used. After a while I got used to that but it was still an annoyance. Having a symmetrical plug that can go into the phone either way is a significant convenience.

Calendars and Contacts

I share most phone contacts with my wife and also have another list that is separate. In the past I had used the Samsung contacts system for the contacts that were specific to my phone and a Google account for contacts that are shared between our phones. Now that I’m using a non-Samsung phone I got another Gmail account for the purpose of storing contacts. Fortunately you can get as many Gmail accounts as you want. But it would be nice if Google supported multiple contact lists and multiple calendars on a single account.

Samsung Galaxy S5 Mini

Shortly after buying the Nexus 6P I decided that I spend enough time in pools and hot tubs that having a waterproof phone would be a good idea. Probably most people wouldn’t consider reading email in a hot tub on a cruise ship to be an ideal holiday, but it works for me. The Galaxy S5 Mini seems to be the cheapest new phone that’s waterproof. It is small and has a relatively low resolution screen, but it’s more than adequate for a device that I’ll use for an average of a few hours a week. I don’t plan to get a SIM for it, I’ll just use Wifi from my main phone.

One noteworthy thing is the amount of bloatware on the Samsung. Usually when configuring a new phone I’m so excited about fancy new hardware that I don’t notice it much. But this time buying the new phone wasn’t particularly exciting as I had just bought a phone that’s much better. So I had more time to notice all the annoyances of having to download updates to Samsung apps that I’ll never use. The Samsung device manager facility has been useful for me in the past and the Samsung contact list was useful for keeping a second address book until I got a Nexus phone. But most of the Samsung apps and 3rd-party apps aren’t useful at all.

It’s bad enough having to install all the Google core apps. I’ve never read mail from my Gmail account on my phone. I use Fetchmail to transfer it to an IMAP folder on my personal mail server and I’d rather not have the Gmail app on my Android devices. Having any apps other than the bare minimum seems like a bad idea, more apps in the Android image means larger downloads for an over-the-air update and also more space used in the main partition for updates to apps that you don’t use.

Not So Exciting

In recent times there hasn’t been much potential for new features in phones. All phones have enough RAM and screen space for all common apps. While the S5 Mini has a small screen it’s not that small, I spent many years with desktop PCs that had a similar resolution. So while the S5 Mini was released a couple of years ago that doesn’t matter much for most common use. I wouldn’t want it for my main phone but for a secondary phone it’s quite good.

The Nexus 6P is a very nice phone, but apart from USB-C, the fingerprint reader, and the lack of a stylus there’s not much noticeable difference between that and the Samsung Galaxy Note 3 I was using before.

I’m generally happy with my Nexus 6P, but I think that anyone who chooses to buy a cheaper phone probably isn’t going to be missing a lot.

Related posts:

  1. Samsung Galaxy Note 3 In June last year I bought a Samsung Galaxy Note...
  2. Nexus 4 My wife has had a LG Nexus 4 for about...
  3. CyanogenMod and the Galaxy S Thanks to some advice from Philipp Kern I have now...
Categories: Elsewhere

LevelTen Interactive: How to Market a Speaking Tour: Behind the Scenes of the R.O.W. Roadshow

Planet Drupal - Fri, 08/07/2016 - 07:00

Hey, let’s put on a show!

My colleague Felipa and I felt like we were in a Judy Garland-Mickey Rooney movie. “Let’s put on a show!” said Tom, the CEO here at LevelTen, and we all jumped in to clear out the barn and save the orphanage.

Categories: Elsewhere

Yuriy Gerasimov: Automated deployments to Acquia. Cloud API

Planet Drupal - Fri, 08/07/2016 - 06:45

When you set up your Continuous Integration, you really would like to run your deployments automatically. If you use Acquia hosting for your website, it makes a lot of sense to use all of its environments in your workflow. But how can you automate deployments to these environments (copying the database and files, deploying code) without touching the UI?

The answer is in Cloud API


You can call it either with drush commands or curl requests. We will cover the drush approach in this post.

I personally was heavily involved in a workflow called CIBox that uses a GitHub repo separate from the Acquia one.

I used the 'master' branch to deploy to the DEV environment. But both the STAGING and PRODUCTION environments are tag-based.

DEV environment deployment


The first step of deployment for me was to sync the repositories.

cd /var/git/acquia
git pull github master
git push origin master
# Sleep for 30 seconds. We expect Acquia to update the code.
sleep 30

Little note: all these commands are run on the CI server for me, so you will find plenty of ssh and scp to the Acquia servers later.

The repository /var/git/acquia is a clone of the hosting repo, but with a remote set up pointing to our own private repo. If you use the hosting repo as primary, you probably won't need this step.

In Acquia UI I have set up DEV environment to follow master branch. So code gets deployed automatically.

In my CI setup, I keep a copy of the project's database on the CI server to use in all builds, so I deploy this db to the DEV environment as the next step. The workflow diagram looks like this.

Code snippet

# Copy db to DEV server
scp /var/www/backup/DBNAME.sql.gz PROJECT.dev@staging-XXXX.prod.hosting.acquia.com:/home/PROJECT/proddump.sql.gz
# Deploy db on DEV server
ssh -t PROJECT.dev@staging-XXXX.prod.hosting.acquia.com 'rm -rf /home/PROJECT/DBNAME.sql && gunzip /home/PROJECT/proddump.sql.gz && cd /var/www/html/PROJECT.dev/docroot && drush sql-drop -y && `drush sql-connect` < /home/PROJECT/proddump.sql'

Remember, in order to run this operation you need to set up your ssh keys so the jenkins user (I use Jenkins as my CI tool) can reach the Acquia servers without being prompted for a password.

And the last step is to run all the updates, registry rebuilds, and whatever else your project requires.

# Run registry rebuild and clear caches
ssh -t PROJECT.dev@staging-XXXX.prod.hosting.acquia.com 'cd /var/www/html/PROJECT.dev/docroot && drush php-eval "registry_rebuild();" && drush cc all -y'
# Run hook updates
ssh -t PROJECT.dev@staging-XXXX.prod.hosting.acquia.com 'cd /var/www/html/PROJECT.dev/docroot && drush -y updb'

These commands can actually be set up with drush aliases, but I used the terminal approach since I was already using it for deploying the database. Just for consistency.
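For reference, a drush alias for the DEV environment might look roughly like this; every value below (file name, hostname, uri, paths) is a made-up placeholder in the same spirit as PROJECT and XXXX above:

// PROJECT.aliases.drushrc.php - all values are hypothetical placeholders.
$aliases['dev'] = array(
  'root' => '/var/www/html/PROJECT.dev/docroot',
  'uri' => 'PROJECTdev.prod.acquia-sites.com',
  'remote-host' => 'staging-XXXX.prod.hosting.acquia.com',
  'remote-user' => 'PROJECT.dev',
);

With an alias like that in place, commands such as drush @PROJECT.dev cc all -y can replace the explicit ssh/cd pairs above.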

Another part I skip here is copying files over. We don't do that. Instead we enable stage_file_proxy on the DEV and STAGE environments and point them to PROD, so files get copied over on request. This saves plenty of space.

STAGE environment deployment


As the staging environment uses tags, we need to change the code deployment part.

In order to use the Cloud API you need to set up a special private key in the Acquia UI. Please review https://docs.acquia.com/cloud/api/auth for more details.

After setting up the key, ssh to the DEV server, run 'drush ac-api-login', and provide your email and key. This sets up your credentials so you can run Cloud API drush commands.

And now, we can deploy the code.

ssh -t PROJECT.dev@staging-XXXX.prod.hosting.acquia.com 'cd /var/www/html/PROJECT.dev/docroot && drush @PROJECT.dev ac-code-deploy test --verbose'
# Sleep for 30 seconds. We expect Acquia to update the code.
sleep 30

This will deploy the code from the DEV environment to STAGE and add the tag automatically. Basically it mimics the operation of dragging the code bar in the Acquia UI.

All other steps (database deployment and cache clear) are the same as with DEV environment.

PROD environment deployments


Production deployment is the same as STAGE, with the only difference being that we ssh to the STAGE server to deploy the code. And of course we do not need to deploy the database anywhere.

I am sure in your projects you might need to add some more steps. Maybe reindexing solr, or clearing varnish caches. All these can be done with drush commands.

How do you do deployments? Please share your experience in comments.

Tags: drupal 7, drupal planet
Categories: Elsewhere
