Elsewhere

Craig Small: procps 3.3.12

Planet Debian - Sun, 10/07/2016 - 07:58

The procps developers are happy to announce that version 3.3.12 of procps was released today. This version has a mixture of bug fixes and enhancements. This unfortunately means another API bump but we are hoping this will be fixed with the new library API coming soon.

procps is developed on gitlab and the new version of procps can be found at https://gitlab.com/procps-ng/procps/tree/newlib

procps 3.3.12 can be found at https://gitlab.com/procps-ng/procps/tags/v3.3.12

From the NEWS file, procps 3.3.12 has the following:

  • build: formerly optional --enable-oomem unconditional
  • free: man document rewritten for shared Debian #755233
  • free: interpret intervals in non-locale way Debian #692113
  • kill: report error if cannot kill process Debian #733172
  • library: refine calculation of ‘cached’ memory
  • library: find tty quicker Debian #770215
  • library: eliminate threads display inconsistencies Redhat #1284091
  • pidof: check cmd if space found in argv0
  • pmap: fixed detail parsing on long mapping lines
  • pmap: fix occasional incorrect memory usage values Redhat #1262864
  • ps: sort by cgroup Debian #692279
  • ps: display control group name with -o cgname
  • ps: fallback to attr/current for context Debian #786956
  • ps: enabled broken ‘thcount’ option Redhat #1174313
  • tests: conditionally add prctl Debian #816237
  • top: displays the 3 new linux-4.5 RES memory fields
  • top: man page memory fields corrected + new narrative
  • top: added display of CGNAME (control group name)
  • top: is now more responsive to cpus brought online
  • top: namespace cols use suppressible zero

We are hoping this will be the last release to use the old API, and that the new-format API (imaginatively called newlib) will be used in subsequent releases.

Feedback for this and any other version of procps can be sent to either the issue tracker or the development email list.

Categories: Elsewhere

Norbert Preining: OpenPHT 1.6.2 for Debian/sid

Planet Debian - Sun, 10/07/2016 - 06:17

I have updated the openpht repository with builds of OpenPHT 1.6.2 for Debian/sid for both the amd64 and i386 architectures. For those who have forgotten, OpenPHT is the open-source fork of Plex Home Theater that is used on RasPlex; see my last post concerning OpenPHT for details.

The repository also contains packages (source and amd64/i386) for shairplay which is necessary for building and running OpenPHT.

sid and testing

For sid use the following lines:

deb http://www.preining.info/debian/ openpht-sid main
deb-src http://www.preining.info/debian/ openpht-sid main

You can also grab the binaries directly here for amd64 and i386; you can get the source package with

dget http://www.preining.info/debian/pool/main/o/openpht/openpht_1.6.2.20160707-1.dsc

Note that if you only get the binary debs, you also need libshairplay0 from amd64 or i386.

The release file and changes file are signed with my official Debian key 0x860CDC13.
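
With the repository and key in place, installation should be the usual apt routine. A minimal sketch (assuming the binary package is simply named openpht, and installing libshairplay0 alongside it):

# Add the repository lines above to /etc/apt/sources.list.d/openpht.list, then:
sudo apt-get update
sudo apt-get install openpht libshairplay0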

jessie

Builds for the Debian stable release jessie are available directly from the GitHub project page of OpenPHT.

Now get ready to enjoy the next movie!

Categories: Elsewhere

Wuinfo: Joy Of Collaboration

Planet Drupal - Sun, 10/07/2016 - 02:26

We make our own lives easier by making our colleagues' lives easier. If our manager is under stress, we had better find a way to help. It is such a fulfilling feeling when we can work together and contribute as members of a productive team.

I worked on a financial system project for a company in NYC. We needed to migrate a huge amount of content from the old system to a new one (Tech: Drupal 5 to Drupal 7). The business makes money by selling data, so data accuracy was a big thing. Due to the complexity of the data, a couple of months into the project we still had not migrated the content. Some technical problems prevented us from moving forward. Everyone on the team was under pressure and working very hard. Company executives were starting to lose patience and doubt our ability to get the job done. Our boss was under a lot of stress. In a weekly meeting a few weeks after I joined the project, our manager told us that he was not sure he would still be working for the company the next day; he was afraid he might get fired by his boss. It was like a stone in my stomach. What would happen to us if he lost his job? The whole team was quickly motivated. We all liked him and did not want that to become reality. We all knew it was time to go all out for the greater good.

In the interest of the group, we did not worry about small personal losses. When migrating location nodes, we needed the Google Maps API to geocode hundreds of thousands of postal addresses, and it is not free. To save the time of asking permission to buy the service, one of our colleagues just went ahead and created an account with his personal credit card. It cost him some money but saved time for the whole team. We all worked late, and collaborated closely and more efficiently. We did not mind sacrificing for the good of the project.

My job was to assist the other backend developer in migrating the content. It was such a compelling feeling to be a part of it and to want the project to succeed. It occupied my mind. While eating, walking, taking a shower, sleeping and even dreaming, I kept thinking of ways to solve the technical problems and contemplating the best possible solutions. Many of us, myself included, got a little bit sick. But physical health, with good rest and diet, is the key to a clear and sharp mind. Even though we had a lot of stress, I believe that was one of the key elements of the project's success. I had learned some time management skills before: not wasting a minute, I allocated enough time to eat and sleep, and that helped me keep my mind fresh and calm all the time. Two weeks after the meeting, we overcame all the major technical difficulties and got the content migrated.

Everyone seemed to relax right away, and the conversation during the following meeting was a pleasure. Like soldiers who have fought shoulder to shoulder against a ferocious enemy and won, everybody on the team felt closer and more connected to each other. We helped each other, collaborated closely and made our lives easier. That is the real joy of collaboration.

Categories: Elsewhere

Clint Adams: “Progress”

Planet Debian - Sun, 10/07/2016 - 00:43

When you replace mutt-kz with mutt 1.6.1-2, you may notice a horribly ugly thing appear. Do not panic; just add unset sidebar_visible to your ~/.mutt/muttrc.

Categories: Elsewhere

Matthew Garrett: "I recieved a free or discounted product in return for an honest review"

Planet Debian - Sat, 09/07/2016 - 21:09
My experiences with Amazon reviewing have been somewhat unusual. A review of a smart switch I wrote received enough attention that the vendor pulled the product from Amazon. At the time of writing, I'm ranked as around the 2750th best reviewer on Amazon despite having a total of 18 reviews. But the world of Amazon reviews is even stranger than that, and the past couple of weeks have given me some insight into it.

Amazon's success is fairly phenomenal. It's estimated that there are over 50 million people in the US paying $100 a year to get free shipping on Amazon purchases, and combined with Amazon's surprisingly customer-friendly service, that means a lot of people with a very strong preference for choosing Amazon rather than any other retailer. If you're not on Amazon, you're hurting your sales.

And if you're an established brand, this works pretty well. Some people will search for your product directly and buy it, leaving reviews. Well reviewed products appear higher up in search results, so people searching for an item type rather than a brand will still see your product appear early in the search results, in turn driving sales. Some proportion of those customers will leave reviews, which helps keep your product high up in the results. As long as your products aren't utterly dreadful, you'll probably maintain that position.

But if you're a brand nobody's ever heard of, things are more difficult. People are unlikely to search for your product directly, so you're relying on turning up in the results for more generic terms. But if you're selling a more generic kind of item (say, a Bluetooth smart bulb) then there's probably a number of other brands nobody's ever heard of selling almost identical objects. If there's no reason for anybody to choose your product then you're probably not going to get any reviews and you're not going to move up the search rankings. Even if your product is better than the competition, a small number of sales means a tiny number of reviews. By the time that number's large enough to matter, you're probably onto a new product cycle.

In summary: if nobody's ever heard of you, you need reviews but you're probably not getting any.

The old way of doing this was to send review samples to journalists, but nobody's going to run a comprehensive review of 3000 different USB cables and even if they did almost nobody would read it before making a decision on Amazon. You need Amazon reviews, but you're not getting any. The obvious solution is to send review samples to people who will leave Amazon reviews. This is where things start getting more dubious.

Amazon run a program called Vine which is intended to solve this problem. Send samples to Amazon and they'll distribute them to a subset of trusted reviewers. These reviewers write a review as normal, and Amazon tag the review with a "Vine Voice" badge which indicates to readers that the reviewer received the product for free. But participation in Vine is apparently expensive, and so there's a proliferation of sites like Snagshout or AMZ Review Trader that use a different model. There's no requirement that you be an existing trusted reviewer and the product probably isn't free. You sign up, choose a product, receive a discount code and buy it from Amazon. You then have a couple of weeks to leave a review, and if you fail to do so you'll lose access to the service. This is completely acceptable under Amazon's rules, which state "If you receive a free or discounted product in exchange for your review, you must clearly and conspicuously disclose that fact". So far, so reasonable.

In reality it's worse than that, with several opportunities to game the system. AMZ Review Trader makes it clear to sellers that they can choose reviewers based on past reviews, giving customers an incentive to leave good reviews in order to keep receiving discounted products. Some customers take full advantage of this, leaving a giant number of 5 star reviews for products they clearly haven't tested and then (presumably) reselling them. What's surprising is that this kind of cynicism works both ways. Some sellers provide two listings for the same product, the second being significantly more expensive than the first. They then offer an attractive discount for the more expensive listing in return for a review, taking it down to approximately the same price as the original item. Once the reviews are in, they can remove the first listing and drop the price of the second to the original price point.

The end result is a bunch of reviews that are nominally honest but are tied to perverse incentives. In effect, the overall star rating tells you almost nothing - you still need to actually read the reviews to gain any insight into whether the customer actually used the product. And when you do write an honest review that the seller doesn't like, they may engage in heavy handed tactics in an attempt to make the review go away.

It's hard to avoid the conclusion that Amazon's review model is broken, but it's not obvious how to fix it. When search ranking is tied to reviews, companies have a strong incentive to do whatever it takes to obtain positive reviews. What we're left with for now is having to laboriously click through a number of products to see whether their rankings come from thoughtful and detailed reviews or are just a mass of 5 star one liners.

Categories: Elsewhere

CiviCRM Blog: Mapping it in 5-10 min - a CiviCON 2016 Lightning Talk

Planet Drupal - Sat, 09/07/2016 - 20:31

Someone asked me to post this here - so that he can give it a try!

I did a Lightning Talk at CiviCON 2016 showing how you can put your Contacts on a Leaflet map. It only takes a few minutes to put your CiviCRM Contacts on a Leaflet map if you're using Drupal. Leaflet is an open-source JavaScript library for interactive maps. What's also really cool is that you can color the pin based on the value of a CiviCRM custom field!

I've posted the details in a Q&A format, including some of my slides from my CiviCON Lightning Talk, on CiviCRM's StackExchange site:

http://civicrm.stackexchange.com/questions/12862/how-to-put-your-civicrm...

Give it a try!

CiviCon, CiviCRM, Drupal, Tips
Categories: Elsewhere

Enrico Zini: Monthly link collections with staticsite

Planet Debian - Sat, 09/07/2016 - 19:23

A year ago, I wrote:

Instead of keeping substantial tabs open until I have read all of them, or losing them in the jungle of browser bookmarks, I have written a script that collects them into a file per month, and turns them into markdown files for my blog.

That script turned out to be quirky and overengineered, so much so that I stopped using it myself.

I've now rethought my approach, and downscaled it: instead of saving a copy of each page locally, I can blog a reference to https://archive.org or https://archive.is. I do not need to autogenerate a description from the site itself.

The result has been a nicely minimal set of changes to staticsite that resulted in a new version where adding a link to a monthly collection is as easy as typing ssite new -a links.

As long as I remember to rebuild the site 3 weeks from now, a new post should automagically appear on my blog.

Categories: Elsewhere

Charles Plessy: Congratulations, Marga!

Planet Debian - Sat, 09/07/2016 - 14:43

For the first time in our history, a woman joins the Technical Committee. Congratulations, Marga, and thanks for volunteering.

Categories: Elsewhere

Freelock: Git Branch Strategy meets Continuous Deployment

Planet Drupal - Sat, 09/07/2016 - 01:14

Our branch strategy based on Git Flow did not survive. It was getting a bit long in the tooth, but the final blow was automation.

At Freelock, we've been hard at work building out automation so we can handle the maintenance on hundreds of websites with better test coverage and more confidence than ever before. Exciting news! It's all coming together, and we have it working across the board on ALL of our projects, now.

Drupal, DevOps, Continuous Integration, Continuous Deployment, Quality Assurance, Drupal Planet, Bot, git flow, git
Categories: Elsewhere

Fuse Interactive: Why you and your clients should be excited to build your next project with Drupal 8

Planet Drupal - Fri, 08/07/2016 - 23:39

Over 8 months after release, and with my first D8 site under my belt, I can now say I am excited for the future of working with Drupal's freshest release. That being said, at this stage in the game the decision to go with D8 should be approached with caution. It does what it does well, but many of those shiny contrib modules you're used to using just aren't there yet. Unless your team and client are willing to spend the time and money needed to develop or port the missing functionality, it might not be a fit for that particular project.

Categories: Elsewhere

ImageX Media: Higher Education Notes and Trends for the Week of July 4, 2016

Planet Drupal - Fri, 08/07/2016 - 21:22

The landscape of higher education continues to shift toward changing student demographics, evolving learning approaches and what seems like a perpetual shortfall of funding for post-secondary institutions. These trends mirror those of our clients' website aspirations, which now more than ever focus on engagement with key audiences such as prospective students and alumni, due to greater competition in the marketplace and fewer dollars to spend.

Categories: Elsewhere

Chromatic: Digging In To Drupal 8: Code Snippets for Site Builders

Planet Drupal - Fri, 08/07/2016 - 19:36

The more I work with Drupal 8, the more I realize how much has changed for developers in the Drupal community. While the transition to a modern, object-oriented system is what's best for the longevity of the platform, it certainly doesn't come without challenges. As someone who doesn't come from an OOP background, I've found the transition difficult at times. In many cases, I know exactly what I want to do, just not how to do it the "Drupal 8 way". On top of this, tutorials and blog posts on D8 are all over the map in terms of accuracy. Many posts written during D8's development cycle are no longer applicable because of API changes, etc.

Below is a list of snippets that might be helpful to site builders or developers more familiar with D7 hooks and procedural code. It might also be useful to OOP folks who are new to Drupal in general. My goal is to add to and update these snippets over time.

Routes & Links Determine the Current Drupal Route

Need to know what the current Drupal route is or need to run some logic against the current route? You can get the current route like so:

$route = \Drupal::routeMatch()->getRouteName();

To some, the \Drupal::routeMatch() syntax might look foreign (it did to me). Here's a rundown of what's happening here:

First, \Drupal. This is calling the global Drupal class, which, in Drupal 8, is a bridge between procedural and OO methods of writing Drupal code. The following comes from the documentation:

This class acts as a unified global accessor to arbitrary services within the system in order to ease the transition from procedural code to injected OO code.

Right. Moving on to ::routeMatch(). Here we're using the routeMatch() method which "Retrieves the currently active route match object." Simple enough. But what is "::" all about? This StackOverflow answer helped me to understand what that's all about.

From there, the getRouteName() method returns the current route name as a string. Here are some example routes: entity.node.canonical, view.frontpage and node.type_add.

Is this the Front Page Route?

Need to check if the current route is the front page route? There's a service and method for that:

// Is the current route/path the front page?
if ($is_front = \Drupal::service('path.matcher')->isFrontPage()) {}

Here we're calling the path.matcher service (defined in /core/core.services.yml) and using the isFrontPage() method. For more on services, check out the "Services and Dependency Injection Container" documentation on api.drupal.org which helped me understand how all of these bits work together and the why of their structure.

Get the Requested Path

Need to know what the current page's requested path was, as opposed to the route? You can do this:

$current_uri = \Drupal::request()->getRequestUri();

Redirect to a Specific Route

Need to redirect to a specific page? In Drupal 7, you would likely handle this with drupal_goto() in your page callback function. In Drupal 8, you can use RedirectResponse() for that. Here is the relevant changelog.

Here are some examples, borrowed heavily from said changelog. First, in procedural PHP:

use Symfony\Component\HttpFoundation\RedirectResponse;

function my_redirect() {
  return new RedirectResponse(\Drupal::url('user.page'));
}

Here is how you would use a Drupal 8 controller to accomplish the same thing:

use Drupal\Core\Controller\ControllerBase;

class MyControllerClass extends ControllerBase {
  public function foo() {
    //...
    return $this->redirect('user.page');
  }
}

Links on the Fly

Drupal 7 and prior relied heavily on the l() function. (In fact, I would wager this was my most-used function over the years.) In Drupal 8, if you need to create links on the fly, utilize the Link class:

$link = \Drupal\Core\Link::fromTextAndUrl($text, $url);

Working with Entities

Query Database for Entities

If you need to query the database for some nodes (or any other entity) you should use the entityQuery service. The syntax should be pretty familiar to most D7 developers who have used EntityFieldQuery:

// Query for some entities with the entity query service.
$query = \Drupal::entityQuery('node')
  ->condition('status', 1)
  ->condition('type', 'article')
  ->range(0, 10)
  ->sort('created', 'DESC');
$nids = $query->execute();

Loading Entities

If you need to load the actual entities, you can do so a number of ways:

While the following will technically work in Drupal 8:

$nodes = entity_load_multiple('node', $nids);

This method has been deprecated in Drupal 8 and will be removed before Drupal 9, in favor of methods overriding Entity::loadMultiple(). To future-proof your code, you would do something like the following:

$nodes = \Drupal::entityTypeManager()->getStorage('node')->loadMultiple($nids);

Here's how you would do similar for a single node:

$node = \Drupal::entityTypeManager()->getStorage('node')->load($nid);

Here are a few other entity snippets that might be useful:

// Link to an entity using the entity's link method.
$author_link = $user->toLink();

// Do the same thing, but customize the link text.
$author_link = $user->toLink('Some Custom Text');

// Given a node object, here's how to determine its type:
$type = $node->getType();

// To get the full user entity of the node's author:
$author = $node->getOwner();

// To get the raw ID of the author of a node:
$author_id = $node->getOwnerId();

Image Styles

Need to whip up an image using a particular image style on the fly? This will work for that:

// Create an instance of an image using a specific image style, given a path to a file.
$style = \Drupal\image\Entity\ImageStyle::load('yourStyle_image');
$img_path = $user->field_profile_some_image->entity->getFileUri();
$img_style_url = $style->buildUrl($img_path);

That's it for now. I intend to keep this post updated as we learn more and more about the new world of Drupal 8. If you have a snippet worth sharing, drop us a line via Twitter and we’ll add it to this post (with credit of course).

Categories: Elsewhere

Chromatic: Be Promiscuous with Drush's core-quick-drupal

Planet Drupal - Fri, 08/07/2016 - 19:36
Aren't you a cutie?

Here at Chromatic HQ, the team is encouraged to give back to the open-source community. (And on company time!) One way to do this is by reviewing and contributing Drupal patches. For me, this can be both rewarding and frustrating. When things go well, I feel good about contributing and I might even get a commit credit! But there are times when patches don't apply, I have no clue what's wrong and I need to start fresh. First, I curse mightily at the time wasted, then I create a new db, and then re-install a fresh copy of Drupal, and then configure it etc. etc. Using drush site-install makes this process relatively easy, but what if it could be easier? (Hint: It is!)

Hooray for promiscuity!

I recently had a fling with Drush's core-quick-drupal command. I had known about it for years, but I hadn't realized what it could really do for me. This has now changed, and together we're having an open affair!

For the uninitiated, drush core-quick-drupal takes advantage of PHP's built-in web server (PHP >= 5.4) and uses a sqlite database to get a fresh, stand-alone copy of Drupal up and running, all in about a minute. It has two aliases: drush qd and, my personal preference, drush cutie.

Out-of-the-box overview
  • In about a minute it installs a full instance of Drupal.
  • Runs a web server at http://127.0.0.1:8888 (no apache config).
  • Uses a self-contained sqlite file as the db (no mysql db to create and configure).

It's so much fun, you may want to follow along. From the command line, just cd to a folder of your choosing and run drush cutie --yes. (You'll need to have drush installed.)

Behind the scenes, a folder is created called quick-drupal with a timestamp appended to the end. (One of my older cutie folders is quick-drupal-20160214193640... a timestamp from a Valentine's evening with Drush that my wife won't soon forget!) Inside the new quick-drupal folder are subfolders with the latest D8 files and the sqlite db file. (There are lots of options to customize the Drupal version and environment, but the default nowadays is Drupal 8.)

Running it looks something like this:

drush cutie --yes
Project drupal (8.0.3) downloaded to ...
Installation complete.
 User name: admin
 User password: EawsYkGg4Y
Congratulations, you installed Drupal!
Listening on http://127.0.0.1:8888

(The output above has been edited to highlight the tastier bits!)

And with that I have the latest version of D8 running at http://127.0.0.1:8888. As you can see from the shell output above, the superuser is admin with a password of EawsYkGg4Y.

Okay, okay, very cool, but what can I do with it? Here's a breakdown:
  1. Review patches with minimal fuss, thereby giving back to the Drupal community.
  2. Investigate new modules without sullying your main dev environment.
  3. Test that new Feature you created to see if it really works.
  4. NOT RECOMMENDED! When that friend asks you how long it will take to build him a website, respond with "about a minute" and fire it up.
You thought I was done?

Let's run through the steps to review a patch. This is where drush core-quick-drupal really shines because it's best to have a clean install of Drupal to work with; this minimizes the number of externalities that can interfere with testing. Having a single-command, throwaway copy of vanilla Drupal is the way to go.

You could call this a blog version of a live demo; I have chosen a patch out in the wild to review. I found this one for the core taxonomy module, that had a status of "Needs Review" on D.O.

The patch file itself is here: https://www.drupal.org/files/issues/taxonomy-term-twig-cs.patch

Here are the steps I took on the command line:

# Install a temporary copy of D8 into a folder I named "test2644718"
drush cutie test2644718 --yes

With the above command I got my environment running. The patch itself simply fixes the formatting in taxonomy-term.html.twig, which is a default template file for taxonomy terms, provided by the core taxonomy module.

I first tested to see the original template in action. Satisfied with the way it was working, I took steps to apply the patch.

# Move into the root folder of the new site
cd test2644718/drupal/
# Use wget to grab the patch from D.O.
wget https://www.drupal.org/files/issues/taxonomy-term-twig-cs.patch
# Apply the patch
patch -p1 < taxonomy-term-twig-cs.patch
patching file core/modules/taxonomy/templates/taxonomy-term.html.twig

The patch was applied successfully and a minor change in taxonomy-term.html.twig was made. I quickly tested to ensure nothing had blown up and was satisfied that the patch works as expected.

Back in D.O., I added my two cents and marked the issue as Reviewed & tested by the community. And that's that.

Update

Though the patch originally sat awaiting review for 2 months, I'm happy to claim that my review got things moving again! After I posted RTBC, a flurry of activity took place with the scope increasing and new patches being created. I reviewed those too! A day later the patches were committed to 8.1.x. Nice.

Categories: Elsewhere

Joey Hess: twenty years of free software -- part 11 concurrent-output

Planet Debian - Fri, 08/07/2016 - 19:06

concurrent-output is a more meaty Haskell library than the ones I've covered so far. Its interface is simple, but there's a lot of complexity under the hood. Things like optimised console updates, ANSI escape sequence parsing, and transparent paging of buffers to disk.

It developed out of needing to display multiple progress bars on the console in git-annex, and also turned out to be useful in propellor. And since it solves a general problem, other haskell programs are moving toward using it, like shake and stack.
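
A minimal sketch of what using it looks like (hedged: this is a toy example of my own, not code from git-annex or propellor, and it assumes the System.Console.Concurrent module's withConcurrentOutput and outputConcurrent plus the async package):

import Control.Concurrent (threadDelay)
import Control.Concurrent.Async (concurrently)
import Control.Monad (forM_, void)
import System.Console.Concurrent (outputConcurrent, withConcurrentOutput)

-- Two threads writing to the terminal at once, without their output
-- getting garbled or interleaved mid-line.
main :: IO ()
main = withConcurrentOutput $
    void $ concurrently (worker "build") (worker "upload")
  where
    worker :: String -> IO ()
    worker name = forM_ [1 .. 3 :: Int] $ \i -> do
        outputConcurrent (name ++ ": step " ++ show i ++ "\n")
        threadDelay 100000  -- 0.1 s, just so the interleaving is visible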

Next: twenty years of free software -- part 12 propellor

Categories: Elsewhere

Mediacurrent: Friday 5: 5 Quick Ways to Check Your Site's Health

Planet Drupal - Fri, 08/07/2016 - 18:39

TGIF and welcome back to another exciting episode of The Mediacurrent Friday 5!

Categories: Elsewhere

Jeff Geerling's Blog: Getting Emoji and multibyte characters on your Drupal 7 site with 7.50

Planet Drupal - Fri, 08/07/2016 - 17:43

Almost exactly a year ago, I wrote a blog post titled Solving the Emoji/character encoding problem in Drupal 7.

Since writing that post, Drupal 7 bugfixes and improvements have started to pick up steam, as (a) many members of the community who were focused on launching Drupal 8 had time to take a breather and fix up some long-standing Drupal 7 bugs and improvements that hadn't yet been backported, and (b) there are now two new D7 core maintainers. One of the patches I've been applying to many sites, and hoping would get pulled into core for a long time, adds support for full UTF-8, which allows the entry of emoji, Asian symbols, and mathematical symbols that previously broke Drupal 7 sites running on MySQL.

My old blog post had a few steps that you could follow to make your Drupal 7 site 'mostly' support UTF-8, but there were some rough edges. Now that support is in core, the process for converting your existing site's database is more straightforward:
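
For reference, the settings.php side of such a conversion looks roughly like this. A hedged sketch: the utf8mb4 charset and collation keys are the ones Drupal 7.50 understands for MySQL connections, while the connection details here are placeholders:

$databases['default']['default'] = array(
  'driver' => 'mysql',
  'database' => 'example_db',
  'username' => 'example_user',
  'password' => 'example_password',
  'host' => 'localhost',
  // New in Drupal 7.50: full multibyte UTF-8 (emoji, etc.) on MySQL.
  'charset' => 'utf8mb4',
  'collation' => 'utf8mb4_general_ci',
);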

Categories: Elsewhere

Reproducible builds folks: Managing container and environment state

Planet Debian - Fri, 08/07/2016 - 15:58

Author: ceridwen

With some more help from Martin Pitt, it became clear to me that my previous mental model of how autopkgtest worked is very different from how it does work. I'll illustrate by borrowing my previous example. I know that schroot has the following behavior:

The default behaviour is as follows (all directory paths are inside the chroot). A login shell is run in the current working directory. If this is not available, it will try $HOME (when --preserve-environment is used), then the user's home directory, and / inside the chroot in turn. A command is always run in the current working directory inside the chroot. If none of the directories are available, schroot will exit with an error status.

I was naively thinking that the way autopkgtest would work is that it would set the current working directory of the schroot call and the ensuing subprocess call would thus take place in that directory inside the schroot. That is not how it works. If you want to change directories inside the virtual server, you have to use cd. The same is true of, at least, environment variables, which have their own specific handling in the adt_testbed.Testbed methods but have to be passed as strings, and umask. I'm assuming this is because the direct methods with qemu images or LXC containers don't work.

What this means is that I was thinking about the problem the wrong way: what reprotest needs to do is generate shell scripts. This is how autopkgtest works. If this goes beyond laying out commands linearly one after another, for instance if it demands conditionals or other nested constructs, the right way to do it is to build an abstract syntax tree representation of the shell script and then convert it to code.

Whether I need more complicated shell scripts depends on my approach to handling state in the containers. I need to know what state persists across separate command executions: if I call adt_testbed.Testbed.execute() twice, what, if any, changes I make to the container will carry forward from the first to the second? There are three categories here. First, some properties of a system aren't preserved even from one command execution to the next, like working directory and environment variables. (I thought working directory would be preserved, but it's not). The second is state that persists while the testbed is open and is then automatically reverted when it's closed, like files copied into temporary directories on the testbed. The third is state that persists across different sessions on the same container and must be cleaned up by reprotest. It's worth noting that which state falls into which category may vary by the container in question, though for the most part I can either do unnecessary cleanup or issue unnecessary commands to handle the differences. autopkgtest itself has a very different approach to cleanup, as it relies almost entirely on the builtin reversion capabilities from some of its containers. I would prefer to avoid doing the same, partly because I know that some of the modifications I need to make, for instance creating new users or mounting disorderfs, can't be reverted by the faster, simpler containers like schroot.

From discussions with Lunar, I think that the variations that correspond to environment variables (captures_environment, home, locales, path, and timezone) fall into the first category, but because of the special handling for them they don't require sending a separate command. The shell (bash foo) accepts a script or command as an argument so it also doesn't need a separate command. Setting the working directory and umask require separate commands that have to be joined. On Linux, setarch also accepts a command/script as an argument so can be handled like the shell, but there's no unified POSIX protocol for mocking uname so other OSes will require different approaches. Users, groups, file ordering, host, and domain will require cleanup in all containers except for (maybe) qemu. If I want to handle the cleanup in the shell scripts themselves, I need conditionals so that for instance the shell script only tries to unmount disorderfs if disorderfs was successfully mounted. This approach would simplify the error handling problems I've had before, where when a build crashes cleanup code run from Python doesn't get run until after the testbed stop accepting commands.

Lunar suggested the Plumbum library to me, and I think I can use it to avoid writing my own shell AST library. It has a method that converts the Python representation of a shell script into a string that can be passed to Testbed.command(). Integrating Plumbum to generate the necessary scripts is where I'm going in the next week.

Any feedback on any of this is welcome. I'm also curious what other projects are using autopkgtest code. Holger brought to my attention piuparts, which brings the list up to four that I'm aware of, autopkgtest itself, sbuild, piuparts, and now reprotest.

Categories: Elsewhere

Zivtech: How to Grow Your Own Team

Planet Drupal - Fri, 08/07/2016 - 15:00
Lack of available talent is a common refrain of business owners. Give up on looking and complaining! Learn how to create a sustainable business.

Growing your own means hiring smart, motivated people with all the right soft skills and investing in them for the long haul. In return, they'll reward you with loyalty, teach your newer staff, and work in unison with a cohesive vision.

Where is the Talent?

It's not realistic to imagine that you live in a world where there are people you can just hire for a decent price who already have all the skills you need, who just come in, hit the ground running, and make you a bunch of money. You wouldn't have any problems with them, and you wouldn't have to do much for them other than feed them some pizza and pay them.

So when managers can't find those people, they get upset, and they say, "There's not enough talent. People are not getting educated properly. We don't have the right people and the right programs out there."

The world is full of talent! No, they haven't learned the specific skills that you need, but there are so many intelligent people out there who would thrive with a little help.


What Are You Farming?

When I started working in software development, I saw myself as someone who made websites. That was my output: I was making websites, or I was making code. Over the years I have come to see that my product is people. I'm selling their time, expertise, knowledge, and human capacity.

In web development, who cares about the code when you have the coder? It's like the egg and the chicken. You have to take care of the chicken, and not each little egg, because the chickens just keep making more.

Being a great website maker isn’t really that valuable. What is really valuable is being able to grow more people who can do the work. Then you really scale up. You're only going to do so well as a solo practitioner. If you're able to grow more and more skilled people, not only is your business doing better, but you start to realize that the task of training people is more important than building websites.


Download the full Grow Your Own white paper for free.
Categories: Elsewhere

OSTraining: How to Display PDFs on a Drupal Site

Planet Drupal - Fri, 08/07/2016 - 14:10

An OSTraining member asked us about attaching PDFs to a Drupal site.

It is possible to use the default File field and allow people to download the PDF. However, this member wanted visitors to read the PDF directly on the site.

Categories: Elsewhere

Mike Hommey: Are all integer overflows equal?

Planet Debian - Fri, 08/07/2016 - 13:15

Background: I’ve been relearning Rust (more about that in a separate post, some time later), and in doing so, I chose to implement the low-level parts of git (I’ll touch on the why in that separate post I just promised).

Disclaimer: It’s friday. This is not entirely(?) a serious post.

So, I was looking at Documentation/technical/index-format.txt, and saw:

32-bit number of index entries.

What? The index/staging area can’t handle more than ~4.3 billion files?

There I was, writing Rust code to write out the index.

try!(out.write_u32::<NetworkOrder>(self.entries.len()));

(For people familiar with the byteorder crate and wondering what NetworkOrder is, I have a use byteorder::BigEndian as NetworkOrder)

And the Rust compiler rightfully barfed:

error: mismatched types: expected `u32`, found `usize` [E0308]

And there I was, wondering: “mmmm should I just add as u32 and silently truncate or … hey what does git do?”
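
(For the record, the non-truncating option would look something like the sketch below: a hypothetical helper, not git's code or mine, that refuses to write an entry count that does not fit in a u32, reusing the byteorder WriteBytesExt extension from above.)

use std::io::{self, Write};

use byteorder::{BigEndian as NetworkOrder, WriteBytesExt};

// Hypothetical helper: error out instead of silently truncating
// usize -> u32 when writing the number of index entries.
fn write_entry_count<W: Write>(out: &mut W, n_entries: usize) -> io::Result<()> {
    if n_entries > u32::MAX as usize {
        return Err(io::Error::new(
            io::ErrorKind::InvalidInput,
            "more than 2^32 - 1 index entries",
        ));
    }
    out.write_u32::<NetworkOrder>(n_entries as u32)
}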

And it turns out, git uses an unsigned int to track the number of entries in the first place, so there is no truncation happening.

Then I thought “but what happens when cache_nr reaches the max?”

Well, it turns out there’s only one obvious place where the field is incremented.

What? Holy coffin nails, Batman! No overflow check?

Wait a second, look 3 lines above that:

ALLOC_GROW(istate->cache, istate->cache_nr + 1, istate->cache_alloc);

Yeah, obviously, if you’re incrementing cache_nr, you already have that many entries in memory. So, how big would that array be?

struct cache_entry **cache;

So it’s an array of pointers, assuming 64-bits pointers, that’s … ~34.3 GB. But, all those cache_nr entries are in memory too. How big is a cache entry?

struct cache_entry {
  struct hashmap_entry ent;
  struct stat_data ce_stat_data;
  unsigned int ce_mode;
  unsigned int ce_flags;
  unsigned int ce_namelen;
  unsigned int index;  /* for link extension */
  unsigned char sha1[20];
  char name[FLEX_ARRAY]; /* more */
};

So, 4 ints, 20 bytes, and as many bytes as necessary to hold a path. And two inline structs. How big are they?

struct hashmap_entry {
  struct hashmap_entry *next;
  unsigned int hash;
};

struct stat_data {
  struct cache_time sd_ctime;
  struct cache_time sd_mtime;
  unsigned int sd_dev;
  unsigned int sd_ino;
  unsigned int sd_uid;
  unsigned int sd_gid;
  unsigned int sd_size;
};

Woohoo, nested structs.

struct cache_time {
  uint32_t sec;
  uint32_t nsec;
};

So all in all, we’re looking at 1 + 2 + 2 + 5 + 4 32-bit integers, one 64-bit pointer, two 32-bit words of padding, and 20 bytes of sha1, for a total of 92 bytes, not counting the variable size for file paths.
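
Spelled out, that is 14 × 4 = 56 bytes of integers, plus 8 bytes of pointer, 8 bytes of padding and 20 bytes of sha1: 56 + 8 + 8 + 20 = 92.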

The average path length in mozilla-central, which only has slightly over 140 thousand of them, is 59 (including the terminal NUL character).

Let’s conservatively assume our crazy repository would have the same average, making the average cache entry 151 bytes.

But memory allocators usually allocate more than requested. In this particular case, with the default allocator on GNU/Linux, it’s 156 (weirdly enough, it’s 152 on my machine).

156 times 4.3 billion… 670 GB. Plus the 34.3 from the array of pointers: 704.3 GB. Of RAM. Not counting the memory allocator overhead of handling that. Or all the other things git might have in memory as well (which apparently involves a hashmap, too, but I won’t look at that, I promise).

I think one would have run out of memory before hitting that integer overflow.

Interestingly, looking at Documentation/technical/index-format.txt again, the on-disk format appears smaller, with 62 bytes per file instead of 92, so the corresponding index file would be smaller. (And in version 4, paths are prefix-compressed, so paths would be smaller too).

But having an index that large supposes those files are checked out. So let’s say I have an empty ext4 file system as large as possible (which I’m told is 2^60 bytes (1.15 billion gigabytes)). Creating a small empty ext4 tells me at least 10 inodes are allocated by default. I seem to remember there’s at least one reserved for the journal, and there’s lost+found; there apparently are more. Obviously, on that very large file system, we’d have a git repository. git init with an empty template creates 9 files and directories, so that’s 19 more inodes taken. But git init doesn’t create an index, and doesn’t have any objects. We’d thus have at least one file for our hundreds-of-gigabytes index, and at least 2 who-knows-how-big files for the objects (a pack and its index). How many inodes does that leave us with?

The Linux kernel source tells us the number of inodes in an ext4 file system is stored in a 32-bits integer.

So all in all, if we had an empty very large file system, we’d only be able to store, at best, 2^32 – 22 files… And we wouldn’t even be able to get cache_nr to overflow.

… while following the rules. Because the index can keep files that have been removed, it is actually possible to fill the index without filling the file system. After hours (days? months? years? decades?*) of running

seq 0 4294967296 | while read i; do touch $i; git update-index --add $i; rm $i; done

One should be able to reach the integer overflow. But that’d still require hundreds of gigabytes of disk space and even more RAM.

* At the rate it was possible to add files to the index when I tried (yeah, I tried), for a few minutes, and assuming a constant rate, the estimate is close to 2 years. But the time spent reading and writing the index increases linearly with its size, so the longer it’d run, the longer it’d take.

Ok, it’s actually much faster to do it hundreds of thousand files at a time, with something like:

seq 0 100000 4294967296 | while read i; do j=$(seq $i $(($i + 99999))); touch $j; git update-index --add $j; rm $j; done

At the rate the first million files were added, still assuming a constant rate, it would take about a month on my machine. Considering reading/writing a list of a million files is a thousand times faster than reading a list of a billion files, assuming linear increase, we’re still talking about decades, and plentiful RAM. Fun fact: after leaving it run for 5 times as much as it had run for the first million files, it hasn’t even done half more…

One could generate the necessary hundreds-of-gigabytes index manually, that wouldn’t be too hard, and assuming it could be done at about 1 GB/s on a good machine with a good SSD, we’d be able to craft a close-to-explosion index within a few minutes. But we’d still lack the RAM to load it.

So, here is the open question: should I report that integer overflow?

Wow, that was some serious procrastination.

Categories: Elsewhere
