Feed aggregator

Paul Tagliamonte: go-wmata - golang bindings to the DC metro system

Planet Debian - Mon, 22/08/2016 - 04:16

A few weeks ago, I hacked up go-wmata, some golang bindings to the WMATA API. This is super handy if you are in the DC area and want to interface with the WMATA data.

As a proof of concept, I wrote a yo bot called @WMATA, where it returns the closest station if you Yo it your location. For hilarity, feel free to Yo it from outside DC.

For added fun, and puns, I wrote a dbus proxy for the API as well, at wmata-dbus, so you can query the next train over dbus. One thought was to make a GNOME Shell extension to tell me when the next train is. I’d love help with this (or pointers on how to learn how to do this right).
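
Querying such a proxy from Python with dbus-python might look roughly like the sketch below. Every name in it (bus name, object path, interface, method) is a hypothetical placeholder; the real ones are defined in the wmata-dbus source.

import dbus

# NOTE: 'com.example.WMATA', the object path, the interface and the
# NextTrain method are all made-up placeholders, not the actual names
# exported by wmata-dbus.
bus = dbus.SessionBus()
proxy = bus.get_object('com.example.WMATA', '/com/example/WMATA')
wmata = dbus.Interface(proxy, dbus_interface='com.example.WMATA')
print(wmata.NextTrain('Fort Totten'))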

Categories: Elsewhere

PreviousNext: Drupal 8 FTW: Is it a test or is it a form? Actually, it's both

Planet Drupal - Mon, 22/08/2016 - 03:25

As you'd be aware by now, Drupal 8 features lots of refactoring from procedural code to object-oriented code.

One such refactoring was the way forms are built, validated and executed.

One cool side-effect of this is that you can now build and test a form with a single class.

Yep that's right, the form and the test are one and the same - read on to find out more.

Categories: Elsewhere

Tomasz Buchert: A new version of pristine-tar

Planet Debian - Mon, 22/08/2016 - 00:42

Some of you may have noticed that a new version of pristine-tar was released in Debian unstable. Well, actually, 3 versions were released recently, since a regression was found very quickly (#835586) and, sure enough, I found another one after uploading a fix for the first.

The most important new feature is the ability to use xdelta3 instead of xdelta for delta generation. This doesn’t concern 99.9% of users and is disabled by default for backwards compatibility, but it is going to become the default in the future. The new format supports bigger files (see #737499) and xdelta3 seems better maintained.

Enjoy the new version and please report any problems you find.

Categories: Elsewhere

Cyril Brulebois: Freelance Debian consultant: running DEBAMAX

Planet Debian - Mon, 22/08/2016 - 00:35
Executive summary

Since October 2015, I've been running a FLOSS consulting company specializing in Debian, called DEBAMAX.

Longer version

Everything started two years ago. Back then I blogged about one of the biggest changes in my life: trying to find the right balance between volunteer work as a Debian Developer, and entrepreneurship as a Freelance Debian consultant. Big change because it meant giving up the comfort of the salaried world, and figuring out whether working this way would be sufficient to earn a living…

I experimented for a while under a simplified status. It comes with a number of limitations, but it’s a huge win compared to France’s heavy company-related administrivia. Here’s what it looked like, everything being done online:

  • 1 registration form to begin with: wait a few days, get an identifier from INSEE, mention it in your invoices, there you go!

  • 4 tax forms a year: taxes can be declared monthly or quarterly; I went for the latter.

A number of things became quite clear after a few months:

  • I love this new job! Sharing my Debian knowledge with customers, and using it to help them build/improve/stabilise their products and their internal services feels great!

  • Even if I wasn't aware of that initially, it seems like I've got a decent network already: Debian Developers, former coworkers, and friends thought about me for their Debian-related tasks. It was nice to hear about their needs, say yes, sign paperwork, and start working right away!

  • While I'm trying really hard not to get too optimistic (achieving a given turnover on the first year doesn't mean you're guaranteed to do so again the following year), it seemed to go well enough for me to consider switching from this simplified status to a full-blown company.

Thankfully I was eligible for support from the local Chamber of Commerce and Industry (CCI Rennes), which provides teaching sessions for new entrepreneurs, coaching, and meeting opportunities (accountants, lawyers, insurance companies, …). Summer in France is traditionally rather quiet (read: almost everybody is on vacation), so DEBAMAX officially started operating in October 2015. Besides various administrative and accounting duties, running this company doesn't change the way I've been working since July 2014, so everything is fine!

As before, I won't be writing much about it through my personal blog, except for an occasional update every other year; if you want to follow what's happening with DEBAMAX:

  • Website: debamax.com — in addition to the usual company, services, and references sections, it features a blog (with RSS) where some missions are going to be detailed (when it makes sense to share and when customers are fine with it). Spoiler alert: Tails is likely to be the first success story there. ;)
  • Twitter: @debamax — which is going to be retweeted for a while from my personal account, @CyrilBrulebois.
Categories: Elsewhere

Gregor Herrmann: RC bugs 2016/30-33

Planet Debian - Sun, 21/08/2016 - 23:56

not much to report but I got at least some RC bugs fixed in the last weeks. again mostly perl stuff:

  • #759979 – src:simba: "simba: FTBFS: RoPkg::Rsync ...failed! (needed)"
    keep ExtUtils::AutoInstall from downloading stuff, upload to DELAYED/7
  • #817549 – src:libropkg-perl: "libropkg-perl: Removal of debhelper compat 4"
    use debhelper compatibility level 5, upload to DELAYED/7
  • #832599 – iodine: "Fails to start after upgrade"
    update service file and use deb-systemd-helper in postinst
  • #832832 – src:perlbrew: "perlbrew: FTBFS: Tests failures"
    add patch to deal with removed old perl version (pkg-perl)
  • #832833 – src:libtest-valgrind-perl: "libtest-valgrind-perl: FTBFS: Tests failures"
    upload new upstream release (pkg-perl)
  • #832853 – src:libmojomojo-perl: "libmojomojo-perl: FTBFS: Tests failures"
    close, the underlying problem is fixed (pkg-perl)
  • #832866 – src:libclass-c3-xs-perl: "libclass-c3-xs-perl: FTBFS: Tests failures"
    upload new upstream release (pkg-perl)
  • #834210 – libdancer-plugin-database-core-perl: "libdancer-plugin-database-perl: FTBFS: Failed 1/5 test programs. 6/45 subtests failed."
    upload new upstream release (pkg-perl)
  • #834793 – libgit-wrapper-perl: "libgit-wrapper-perl: FTBFS: t/basic.t whitespace changes"
    add patch from upstream bug (pkg-perl)
Categories: Elsewhere

Roy Scholten: Content workflow initiative, the concept map

Planet Drupal - Sun, 21/08/2016 - 21:34

Mapping out the moving parts of the content workflow initiative, we arrived at this high-level grouping of related activities:

  • Create content
  • Review & approve content
  • Publish content
  • Manage the creation, review and publishing process
  • Configure the tools that enable all of the above

For either single items of content or a set of multiple items, bundled in a workspace.

Create content

Everything related to creating new content and editing existing content.

Roles
  • Author
  • Copy writer
  • Photo/image editor
Tasks & activities
  • Review assignments
  • Create content
  • Format content
  • Preview content
  • Request review
  • Edit content based on feedback
  • Review other people’s content
  • Review existing, live content
Review & approve content

All the things that need to happen to get new content ready for publication. Here’s a more elaborate example of a moderation workflow using a workspace.

Roles
  • Editor
  • Marketing associate
  • Archivist
Tasks & activities
  • Review content, give feedback
  • Edit content
  • Preview content
  • Get notified of content conflicts
  • Adapt content for different channels
  • Analyse impact of content changes
  • Review existing content
  • Recover content
Publish content

Actual publication of content and managing its life cycle from then on.

Roles
  • Section editor
  • Legal
  • Compliance
Tasks & activities
  • Define/specify content packages
  • Review content (packages)
  • Audit (legal, compliance)
  • Preview content
  • Approve revisions
  • (un)publish content items
  • (un)publish content packages
  • Schedule (un)publishing of content
  • Archive/delete content
Manage content workflow

Set the strategic agenda, coordinate with other business units, oversee all of the above.

Roles
  • Managing editor
  • Marketing executive
  • Support & maintenance
Tasks & activities
  • Define content strategy
  • Content planning
  • Coordinate with the business
  • Coordinate with IT
  • Coordinate content delivery
  • Define content assignments
  • Schedule content production
  • Monitor progress
  • Review audit trail
Configure content workflow tools

Providing the tools and processes to enable all of the above.

Roles
  • Administrator
  • Developer
Tasks & activities
  • Configure workflows for content moderation
  • Configure content workspaces
  • CMS configuration: content types, roles & permissions, notification settings…
  • Technical development

Have a look at the visual definition of a workspace and a more fleshed out example of a moderation workflow as well.

Hope this helps clarify the main concepts, activities and relationships in the workflow initiative.

Tags: drupal, ux, workflow initiative, drupalplanet
Sub title: Must not make spelling mistakes or I have to start over again
Categories: Elsewhere

David Moreno: WIP: Perl bindings for Facebook Messenger

Planet Debian - Sun, 21/08/2016 - 19:18

A couple of weeks ago I started looking into wrapping the Facebook Messenger API into Perl. Since all the calls are extremely simple using a REST API, I thought it could be easier and simpler even, to provide a small framework to hook bots using PSGI/Plack.

So I started putting some things together and with a very simple interface you could do a lot:

use strict;
use warnings;
use Facebook::Messenger::Bot;

my $bot = Facebook::Messenger::Bot->new({
    access_token => '...',
    app_secret   => '...',
    verify_token => '...'
});

$bot->register_hook_for('message', sub {
    my $bot = shift;
    my $message = shift;
    my $res = $bot->deliver({
        recipient => $message->sender,
        message => {
            text => "You said: " . $message->text()
        }
    });
    ...
});

$bot->spin();

You can hook a script like that as a .psgi file and plug it in to whatever you want.

Once you have some more decent user flow and whatnot, you can build something like:



…using a simple script like this one.

The work is not finished and not yet CPAN-ready, but I’m posting this in case someone wants to join me in this mini-project or has suggestions; the work in progress is here.

Thanks!

Categories: Elsewhere

David Moreno: Cosmetic changes to my posts archive

Planet Debian - Sun, 21/08/2016 - 18:55

I’ve been doing a lot of cosmetic/layout changes to the nearly 750 posts in my blog’s archive. I apologize if this has broken some feed readers or aggregators. It appears that Hexo still needs better syndication support.

Categories: Elsewhere

David Moreno: Running find with two or more commands to -exec

Planet Debian - Sun, 21/08/2016 - 18:11

I spent a couple of minutes today trying to understand how to make find(1) execute two commands on the same target.

Instead of this or any similar crappy variants:

$ find . -type d -iname "*0" -mtime +60 -exec scp -r -P 1337 "{}" "meh.server.com:/mnt/1/backup/storage" && rm -rf "{}" \;

Try something like this:

$ find . -type d -iname "*0" -mtime +60 -exec scp -r -P 1337 "{}" "meh.server.com:/mnt/1/backup/storage" \; -exec rm -rf "{}" \;

Which is:

$ find . -exec command {} \; -exec other command {} \;

And you’re good to go.

Categories: Elsewhere

Dirk Eddelbuettel: RcppEigen 0.3.2.9.0

Planet Debian - Sun, 21/08/2016 - 14:21

A new upstream release 3.2.9 of Eigen is now reflected in a new RcppEigen release 0.3.2.9.0 which got onto CRAN late yesterday and is now going into Debian. Once again, Yixuan Qiu did the heavy lifting of merging upstream (and two local twists we need to keep around). Another change is by James "coatless" Balamuta who added a row exporter.

The NEWS file entry follows.

Changes in RcppEigen version 0.3.2.9.0 (2016-08-20)
  • Updated to version 3.2.9 of Eigen (PR #37 by Yixuan closing #36 from Bob Carpenter of the Stan team)

  • An exporter for RowVectorX was added (thanks to PR #32 by James Balamuta)

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: Elsewhere

Danny Englander: Drupal 8 Development: How to Import an Existing Site Configuration Into a New Site

Planet Drupal - Sun, 21/08/2016 - 02:31

Disclaimer: The steps in this tutorial are not recommended for a live site and are experimental!

Recently, I ran into an issue in my local Drupal 8 development environment where uninstalling a module failed. I was getting errors related to the module in question in both the UI and in drush. My bad for not having a recent DB backup for my local; the one I had was too old to use. (I've since fixed that with an automated backup script!) Double my bad for having installed a not-ready-for-primetime module.

That being said, my local site had a ton of configuration and custom entities and I really wanted a clean database. I first attempted to use Features to transport all my config to a new install but kept getting an "Unmet Dependencies" error and was not able to get to the bottom of it. (There are a number of Drupal 8 Features issues open related to this.) I didn't understand this error as I was simply using the same root folder with a new database.

Features aside, I knew that by nature, an existing site configuration export can't be imported into another site, this makes sense at face value. But I got to thinking, "well, I just want to take all this nice config and pop it into a brand new site" -- less anything relating to the aforementioned bad module.

Get the site UUID

I did more searching and sure enough there is a roundabout way to do what I was intending. It involves two drush commands, drush cget and drush cset. The first is used to get the existing UUID from your present site.

drush cget system.site uuid

This will print out the Site UUID as:

'system.site:uuid': bfb11978-d1a3-4eda-91fb-45decf134e25

Once you get this, be sure to run drush cex one last time which will export the current configuration.

Install and reset

Once I had the current UUID, I put my settings files back to default, created a new database and installed Drupal 8 from scratch. Once the new site installed, I updated my settings.local.php to point to my existing configuration files:

/sites/default/files/config_......./sync/

If your remote is on Pantheon, the local config path would be like this:

/sites/default/files/config/

Once this was set, all I had to do was reset the new site UUID to my old one that I derived from the drush cget command by running this:

drush cset system.site uuid bfb11978-d1a3-4eda-91fb-45decf134e25

This resets the new site's UUID to the old site's UUID, which will allow you to run your config import.

Import the existing config

Now it was time to run drush cim which imports your site configuration. Running the command the first time gave me a rather nasty looking error.

The import failed due for the following reasons: [error] Entities exist of type Shortcut link and Default. These entities need to be deleted before importing.

This might seem like a scary error but it just has to do with admin Shortcut links and there is a core issue open for this. At any rate this was a new site so I didn't really care about deleting these. I did more searching and discovered an obscure drush command to "fix" this:

drush ev '\Drupal::entityManager()->getStorage("shortcut_set")->load("default")->delete();'

Once I did that, my configuration imported like a charm, all 300 config files and several dozens of entities. I didn't see any errors after that, and everything was perfect now.

Summary

I am not sure there would be many use cases for doing what is outlined here, but it did solve a specific issue; your mileage may vary. I would be wary of using this method on an existing site that has lots of things already built out. Keep in mind, I imported my existing config into a brand new site install.

Tags 
  • Drupal
  • DevOps
  • Drupal 8
  • Drush
  • Drupal Planet
Resources 
Categories: Elsewhere

Anchal: GSoC'16 - Porting Comment Alter Module

Planet Drupal - Sun, 21/08/2016 - 02:00

For the last 3 months I’ve been working on porting the Comment Alter module to Drupal 8 as my GSoC’16 project under the mentorship of boobaa and czigor. This blog post is an excerpt of the work I did during this period. Weekly blog posts for the past 12 weeks can be accessed here.

Achievements

Creating schema for the module: Implemented hook_schema() to store the old and new entity revision IDs, along with the parent entity type, as altered by a comment. The revision IDs are used to show the differences over comments. The parent entity type is used to delete the entries from the comment_alter table when any revision of the parent entity is deleted, because in Drupal 8 different entity types can have the same revision IDs, so the entity type is needed to remove the entries belonging to a particular one.

Using ThirdPartySettings to alter field config edit form - Implemented hook_form_FORM_ID_alter() for field_config_edit_form. This provides an interface to:

  1. Make any field comment alterable - Makes it possible to select any field we want to be altered by the comment. Currently all the fields provided by Drupal core work well with the module.

  2. Hide alteration from diff - If the comment alterable option is enabled, then this option hides the differences shown over the comments. Instead of the differences, a link to the revision comparison is displayed for nodes. For the rest of the entities a “Changes are hidden” message is shown.

  3. Use latest revision - When a module like Workbench makes the current revision of an entity not the latest one, this option forces the Comment Alter module to use the latest revision instead of the current one. This option is present on the comment field settings.

  4. Adds Diff link on comments - Adds a Diff link on comments which takes us to the comparison page of two revisions, generated by the Diff module.

  5. Comment altering while replying to a comment - By default comment alterable fields cannot be altered while replying to a comment. This option allows altering of fields even while replying to comments.

Adding pseudo fields: Implemented hook_entity_extra_field_info() to add one pseudo field for each comment alterable field on respective comment form display, and one pseudo field on comment display to show the changes made at the comment. Using these pseudo fields the position of the comment alterable fields can be re-ordered in the comment form. This gives site-builders flexibility to alter the position of any alterable fields.

Attaching comment alterable fields’ widgets on comment form: Comment alterable field widgets are retrieved from the form-display of the parent entity and attached to the comment form only after ensuring that there are no side effects. To support fields with the same name on both the comment and the parent entity, the #parents property is provided, so that the submitted values for our alterable field widgets appear at a different location, not at the top level of $form_state->getValues(). All these added fields are re-orderable. Column information and old values are also stored in the form at this stage, to later check whether any changes were made to the comment alterable fields.

Adding submit and validation callbacks for the altered comment form: First the submitted values are checked against the old values to see whether the values of the alterable fields changed at all. If they changed, the parent entity values are updated; this is done by building the parent entity form display, copying the altered field values into it, and saving the form. If the parent entity doesn’t support revisions, nothing else is done: the parent entity is simply saved with the altered values. Otherwise a revision is created, and the comment ID, the old and new revision IDs, and the parent entity type are stored in the comment alter database table, which is used to show the differences on comments using the Diff module.

Showing differences on comments: Using the Diff module and the comment alter database table, the differences are shown over a particular comment. This is only possible if the parent entity supports revisions. The Diff module is used to get the differences between the two revisions, and those differences are rendered on comments in a table format along with some custom styling.

Adding PHPUnit tests: Added automated tests to check the functionality of the module for different field types and widgets. The tests are written for EntityTestRev entity types to keep them as generic as possible. This was the toughest part for me: I was stuck at some places in the tests for quite a while, as these tests took a lot of time to run and debugging them is really hard. But in the end I’m happy that I was able to complete all the tests.

Screencast/Demo video: Created a demo video showing how the Comment Alter module works, along with a simple use case.

What’s left?

My mentors asked me to skip the Rules Integration part because the Rules module doesn’t have a stable or a beta release yet, only their first alpha release is there. So, the Rules Integration part is postponed till we get a stable or beta release.

Some important links

Thank you!

Categories: Elsewhere

Daniel Stender: Collected notes from Python packaging

Planet Debian - Sun, 21/08/2016 - 01:17

Here are some collected notes on particular problems from packaging Python stuff for Debian, and more are coming up like this in the future. Some of the issues discussed here might be rather simple and even benign for the experienced packager, but maybe this is helpful for people coming across the same issues for the first time, wondering what's going wrong. Some of the things discussed aren't easy, though. Here are the notes for this posting, in no particular order.

UnicodeDecodeError on open() in Python 3 running in non-UTF-8 environments

I came across this problem again recently while packaging httpbin 0.5.0. The build breaks in the following way, e.g. when trying to build with sbuild in a chroot; this is the first execution of setup.py with the default Python 3 interpreter:

I: pybuild base:184: python3.5 setup.py clean
Traceback (most recent call last):
  File "setup.py", line 5, in <module>
    os.path.join(os.path.dirname(__file__), 'README.rst')).read()
  File "/usr/lib/python3.5/encodings/ascii.py", line 26, in decode
    return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 2386: ordinal not in range(128)
E: pybuild pybuild:274: clean: plugin distutils failed with: exit code=1: python3.5 setup.py clean

One comes across UnicodeDecodeErrors quite often on different occasions, mostly related to Python 2 packaging (like here). Here the problem is that setup.py tries to read the long_description from the opened UTF-8 encoded file README.rst:

long_description = open(
    os.path.join(os.path.dirname(__file__), 'README.rst')).read()

This is a problem for Python 3.5 (and other Python 3 versions) when setup.py is executed by an interpreter run in a non-UTF-8 environment [1]:

$ LANG=C.UTF-8 python3.5 setup.py clean
running clean
$ LANG=C python3.5 setup.py clean
Traceback (most recent call last):
  File "setup.py", line 5, in <module>
    os.path.join(os.path.dirname(__file__), 'README.rst')).read()
  File "/usr/lib/python3.5/encodings/ascii.py", line 26, in decode
    return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 2386: ordinal not in range(128)
$ LANG=C python2.7 setup.py clean
running clean

The reason for this error is that the default encoding of the file object returned by open() in Python 3.5 is platform dependent, so read() fails when there's a mismatch:

>>> readme = open('README.rst')
>>> readme
<_io.TextIOWrapper name='README.rst' mode='r' encoding='ANSI_X3.4-1968'>

Non-UTF-8 build environments, where $LANG isn't set at all or is set to C, are common or even the default in Debian packaging; for example, the primary environment of the continuous integration and test builds for reproducible builds features exactly that (see here). The same is true for the base system of the sbuild environment:

$ schroot -d / -c unstable-amd64-sbuild -u root
(unstable-amd64-sbuild)root@varuna:/# locale
LANG=
LANGUAGE=
LC_CTYPE="POSIX"
LC_NUMERIC="POSIX"
LC_TIME="POSIX"
LC_COLLATE="POSIX"
LC_MONETARY="POSIX"
LC_MESSAGES="POSIX"
LC_PAPER="POSIX"
LC_NAME="POSIX"
LC_ADDRESS="POSIX"
LC_TELEPHONE="POSIX"
LC_MEASUREMENT="POSIX"
LC_IDENTIFICATION="POSIX"
LC_ALL=

A problem like this is most elegantly solved by installing a workaround in debian/rules. A quick and easy fix is to add export LC_ALL=C.UTF-8 there, which supersedes the locale settings of the build environment. $LC_ALL is commonly used to change existing locale settings; it overrides all other locale variables with the same value (see here). C.UTF-8 is a UTF-8 locale which is available by default in the base system, so it can be used without installing the locales package (or worse, the huge locales-all package):

(unstable-amd64-sbuild)root@varuna:/# locale -a
C
C.UTF-8
POSIX
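
In a dh-style debian/rules, the workaround described above is a one-line addition (a minimal sketch):

#!/usr/bin/make -f

# Supersede whatever locale the build environment provides with a UTF-8
# locale that is available in every base system.
export LC_ALL = C.UTF-8

%:
	dh $@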

This problem should ideally be taken care of upstream. In Python 3, the default open() is io.open(), which accepts an explicit encoding argument, so that the UnicodeDecodeError vanishes. Python 2's built-in open() doesn't support that, but io.open() is available there, too. A fix which works for both Python branches goes like this:

import io
long_description = io.open(
    os.path.join(os.path.dirname(__file__), 'README.rst'),
    encoding='utf-8').read()

Non-deterministic order of requirements in egg-info/requires.txt

This problem appeared in prospector/0.11.7-5 in the reproducible builds test builds; that was the first package revision of Prospector running on Python 3 [2]. It revealed differences in the sort order of the [with_everything] dependencies resp. requirements in prospector-0.11.7.egg-info/requires.txt when the package was built on varying systems:

$ debbindiff b1/prospector_0.11.7-6_amd64.changes b2/prospector_0.11.7-6_amd64.changes
{...}
├── prospector_0.11.7-6_all.deb
│   ├── file list
│   │   @@ -1,3 +1,3 @@
│   │    -rw-r--r--   0        0        0        4 2016-04-01 20:01:56.000000 debian-binary
│   │   --rw-r--r--   0        0        0     4343 2016-04-01 20:01:56.000000 control.tar.gz
│   │   +-rw-r--r--   0        0        0     4344 2016-04-01 20:01:56.000000 control.tar.gz
│   │    -rw-r--r--   0        0        0    74512 2016-04-01 20:01:56.000000 data.tar.xz
│   ├── control.tar.gz
│   │   ├── control.tar
│   │   │   ├── ./md5sums
│   │   │   │   ├── md5sums
│   │   │   │   │┄ Files in package differ
│   ├── data.tar.xz
│   │   ├── data.tar
│   │   │   ├── ./usr/share/prospector/prospector-0.11.7.egg-info/requires.txt
│   │   │   │   @@ -1,12 +1,12 @@
│   │   │   │
│   │   │   │    [with_everything]
│   │   │   │   +pyroma>=1.6,<2.0
│   │   │   │    frosted>=1.4.1
│   │   │   │    vulture>=0.6
│   │   │   │   -pyroma>=1.6,<2.0

The reproducible builds folks recognized this to be a toolchain problem and set up the issue randonmness_in_python_setuptools_requires.txt to cover it. Plus, a wishlist bug against python-setuptools was filed to fix it. The patch, provided by Chris Lamb, adds sorting of the dependencies in requires.txt for Setuptools by applying sorted() (stdlib) in _write_requirements() in command/egg_info.py:

--- a/setuptools/command/egg_info.py
+++ b/setuptools/command/egg_info.py
@@ -406,7 +406,7 @@ def warn_depends_obsolete(cmd, basename, filename):
 def _write_requirements(stream, reqs):
     lines = yield_lines(reqs or ())
     append_cr = lambda line: line + '\n'
-    lines = map(append_cr, lines)
+    lines = map(append_cr, sorted(lines))
     stream.writelines(lines)

O.k., so no instance in the toolchain sorts these requirements properly when differences appear. But what is the reason for these differences in the Prospector packages in the first place? The problem is somewhat subtle: in setup.py, [with_everything] is an entry of the _OPTIONAL dictionary (which is used for extras_require) and is created by a list comprehension out of the other values in that dictionary. The code goes like this:

_OPTIONAL = {
    'with_frosted': ('frosted>=1.4.1',),
    'with_vulture': ('vulture>=0.6',),
    'with_pyroma': ('pyroma>=1.6,<2.0',),
    'with_pep257': (),  # note: this is no longer optional, so this option will be removed in a future release
}
_OPTIONAL['with_everything'] = [req for req_list in _OPTIONAL.values() for req in req_list]

The resulting _OPTIONAL dictionary, including the key with_everything (which, without further sorting, is the source of the requirements list in requires.txt), then looks e.g. like this (code snippet run through IPython):

In [3]: _OPTIONAL
Out[3]:
{'with_everything': ['vulture>=0.6', 'pyroma>=1.6,<2.0', 'frosted>=1.4.1'],
 'with_vulture': ('vulture>=0.6',),
 'with_pyroma': ('pyroma>=1.6,<2.0',),
 'with_frosted': ('frosted>=1.4.1',),
 'with_pep257': ()}

That list comprehension iterates over the other dictionary entries to gather the value of with_everything, and – Python programmers know this of course – dictionaries are not ordered, so the order of the key-value pairs isn't fixed but is determined by conditions outside the interpreter. That's the source of the non-determinism in this Debian package revision of Prospector.
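
The effect is easy to reproduce in isolation: on CPython versions before 3.7, different hash seeds can make the same string-keyed dictionary iterate in different orders. A minimal sketch, assuming a python3.5 binary on the PATH as in the builds above:

import os
import subprocess

CODE = "print(list({'with_frosted': 0, 'with_vulture': 0, 'with_pyroma': 0}))"

for seed in ('1', '2', '3'):
    # PYTHONHASHSEED changes the string hashes, and with them the slots
    # the keys land in, so the iteration order may differ per run.
    env = dict(os.environ, PYTHONHASHSEED=seed)
    out = subprocess.check_output(['python3.5', '-c', CODE], env=env)
    print('seed', seed, '->', out.decode().strip())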

This problem has been fixed by a patch, which just presorts the list of requirements before it gets added to _OPTIONAL:

@@ -76,8 +76,8 @@
     'with_pyroma': ('pyroma>=1.6,<2.0',),
     'with_pep257': (),  # note: this is no longer optional, so this option will be removed in a future release
 }
-_OPTIONAL['with_everything'] = [req for req_list in _OPTIONAL.values() for req in req_list]
-
+with_everything = [req for req_list in _OPTIONAL.values() for req in req_list]
+_OPTIONAL['with_everything'] = sorted(with_everything)

In comparison to the list method sort(), the function sorted() does not change iterables in-place but returns a new object; both could be used here. As a side note, egg-info/requires.txt isn't even needed, but that's another issue.
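
The difference between the two in a nutshell (a minimal sketch):

reqs = ['vulture>=0.6', 'pyroma>=1.6,<2.0', 'frosted>=1.4.1']

new_list = sorted(reqs)  # returns a new, sorted list; reqs is untouched
reqs.sort()              # sorts reqs in place and returns None

assert new_list == reqs  # both are now in the same order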

  1. As an alternative to prefixing LC_ALL, env -i could be used to get an empty environment. 

  2. 0.11.7-4 already ran on Python 3, but that package revision was in experimental due to the switch to Python 3 and was therefore not tested by the reproducible builds continuous integration. 

Categories: Elsewhere

Francois Marier: Replacing a failed RAID drive

Planet Debian - Sun, 21/08/2016 - 00:00

Translation of the original English article at https://feeding.cloud.geek.nz/posts/replacing-a-failed-raid-drive/.

Here is the process I followed to replace a failed RAID drive on a Debian machine.

Replacing the disk

After noticing that /dev/sdb had been kicked out of my RAID array, I used smartmontools to identify the serial number of the drive to pull out:

smartctl -a /dev/sdb

With this information in hand, I shut down the computer, removed the failed drive and put a new blank one in its place.

Initializing the new disk

After booting with the new blank disk, I copied the partition table using parted.

First, I examined the partition table on the healthy drive:

$ parted /dev/sda unit s print

and created a new partition table on the replacement drive:

$ parted /dev/sdb unit s mktable gpt

Then I used the mkpart command for each of my 4 partitions and gave them all the same size as the matching partitions on /dev/sda.

Finally, I used the toggle 1 bios_grub (boot partition) and toggle X raid (where X is the partition number) commands on all of the RAID partitions, before checking with the print command that the two partition tables were now identical.

Resyncing/recreating the RAID arrays

To sync the data from the healthy disk (/dev/sda) to the replacement one (/dev/sdb), I ran the following commands on my RAID1 partitions:

mdadm /dev/md0 -a /dev/sdb2
mdadm /dev/md2 -a /dev/sdb4

and kept an eye on the status of the sync with:

watch -n 2 cat /proc/mdstat

To speed up the process, I used the following trick:

blockdev --setra 65536 "/dev/md0"
blockdev --setra 65536 "/dev/md2"
echo 300000 > /proc/sys/dev/raid/speed_limit_min
echo 1000000 > /proc/sys/dev/raid/speed_limit_max

Then I recreated my RAID0 swap partition like this:

mdadm /dev/md1 --create --level=0 --raid-devices=2 /dev/sda3 /dev/sdb3
mkswap /dev/md1

Because the swap partition is brand new (a RAID0 array cannot be restored, it has to be recreated from scratch), I had to do two things:

  • replace the swap UUID in /etc/fstab with the one printed by the mkswap command (or the UUID for /dev/md1 obtained with the blkid command)
  • replace the UUID of /dev/md1 in /etc/mdadm/mdadm.conf with the one returned for /dev/md1 by the mdadm --detail --scan command
Ensuring that the replacement disk is bootable

To make sure the machine can boot off either of the two disks, I reinstalled the grub boot loader onto the new drive:

grub-install /dev/sdb

before rebooting with both disks connected. This confirmed that my setup works properly.

Then I booted without the /dev/sda drive to make sure everything would still work if that disk were to die and leave me with only the new one (/dev/sdb).

This test obviously breaks the sync between the two disks, so I had to reboot with both disks connected and then re-add /dev/sda to all of the RAID1 arrays:

mdadm /dev/md0 -a /dev/sda2
mdadm /dev/md2 -a /dev/sda4

Once that was done, I rebooted again with both disks connected to confirm that everything was working fine:

cat /proc/mdstat

and then ran a full SMART test on the new drive:

smartctl -t long /dev/sdb
Categories: Elsewhere

Roy Scholten: Getting something in the box

Planet Drupal - Sat, 20/08/2016 - 23:53

First impressions matter. The first glance has a lot of impact on further expectations. Drupal core doesn’t do well there. As webchick points out, after installation the opening line is “you have no content”.

Yeah, thanks.

This empty canvas makes Drupal appear very limited in functionality. Which is the exact opposite of how Drupal is advertised (flexible, extensible, there’s a module for that!)

This is not news. The issue for adding sample content is over 10 years old. The image I posted earlier is from a core conversation 6 years ago. Eaton and moi presented on Snowman in Prague 3 years ago.

But now we do have Drupal 8, with essential features available right out of the box. We have a new release schedule that encourages shipping new features and improvements every 6 months. And we’re settling on a better process for figuring out the path from initial idea to fleshed-out plan first, before implementing that plan.

So let’s work together and come up with a plan for sample content in core. Which means answering product-focussed questions like:

  • Audience: who do we make this for?
  • Goals: what do these people want to achieve?
  • Features: which tools (features) to provide to make that possible?
  • Priorities: which of those tools are essential, which are nice to haves?

But purpose first: audience and goals.

We’re always balancing product specifics with framework generics in what core provides. Pretty sure we can do something more opinionated than our current default “Article” and “Page” content types without painting ourselves into a corner.

We’ll discuss this topic during upcoming UX meetings and in the UX channel in drupal.slack.com (get your automatic invite here).

Tags: drupal, onboarding, drupalplanet
Sub title: Tabula rasa is not an effective onboarding strategy
Categories: Elsewhere

Jose M. Calhariz: Availability of at at the Major Linux Distributions

Planet Debian - Sat, 20/08/2016 - 20:11

In this blog post I will cover which versions of the at software are used by the leading Linux distributions, as reported by LWN.


Currently some distributions are lagging behind on the latest at release.

Categories: Elsewhere

Russell Coker: Basics of Backups

Planet Debian - Sat, 20/08/2016 - 08:04

I’ve recently had some discussions about backups with people who aren’t computer experts, so I decided to blog about this for the benefit of everyone. Note that this post will deliberately avoid issues that require great knowledge of computers. I have written other posts that will benefit experts.

Essential Requirements

Everything that matters must be stored in at least 3 places. Every storage device will die eventually. Every backup will die eventually. If you have 2 backups then you are covered for the primary storage failing and the first backup failing. Note that I’m not saying “only have 2 backups” (I have many more) but 2 is the bare minimum.

Backups must be in multiple places. One way of losing data is if your house burns down, if that happens all backup devices stored there will be destroyed. You must have backups off-site. A good option is to have backup devices stored by trusted people (friends and relatives are often good options).

It must not be possible for one event to wipe out all backups. Some people use “cloud” backups, there are many ways of doing this with Dropbox, Google Drive, etc. Some of these even have free options for small amounts of storage, for example Google Drive appears to have 15G of free storage which is more than enough for all your best photos and all your financial records. The downside to cloud backups is that a computer criminal who gets access to your PC can wipe it and the backups. Cloud backup can be a part of a sensible backup strategy but it can’t be relied on (also see the paragraph about having at least 2 backups).

Backup Devices

USB flash “sticks” are cheap and easy to use. The quality of some of those devices isn’t too good, but the low price and small size means that you can buy more of them. It would be quite easy to buy 10 USB sticks for multiple copies of data.

Stores that sell office-supplies sell USB attached hard drives which are quite affordable now. It’s easy to buy a couple of those for backup use.

The cheapest option for backing up moderate amounts of data is to get a USB-SATA device. This connects to the PC by USB and has a cradle to accept a SATA hard drive. That allows you to buy cheap SATA disks for backups and even use older disks as backups.

When choosing backup devices, consider the environment that they will be stored in. If you want to store a backup in the glove box of your car (which could be good when travelling) then an SD card or USB flash device would be a good choice because they are resistant to physical damage. Note that if you have no other options for off-site storage then the glove box of your car will probably survive if your house burns down.

Multiple Backups

It’s not uncommon for data corruption or mistakes to be discovered some time after it happens. Also in recent times there is a variety of malware that encrypts files and then demands a ransom payment for the decryption key.

To address these problems you should have older backups stored. It’s not uncommon in a corporate environment to have backups every day stored for a week, backups every week stored for a month, and monthly backups stored for some years.

For a home use scenario it’s more common to make backups every week or so and take backups to store off-site when it’s convenient.

Offsite Backups

One common form of off-site backup is to store backup devices at work. If you work in an office then you will probably have some space in a desk drawer for personal items. If you don’t work in an office but have a locker at work then that’s good for storage too, if there is high humidity then SD cards will survive better than hard drives. Make sure that you encrypt all data you store in such places or make sure that it’s not the secret data!

Banks have a variety of ways of storing items. Bank safe deposit boxes can be used for anything that fits and can fit hard drives. If you have a mortgage your bank might give you free storage of “papers” as part of the service (Commonwealth Bank of Australia used to offer that). A few USB sticks or SD cards in an envelope could fit the “papers” criteria. An accounting firm may also store documents for free for you.

If you put a backup on USB or SD storage in your wallet then that can also be a good offsite backup. For most people losing data from disk is more common than losing their wallet.

A modern mobile phone can also be used for backing up data while travelling. For a few years I’ve been doing that. But note that you have to encrypt all data stored on a phone so an attacker who compromises your phone can’t steal it. In a typical phone configuration the mass storage area is much less protected than application data. Also note that customs and border control agents for some countries can compel you to provide the keys for encrypted data.

A friend suggested burying a backup device in a sealed plastic container filled with desiccant. That would survive your house burning down and in theory should work. I don’t know of anyone who’s tried it.

Testing

On occasion you should try to read the data from your backups and compare it to the original data. It sometimes happens that backups are discovered to be useless after years of operation.
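
As a rough illustration, backups can be compared against the originals by checksumming both directory trees; here is a minimal Python sketch (the paths are hypothetical):

import hashlib
import os

def checksums(root):
    """Map each file's path relative to root to its SHA-256 digest."""
    sums = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, 'rb') as f:
                sums[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
    return sums

original = checksums('/home/user/photos')    # hypothetical original data
backup = checksums('/media/backup/photos')   # hypothetical backup copy

for rel in sorted(set(original) | set(backup)):
    if original.get(rel) != backup.get(rel):
        print('mismatch or missing:', rel)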

Secret Data

Before starting a backup it’s worth considering which of the data is secret and which isn’t. Data that is secret needs to be treated differently and a mixture of secret and less secret data needs to be treated as if it’s all secret.

One category of secret data is financial data. If your accountant provides document storage then they can store that, generally your accountant will have all of your secret financial data anyway.

Passwords need to be kept secret but they are also very small. So making a written or printed copy of the passwords is part of a good backup strategy. There are options for backing up paper that don’t apply to data.

One category of data that is not secret is photos. Photos of holidays, friends, etc are generally not that secret and they can also comprise a large portion of the data volume that needs to be backed up. Apparently some people have a backup strategy for such photos that involves downloading from Facebook to restore, that will help with some problems but it’s not adequate overall. But any data that is on Facebook isn’t that secret and can be stored off-site without encryption.

Backup Corruption

With the amounts of data in use nowadays, the probability of data corruption is increasing. If you use any compression program on the data that is backed up (even on data that can’t be compressed further, such as JPEGs) then errors will be detected when you extract the data. So if you have backup ZIP files on 2 hard drives and one of them gets corrupted, you will easily be able to determine which one has the correct data.
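
ZIP archives, for example, store a CRC-32 checksum per file, so corruption can be detected without a full restore; a minimal Python sketch (the archive path is hypothetical):

import zipfile

# testzip() reads every member and verifies its CRC-32; it returns the
# name of the first corrupt member, or None if the archive checks out.
with zipfile.ZipFile('/media/backup/photos.zip') as archive:
    bad = archive.testzip()
    print('corrupt member: ' + bad if bad else 'archive OK')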

Any Suggestions?

If you have any other ideas for backups by typical home users then please leave a comment. Don’t comment on expert issues though, I have other posts for that.

Related posts:

  1. No Backups WTF Some years ago I was working on a project that...
  2. Basics of EC2 I have previously written about my work packaging the tools...
  3. document storage I have been asked for advice about long-term storage of...
Categories: Elsewhere

ImageX Media: Complete Content Marketing with Drupal

Planet Drupal - Sat, 20/08/2016 - 02:11

At its most basic, content marketing is about maintaining or changing consumer behaviour. Or more elaborately, it’s “a marketing technique of creating and distributing valuable, relevant and consistent content to attract and acquire a clearly defined audience -- with the objective of driving profitable customer action.”

Categories: Elsewhere

ImageX Media: Want to be a Content Marketing Paladin? Then Automate Your Content Production Workflows with These (Free) Tools

Planet Drupal - Sat, 20/08/2016 - 02:07

Flat-lining content experiences and withering conversion rates can be the kiss of death for almost any website. When content experiences deteriorate, one issue seems to make an appearance time and time again: the amount of time and resources required to produce and manage content marketing initiatives. Among the many best practices and strategies that will accelerate growth is the all-powerful move towards productivity automation. 

Categories: Elsewhere
