Elsewhere

Jose M. Calhariz: Availability of at at the Major Linux Distributions

Planet Debian - Sat, 20/08/2016 - 20:11

In this blog post I will cover which versions of the at software are used by the leading Linux distributions, as reported by LWN.

Currently, some distributions are lagging behind on the latest at release.

Categories: Elsewhere

Russell Coker: Basics of Backups

Planet Debian - Sat, 20/08/2016 - 08:04

I’ve recently had some discussions about backups with people who aren’t computer experts, so I decided to blog about this for the benefit of everyone. Note that this post will deliberately avoid issues that require great knowledge of computers. I have written other posts that will benefit experts.

Essential Requirements

Everything that matters must be stored in at least 3 places. Every storage device will die eventually. Every backup will die eventually. If you have 2 backups then you are covered for the primary storage failing and the first backup failing. Note that I’m not saying “only have 2 backups” (I have many more) but 2 is the bare minimum.

Backups must be kept in multiple places. One way of losing data is a house fire; if that happens, all backup devices stored there will be destroyed. You must have backups off-site. A good option is to have backup devices stored by trusted people (friends and relatives are often good options).

It must not be possible for one event to wipe out all backups. Some people use “cloud” backups; there are many ways of doing this with Dropbox, Google Drive, etc. Some of these even have free options for small amounts of storage; for example, Google Drive appears to offer 15G of free storage, which is more than enough for all your best photos and all your financial records. The downside to cloud backups is that a computer criminal who gets access to your PC can wipe both it and the backups. Cloud backup can be a part of a sensible backup strategy but it can’t be relied on alone (also see the paragraph about having at least 2 backups).

Backup Devices

USB flash “sticks” are cheap and easy to use. The quality of some of those devices isn’t too good, but the low price and small size mean that you can buy more of them. It would be quite easy to buy 10 USB sticks to hold multiple copies of your data.

Stores that sell office-supplies sell USB attached hard drives which are quite affordable now. It’s easy to buy a couple of those for backup use.

The cheapest option for backing up moderate amounts of data is to get a USB-SATA device. This connects to the PC by USB and has a cradle to accept a SATA hard drive. That allows you to buy cheap SATA disks for backups and even use older disks as backups.

When choosing backup devices, consider the environment they will be stored in. If you want to store a backup in the glove box of your car (which could be good when travelling) then an SD card or USB flash device would be a good choice because they are resistant to physical damage. Note that if you have no other options for off-site storage then the glove box of your car will probably survive if your house burns down.

Multiple Backups

It’s not uncommon for data corruption or mistakes to be discovered some time after they happen. Also, in recent times there is a variety of malware that encrypts files and then demands a ransom payment for the decryption key.

To address these problems you should keep older backups around. It’s not uncommon in a corporate environment to have daily backups stored for a week, weekly backups stored for a month, and monthly backups stored for some years.
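
As a rough sketch of how that can look on a home machine (the paths and the 12-copy retention below are only placeholder assumptions, not a recommendation from this post):

# copy today's data to a dated directory on the backup drive
cp -a ~/Documents "/media/backupdrive/backup-$(date +%Y-%m-%d)"
# keep only the 12 most recent dated copies (the names sort by date)
ls -d /media/backupdrive/backup-* | sort | head -n -12 | xargs -r rm -r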

For a home use scenario it’s more common to make backups every week or so and take backups to store off-site when it’s convenient.

Offsite Backups

One common form of off-site backup is to store backup devices at work. If you work in an office then you will probably have some space in a desk drawer for personal items. If you don’t work in an office but have a locker at work then that’s good for storage too; if there is high humidity then SD cards will survive better than hard drives. Make sure that you encrypt all data you store in such places, or make sure that it’s not secret data!

Banks have a variety of ways of storing items. Bank safe deposit boxes can be used for anything that fits, and hard drives fit easily. If you have a mortgage your bank might give you free storage of “papers” as part of the service (Commonwealth Bank of Australia used to offer that). A few USB sticks or SD cards in an envelope could fit the “papers” criteria. An accounting firm may also store documents for free for you.

If you put a backup on USB or SD storage in your wallet then that can also be a good off-site backup. For most people losing data from disk is more common than losing their wallet.

A modern mobile phone can also be used for backing up data while travelling. For a few years I’ve been doing that. But note that you have to encrypt all data stored on a phone so an attacker who compromises your phone can’t steal it. In a typical phone configuration the mass storage area is much less protected than application data. Also note that customs and border control agents for some countries can compel you to provide the keys for encrypted data.

A friend suggested burying a backup device in a sealed plastic container filled with desiccant. That would survive your house burning down and in theory should work. I don’t know of anyone who’s tried it.

Testing

On occasion you should try to read the data from your backups and compare it to the original data. It sometimes happens that backups are discovered to be useless after years of operation.
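
On a GNU/Linux system one simple way to do that comparison is a recursive diff between the originals and the backup copy; this is only a sketch and the paths are placeholders:

# compare every file in the backup against the original, byte for byte
diff -r ~/Documents /media/backupdrive/Documents && echo "backup matches the originals"

Any file that differs or is missing from the backup will be listed, which is exactly what you want to discover before you actually need the backup.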

Secret Data

Before starting a backup it’s worth considering which of the data is secret and which isn’t. Data that is secret needs to be treated differently and a mixture of secret and less secret data needs to be treated as if it’s all secret.

One category of secret data is financial data. If your accountant provides document storage then they can store that, generally your accountant will have all of your secret financial data anyway.

Passwords need to be kept secret but they are also very small. So making a written or printed copy of the passwords is part of a good backup strategy. There are options for backing up paper that don’t apply to data.

One category of data that is not secret is photos. Photos of holidays, friends, etc are generally not that secret and they can also comprise a large portion of the data volume that needs to be backed up. Apparently some people have a backup strategy for such photos that involves downloading from Facebook to restore; that will help with some problems but it’s not adequate overall. But any data that is on Facebook isn’t that secret and can be stored off-site without encryption.

Backup Corruption

With the amounts of data that are used nowadays the probability of data corruption is increasing. If you use any compression program with the data that is backed up (even data that can’t be compressed such as JPEGs) then errors will be detected when you extract the data. So if you have backup ZIP files on 2 hard drives and one of them gets corrupt you will easily be able to determine which one has the correct data.
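
For example, most archive tools can verify their own checksums without extracting anything, so the intact copy is easy to identify; this is only a sketch and the paths below are placeholders:

# test the same archive on both backup drives; the corrupt copy will report errors
unzip -t /media/backup1/photos.zip
unzip -t /media/backup2/photos.zip

The damaged copy will report CRC errors while the intact one will test clean.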

Any Suggestions?

If you have any other ideas for backups by typical home users then please leave a comment. Don’t comment on expert issues though, I have other posts for that.

Related posts:

  1. No Backups WTF Some years ago I was working on a project that...
  2. Basics of EC2 I have previously written about my work packaging the tools...
  3. document storage I have been asked for advice about long-term storage of...
Categories: Elsewhere

ImageX Media: Complete Content Marketing with Drupal

Planet Drupal - Sat, 20/08/2016 - 02:11

At its most basic, content marketing is about maintaining or changing consumer behaviour. Or more elaborately, it’s “a marketing technique of creating and distributing valuable, relevant and consistent content to attract and acquire a clearly defined audience -- with the objective of driving profitable customer action.”

Categories: Elsewhere

ImageX Media: Want to be a Content Marketing Paladin? Then Automate Your Content Production Workflows with These (Free) Tools

Planet Drupal - Sat, 20/08/2016 - 02:07

Flat-lining content experiences and withering conversion rates can be the kiss of death for almost any website. When content experiences deteriorate, one issue seems to make an appearance time and time again: the amount of time and resources required to produce and manage content marketing initiatives. Among the many best practices and strategies that will accelerate growth is the all-powerful move towards productivity automation.

Categories: Elsewhere

Joey Hess: keysafe alpha release

Planet Debian - Sat, 20/08/2016 - 01:48

Keysafe securely backs up a gpg secret key or other short secret to the cloud. But not yet. Today's alpha release only supports storing the data locally, and I still need to finish tuning the argon2 hash difficulties with modern hardware. Other than that, I'm fairly happy with how it's turned out.

Keysafe is written in Haskell, and many of the data types in it keep track of the estimated CPU time needed to create, decrypt, and brute-force them. Running that through an AWS spot pricing cost model lets keysafe estimate how much an attacker would need to spend to crack your password.


(Above is for the password "makesad spindle stick")
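
Purely to illustrate the kind of arithmetic involved (this is not keysafe's actual model, and every number below is invented): cost is roughly the number of guesses times the CPU time per guess times the spot price per CPU hour.

# e.g. 2^44 guesses, 10 CPU-seconds per guess, US$0.01 per CPU-hour of spot capacity
echo '2^44 * 10 / 3600 * 0.01' | bc -l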

If you'd like to be an early adopter, install it like this:

sudo apt-get install haskell-stack libreadline-dev libargon2-0-dev zenity
stack install keysafe

Run ~/.local/bin/keysafe --backup --store-local to back up a gpg key to ~/.keysafe/objects/local/

I still need to tune the argon2 hash difficulty, and I need benchmark data to do so. If you have a top of the line laptop or server class machine that's less than a year old, send me a benchmark:

~/.local/bin/keysafe --benchmark | mail keysafe@joeyh.name -s benchmark

Bonus announcement: http://hackage.haskell.org/package/zxcvbn-c/ is my quick Haskell interface to the C version of the zxcvbn password strength estimation library.

PS: Past 50% of my goal on Patreon!

Categories: Elsewhere

Dirk Eddelbuettel: RQuantLib 0.4.3: Lots of new Fixed Income functions

Planet Debian - Fri, 19/08/2016 - 23:18

A release of RQuantLib is now on CRAN and in Debian. It contains a lot of new code contributed by Terry Leitch over a number of pull requests. See below for full details, but the changes focus on Fixed Income and Fixed Income Derivatives, covering swaps, discount curves, swaptions and more.

In the blog post for the previous release 0.4.2, we noted that a volunteer was needed for a new Windows library build of QuantLib to replace the outdated version 1.6 used there. Josh Ulrich stepped up and built them. Josh and I tried for several months to get the win-builder to install these, but sadly other things took priority and we were unsuccessful. So this release will not have Windows binaries on CRAN as QuantLib 1.8 is not available there. Instead, you can use the ghrr drat and do

if (!require("drat")) install.packages("drat") drat::addRepo("ghrr") install.packages("RQuantLib")

to fetch prebuilt Windows binaries from the ghrr drat. Everybody else gets sources from CRAN.

The full changes are detailed below.

Changes in RQuantLib version 0.4.3 (2016-08-19)
  • Changes in RQuantLib code:

    • Discount curve creation has been made more general by allowing additional arguments for day counter and fixed and floating frequency (contributed by Terry Leitch in #31, plus some work by Dirk in #32).

    • Swap leg parameters are now in a combined variable and allow a textual description (Terry Leitch in #34 and #35)

    • BermudanSwaption has been modified to take option expiration and swap tenors in order to enable more general swaption structure pricing; a more general search for the swaptions was developed to accommodate this. Also, a DiscountCurve is allowed as an alternative to market quotes to reduce computation time for a portfolio on a given valuation date (Terry Leitch in #42 closing issue #41).

    • A new AffineSwaption model was added with a similar interface to BermudanSwaption but allowing for valuation of a European exercise swaption utilizing the same affine methods available in BermudanSwaption. AffineSwaption will also value a Bermudan swaption, but does not take rate market quotes to build a term structure and a DiscountCurve object is required (Terry Leitch in #43).

    • Swap tenors can now be defined up to 100 years (Terry Leitch in #48 fixing issue #46).

    • Additional (shorter term) swap tenors are now defined (Guillaume Horel in #49, #54, #55).

    • New SABR swaption pricer (Terry Leitch in #60 and #64, small follow-up by Dirk in #65).

    • Use of Travis CI has been updated, and we switched to a maintained fork of the deprecated mainline.

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc. should go to the rquantlib-devel mailing list off the R-Forge page. Issue tickets can be filed at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: Elsewhere

Gizra.com: Getting started with a Core Initiative

Planet Drupal - Fri, 19/08/2016 - 23:00
Driesnote where GraphQL was featured. Picture from Josef Jerabek

After some time contributing to the Drupal project in different ways, I finally decided to step up and get involved in one of the Core Initiatives. I was on IRC when I saw an announcement about the JSON API / GraphQL initiative weekly meeting and it seemed like a great chance to join. So, this blog post is about how you can get involved in a Core Initiative and, more specifically, how you can get involved in the JSON API / GraphQL Initiative.

Continue reading…

Categories: Elsewhere

ImageX Media: Debugging Your Migrations in Drupal 8

Planet Drupal - Fri, 19/08/2016 - 21:04

One of the most useful features of Drupal 8 is the migration framework in core, and there are already plenty of plugins to work with different sources that are available in contributed modules. 

When writing your own code, you will always need to debug it. As migrations can only be started with Drush, the debugging can be a bit challenging. And it gets even more interesting when you develop your website in a Vagrant box.

In this tutorial, we will go through setting up xDebug and PhpStorm to debug your migrations.

Categories: Elsewhere

Simon Désaulniers: [GSOC] Final report

Planet Debian - Fri, 19/08/2016 - 18:04



The Google Summer of Code is now over. It has been a great experience and I’m very glad I’ve been able to make it. I’ve had the pleasure to contribute to a project showing very good promise for the future of communication: Ring. The words “privacy” and “freedom” in terms of technologies are more and more present in people's minds. All sorts of projects wanting to achieve these goals are coming to life every day, like decentralized web networks (e.g. ZeroNet), blockchain based applications, etc.

Debian

I’ve had the great opportunity to go to the Debian Conference 2016. I’ve been introduced to the Debian community and Debian developers (“dd” in short :p). I was lucky to meet great people like the president of the FSF, John Sullivan. You can have a look at my Debian conference report here.

If you want to read my Debian reports, you can do so by browsing the “Google Summer Of Code” category on this blog.

What I have done

Ring has been in the official Debian repositories since June 30th. This is good news for the GNU/Linux community. I’m proud to say that I’ve been able to contribute to Debian by working on OpenDHT and developing new functionality to reduce network traffic. The goal behind this was to finally optimize the traffic consumed by data persistence on the DHT.

Github repository: https://github.com/savoirfairelinux/opendht

Queries

Issues:

  • #43: DHT queries

Pull requests:

  • #79: [DHT] Queries: remote values filtering
  • #93: dht: return consistent query from local storage
  • #106: [dht] rework get timings after queries in master
Value pagination

Issues:

  • #71: [DHT] value pagination

Pull requests:

  • #110: dht: Value pagination using queries
  • #113: dht: value pagination fix
Indexation (feat. Nicolas Reynaud)

Pull requests:

  • #77: pht: fix invalid comparison, inexact match lookup
  • #78: [PHT] Key consistency
General maintenance of OpenDHT

Issues:

  • #72: Packaging issue for Python bindings with CMake: $DESTDIR not honored
  • #75: Different libraries built with Autotools and CMake
  • #87: OpenDHT does not build on armel
  • #92: [DhtScanner] doesn’t compile on LLVM 7.0.2
  • #99: 0.6.2 filenames in 0.6.3

Pull requests:

  • #73: dht: consider IPv4 or IPv6 disconnected on operation done
  • #74: [packaging] support python installation with make DESTDIR=$DIR
  • #84: [dhtnode] user experience
  • #94: dht: make main store a vector>
  • #94: autotools: versionning consistent with CMake
  • #103: dht: fix sendListen loop bug
  • #106: dht: more accurate name for requested nodes count
  • #108: dht: unify bootstrapSearch and refill method using node cache
View by commits

You can have a look at my work by commits just by clicking this link: https://github.com/savoirfairelinux/opendht/commits/master?author=sim590

What’s left to be done

Data persistence

The only thing left before my work is complete is to rigorously test the data persistence behavior to demonstrate the network traffic reduction. To do so we use our benchmark Python module. We are able to analyse traffic and produce plots like this one:


Plot: 32 nodes, 1600 values with normal condition test.

This particular plot was drawn before the enhancements. We are confident that the results will improve with the work produced during the GSoC.

TCP

In the middle of the GSoC, we soon realized that switching from UDP to TCP would require too much effort in too short a lapse of time. Also, it is not yet clear whether we should really do that.

Categories: Elsewhere

OSTraining: How to Use the Drupal Group Module

Planet Drupal - Fri, 19/08/2016 - 15:22

In this tutorial, I'm going to explain how you can use the new Group module to organize your site's users. Group is an extremely powerful Drupal 8 module.

At the basic level, Group allows you to add extra permissions to content. 

At the more advanced level, this module is potentially a Drupal 8 replacement for Organic Groups.

Categories: Elsewhere

Olivier Grégoire: Conclusion Google Summer of Code 2016

Planet Debian - Fri, 19/08/2016 - 14:57

SmartInfo project with Debian

1. Me

Before getting into the thick of my project, let me present myself:
I am Olivier Grégoire (Gasuleg), and I study IT engineering at École de Technologie supérieure in Montreal.
I am a technician in electronics, and I began object-oriented programming just last year.
I applied to GSoC because I loved the concept of the project that I would work on and I really wanted to be part of it. I also wanted to discover the world of free software.

2. My Project

During this GSoC, I worked on the Ring project.

“Ring is a free software for communication that allows its users to make audio or video calls, in pairs or groups, and to send messages, safely and freely, in confidence.

Savoir-faire Linux and a community of contributors worldwide develop Ring. It is available on GNU/Linux, Windows, Mac OSX and Android. It can be associated with a conventional phone service or integrated with any connected object.

Under this very easy to use software, there is a combination of technologies and innovations opening all kinds of perspectives to its users and developers.

Ring is a free software whose code is open. Therefore, it is not the software that controls you.

With Ring, you take control of your communication!

Ring is an open source software under the GPL v3 license. Everyone can verify the codes and propose new ones to improve the software’s performance. It is a guarantee of transparency and freedom for everyone!”
Source: ring.cx

The problem concerns the typical user of Ring, the one who doesn’t use the terminal to launch Ring. They have no information about what has happened in the system. My goal was to create a tool that displays statistics about Ring.

3. Quick Explanation of What My Program Can Do

The Code

Here are the links to the code I was working on throughout the Google Summer of Code (you can see what I have done after the GSoC by clicking on the newest patches):

Patch                    Status
Daemon                   On review
Lib Ring Client (LRC)    On review
Gnome client             On review
Remove unused code       Merged

What Can Be Displayed?
This is the final list of information I can display and some ideas on what information we could display in the future:

Information       Details                                 Done?
Call ID           The identification number of the call  Yes
Resolution        Local and remote                        Yes
Framerate         Local and remote                        Yes
Codec             Audio and video in local and remote     Yes
Bandwidth         Download and upload                     No
Performance use   CPU, GPU, RAM                           No
Security level    In SIP call                             No
Connection time                                           No
Packets lost                                              No


To launch it you need to right click on the call and click on “Show advanced information”.

To stop it, same thing: right click on the call and click on “Hide advanced information”.

4. More Details About My Project

My program needs to retrieve information from the daemon (LibRing) and then display it in the Gnome client. So, I needed to create a patch for the daemon, the D-Bus layer (in the daemon patch), LibRingClient and the GNU/Linux (Gnome) client.

This is what the architecture of the project looks like.

source: ring.cx

And this is how I implemented my project.

5. Future of the Project

  • Add background on the gnome client
  • Implement the API smartInfoHub in all the other clients
  • Gather more information, such as bandwidth, resource consumption, security level, connection time, number of packets lost and anything else that could be deemed interesting
  • Display information for every participant in a conference call. I began to implement it for the daemon on patch set 25 .

Weekly report link

Thanks

I would like to thank the following:
- The Google Summer of Code organisation, for this wonderful experience.
- Debian, for accepting my project proposal and letting me embark on this fantastic adventure.
- My mentor, Mr Guillaume Roguez, and all his team, for being there to help me.

Categories: Elsewhere

Mediacurrent: Friday 5: 5 Ways to Use Your Browser Developer Tools

Planet Drupal - Fri, 19/08/2016 - 14:12

TGIF! We hope the work week has treated you well.

Categories: Elsewhere

Nuvole: Optimal deployment workflow for Composer-based Drupal 8 projects

Planet Drupal - Fri, 19/08/2016 - 13:20
Considerations following our Drupal Dev Day Milan and Drupalaton presentations; and a preview of our DrupalCon training.

This post is an excerpt from the topics covered by our DrupalCon Dublin training: Drupal 8 Development - Workflows and Tools.

During the recent Nuvole presentations at Drupal Dev Days Milan 2016 and Drupalaton Hungary 2016 we received a number of questions on how to properly set up a Drupal 8 project with Composer. An interesting case, where we discovered that existing practices are completely different from each other, is: "What is the best way to deploy a Composer-based Drupal 8 project?".

We'll quickly discuss some options and describe what works best for us.

What to commit

You should commit:

  • The composer.json file: this is obvious when using Composer.
  • The composer.lock file: this is important since it will allow you to rebuild the entire codebase in the same state it was in at a given point in the past.

The fully built site is commonly left out of the repository. But this also means that you need to find a way to rebuild and deploy the codebase safely.

Don't run Composer on the production server

You would clearly never run composer update on the production server, as you want to be sure that you will be deploying the same code you have been developing upon. For a while, we considered it to be enough to have Composer installed on the server and run composer install to get predictable results from the (committed) composer.lock file.

Then we discovered that this approach has a few shortcomings:

  • The process is not robust. A transient network error or timeout might result in a failed build, thus introducing uncertainty factors in the deploy scripts. Easy to handle, but still not desirable as part of a delicate step such as deployment.

  • The process will inevitably take long. If you run composer install in the webroot directly, your codebase will be unstable for a few minutes. This is orders of magnitude longer than a standard update process (i.e., running drush updb and drush cim) and it may affect your site availability. This can be circumvented by building in a separate directory and then symlinking or moving directories.

  • Even composer install can be unpredictable, especially on servers with restrictions or running different versions of Composer or PHP; in rare circumstances, a build may succeed but yield a different codebase. This can be mitigated by enforcing (e.g., through Docker or virtualization) a dev/staging environment that matches the production environment, but you are still losing control on a relatively lengthy process.

  • You have no way of properly testing the newly built codebase after building it and before making it live.

  • Composer simply does not belong in a production server. It is a tool with a different scope, unrelated to the main tasks of a production server.

Where to build the codebase? CI to the rescue

After ruling out the production server, where should the codebase be built then?

Building it locally (i.e., using a developer's environment) can't work: besides the differences between the development and the production (--no-dev) setup, there is the risk of missing possible small patches applied to the local codebase. And a totally clean build is always necessary anyway.

We ended up using Continuous Integration for this task. Besides the standard CI job, which operates after any push operation to the branches under active development, performs a clean installation and runs automated tests, another CI job builds the full codebase based on the master branch and the composer.lock file. This allows sharing it between developers, a fast deployment to production through a tarball or rsync, and opportunities for actually testing the upgrade (with a process like: automatically import the production database, run database updates, import the new configuration, run a subset of automated tests to ensure that basic site functionality has no regressions) for maximum safety.
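
As a rough sketch of what such a CI build-and-deploy job can look like (the repository URL, host and paths below are placeholders, not our actual setup):

# build a clean codebase from the committed composer.lock
git clone --branch master https://example.com/project.git build
cd build && composer install --no-dev --optimize-autoloader
# ship the built codebase to the production server, next to the live docroot
rsync -az --delete ./ deploy@www.example.com:/var/www/site-next/
# on the server: run drush updb / drush cim against site-next, test it, then switch the docroot symlink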

Slides from our recent presentations, mostly focused on Configuration Management but covering part of this discussion too, are below.

Tags: Drupal Planet, Drupal 8, DrupalCon, Training. Attachments: Slides: Configuration Management in Drupal 8
Categories: Elsewhere

Norbert Preining: Debian/TeX Live 2016.20160819-1

Planet Debian - Fri, 19/08/2016 - 12:43

A new – and unplanned – release in quick succession. I have uploaded testing packages to experimental which incorporate tex4ht into the TeX Live packages, but somehow the tex4ht transitional update slipped into sid and made many packages uninstallable. Well, so after a bit more testing let’s ship the beast to sid, meaning that tex4ht will finally be updated from the last 2009 version to what is the current status in TeX Live.

From the list of new packages I want to pick out the group of phf* packages, which from a quick read of their documentation seem very interesting.

But most important is the incorporation of tex4ht into the TeX Live packages, so please report bugs and shortcomings to the BTS. Thanks.

New packages

aurl, bxjalipsum, cormorantgaramond, notespages, phffullpagefigure, phfnote, phfparen, phfqit, phfquotetext, phfsvnwatermark, phfthm, table-fct, tocdata.

Updated packages

acmart, acro, biblatex-abnt, biblatex-publist, bxdpx-beamer, bxjscls, bxnewfont, bxpdfver, dccpaper, etex-pkg, europasscv, exsheets, glossaries-extra, graphics-def, graphics-pln, guitarchordschemes, ijsra, kpathsea, latexpand, latex-veryshortguide, ledmac, libertinust1math, markdown, mcf2graph, menukeys, mfirstuc, mhchem, mweights, newpx, newtx, optidef, paralist, parnotes, pdflatexpicscale, pgfplots, philosophersimprint, pstricks-add, showexpl, tasks, tetex, tex4ht, texlive-docindex, udesoftec, xcolor-solarized.

Categories: Elsewhere

Jim Birch: Styling Views Exposed Filters Selects in Drupal 8

Planet Drupal - Fri, 19/08/2016 - 11:20

Styling the HTML <select> tag to appear similar in all the different browsers is a task unto itself.  It seems that on each new site, I find myself back visiting this post by Ivor Reić for a CSS-only solution.  My task for today is to use this idea to theme an exposed filter on a view.

The first thing we need to do is add a div around the select.  We can do this by editing the select's twig template from Drupal 8 core's stable theme.  Copy the file from

/core/themes/stable/templates/form/select.html.twig to

/themes/yourtheme/templates/form/select.html.twig

Then add the extra <div class="select-style"> and closing </div> around the select.

Here is the LESS file that I compile, which includes Ivor's CSS plus some adjustments I added to even out the exposed filter. Each rule is commented, explaining what it does.

I will compile this into my final CSS and we are good to go.  The display of the form and the select list should be pretty close to what I want across all modern browsers.  Adjust as needed for your own styles and design.

Read more

Categories: Elsewhere

Guido Günther: Foreman's Ansible integration

Planet Debian - Fri, 19/08/2016 - 11:16

Gathering from some recent discussions, it seems not that well known that Foreman (a lifecycle tool for your virtual machines) integrates well not only with Puppet but also with Ansible. This is a list of tools I find useful in this regard:

  • The ansible-module-foreman ansible module allows you to set up all kinds of resources like images, compute resources, hostgroups, subnets and domains within Foreman itself via ansible, using Foreman's REST API. E.g. creating a hostgroup looks like:

    - foreman_hostgroup:
        name: AHostGroup
        architecture: x86_64
        domain: a.domain.example.com
        foreman_host: "{{ foreman_host }}"
        foreman_user: "{{ foreman_user }}"
        foreman_pass: "{{ foreman_pw }}"
  • The foreman_ansible plugin for Foreman allows you to collect reports and facts from ansible provisioned hosts. This requires an additional hook in your ansible config like:

    [defaults]
    callback_plugins = path/to/foreman_ansible/extras/

    The hook will report back to Foreman after a playbook has finished.

  • There are several options for creating hosts in Foreman via the ansible API. I'm currently using the ansible_foreman_module tailored for image based installs. In a playbook this looks like:

    - name: Build 10 hosts
      foremanhost:
        name: "{{ item }}"
        hostgroup: "a/host/group"
        compute_resource: "hopefully_not_esx"
        subnet: "webservernet"
        environment: "{{ env|default(omit) }}"
        ipv4addr: "{{ from_ipam|default(omit) }}"
        # Additional params to tag on the host
        params:
          app: varnish
          tier: web
          color: green
        api_user: "{{ foreman_user }}"
        api_password: "{{ foreman_pw }}"
        api_url: "{{ foreman_url }}"
      with_sequence: start=1 end=10 format="newhost%02d"
  • The foreman_ansible_inventory is a dynamic inventory script for ansible that fetches all your hosts and groups via the Foreman REST APIs. It automatically groups hosts in ansible from Foreman's hostgroups, environments, organizations and locations and allows you to build additional groups based on any available host parameter (and combinations thereof). So using the above example and this configuration:

    [ansible]
    group_patterns = ["{app}-{tier}", "{color}"]

    it would build the additional ansible groups varnish-web, green and put the above hosts into them. This way you can easily select the hosts for e.g. blue green deployments. You don't have to pass the parameters during host creation, if you have parameters on e.g. domains or hostgroups these are available too for grouping via group_patterns.

  • If you're grouping your hosts via the above inventory script and you use lots of parameters, then having these displayed on the detail page can be useful. You can use the foreman_params_tab plugin for that.

There's also support for triggering ansible runs from within Foreman itself but I've not used that so far.

Categories: Elsewhere

Michal &#268;iha&#345;: Wammu 0.42

Planet Debian - Fri, 19/08/2016 - 06:00

Yesterday I released Wammu 0.42. There are no major updates; it's mostly the usual localization and minor bugfix release.

As usual, up-to-date packages are now available in Debian sid, the Gammu PPA for Ubuntu or the openSUSE buildservice for various RPM based distros.

Want to support further Wammu development? Check our donation options or support Gammu team on BountySource Salt.

Filed under: Debian English Gammu | 0 comments

Categories: Elsewhere

Eriberto Mota: Debian: GnuPG 2, chroot and debsign

Planet Debian - Fri, 19/08/2016 - 03:38

Since GnuPG 2 was set as the default in Debian (Sid, August 2016), an error message has appeared inside chroot jails when using the debuild/debsign commands:

clearsign failed: Inappropriate ioctl for device

The problem is that GnuPG 2 uses a dialog window to ask for a passphrase. This dialog window needs a tty (from the /dev/pts/ directory). To solve the problem, you can use the following command (inside the jail):

# mount devpts -t devpts /dev/pts

Alternatively, you can add to /etc/fstab file in jail:

devpts /dev/pts devpts defaults 0 0

and use the command:

# mount /dev/pts

Enjoy!

Categories: Elsewhere

Zlatan Todorić: Defcon24

Planet Debian - Fri, 19/08/2016 - 03:15

I went to Defcon24 as a Purism representative. It was (as usual) held in Las Vegas, the city of sin. In the same manner as with DebConf, here we go with the good, the bad and the ugly.

Good

Badges are really cool. You can find good hackers here and there (but a very small number compared to the total). Some talks are good, and the workshop + village idea looks good (although I didn't manage to attend any workshop as there was space for 1100 and there were 22000 attendees). The movie night idea is cool and the Arcade space (where you can play old arcade games, relax and hack, and also listen to some cool music) is really lovely. You also have a camp/village for kids learning things such as electronics and soldering, but you need to pay attention that they don't see too much of the twisted folks that also gather at this con. And that's it. Oh, yeah, Dark Tangent actually appears to be a cool dude.

Bad

One does not simply hold a so-called hacker conference in Las Vegas. Having a conference inside a hotel/casino where you mix with gamblers and casino workers (for good or for bad) is simply not in the hacker spirit and certainly brings all kinds of people to the same place. Also, there was simply not enough space for 22000 Defcon attendees, and you don't get to be proud of having lines that are ONLY 40 minutes on average. You get to be proud if you don't have lines! Organization is not the strongest part of Defcon.

The huge majority of attendees are not hackers. They are script kiddies, hacker wannabes, comic con people, a few totally lost souls, etc. That simply brings the quality of the conference down. Yes, it is cool to have a mix of many diverse people, but not for the sake of just having people.

Ugly

They lack a Code of Conduct (everyone knows I am not in favor of written rules about how people should behave, but after Defcon I clearly see the need for one). Actually, to be honest, they do have one but no one gives a damn about it. And you should report to Goons, more about them below. Sexism is huge here. I remember and hear stories about sexual harassment in the IT industry, but Debian somehow mitigated that before I entered its domains, so I never experienced it. The sheer amount of sexist behavior at Defcon is tremendous. It appears to me that those people had a lonely childhood and now they act like a spoiled 6 year old: they're spoiled, they need to yell to make their point, they make low and stupid sexist jokes and they simply think that is cool.

The majority of Goons (their coordinators or whatever) are simply idiots. I don't know whether they feel they have some superpowers, or are drunk, or just stupid, but yelling at people, throwing low jokes at people, more yelling, cursing all the time, more yelling - it simply doesn't work for me. So now you can see the irony of a CoC at Defcon. They even like to say, hey, we are old farts, let our con be as we want it to be. So no real diversity there. Either it is their way - and god forbid you try to change something for the better and make them stop cursing or throwing sexist jokes ("squeeze, people. together, touch each other, trust me it will feel good") - or the highway.

Also, it appears that for a huge number of vocal people, the word "fuck" has some fetish meaning. Either it is needed to show how "fucking awesome this con or they are" or to "fucking tell few things about random fucking stuff". Thank you, but no thank you.

So what did I do during the con? I attended a few talks, had some discussions with people, went to one party (great DJs; again people doing stupid things, like breaking inventory, to name just one) and had so much time (read "I was bored") that I bought a domain, brought up a server on which I configured nginx, cp'ed this blog to blog.zlatan.tech (yes, recently I added letsencrypt because it is, let me be in Defcon mood, FUCKING AWESOME GRRR UGH) and now I have even made an .onion domain for it. What can boredom do to people, right?

So the ultimate question is - would I go to Defcon again? I am strongly leaning towards no, but it is in my nature to give a second chance, and now I have more experience (and I also have thick skin, so I guess I can play calm for one more round).

Categories: Elsewhere
