Elsewhere

Dirk Eddelbuettel: pkgKitten 0.1.3: Still creating R Packages that purr

Planet Debian - Sat, 13/06/2015 - 16:13

A new release, now at version 0.1.3, of pkgKitten arrived on CRAN this morning.

The main change is (optional) support for the excellent whoami package by Gabor, which allows us to fill in the Author: and Maintainer: fields of the DESCRIPTION file with automatically discovered values. This is, however, only a Suggests: and not a Depends:, so as not to force the added dependencies on everyone. We also alter the default values of Title: and Description: so that they actually pass the current level of tests enforced by R CMD check --as-cran.
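As a quick illustration (my own, not from the post; the package name demoPackage is just a placeholder), creating and checking a kitten-made package from the shell looks roughly like this:

Rscript -e 'pkgKitten::kitten(name = "demoPackage")'   # scaffold a minimal package skeleton
R CMD build demoPackage
R CMD check --as-cran demoPackage_*.tar.gz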

Changes in version 0.1.3 (2015-06-12)
  • The fields Title: and Description: in the DESCRIPTION file are now updated such that they actually pass R CMD check on current versions of R.

  • If installed, the whoami package (version 1.1.0 or later) is now used to discover the username and email in the DESCRIPTION file.

More details about the package are at the pkgKitten webpage and the pkgKitten GitHub repo.

Courtesy of CRANberries, there is also a diffstat report for this release

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: Elsewhere

Steinar H. Gunderson: Code size optimization

Planet Debian - Sat, 13/06/2015 - 13:15

I've finally found an area where Clang/LLVM produces better code than GCC for me: By measuring for size. (For speed or general sanity, I am not that impressed, but there are tons of people who seem to assume “Clang is newer and has a more modern architecture, surely it must be faster; and by the way, I heard someone with an impressive microbenchmark once”.)

I took SaneStation, a 64k synth (i.e., in practice it's designed to fit into about 16 kB with tune and bank data, although that is after compression), and compiled it for 32-bit x86 with g++ 4.9 and Clang 3.7. The .text segment for GCC was 39206 bytes, for Clang it was 34323 bytes; a marked difference.
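For anyone curious how such a comparison is made, it boils down to building the same code with both compilers and comparing the text segment; a rough sketch (synth.cpp and the flags are stand-ins, not the actual SaneStation build):

g++-4.9 -m32 -Os -fno-exceptions -fno-rtti -c synth.cpp -o synth-gcc.o
clang++-3.7 -m32 -Os -fno-exceptions -fno-rtti -c synth.cpp -o synth-clang.o
size synth-gcc.o synth-clang.o   # compare the "text" column of the two objects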

Of course, none of these can measure up to MSVC. I don't have a Clang environment set up for compiling to Windows, but could compare on a similar project (same synth, slightly different player binary) where MinGW (based on GCC 4.9) had 38032 bytes of code, and MSVC had 31142.

Then of course there's the fact that KKrunchy refuses to accept the binaries MinGW outputs due to TLS usage, and that really counts more than any size difference; UPX just isn't there. :-)

Categories: Elsewhere

Vincent Fourmond: Rescan-scsi-bus and slow DVD drives

Planet Debian - Sat, 13/06/2015 - 10:41
For reasons that escape me, my internal SATA DVD drive is very seldom seen by the kernel at startup. My guess is that it takes very long to start, and the kernel doesn't wait that long before deciding it has found all SCSI devices, so it misses it. It's actually very annoying, since you can't use the drive at all. After digging around, I finally stumbled on the rescan-scsi-bus tool from the scsitools package. Running (as root, of course)
rescan-scsi-bus -w -l
(sometimes two or three times) is enough to get the device back up, with the /dev/dvd udev symlink.
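If the drive is stubborn, a small retry loop saves re-typing the command; this is just a sketch, assuming the /dev/dvd symlink is what you are waiting for:

until [ -e /dev/dvd ]; do
    rescan-scsi-bus -w -l
    sleep 5
done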

Hope this'll help!

Categories: Elsewhere

Gunnar Wolf: «Almost free» — Some experiences with the Raspberry Pi, CI20, BananaPi, CuBox-i... And whatever will follow

Planet Debian - Sat, 13/06/2015 - 02:46

I know very little about hardware.

I think I have a good understanding of many aspects of what happens inside a computer, but my knowledge is clearly firmer on what happens once an operating system is already running. And even then, my understanding of the lower parts of reality is shaky at best — At least according to my self-evaluation, of course, compared to people I'm honored to call "my peers".

During the last ~18 months, my knowledge of this part of reality, while still far from complete, has increased quite a bit — Maybe mostly showing that I'm basically very cheap: As I have come across very cheap (or even free for me!) hardware, I have tried to understand and shape what happens in levels below those where I dwell.

I have been meaning to do a writeup on the MIPS Creator CI20, which was shipped to me for free (thanks++!) by Imagination Technologies; I still want to get more familiar with the board and have better knowledge before reporting on it. Just as a small advance, as this has been keeping me somewhat busy: I got this board after their offer to Debian Developers, and prompted because I'll be teaching some modules on the Embedded Linux diploma course taught at Facultad de Ingeniería, UNAM — Again, I'll blog about that later.

My post today follows Riku's, titled Dystopia of things, where he very clearly finds holes in the Internet of Things offering of one specific product and one specific company, but allows for generalizations on what we will surely see as the model. Riku says:

Today, the GPL sources for hub are available - at least the kernel and a patch for busybox. The proper GPL release is still only through written offer. The sources appeared online April this year while Hub has been sold for two years already. Even if I ordered the GPL CD, it's unlikely I could build a modified system with it - too many proprietary bits. The whole GPL was invented by someone who couldn't make a printer do what he wanted. The dystopian today where I have to rewrite the whole stack running on a Linux-based system if I'm not happy with what's running there as provided by the OEM.

This is not exactly the situation on the boards/products (it's a disservice to call the cute CuBox-i just a board!) I mention I'm using, but it's not too far off either. Being used to the easy x86 world, I am used to bitching about specific hardware that does not get promptly recognized by the Linux kernel — But even with the extra work UEFI+SecureBoot introduces, getting the kernel to boot is something we just take for granted. In the MIPS and ARM worlds, this is not so much of a given; I'm still treating the whole SPL and DeviceTree world as a black box, but that's where a lot of the work happens.

The boards I am working on try to make a point of being Open Hardware. The CI20 is quite impressive in this regard, as not only does it have a much more complete set of on-board peripherals than any other, but also a wealth of schematics, datasheets and specifications for the different parts of its components. And, of course, the mere availability of the MIPSfpga program to universities worldwide is noteworthy — Completely outside of my skillset, but it looks most interesting.

However... Despite being so much almost-Free-with-a-capital-F, all those boards fail our definitions of freedom in several ways. And yes, they lead us to a situation similar to what Riku describes, to what Stallman feared... To a situation not really better than where we stand on openly closed-source, commodity x86 hardware: Relying on binary blobs and on non-free portions of code to just use our hardware, or at least to use many of the features that would be available to us otherwise.

As an example, both the CI20 and the CuBox-i vendors provide system images able to boot what they describe as a Debian 7 system, based on a 3.0 Linux kernel (which Debian never used; IIRC the CuBox-i site said it was derived from a known-good Android kernel)... Only that it's an image resulting from somebody else installing and configuring it. Why should we trust their image to be sane? Yes, the resulting installation is quite impressive (i.e. the CI20's 3D demos are quite impressive for a system that feels otherwise sluggish, and out of my ARM experience, I'd wager it feels sluggish mostly because of a slow SSD)...

I have managed to do clean Debian installs on most of my ARM machines (the CuBox-i as described in my previous blog post; this post from Elena ``of Valhalla'' prompted me into trying the already well documented way of running the official Debian Installer, which worked like a charm and gave me a very nice and responsive Debian 8 install) — Modulo, yes, the Banana's non-free video interface, which AFAICT uses the non-free Mali binary driver... And which I haven't had the time to play with yet. Of course, my CuBox is in a similar situation, where it works like a charm as a personal server, but is completely worthless as a set-top box.

So, with those beautiful, small, cheap SoC systems, we are close to where we stood twenty years ago with x86 Linux: Good support for a small set of peripherals, but a far cry from having a functional system with exclusively free software.

Despite claims of being open source, this is not open source hardware. If you are thinking of getting this device, you should also try looking into the hardware from our Community instead.

Still... Playing with these boards has taught me a lot, and has clearly taught me I'm still standing on the first steps of the n00b level. I have a lot to learn to be able to responsibly teach my part of the diploma course, and I'm very thankful for the differences in hardware (and, of course, for the hardware manufacturers, especially those behind the MIPS Creator CI20 and the LeMaker Banana Pi, for giving me boards to work on!)

I shall keep posting on this topic.

Categories: Elsewhere

Steve Kemp: I'm still moving, but ..

Planet Debian - Sat, 13/06/2015 - 02:00

Previously I'd mentioned that we were moving from Edinburgh to Newcastle, such that my wife could accept a position in a training-program, and become a more specialized (medical) doctor.

Now the inevitable update: We're still moving, but we're no longer moving to Newcastle, instead we're moving to Helsinki, Finland.

Me? I care very little about where I end up. I love Edinburgh, I always have, and I never expected to leave here, but once the decision was made that we needed to be elsewhere the actual destination doesn't/didn't matter too much to me.

Sure, Newcastle is the home of Newcastle Brown Ale, and has the kind of proper-Northern accents I both love and miss, but Finland has Leipäjuusto, Saunas, and lovely people.

Given the alternative - My wife moves to Finland, and I do not - Moving to Helsinki is a no-brainer.

I'm working on the assumption that I can keep my job and work more-remotely. If that turns out not to be the case that'll be a real shame given the way the past two years have worked out.

So .. 60 days or so left in the UK. Fun.

Categories: Elsewhere

Drupal core announcements: Drupal core security release window on Wednesday, June 17

Planet Drupal - Sat, 13/06/2015 - 00:30
Start: 2015-06-17 (All day) America/New_York
Online meeting (e.g. IRC meeting)
Organizers: David_Rothstein

The monthly security release window for Drupal 6 and Drupal 7 core will take place on Wednesday, June 17.

This does not mean that a Drupal core security release will necessarily take place on that date for either the Drupal 6 or Drupal 7 branches, only that you should prepare to look out for one (and be ready to update your Drupal sites in the event that the Drupal security team decides to make a release).

There will be no bug fix/feature release on this date; the next window for a Drupal core bug fix/feature release is Wednesday, July 1.

For more information on Drupal core release windows, see the documentation on release timing and security releases, and the discussion that led to this policy being implemented.

Categories: Elsewhere

Open Source Training: Easily Apply Drupal Patches with Patch Manager

Planet Drupal - Fri, 12/06/2015 - 21:56

Have you ever updated your Drupal site only to suddenly have errors?

If you use Drupal regularly, this will happen to you at some point. However, one of the good things about using Drupal is there are so many other users that someone else may well have found and solved the error.

One common way to solve an error is with a patch. A patch changes the code on your site, but only by editing a file rather than providing a complete update.

Many of the available instructions for applying patches ask you to use an application called Git and to use command line instructions. These instructions can be intimidating, so we're going to show you how non-coders can safely and effectively apply patches.
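For context, the command-line route those instructions describe usually amounts to something like the following (a sketch only; the module directory and patch file name are placeholders):

cd sites/all/modules/example_module
wget https://www.drupal.org/files/issues/example_fix.patch
patch -p1 --dry-run < example_fix.patch   # preview what would change
patch -p1 < example_fix.patch             # apply it for real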

Categories: Elsewhere

LightSky: LightSky is Seeking a Senior Ruby on Rails Developer / Drupal Developer

Planet Drupal - Fri, 12/06/2015 - 21:20

LightSky is seeking a Senior Ruby on Rails Developer.

About the job

Categories: Elsewhere

Riku Voipio: Dystopia of Things

Planet Debian - Fri, 12/06/2015 - 20:42
The Thing on Internet

I've now had an "Internet of Things" device for about a year. It is the Logitech Harmony HUB, a universal remote controller. It comes with a traditional remote, but the interesting part is that it allows me to use my smartphone/tablet as a remote over WiFi. With the Android app it provides a rather nice user experience, yet I can see the inevitable end in anger.

Bare minimum GPL respect

Today, the GPL sources for hub are available - at least the kernel and a patch for busybox. The proper GPL release is still only through written offer. The sources appeared online April this year while Hub has been sold for two years already. Even if I ordered the GPL CD, it's unlikely I could build a modified system with it - too many proprietary bits. The whole GPL was invented by someone who couldn't make a printer do what he wanted. The dystopian today where I have to rewrite the whole stack running on a Linux-based system if I'm not happy with what's running there as provided by the OEM.

App only

The smartphone app is mandatory. The app is used to set up the hub. There is no HTML5 interface or any other way to control the hub - just the bundled remote and phone apps. Fully proprietary apps, with limited customization options. And if an app store update removes a feature you have used... well, you can't get it from anywhere anymore. The dystopian today where "Internet of Things" is actually "Smartphone App of Things".

Locked API

Maybe instead of modifying the official app you could write your own UI? Like one screen with only the buttons you ever use when watching TV? There *is* an API with the delightful headline "Better home experiences, together". However, not together with me, the owner of the harmony hub. The official API is locked to selected partners. And the API is not to control the hub - it's to let the hub connect to other IoT devices. Of course, for talented people, a locked API is usually just an undocumented API. People have reverse engineered how the app talks to the hub over wifi. Curiously it is actually Jabber based, with some twists like logging credentials through Logitech servers. The dystopian today where I can't write programs to remotely control the internet connected thing I own without reverse engineering protocols.

Central Server

Did someone say Logitech servers? Ah yes, all configuring of the remote happens via the myharmony servers, where the database of remote controllers lives. There is some irony in calling the service *my* harmony when it's clearly theirs. The communication with cloud servers leaks at minimum back to Logitech what hardware I control with my hub. At the worst, it will become an avenue of exploits. And how long will Logitech manage these servers? The moment they are gone, the harmony hub becomes a semi-brick. It will still work, but I can't change any configuration anymore. The dystopian future where the Internet of Things will stop working *when* the cloud servers get sunset.

What now

This is not just the Harmony hub - this is a pattern that many IoT products follow - Linux based gadget, smartphone app, cloud services, monetized APIs. After the gadget is bought, the vendor has little incentive to provide any updates. After all, the next chance I'll hand them money is when the current gadget becomes obsolete.

I can see two ways out. The easy way is to get IoT gadgets as a monthly paid service. Now the gadget vendor has the right incentive - instead of trying to convince me to buy their next gadget, their incentive is to keep me happily paying the monthly bill. The polar opposite is to start making open, competing IoT devices, and market to people the advantage of being in control themselves. I can see markets for both options. But half-way between is just pure dystopia.

Categories: Elsewhere

Drupal Watchdog: Yubikey NEO and a Better Password Manager: pass

Planet Drupal - Fri, 12/06/2015 - 18:52
Supergenpass and its Problems

For a very long time I have been using supergenpass as my primary password “manager”. It started as a simple bookmarklet and evolved into browser extensions and mobile apps. Taking a primary password and the domain name, it creates a password unique to the domain. There are a number of problems with this: if the master password gets compromised, all your passwords are compromised, even the ones you will only create in the future. The created password is not flexible: some systems have nonsensical and ill-advised limitations on what the password must contain. It's not easy to change your password every few months if you want to, since it'd involve changing the master password. Also, since it's domain dependent, logging into amazon.ca with your amazon.com password or ba.com with your britishairways.com password is slightly problematic/annoying. One Shall Pass iterates on this idea and adds a “generation” parameter so you can easily change your password, but then you need to remember what generation you were using for a site...

And it’s only a password, it’s not a storage, so it can’t help with PIN codes or security questions and answers which is necessary because you should never use real answers to those questions as they are too easy to social engineer. When asked about your childhood address, use something like “That red van down by the river” or something similar but if you want to put in a different one for every site, you need to store your answers.

Other Solutions

Many use solutions like Lastpass, but I find them entirely unacceptable as they are black boxes and you have no control over your own data. In my world view anything interacting with my passwords must be open source. Also, it creates a huge “single point of failure” in your digital life -- if your cloud-based password manager goes down, you can't log into anything. Something like KeePassX or Kwallet is slightly better, but there you have another problem: the master password. It obviously needs to be strong, but that means it's cumbersome to type in all the time, so you will have some long timeout between password prompts, and then compromising your machine means compromising all your passwords in one go.

Pass and the NEO

I’ve found a program called pass “the standard unix password manager”. In fact, it’s just a friendly wrapper around GPG encoded files (GPG really needs more friendly UIs). One file per domain is the recommended way to organize your files. Pass can copy the first line of the file to the clipboard so it is recommended to put the password there and use the rest of the file for other data. By itself it’s not much stronger than KeePassX or similar: you have the gpg-agent keeping your private key open (much like ssh-agent). But then there is the Yubikey NEO (and the NEO-n) which can store a GPG key. Now you only present your private key when it’s needed for decryption. Also, since the private key can not be exported from the NEO, a simple (easy to remember and enter) PIN is adequate as it is impossible to brute force the PIN as the device will lock after a few tries.

The Worst Case

Even in the worst case where an attacker can execute arbitrary commands on your computer the pass-NEO combo is not defeated immediately: again, the NEO does not support exporting the key so each password file would need to be sent to the NEO for decryption. However, it is only present very briefly -- just when you log in. So it will take time for the attacker to walk away with every password you have and in such a catastrophic event every small hindrance might matter. (The really worst case is a machine compromised in this fashion and then the attacker physically stealing your YubiKey later. Our only advice for this case: try not to cross any three letter agencies.)

The Various Modes of the NEO

One Time Password (OTP)

The NEO can operate in a number of modes. It can provide a one time password (OTP), which is not particularly useful here because the server would need to implement the YubiKey API for this to work, and few websites do.

Universal 2nd Factor (U2F)

The U2F mode implements an up-and-coming standard which -- as these standards usually do -- won’t be ubiquitous any time soon. Where it is implemented, it prevents both phishing and spear-phishing attacks.

Chip Card Interface Device (CCID)

Finally, it can emulate both a smartcard reader and the smartcard itself; this is called the CCID mode. It is capable of emulating the removal of the smartcard as well, which is very useful for the “worst case” described above. YubiKey calls this the “eject” mode: one touch of the device “inserts” the smartcard, another “ejects” it. It is even capable of triggering an “eject” automatically a few seconds after the “insert”. How long it should wait for the automated eject is configurable.

Setting up Eject Mode

GUI to Read the Warnings

There are no less than three utilities provided for mode switching. The GUI, called neoman, is useless for us: it is not capable of switching on eject mode at all. Experimenting with it, however, shows a very useful warning: after switching modes you need to remove the device and plug it back in. No other utility shows this warning. For this reason, if you are setting up a NEO-n I recommend using a simple USB extension cord to make it (much) easier to unplug and replug.

ykpersonalize to Set

The ykpersonalize utility can set eject mode and also can set the automated timeout. To do this, run ykpersonalize -m81:12:1 where 81 is the mode for eject, the middle number belongs to a mode we do not use (cannot use alongside eject, in fact) and the last 1 means a one-second automated timeout. Once you've run this command, do not forget to unplug and replug. After that, ykpersonalize no longer recognizes the NEO. If you run pcsc_scan you will get

Reader 0: Yubico Yubikey NEO CCID 00 00
  Card state: Card removed, Exclusive Mode

And touching the device will switch on the LED and make pcsc_scan show the card “inserted” (press Ctrl+C to exit pcsc_scan). After one second the LED switches off and pcsc_scan now reports the card removed. If you do not get these results from pcsc_scan, make sure you have pcscd running.

ykneomgr to Reset

If you want to change the eject timeout, you need to run the third utility provided with the NEO, ykneomgr. Even this won't be able to read the status of your device, but it will be able to reset it to a mode where ykpersonalize can work again. Since this requires the device to be “present” and we set up a very short timeout, it's recommended to run this in a loop:

until ykneomgr -M0 &> /dev/null ; do sleep 0.1 ; done

then touch the NEO. Once reset to mode 0, don't forget to unplug and replug, and then you can set a different timeout if you want, or drop the automated timeout entirely with ykpersonalize -m81.

Now that’s sorted, we can turn to creating GPG keys, subkeys and installing them into the NEO/NEO-n. This process I won’t cover because it’s extensively covered elsewhere.

Using it All

After all this setup, the usage is fairly simple:

On Mobile

This combo works with Android as well: the NEO is NFC compatible and there is an Android version of pass which uses OpenKeyChain (the rough equivalent of gpg-agent for Android) to communicate with the NEO.

On Desktop

I’ve written a little script to make my life easier: first it waits for an URL to appear on the clipboard, then it’ll wait for a Yubikey and call pass with the domain extracted from the URL. Since there is an extension for copying URLs from Chrome and I have a NEO-n the login process becomes this: click the URL copy button in the addressbar (or press Ctrl-L Ctrl-C or F6 Ctrl-C), touch the NEO-n, wait for the notification and paste the password. Without this script, the one second auto timeout recommended in the setup section is not viable. Although certainly not as simple as the Supergenpass extension, it’s still pretty easy and incomparably more secure.

Categories: Elsewhere

Sooper Drupal Themes: What's your opinion on "premium Drupal modules"

Planet Drupal - Fri, 12/06/2015 - 18:45

We had this discussion 4 years ago. Why bring it up again now? Because several big codecanyon projects are coming to Drupal soon and I think it will have an impact. One of them is Slider Revolution. Slider Revolution is an "All-purpose Slide Displaying Solution that allows for showing almost any kind of content with highly customizable transitions, effects and custom animations". With nearly 60.000 sales at 18 USD it's the second most popular Wordpress plugin on codecanyon. The number of sites using this module is much greater because hundreds of premium Wordpress themes ship with the slider built into the theme. Some of these themes like Avada (140.000 sales) are widespread and amplify the impact of paid plugins in Wordpress.

To refresh our memories here are some quotes from 2011:

the DrupalAppStore that killed drupal

MortenDK, http://morten.dk/blog/drupalappstore-killed-drupal

..one thing that open source doesn't do a good job with: building teams of people with complementary skills to make sure that the software is a good experience for the customer. Why? Because there is no customer. Oh sure, hundreds of thousands of people use my software and they consider themselves customers, but ultimately they are not. Why? The definition of a customer involves, among other things, providing a revenue stream.

Earl Miles (merlinofchaos), http://www.angrydonuts.com/contributing-to-open-source

The pay-per-copy business model just doesn't work very well, practically, unless you have the completely artificial system of copyright restrictions to prop it up. (Physical objects have a natural scarcity that makes pay-per-copy vastly more practical.) When you're dealing with copyleft software, it works even more poorly.

Larry Garfield, http://www.angrydonuts.com/contributing-to-open-source

Sometimes I keep wondering why on almost every drupaller comment I read on the net is against making money on selling modules but it is OK to sell themes?

If themer can get away / circumvent GPL by licensing their css/images/js in different license than GPL why can't module developer create a separate php class api that doesn't touch any of drupal api and license it with commercial license?

Jason Xie, http://drupal-translation.com/content/it-evil-request-payment

With respect to the question "How" this last commenter was on to something. Large projects on CodeCanyon protect themselves against redistribution by having a functional code library that can work independently from the CMS integration. If there is any open source lawyer reading this I would love to hear comments on GPL compatibility of this structure.

My opinion

As a (premium) Drupal themes developer I have a special interest in the development of these plugins: they provide great value to clients of my premium theme. For only around 100 USD I can buy an extended license for a module that cost the developer hundreds of man-hours to develop. By including several plugins that cost 20 USD each, my theme, which costs 48 USD, is instantly more valuable to end-users. In general I have a positive attitude towards CodeCanyon developers joining the Drupal module space. However, there will be some modules that are good for Drupal and others that could be bad for the Drupal ecosystem.

For example, me and several other Drupal developers have been working on improving Bootstrap+Views integration through the views_bootstrap module: https://www.drupal.org/node/2203111. In the meantime, some guy on CodeCanyon seems to have all our problems figured out already and he is selling a very sleek Views+Bootstrap module on CodeCanyon. The code he sells is all Drupal-integrated programming. As far as I understand GPL this means that all his code is also GPL. So what can we do, is it legal to copy his code into the views_bootstrap module? Is it compliant with the rules and code of conduct on Drupal.org? Is it morally OK?

 

Categories: Elsewhere

James Bromberger: Logical Volume Management with Debian on Amazon EC2

Planet Debian - Fri, 12/06/2015 - 15:41

The recent AWS introduction of the Elastic File System gives you an automatic grow-and-shrink capability as an NFS mount, an exciting option that takes away the previous overhead in creating shared block file systems for EC2 instances.

However, it should be noted that the same auto-management of capacity is not true of the EC2 instance’s Elastic Block Store (EBS) block storage disks; sizing (and resizing) is left to the customer. With current 2015 EBS, one cannot simply increase the size of an EBS Volume as the storage becomes full; (as at June 2015) an EBS volume, once created, has a fixed size. For many applications, that lack of a resize function on its local EBS disks is not a problem; many server instances come into existence for a brief period, process some data and then get Terminated, so long term management is not needed.

However, for a long term data store on an instance (instead of S3, which I would recommend looking closely at for its durability and pricing fit), where I want to harness the capacity to grow (or shrink) the disk for my data, I will need to leverage some slightly more advanced disk management. And just to make life interesting, I wish to do all this while the data is live and in-use, if possible.

Enter: Logical Volume Management, or LVM. It’s been around for a long, long time: LVM 2 made a debut around 2002-2003 (2.00.09 was Mar 2004) — and LVM 1 was many years before that — so it’s pretty mature now. It’s a powerful layer that sits between your raw storage block devices (as seen by the operating system), and the partitions and file systems you would normally put on them.

In this post, I’ll walk through the process of getting set up with LVM on Debian in the AWS EC2 environment, and how you’d do some basic maintenance to add and remove (where possible) storage with minimal interruption.

Getting Started

First a little prep work for a new Debian instance with LVM.

As I’d like to give the instance its own ability to manage its storage, I’ll want to provision an IAM Role for EC2 Instances for this host. In the AWS console, visit IAM, Roles, and I’ll create a new Role I’ll name EC2-MyServer (or similar), and at this point I’ll skip giving it any actual privileges (later we’ll update this). As at this date, we can only associate an instance role/profile at instance launch time.

Now I launch a base image Debian EC2 instance launched with this IAM Role/Profile; the root file system is an EBS Volume. I am going to put data that I’ll be managing on a separate disk from the root file system.

First, I need to get the LVM utilities installed. It’s a simple package to install: the lvm2 package. From my EC2 instance I need to get root privileges (sudo -i) and run:

apt update && apt install lvm2

After a few moments, the package is installed. I’ll choose a location that I want my data to live in, such as /opt/.  I want a separate disk for this task for a number of reasons:

  1. Root EBS volumes cannot currently be encrypted using Amazon’s Encrypted EBS Volumes at this point in time. If I want to also use AWS’ encryption option, it’ll have to be on a non-root disk. Note that instance-size restrictions also exist for EBS Encrypted Volumes.
  2. It’s possibly not worth make a snapshot of the Operating System at the same time as the user content data I am saving. The OS install (except the /etc/ folder) can almost entirely be recreated from a fresh install. so why snapshot that as well (unless that’s your strategy for preserving /etc, /home, etc).
  3. The type of EBS volume that you require may be different for different data: today (Apr 2015) there is a choice of Magnetic, General Purpose 2 (GP2) SSD, and Provisioned IO/s (PIOPS) SSD, each with different costs; and depending on our volume, we may want to select one for our root volume (operating system), and something else for our data storage.
  4. I may want to use EBS snapshots to clone the disk to another host, without the base OS bundled in with the data I am cloning.

I will create this extra volume in the AWS console and present it to this host. I’ll start by using a web browser (we’ll use CLI later) with the EC2 console.

The first piece of information we need to know is where my EC2 instance is running. Specifically, the AWS Region and Availability Zone (AZ). EBS Volumes only exist within the one designated AZ. If I accidentally make the volume(s) in the wrong AZ, then I won’t be able to connect them to my instance. It’s not a huge issue, as I would just delete the volume and try again.

I navigate to the “Instances” panel of the EC2 Console, and find my instance in the list:

A (redacted) list of instances from the EC2 console.

Here I can see I have located an instance and it’s running in US-East-1A: that’s AZ A in Region US-East-1. I can also grab this with a wget from my running Debian instance by asking the MetaData server:

wget -q -O - http://169.254.169.254/latest/meta-data/placement/availability-zone

The returned text is simply: “us-east-1a”.

Time to navigate to “Elastic Block Store“, choose “Volumes” and click “Create“:

Creating a volume in AWS EC2: ensure the AZ is the same as your instance

You’ll see I selected that I wanted AWS to encrypt this and as noted above, at this time that doesn’t include the t2 family. However, you have an option of using encryption with LVM – where the customer looks after the encryption key – see LUKS.

What’s nice is that I can do both — have AWS Encrypted Volumes, and then use encryption on top of this, but I have to manage my own keys with LUKS, and should I lose them, then I can keep all the cyphertext!

I deselected this for my example (with a t2.micro), and continue; I could see the new volume in the list as “creating”, and then shortly afterwards as “available”. Time to attach it: select the disk, and either right-click and choose “Attach“, or from the menu at the top of the list, chose “Actions” -> “Attach” (both do the same thing).

Attaching a volume to an instance: you’ll be prompted for the compatible instances in the same AZ.

At this point in time your EC2 instance will now notice a new disk; you can confirm this with “dmesg |tail“, and you’ll see something like:

[1994151.231815]  xvdg: unknown partition table

(Note the time-stamp in square brackets will be different).

Previously at this juncture you would format the entire disk with your favourite file system, mount it in the desired location, and be done. But we’re adding in LVM here – between this “raw” device, and the filesystem we are yet to make….

Marking the block device for LVM

Our first operation with LVM is to put a marker on the volume to indicate it’s being use for LVM – so that when we scan the block device, we know what it’s for. It’s a really simple command:

pvcreate /dev/xvdg

The device name above (/dev/xvdg) should correspond to the one we saw from the dmesg output above. The output of the above is rather straight forward:

  Physical volume "/dev/xvdg" successfully created

Checking our EBS Volume

We can check on the EBS volume – which LVM sees as a Physical Volume – using the “pvs” command.

# pvs
  PV         VG   Fmt  Attr PSize PFree
  /dev/xvdg       lvm2 ---  5.00g 5.00g

Here we see the entire disk is currently unused.

Creating our First Volume Group

Next step, we need to make an initial LVM Volume Group which will use our Physical volume (xvdg). The Volume Group will then contain one (or more) Logical Volumes that we’ll format and use. Again, a simple command to create a volume group by giving it its first physical device that it will use:

# vgcreate OptVG /dev/xvdg
  Volume group "OptVG" successfully created

And likewise we can check our set of Volume Groups with “vgs”:

# vgs
  VG    #PV #LV #SN Attr   VSize VFree
  OptVG   1   0   0 wz--n- 5.00g 5.00g

The Attribute flags here indicate this is writable, resizable, and allocating extents in “normal” mode. Let’s proceed to make our (first) Logical Volume in this Volume Group:

# lvcreate -n OptLV -L 4.9G OptVG
  Rounding up size to full physical extent 4.90 GiB
  Logical volume "OptLV" created

You’ll note that I have created our Logical Volume as almost the same size as the entire Volume Group (which is currently one disk) but I left some space unused: the reason for this comes down to keeping some space available for any jobs that LVM may want to use on the disk – and this will be used later when we want to move data between raw disk devices.

If I wanted to use LVM for Snapshots, then I’d want to leave more space free (unallocated) again.

We can check on our Logical Volume:

# lvs
  LV    VG    Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  OptLV OptVG -wi-a----- 4.90g

The attributes indicate that the Logical Volume is writeable, is allocating its data to the disk in inherit mode (i.e., as the Volume Group is doing), and that it is active. At this stage you may also discover we have a device /dev/OptVG/OptLV, and this is what we’re going to format and mount. But before we do, we should review what file system we’ll use.

Filesystems

Popular Linux file systems:

Name    Shrink        Grow  Journal  Max File Sz  Max Vol Sz
btrfs   Y             Y     N        16 EB        16 EB
ext3    Y (off-line)  Y     Y        2 TB         32 TB
ext4    Y (off-line)  Y     Y        16 TB        1 EB
xfs     N             Y     Y        8 EB         8 EB
zfs*    N             Y     Y        16 EB        256 ZB

For more details see the Wikipedia comparison. Note that ZFS requires a 3rd-party kernel module or a FUSE layer, so I’ll discount that here. BTRFS only went stable with Linux kernel 3.10, so with Debian Jessie that’s a possibility; but for tried and trusted, I’ll use ext4.

The selection of ext4 also means that I’ll only be able to shrink this file system off-line (unmounted).
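For reference (not part of this walkthrough), a later shrink of an ext4 Logical Volume would have to happen offline, roughly along these lines; the 3G target is only an example, and the filesystem must be shrunk before the Logical Volume:

umount /opt
e2fsck -f /dev/OptVG/OptLV        # mandatory before an offline resize
resize2fs /dev/OptVG/OptLV 3G     # shrink the filesystem first...
lvreduce -L 3G /dev/OptVG/OptLV   # ...then the Logical Volume, never the other way around
mount /dev/OptVG/OptLV /opt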

I’ll make the filesystem:

# mkfs.ext4 /dev/OptVG/OptLV
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 1285120 4k blocks and 321280 inodes
Filesystem UUID: 4f831d17-2b80-495f-8113-580bd74389dd
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

And now mount this volume and check it out:

# mount /dev/OptVG/OptLV /opt/
# df -HT /opt
Filesystem              Type  Size  Used Avail Use% Mounted on
/dev/mapper/OptVG-OptLV ext4  5.1G   11M  4.8G   1% /opt

Lastly, we want this to be mounted next time we reboot, so edit /etc/fstab and add the line:

/dev/OptVG/OptLV /opt ext4 noatime,nodiratime 0 0

With this in place, we can now start using this disk. I selected here not to update the file access times every time I access a file or folder – changes get logged as normal, but access times are just ignored.

Time to expand

After some time, our 5 GB /opt/ disk is rather full, and we need to make it bigger, but we wish to do so without any downtime. Amazon EBS doesn’t support resizing volumes, so our strategy is to add a new larger volume, and remove the older one that no longer suits us; LVM and ext4’s online resize ability will allow us to do this transparently.

For this example, we’ll decide that we want a 10 GB volume. It can be a different type of EBS volume to our original – we’re going to online-migrate all our data from one to the other.

As when we created the original 5 GB EBS volume above, create a new one in the same AZ and attach it to the host (perhaps a /dev/xvdh this time). We can check the new volume is visible with dmesg again:

[1999786.341602]  xvdh: unknown partition table

And now we initialise this as a Physical Volume for LVM:

# pvcreate /dev/xvdh
  Physical volume "/dev/xvdh" successfully created

And then add this disk to our existing OptVG Volume Group:

# vgextend OptVG /dev/xvdh
  Volume group "OptVG" successfully extended

We can now review our Volume group with vgs, and see our physical volumes with pvs:

# vgs
  VG    #PV #LV #SN Attr   VSize  VFree
  OptVG   2   1   0 wz--n- 14.99g 10.09g
# pvs
  PV         VG    Fmt  Attr PSize  PFree
  /dev/xvdg  OptVG lvm2 a--   5.00g 96.00m
  /dev/xvdh  OptVG lvm2 a--  10.00g 10.00g

There are now 2 Physical Volumes – we have a 4.9 GB filesystem taking up space, so 10.09 GB of unallocated space in the VG.

Now it’s time to stop using the /dev/xvdg volume for any new requests:

# pvchange -x n /dev/xvdg
  Physical volume "/dev/xvdg" changed
  1 physical volume changed / 0 physical volumes not changed

At this time, our existing data is on the old disk, and our new data is on the new one. It’s now that I’d recommend running GNU screen (or similar) so you can detach from this shell session and reconnect, as the process of migrating the existing data can take some time (hours for large volumes):

# pvmove /dev/xvdg /dev/xvdh
  /dev/xvdg: Moved: 0.1%
  /dev/xvdg: Moved: 8.6%
  /dev/xvdg: Moved: 17.1%
  /dev/xvdg: Moved: 25.7%
  /dev/xvdg: Moved: 34.2%
  /dev/xvdg: Moved: 42.5%
  /dev/xvdg: Moved: 51.2%
  /dev/xvdg: Moved: 59.7%
  /dev/xvdg: Moved: 68.0%
  /dev/xvdg: Moved: 76.4%
  /dev/xvdg: Moved: 84.7%
  /dev/xvdg: Moved: 93.3%
  /dev/xvdg: Moved: 100.0%

During the move, checking the Monitoring tab in the AWS EC2 Console for the two volumes should show one with a large data Read metric, and one with a large data Write metric – clearly data should be flowing off the old disk, and on to the new.

A note on disk throughput

The above move was of a pretty small, and empty, volume. Larger disks will take longer, naturally, so getting some speed out of the process may be key. There are a few things we can do to tweak this:

  • EBS Optimised: a launch-time option that reserves network throughput from certain instance types back to the EBS service within the AZ. Depending on the size of the instance this is 500 Mbps up to 4,000 Mbps. Note that for the c4 family of instances, EBS Optimised is on by default.
  • Size of GP2 disk: the larger the disk, the longer it can sustain high IO throughput – but read this for details.
  • Size and speed of PIOPs disk: if consistent high IO is required, then moving to Provisioned IO disk may be useful. Looking at the (2 weeks of) CloudWatch history for the old volume will give me some idea of the duty cycle of the disk IO; a sketch of pulling that history follows this list.
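Something along these lines pulls that history from CloudWatch (a sketch; the volume id, dates and region are placeholders):

aws cloudwatch get-metric-statistics --namespace AWS/EBS \
  --metric-name VolumeWriteBytes \
  --dimensions Name=VolumeId,Value=vol-1234abcd \
  --start-time 2015-05-29T00:00:00Z --end-time 2015-06-12T00:00:00Z \
  --period 3600 --statistics Sum --region us-east-1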
Back to the move…

Upon completion I can see that the disk in use is the new disk and not the old one, using pvs again:

# pvs
  PV         VG    Fmt  Attr PSize  PFree
  /dev/xvdg  OptVG lvm2 ---   5.00g 5.00g
  /dev/xvdh  OptVG lvm2 a--  10.00g 5.09g

So all 5 GB is now unused (compare to above, where only 96 MB was PFree). With that disk not containing data, I can tell LVM to remove the disk from the Volume Group:

# vgreduce OptVG /dev/xvdg
  Removed "/dev/xvdg" from volume group "OptVG"

Then I cleanly wipe the labels from the volume:

# pvremove /dev/xvdg
  Labels on physical volume "/dev/xvdg" successfully wiped

If I really want to clean the disk, I could choose to use shred(1) on the disk to overwrite it with random data. This can take a long time.

Now the disk is completely unused and disassociated from the VG, I can return to the AWS EC2 Console, and detach the disk:

Detach an EBS volume from an EC2 instance

Wait for a few seconds, and the disk is then shown as “available“; I then chose to delete the disk in the EC2 console (and stop paying for it).

Back to the Logical Volume – it’s still 4.9 GB, so I add 4.5 GB to it:

# lvresize -L +4.5G /dev/OptVG/OptLV
  Size of logical volume OptVG/OptLV changed from 4.90 GiB (1255 extents) to 9.40 GiB (2407 extents).
  Logical volume OptLV successfully resized

We now have 0.6GB free space on the physical volume (pvs confirms this).

Finally, it’s time to expand our ext4 file system:

# resize2fs /dev/OptVG/OptLV
resize2fs 1.42.12 (29-Aug-2014)
Filesystem at /dev/OptVG/OptLV is mounted on /opt; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/OptVG/OptLV is now 2464768 (4k) blocks long.

And with df we can now see:

# df -HT /opt/
Filesystem              Type  Size  Used Avail Use% Mounted on
/dev/mapper/OptVG-OptLV ext4  9.9G   12M  9.4G   1% /opt

Automating this

The IAM Role I made at the beginning of this post is now going to be useful. I’ll start by adding an IAM Policy to the Role to permit me to List Volumes, Create Volumes, Attach Volumes and Detach Volumes for my instance-id. Let’s start with creating a volume, with a policy like this:

{   "Version": "2012-10-17",   "Statement": [     {       "Sid": "CreateNewVolumes",       "Action": "ec2:CreateVolume",       "Effect": "Allow",       "Resource": "*",       "Condition": {         "StringEquals": {           "ec2:AvailabilityZone": "us-east-1a",           "ec2:VolumeType": "gp2"         },         "NumericLessThanEquals": {           "ec2:VolumeSize": "250"         }       }     }   ] }

This policy puts some restrictions on the volumes that this instance can create: only within the given Availability Zone (matching our instance), only GP2 SSD (no PIOPs volumes), and size no more than 250 GB. I’ll add another policy to permit this instance role to tag volumes in this AZ that don’t yet have a tag called InstanceId:

{   "Version": "2012-10-17",   "Statement": [     {       "Sid": "TagUntaggedVolumeWithInstanceId",       "Action": [         "ec2:CreateTags"       ],       "Effect": "Allow",       "Resource": "arn:aws:ec2:us-east-1:1234567890:volume/*",       "Condition": {         "Null": {           "ec2:ResourceTag/InstanceId": "true"         }       }     }   ] }

Now that I can create (and then tag) volumes, this becomes a simple procedure as to what else I can do to this volume. Deleting and creating snapshots of this volume are two obvious options, and the corresponding policy:

{   "Version": "2012-10-17",   "Statement": [     {       "Sid": "CreateDeleteSnapshots-DeleteVolume-DescribeModifyVolume",       "Action": [         "ec2:CreateSnapshot",         "ec2:DeleteSnapshot",         "ec2:DeleteVolume",         "ec2:DescribeSnapshotAttribute",         "ec2:DescribeVolumeAttribute",         "ec2:DescribeVolumeStatus",         "ec2:ModifyVolumeAttribute"       ],       "Effect": "Allow",       "Resource": "*",       "Condition": {         "StringEquals": {           "ec2:ResourceTag/InstanceId": "i-123456"         }       }     }   ] }

Of course it would be lovely if I could use a variable inside the policy condition instead of the literal string of the instance ID, but that’s not currently possible.

Clearly some of the more important actions I want to take are to attach and detach a volume to my instance:

{ "Version": "2012-10-17", "Statement": [ { "Sid": "Stmt1434114682836", "Action": [ "ec2:AttachVolume" ], "Effect": "Allow", "Resource": "arn:aws:ec2:us-east-1:123456789:volume/*", "Condition": { "StringEquals": { "ec2:ResourceTag/InstanceID": "i-123456" } } }, { "Sid": "Stmt1434114745717", "Action": [ "ec2:AttachVolume" ], "Effect": "Allow", "Resource": "arn:aws:ec2:us-east-1:123456789:instance/i-123456" } ] }

Now with this in place, we can start to fire up the AWS CLI we spoke of. We’ll let the CLI inherit its credentials from the IAM Instance Role and the policies we just defined.

AZ=`wget -q -O - http://169.254.169.254/latest/meta-data/placement/availability-zone`
Region=`wget -q -O - http://169.254.169.254/latest/meta-data/placement/availability-zone|rev|cut -c 2-|rev`
InstanceId=`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`
VolumeId=`aws ec2 --region ${Region} create-volume --availability-zone ${AZ} --volume-type gp2 --size 1 --query "VolumeId" --output text`
aws ec2 --region ${Region} create-tags --resources ${VolumeId} --tags Key=InstanceId,Value=${InstanceId}
aws ec2 --region ${Region} attach-volume --volume-id ${VolumeId} --instance-id ${InstanceId} --device /dev/xvdi   # pick an unused device name

…and at this stage, the above manipulation of the raw block device with LVM can begin. Likewise you can then use the CLI to detach and destroy any unwanted volumes if you are migrating off old block devices.
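That detach-and-destroy step isn’t spelled out above; under the same instance role (assuming it is also granted ec2:DetachVolume, and with the old volume id as a placeholder) it might look like:

OldVolumeId=vol-1234abcd   # the volume we just migrated off
aws ec2 --region ${Region} detach-volume --volume-id ${OldVolumeId}
aws ec2 --region ${Region} wait volume-available --volume-id ${OldVolumeId}
aws ec2 --region ${Region} delete-volume --volume-id ${OldVolumeId}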

Categories: Elsewhere

Richard Hartmann: Happy Friday!!

Planet Debian - Fri, 12/06/2015 - 13:31

So, what do you do before you break the Internet?

You tweet this:

Categories: Elsewhere

OpenLucius: 13 new kick-ass features | Drupal distro OpenLucius updated

Planet Drupal - Fri, 12/06/2015 - 13:26

It has been quiet for a few weeks around our Drupal distro OpenLucius, but there is a reason for that. We have worked hard on the first 'Release Candidate'. And we could not resist implementing new features. These new features have been developed with the feedback of the now thousands of OpenLucius users.

So team and project management has been made even easier! :-D

The 13 new features and improvements:

Categories: Elsewhere

Open Source Training: How to Add a Slider Search to Drupal

Planet Drupal - Fri, 12/06/2015 - 02:18

One of our customers was setting up a Drupal Commerce site and wanted to add a slider search.

Here's an example of a slider search in action:

There are a lot of steps involved in this process, but it's worth the effort. This tutorial will use the default search, without relying on Apache Solr or alternatives.

Categories: Elsewhere

DrupalCon News: Book Your Hotel Room Today!

Planet Drupal - Fri, 12/06/2015 - 02:10

Have you reserved your room at the Barcelona Princess yet? It’s the preferred hotel for all DrupalCon Barcelona attendees, and we’ve negotiated some great rates for our Drupal friends.

Categories: Elsewhere

Norbert Preining: TeX Live 2015 released

Planet Debian - Fri, 12/06/2015 - 01:17

After long time of waiting we finally released TeX Live 2015. Get some champagne, and ready to party!

Get it by downloading the net-installer, by obtaining a DVD, or by downloading the iso. Besides the huge list of updates to packages and loads of bug fixes to all the programs, here are some notable news items:

  • new LaTeX2e release which automatically includes fixltx2e
  • pdfTeX got improved jpeg support (exif and JFIF)
  • luaTeX got a new library newtokenlib
  • XeTeX got image handling fixes
  • MetaPost got a new numbersystem binary, and Japanese variants were added (upmpost and updvitomp)
  • MacTeX’s Ghostscript packages got a facelift with better CJK support; other Unix-like distributions can do the same with the newly included script cjk-gs-integrate (also one of my projects)
  • fmtutil got rewritten in perl, as reported here
  • architectures: *kfreebsd was removed, and some additional platforms are available as custom binaries. The DVD does not contain all platforms (due to space reasons), but all of them can be installed via the net-installer.

Regular updates of the repository will start in due course.

Thanks to all the contributors for the hard work. Let’s have a party!

Categories: Elsewhere

Vincent Fourmond: Release 0.13 of ctioga2

Planet Debian - Fri, 12/06/2015 - 00:01
Today is ctioga2's release day. Unlike most other releases, this one does not bring many visible features, but quite a few changes nevertheless, including:
  • finally customizable output PDF resolution, which was asked for some time ago
  • ways to average successive Y values (for the same X value), setting the error bars to the standard deviation
  • handling of histograms with missing X values (SF issue #1)
  • improvements in the emacs mode (including contextual help)

As usual, the new version is available as a gem

gem update ctioga2

Enjoy!
Categories: Elsewhere

Red Crackle: 15 minutes to your first Drupal integration test

Planet Drupal - Thu, 11/06/2015 - 22:52
This post will help you write and run your first Drupal integration test in less than 15 minutes. By end of this post, you will be able to write an automated test to make sure that superuser is able to create two pieces of content, one published and the other unpublished. We'll test that anonymous user is able to view the published content and is not able to view the unpublished content. You can follow along these steps on any Unix or Mac machine and have automated tests running in a matter of minutes.
Categories: Elsewhere

Drupal core announcements: DrupalCI: It's coming!

Planet Drupal - Thu, 11/06/2015 - 20:25

DrupalCI is the next-generation version of our beloved testbot. The MVP ("minimum viable product") is coming soon (rolled out in parallel with the old testbot for awhile).

Here's a sneak peek at what it'll look like and some of its new capabilities: https://groups.drupal.org/node/471473

Categories: Elsewhere
