Feed aggregator

Jingjie Jiang: Week 2 - Week 3 OPW Journey.

Planet Debian - 41 min 19 sec ago
The Tropy

In this period, I tackled several bugs and finally got the fixes ready to be merged into the codebase. Namely:
#761121 allow symbolic links within the same version, and #761861 override detected language type. I also spent some time making debsources runnable on sor.debian.org.
But there is still a database-related problem there. I am not very familiar with psql, and was rather at a loss as to what to do. Zack said he would take it over and that I should focus on what I really like. Cool.

For some non-code tasks, I did a detailed read of “machine-readable debian/copyright”. The other task was on “Flask blueprints”. The idea of a Flask blueprint, as far as I can tell, is roughly what an app is in Django.
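
The comparison with Django apps is easier to see in code. Below is a minimal, generic sketch of a Flask blueprint (the names and the /copyright prefix are illustrative only, taken from Flask's public API, and are not the actual copyright.d.n code): like a Django app, a blueprint bundles its own routes and only becomes part of a site when it is registered.

```python
from flask import Blueprint, Flask

# A blueprint is defined independently of any application instance,
# much like a Django app is defined independently of a project.
copyright_bp = Blueprint("copyright", __name__, url_prefix="/copyright")

@copyright_bp.route("/")
def index():
    # Placeholder view; a real site would render license data here.
    return "copyright index"

# The application only gains the blueprint's routes at registration
# time, so the same blueprint could be mounted by several apps.
app = Flask(__name__)
app.register_blueprint(copyright_bp)
```

With this in place, GET /copyright/ is served by index(), and the whole feature can be developed and tested in isolation from the rest of the site.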

Zack has drafted a specification on debian/copyright which serves as the goal of copyright.debian.net. Combined with the above reading, and with the help of Flask expert matthieu, I will get my hands dirty in the coming weeks to create a fantastic new site, aka copyright.d.n. Stay tuned.

some thoughts

I did spend some time learning how to use git, and read quite a lot of material. But, shamefully, I forgot most of it. So now that I am using git frequently, I feel somewhat incompetent and sometimes awkward. I am thinking now that maybe I should stop overlearning technologies I might never use. Only real usage and practice can make me comfortable with such tools. Overlearning something I am not currently using, and might never use in the future, may just be a waste of time.

And finally, I had a great Christmas. Wishing everyone happiness, and a merry Christmas!


Categories: Elsewhere

Dirk Eddelbuettel: rfoaas 0.0.5

Planet Debian - 3 hours 1 min ago

A new version of rfoaas is now on CRAN. The rfoaas package provides an interface for R to the most excellent FOAAS service, which provides a modern, scalable and RESTful web service for the frequent need to tell someone to eff off.

This version aligns the rfoaas version number with the (at long last) updated upstream version number, and brings a change suggested by Richie Cotton to set the encoding of the returned object.

As usual, CRANberries provides a diff to the previous release. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: Elsewhere

Code Karate: Drupal 7 Draggable Captcha - a more friendly way to prevent Spam

Planet Drupal - 13 hours 27 min ago
Episode Number: 187

The Drupal 7 Draggable Captcha module is not like most captchas. A captcha is a way to catch or capture spam and prevent a bot from completing a form. This is one of the most widely used ways to prevent SPAM on a website. Drupal has many different types of captchas available and the Draggable Captcha is one of the more fun and easy ones.

Tags: Drupal, Drupal 7, Drupal Planet, Spam Prevention
Categories: Elsewhere

Dirk Eddelbuettel: RcppArmadillo 0.4.600.0

Planet Debian - Sun, 28/12/2014 - 14:47

Conrad produced another minor release, 4.600, of Armadillo. As before, I had created GitHub-only pre-releases of his pre-releases, and tested a pre-release as well as the actual release against the now over one hundred CRAN dependents of our RcppArmadillo package. The tests went fine as usual, with fewer than a handful of checks not passing, all for known cases -- and the results are, as always, in the rcpp-logs repository.

Changes are summarized below based on the NEWS.Rd file.

Changes in RcppArmadillo version 0.4.600.0 (2014-12-27)
  • Upgraded to Armadillo release Version 4.600 ("Off The Reservation")

    • added .head() and .tail() to submatrix views

    • faster matrix transposes within compound expressions

    • faster accu() and norm() when compiling with -O3 -ffast-math -march=native (gcc and clang)

    • workaround for a bug in GCC 4.4

Courtesy of CRANberries, there is also a diffstat report for the most recent release. As always, more detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: Elsewhere

Ian Donnelly: New Release and Farewell to Planet Debian

Planet Debian - Sun, 28/12/2014 - 07:13

Hi Everyone,

So first I want to say that I appreciate all the interest in our project from the folks who read our posts on Planet Debian. This will be our last post there for now; I hope you enjoyed the posts about my Google Summer of Code project and the other info about Elektra we posted there. To stay informed about Elektra’s releases, please subscribe to this RSS feed. Today I want to thank you all and tell you about the latest exciting Elektra news, our newest release, 0.8.10!

This new release brings many exciting updates and features. Since I didn’t do a post for Elektra 0.8.9, this post will cover the changes and additions from Elektra 0.8.8 to 0.8.10.

First of all, there is now a new GUI for KDB! A big thanks goes out to Raffael Pancheri for developing this GUI! Once included in your Elektra build, the GUI can be run using the command kdb qt-gui. It is very important to note that the GUI is at version 0.0.2, meaning it is an alpha project and shouldn’t be considered stable yet! This GUI is a convenient tool that lets you mount, unmount, search, import and export configurations. We are very excited about this tool and we hope it makes Elektra more accessible to all users. Check out the screenshot below to see the new GUI in action:

A screenshot of the new qt-gui tool

A few other new features: Felix added support for multi-line values to the ini plugin; users are able to turn this feature on or off for each configuration. Additionally, Kai-Uwe added support for Windows 7 to Elektra! The Windows 7 port relies on a new resolver called wresolver, a simpler version of the regular resolver that is compatible with Windows. There is also a new Java binding, so Elektra is now compatible with applications and plugins written in Java! On top of these features, a new hosts plugin and a new rename plugin have been written. The new hosts plugin, in conjunction with improvements to the ini plugin, allows multiple styles of comments (because some ini dialects allow ; and # to be used to differentiate comments) to be used and preserved in configuration files. This new hosts plugin is also compatible with, and differentiates between, IPv4 and IPv6 entries.

There have also been a huge number of under-the-hood improvements, changes, and fixes; there have been so many, in fact, that I am going to list them:

  • The CMake variables regarding SWIG bindings have been abandoned in favor of the new variable BINDINGS, which works like PLUGINS and TOOLS.
  • A proof of concept storage plugin regexstore was added.
  • A new race detection tool was added that will help us improve the resolver.
  • Most plugins use the CMake add_plugin function which allows plugin developers to more easily and consistently add plugins when building Elektra.
  • Elektra is now fully XDG 0.8 compliant.
  • Elektra also now meets the OpenICC Draft specifications, and we added a new tool, kdb mount-openicc.
  • A new command was added, kdb list-tools which lists the available kdb tools.
  • The DPLUGINS flag for CMake now allows you to exempt a plugin from a build using the - prefix, such as DPLUGINS="ALL;-xmltool", which would exclude xmltool.

There are many more new changes that I could not mention here. Please keep up-to-date with Elektra news and releases by subscribing to this RSS feed and following our news page on GitHub. As always feel free to comment on our posts and I will get back to you as soon as possible. Again, thanks for reading our blog, enjoy the new release!

Sincerely,
Ian S. Donnelly

P.S. Happy New Year to everybody!

Categories: Elsewhere

Russ Allbery: Review: Some Remarks

Planet Debian - Sun, 28/12/2014 - 05:47

Review: Some Remarks, by Neal Stephenson

Publisher: William Morrow
Copyright: June 2013
ISBN: 0-06-202444-2
Format: Trade paperback
Pages: 336

This is going to be another weird review, since I read this essay collection about three months ago, and I borrowed it from a friend. So this is both from fading memory and without a handy reference other than Internet searches. Apologies in advance for any important details that I miss. The advantage is that you'll see what parts of this collection stuck in my memory.

Some Remarks is, as you might guess from the title, a rather random collection of material. There's one long essay that for me was the heart of the book (more on that in a moment), two other longer essays, two short stories, and thirteen other bits of miscellaneous writing of varying lengths.

I found most of the short essays unremarkable. Stephenson uses a walking desk because sitting is bad for you — that sentence contains basically all of the interesting content of one of the essays. I think it takes a large topic and some running room before Stephenson can get up to speed and produce something that's more satisfying than technological boosterism. That means the most interesting parts of this book are the three longer works.

"In the Kingdom of Mao Bell" was previously published in Wired and is still available. Some Remarks contains only excerpts; Stephenson says that some of the original essay is no longer that interesting. I had mixed feelings about this one. Some of the sense of place he creates was fun to read, but Stephenson can't seem to quite believe that the Chinese don't care about "freedom" according to his definitions in the same way and therefore don't have the same political reaction to hacker culture that he does. This could have been an opportunity for him to question assumptions, but instead it's mostly an exercise in dubious, sweeping cultural evaluation, such as "the country has a long history of coming up with technologies before anyone else and then not doing a lot with them." A reminder that the detail with which Stephenson crams his writing is not always... true.

Stronger is "Atoms of Cognition: Metaphysics in the Royal Society 1715–2010," which covers material familiar to readers of Stephenson's Baroque Cycle. The story of Newton, Leibniz, their rivalry, and the competing approaches to thinking about mathematics and science was my favorite part of that series, and in some ways the non-fiction treatment is better than the fictional one. If you liked the Baroque Cycle, this is worth reading.

But the highlight of the book for me was "Mother Earth Mother Board." This is a long essay (50,000 words, practically a small book and the largest part of this collection) about the laying of undersea fiber-optic cables. Those who have read Cryptonomicon will recognize some of the themes here, but there's way more to this essay than there was to the bits about undersea cables in Cryptonomicon. It's mostly about technology, rather than people, which puts Stephenson on firmer ground. The bit about people reads more like a travelogue, full of Stephenson's whole-hearted admiration of people who build things and make things work. There's a bit of politics, a bit of history, a bit of tourism, and a lot of neat trivia about a part of the technological world that I'd not known much about before. I would say this is worth the price of the collection, but it too was previously published in Wired, so you can just read it online.

Those reading this review on my web site will notice that I filed it in non-fiction. There are a couple of stories, but they're entirely forgettable (in fact, I had entirely forgotten them, and had to skim them again). But, for the record, here are short reviews of those:

"Spew": This originally appeared in Wired and can still be read on-line. The protagonist takes a job as a sort of Internet marketing inspector who looks for deviations from expected profiles. While tracing down an anomaly, though, he finds another use of the Internet that's outside of the marketing framework he's using.

It's unlikely that anyone who's been online for long will find much new in this story. Some of that is because it was originally published in 1994, but most of it is just that this isn't a very good story. Stephenson seems to have turned up his normal manic infodump to 11 to satisfy the early Wired aesthetic, and the result is a train wreck of jargon, bad slang, and superficial social observation. (3)

"The Great Simoleon Caper": Originally published in TIME, this story too is still available online. It's primarily interesting because it's a story about Bitcoin (basically), written in 1995. And it's irritating for exactly the same reason that Bitcoin enthusiasm often tends to be irritating: the assumption that cryptocurrency is somehow a revolutionary attack on government-run currency systems. I'm not going to get into the ways in which this doesn't make sense given how money is used socially (David Graeber's Debt is the book to read if you want more information); just know that the story follows that path and doesn't engage with any of the social reasons why that outcome is highly unlikely. Indeed, the lengths to which the government tries to go to discredit cryptocurrency in this story are rather silly.

Apart from that, this is typical early Stephenson writing. It's very in love with ideas, not as much with characterization, and consists mostly of people explaining things to each other. Sometimes this is fun, but when focused on topics about which considerably more information has become available, it doesn't age very well. (5)

Overall, there was one great essay and a few interesting bits, but I wouldn't have felt I was missing much if I'd never read this collection. I borrowed Some Remarks from a friend, and I think that's about the right level of effort. If it falls into your hands, or you see it in a library, some of the essays, particularly "Mother Earth Mother Board," are worth reading, but given that the best parts are available on-line for free, I don't think it's worth a purchase.

Rating: 6 out of 10

Categories: Elsewhere

Benjamin Mako Hill: My Government Portrait

Planet Debian - Sun, 28/12/2014 - 00:01

A friend recently commented on my rather unusual portrait on my (out of date) page on the Berkman website.  Here’s the story.

I joined Berkman as a fellow with a fantastic class of fellows that included, among many other incredibly accomplished people, Vivek Kundra: first Chief Information Officer of the United States. At Berkman, all the fellows are asked for photos, and Vivek apparently sent in his official government portrait.

You are probably familiar with the genre. In the US at least, official government portraits are mostly pictures of men in dark suits, light shirts, and red or blue ties with flags draped blurrily in the background.

Not unaware of the fact that Vivek sat right below me on the alphabetically sorted Berkman fellows page, a small group that included Paul Tagliamonte —  very familiar with the genre from his work with government photos in Open States — decided to create a government portrait of me using the only flag we had on hand late one night.

The result — shown in the screenshot above and in the WayBack Machine — was almost entirely unnoticed (at least to my knowledge) but was hopefully appreciated by those who did see it.

Categories: Elsewhere

Richard Hartmann: Release Critical Bug report for Week 52

Planet Debian - Sat, 27/12/2014 - 09:29

Sadly, I am a day late.

This post brought to you by download speeds of ~2-9kb/s and upload speeds of 1 kb/s.

Even though I am only a few kilometers away from Munich, I have a worse Internet connection here than I had in the middle of nowhere in Finland.

Also, the bug count jumped up by about 40 between Thursday and today. Otherwise, we would have been ahead of squeeze.

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1088 (Including 171 bugs affecting key packages)
    • Affecting Jessie: 147 (key packages: 95) That's the number we need to get down to zero before the release. They can be split into two big categories:
      • Affecting Jessie and unstable: 112 (key packages: 72) Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 24 bugs are tagged 'patch'. (key packages: 16) Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 7 bugs are marked as done, but still affect unstable. (key packages: 0) This can happen due to missing builds on some architectures, for example. Help investigate!
        • 81 bugs are neither tagged patch, nor marked done. (key packages: 56) Help make a first step towards resolution!
      • Affecting Jessie only: 35 (key packages: 23) Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 21 bugs are in packages that are unblocked by the release team. (key packages: 14)
        • 14 bugs are in packages that are not unblocked. (key packages: 9)

How do we compare to the Squeeze release cycle?

Week  Squeeze        Wheezy         Jessie
43    284 (213+71)   468 (332+136)  319 (240+79)
44    261 (201+60)   408 (265+143)  274 (224+50)
45    261 (205+56)   425 (291+134)  295 (229+66)
46    271 (200+71)   401 (258+143)  427 (313+114)
47    283 (209+74)   366 (221+145)  342 (260+82)
48    256 (177+79)   378 (230+148)  274 (189+85)
49    256 (180+76)   360 (216+155)  226 (147+79)
50    204 (148+56)   339 (195+144)  ???
51    178 (124+54)   323 (190+133)  189 (134+55)
52    115 (78+37)    289 (190+99)   147 (112+35)
1     93 (60+33)     287 (171+116)
2     82 (46+36)     271 (162+109)
3     25 (15+10)     249 (165+84)
4     14 (8+6)       244 (176+68)
5     2 (0+2)        224 (132+92)
6     release!       212 (129+83)
7     release+1      194 (128+66)
8     release+2      206 (144+62)
9     release+3      174 (105+69)
10    release+4      120 (72+48)
11    release+5      115 (74+41)
12    release+6      93 (47+46)
13    release+7      50 (24+26)
14    release+8      51 (32+19)
15    release+9      39 (32+7)
16    release+10     20 (12+8)
17    release+11     24 (19+5)
18    release+12     2 (2+0)

Graphical overview of bug stats thanks to azhag:

Categories: Elsewhere

Joey Hess: shell monad day 3

Planet Debian - Sat, 27/12/2014 - 04:18

I have been hard at work on the shell-monad ever since it was born on Christmas Eve. It's now up to 820 lines of code, and has nearly comprehensive coverage of all shell features.

Time to use it for something interesting! Let's make a shell script and a haskell program that both speak a simple protocol. This kind of thing could be used by propellor when it's deploying itself to a new host. The haskell program can ssh to a remote host and run the shell program, and talk back and forth over stdio with it, using the protocol they both speak.

abstract beginnings

First, we'll write a data type for the commands in the protocol.

data Proto = Foo String | Bar | Baz Integer deriving (Show)

Now, let's go type class crazy!

class Monad t => OutputsProto t where
    output :: Proto -> t ()

instance OutputsProto IO where
    output = putStrLn . fromProto

So far, nothing interesting; this makes the IO monad an instance of the OutputsProto type class, and gives a simple implementation to output a line of the protocol.

instance OutputsProto Script where
    output = cmd "echo" . fromProto

Now it gets interesting. The Script monad is now also a member of OutputsProto. To output a line of the protocol, it just uses echo. Yeah -- shell code is a member of a haskell type class. Awesome -- most abstract shell code evar!

Similarly, we can add another type class for monads that can input the protocol:

class Monad t => InputsProto t p where
    input :: t p

instance InputsProto IO Proto where
    input = toProto <$> readLn

instance InputsProto Script Var where
    input = do
        v <- newVar ()
        readVar v
        return v

While the IO version reads and deserializes a line back to a Proto, the shell script version of this returns a Var, which has the newly read line in it, not yet deserialized. Why the difference? Well, Haskell has data types, and shell does not ...

speaking the protocol

Now we have enough groundwork to write haskell code in the IO monad that speaks the protocol in arbitrary ways. For example:

protoExchangeIO :: Proto -> IO Proto
protoExchangeIO p = do
    output p
    input

fooIO :: IO ()
fooIO = do
    resp <- protoExchangeIO (Foo "starting up")
    -- etc

But that's trivial and uninteresting. Anyone who has read this far certainly knows how to write haskell code in the IO monad. The interesting part is making the shell program speak the protocol, including doing various things when it receives the commands.

foo :: Script ()
foo = do
    stopOnFailure True
    handler <- func (NamedLike "handler") $
        handleProto =<< input
    output (Foo "starting up")
    handler
    output Bar
    handler

handleFoo :: Var -> Script ()
handleFoo v = toStderr $
    cmd "echo" "yay, I got a Foo" v

handleBar :: Script ()
handleBar = toStderr $
    cmd "echo" "yay, I got a Bar"

handleBaz :: Var -> Script ()
handleBaz num = forCmd (cmd "seq" (Val (1 :: Int)) num) $
    toStderr . cmd "echo" "yay, I got a Baz"

serialization

I've left out a few serialization functions. fromProto is used in both instances of OutputsProto. The haskell program and the script will both use this to serialize Proto.

fromProto :: Proto -> String
fromProto (Foo s) = pFOO ++ " " ++ s
fromProto Bar = pBAR ++ " "
fromProto (Baz i) = pBAZ ++ " " ++ show i

pFOO, pBAR, pBAZ :: String
(pFOO, pBAR, pBAZ) = ("FOO", "BAR", "BAZ")

And here's the haskell function to convert the other direction, which was also used earlier.

toProto :: String -> Proto
toProto s = case break (== ' ') s of
    (w, ' ':rest)
        | w == pFOO -> Foo rest
        | w == pBAR && null rest -> Bar
        | w == pBAZ -> Baz (read rest)
        | otherwise -> error $ "unknown protocol command: " ++ w
    (_, _) -> error "protocol splitting error"

We also need a version of that written in the Script monad. Here it is. Compare and contrast the function below with the one above. They're really quite similar. (Sadly, not similar enough to allow refactoring out a common function..)

handleProto :: Var -> Script ()
handleProto v = do
    w <- getProtoCommand v
    let rest = getProtoRest v
    caseOf w
        [ (quote (T.pack pFOO), handleFoo =<< rest)
        , (quote (T.pack pBAR), handleBar)
        , (quote (T.pack pBAZ), handleBaz =<< rest)
        , (glob "*", do
            toStderr $ cmd "echo" "unknown protocol command" w
            cmd "false"
          )
        ]

Both toProto and handleProto split the incoming line into the first word and the rest of the line, then match the first word against the commands in the protocol, and dispatch to the appropriate actions. So, how do we split a variable apart like that in the Shell monad? Like this...

getProtoCommand :: Var -> Script Var
getProtoCommand v = trimVar LongestMatch FromEnd v (glob " *")

getProtoRest :: Var -> Script Var
getProtoRest v = trimVar ShortestMatch FromBeginning v (glob "[! ]*[ ]")

(This could probably be improved by using a DSL to generate the globs too..)

conclusion

And finally, here's a main to generate the shell script!

main :: IO ()
main = T.writeFile "protocol.sh" $ script foo

The pretty-printed shell script that this produces is not very interesting, but I'll include it at the end for completeness. More interestingly, for the purposes of sshing to a host and running the command there, we can use linearScript to generate a version of the script that's all contained on a single line. Also included below.

I could easily have written the pretty-printed version of the shell script in twice the time that it took to write the haskell program that generates it and also speaks the protocol itself.

I would certainly have had to test the hand-written shell script repeatedly. Code like for _x in $(seq 1 "${_v#[!\ ]*[\ ]}") doesn't just write and debug itself. (Until now!)

But, the generated script worked 100% on the first try! Well, it worked as soon as I got the Haskell program to compile...

But the best part is that the Haskell program and the shell script don't just speak the same protocol. They both rely on the same definition of Proto. So this is fairly close to the kind of type-safe protocol serialization that Fay provides, when compiling Haskell to javascript.

I'm getting the feeling that I won't be writing too many nontrivial shell scripts by hand anymore! :)

the pretty-printed shell program

#!/bin/sh
set -x
_handler () {
    :
    _v=
    read _v
    case "${_v%%\ *}" in
    FOO) :
        echo 'yay, I got a Foo' "${_v#[!\ ]*[\ ]}" >&2
        ;;
    BAR) :
        echo 'yay, I got a Bar' >&2
        ;;
    BAZ) :
        for _x in $(seq 1 "${_v#[!\ ]*[\ ]}")
        do :
            echo 'yay, I got a Baz' "$_x" >&2
        done
        ;;
    *) :
        echo 'unknown protocol command' "${_v%%\ *}" >&2
        false
        ;;
    esac
}
echo 'FOO starting up'
_handler
echo 'BAR '
_handler

the one-liner shell program

set -p; _handler () { :; _v=; read _v; case "${_v%%\ *}" in FOO) :; echo 'yay, I got a Foo' "${_v#[!\ ]*[\ ]}" >&2; ;; BAR) :; echo 'yay, I got a Bar' >&2; ;; BAZ) :; for _x in $(seq 1 "${_v#[!\ ]*[\ ]}"); do :; echo 'yay, I got a Baz' "$_x" >&2; done; ;; *) :; echo 'unknown protocol command' "${_v%%\ *}" >&2; false; ;; esac; }; echo 'FOO starting up'; _handler; echo 'BAR '; _handler
Categories: Elsewhere

Sven Hoexter: on call one-liner: Who is sitting in my swap space?

Planet Debian - Fri, 26/12/2014 - 21:24

A "nearly out of swap" alarm¹ during on-call duty led me to quickly assemble a one-liner to grab a list of PIDs and the amount of memory swapped out, from /proc/[pid]/smaps. That one-liner later got a bit of polishing from my colleague H. to look like this:

for x in $(ps -eo pid h); do s=/proc/${x}/smaps; [ -e ${s} ] && awk -vp=${x} 'BEGIN { sum=0 } /Swap:/ { sum+=$2 } END { if (sum!=0) print sum " PID " p}' ${s}; done | sort -rg

After I shared this one with some friends, V. came up with a faster version (and properly formatted output :) that relies on the "VmSwap" value in /proc/[pid]/status. Since the one-liner above has to add up one "Swap" value per memory segment, it's obvious why it's very slow on systems with many processes and a lot of memory.

awk 'BEGIN{printf "%-7s %-16s %s (KB)\n", "PID","COMM","VMSWAP"} { if($1 == "Name:"){n=$2} if($1 == "Pid:"){p=$2} if($1 == "VmSwap:" && $2 != "0"){printf "%-7s %-16s %s\n", p,n,$2 | "sort -nrk3"} }' /proc/*/status

Drawback of this second version is that it relies on the "VmSwap" value, which is only available in Linux 2.6.34 and later. It also ended up in the 2.6.32-based kernel of RHEL 6, so this one should work from RHEL 6 and Debian/wheezy onwards. The first version also works on RHEL 5 and Debian/squeeze, since both have a (default) kernel with "CONFIG_PROC_PAGE_MONITOR" enabled, which is what you need to enable /proc/[pid]/smaps.
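
For comparison, the same scan can also be written as a short standalone script. This is a hypothetical Python rendition of the second one-liner (not from the original post), reading the Name:, Pid: and VmSwap: fields of each /proc/[pid]/status:

```python
import glob
import re

def parse_status(text):
    """Return (pid, name, swap_kb) parsed from a /proc/<pid>/status
    blob, or None if the process uses no swap or the kernel does not
    report a VmSwap line (pre-2.6.34)."""
    fields = dict(re.findall(r"^(\w+):\s+(.*)$", text, re.M))
    swap_kb = int(fields.get("VmSwap", "0 kB").split()[0])
    if swap_kb == 0:
        return None
    return int(fields["Pid"]), fields["Name"], swap_kb

def swap_users():
    """List (pid, name, swap_kb) for all swapping processes,
    biggest consumer first (like `sort -nrk3` in the one-liner)."""
    users = []
    for path in glob.glob("/proc/[0-9]*/status"):
        try:
            with open(path) as f:
                entry = parse_status(f.read())
        except OSError:
            continue  # process exited while we were scanning
        if entry is not None:
            users.append(entry)
    return sorted(users, key=lambda e: e[2], reverse=True)

if __name__ == "__main__":
    print("%-7s %-16s %s (KB)" % ("PID", "COMM", "VMSWAP"))
    for pid, name, swap_kb in swap_users():
        print("%-7s %-16s %s" % (pid, name, swap_kb))
```

Like the awk version, it reads each status file only once, so it stays fast even with many processes.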

¹ The usefulness of swap, and why there is no automatic resource assignment check and bookkeeping based on calculations around things like the JVM -Xmx settings and other input, is a story of its own. There is room for improvement. A CMDB (Chapter 10.7) would be a starting point.

Categories: Elsewhere

Ritesh Raj Sarraf: Linux Containers and Productization

Planet Debian - Fri, 26/12/2014 - 18:16

Linux has improved many, many things over the last couple of years. Of the many improvements, the one that I've started leveraging the most today is Control Groups.

In the past, when there was a need to build a prototype for a solution, we needed hardware.

Then came the virtualization richness to Linux. It came in 2 major flavors: KVM (Full Virtualization) and Xen (Para Virtualization). Over the years, the difference between para and full has become almost none for both implementations. KVM now has Para-Virtualization support, with para-virtualized drivers for the most resource-intensive tasks, like network and I/O. Similarly, Xen has Full Virtualization support with the help of Qemu-KVM.

But if you had to build a prototype implementation comprising a multi-node setup, virtualization could still be resource hungry. And if your focus was an application (say, a web framework), virtualization was overkill anyway.

All thanks to Linux Containers, prototyping application-based solutions is now a breeze on Linux. The LXC project is very well designed, and well balanced in terms of features (as compared to the recently introduced Docker implementation).

From an application's point of view, Linux containers provide virtualization of namespaces, network and resources, fulfilling more than 90% of an application's needs. For some apps, where a dependency on the kernel is needed, Linux containers will not serve the need.

Beyond the defaults provided by the distribution, I like to create a base container with my customizations, as a template. This allows me to quickly create environments, without too much housekeeping to do for the initial setup.

My base config, looks like:

rrs@learner:~$ sudo cat /var/lib/lxc/deb-template/config
[sudo] password for rrs:
# Template used to create this container: /usr/share/lxc/templates/lxc-debian
# Parameters passed to the template:
# For additional config options, please look at lxc.container.conf(5)

# CPU
lxc.cgroup.cpuset.cpus = 0,1
lxc.cgroup.cpu.shares = 1234

# Mem
lxc.cgroup.memory.limit_in_bytes = 2000M
lxc.cgroup.memory.soft_limit_in_bytes = 1500M

# Network
lxc.network.type = veth
lxc.network.hwaddr = 00:16:3e:0c:c5:d4
lxc.network.flags = up
lxc.network.link = lxcbr0

# Root file system
lxc.rootfs = /var/lib/lxc/deb-template/rootfs

# Common configuration
lxc.include = /usr/share/lxc/config/debian.common.conf

# Container specific configuration
lxc.mount = /var/lib/lxc/deb-template/fstab
lxc.utsname = deb-template
lxc.arch = amd64

# For apt
lxc.mount.entry = /var/cache/apt/archives var/cache/apt/archives none defaults,bind 0 0
23:07 ♒♒♒ ☺ rrs@learner:~$

Some of the important settings to have in the template are the mount entry pointing to your local apt cache, and the CPU and memory limits.

If there were one feature request to ask of the LXC developers, I'd ask them to provide a util-lxc tools suite. Currently, to know the memory (soft/hard) allocation for a container, one needs to do the following:

rrs@learner:/sys/fs/cgroup/memory/lxc/deb-template$ cat memory.soft_limit_in_bytes memory.limit_in_bytes
1572864000
2097152000
23:21 ♒♒♒ ☺ rrs@learner:/sys/fs/cgroup/memory/lxc/deb-template$ bc
bc 1.06.95
Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
1572864000/1024/1024
1500
quit
23:21 ♒♒♒ ☺ rrs@learner:/sys/fs/cgroup/memory/lxc/deb-template$

Tools like lxc-cpuinfo, lxc-free would be much better.
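
In the meantime, the arithmetic is easy to script yourself. Here is a hypothetical sketch of such an lxc-free helper in Python (the tool and its name are my invention; only the cgroup file paths come from the session above):

```python
import os
import sys

def to_mib(value_bytes):
    """Convert a cgroup byte count to whole MiB, e.g. 1572864000 -> 1500."""
    return value_bytes // 1024 // 1024

def container_limits(name, cgroup_root="/sys/fs/cgroup/memory/lxc"):
    """Read a container's soft and hard memory limits, in MiB."""
    limits = {}
    for fname in ("memory.soft_limit_in_bytes", "memory.limit_in_bytes"):
        with open(os.path.join(cgroup_root, name, fname)) as fh:
            limits[fname] = to_mib(int(fh.read()))
    return limits

if __name__ == "__main__" and len(sys.argv) > 1:
    # e.g. ./lxc-free deb-template
    for fname, mib in container_limits(sys.argv[1]).items():
        print("%s: %d MiB" % (fname, mib))
```

Run against the deb-template container above, it would report the 1500 MiB soft and 2000 MiB hard limits without the manual trip through bc.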

Finally, there's been a lot of buzz about Docker. Docker is an alternate product offering for Linux Containers, like LXC. From what I have briefly looked at, Docker doesn't seem to provide any groundbreaking new interface beyond what is already possible with LXC. It does take all the tidbit tools and present you with a unified docker interface. But other than that, I couldn't find much appeal in it. And the assumption that profiles should be pulled off the internet (GitHub?) is not very exciting. I am hoping they have other options, where dependence on the network is not really required.

Categories: Elsewhere

Ritesh Raj Sarraf: Linux Desktop in 2014

Planet Debian - Fri, 26/12/2014 - 17:32

We are almost at the end of 2014. While 2014 has been a year with many mixed experiences, I think it does warrant one blog entry ;-)

Recently, I've again started spending more time on Linux products and solutions, rather than focusing on a specific subsystem. This change has been good. It has allowed me to recap all the advancements that have happened in the Linux world, umm... in the last 5 years.

Once upon a time, the Linux kernel sucked on the desktop. It led to many desktop-oriented improvement initiatives. Many were accepted into the kernel, while others still stand out-of-tree as of today. Over the years, many people have advocated for such out-of-tree features, for example the -ck patchset, claiming it has better performance. Most of the time, these are patches not carried by your distribution vendor, which leads you to alternate sources if you want to try them. Having some spare time, I tried the Alternative Kernel project. It is nothing but a bunch of patchsets on top of the stock kernel.

After trying it out, I must say that these patchsets are out-of-tree for good reason. I could hardly make out any performance gain. But I did notice a considerable increase in power consumption. On my stock Debian kernel, the power consumption lies around 15-18 W. That increased to 20+ W on the alternate kernels. I guess most advocates of the out-of-tree patchsets only measure the 1-2% performance gain, while completely neglecting the fact that the kernel sleeps less often.

But back to the generic Linux kernel performance problem......

Recently, in the last 2 years, the performance suckiness of the Linux kernel has hardly been noticed. So what changed?

The last couple of years have seen a rise in high capacity RAM, at affordable consumer price. 8 - 16 GiB of RAM is common on laptops these days.

If you go and look at the sucky bug report linked above, it is marked as closed, justified as Working as Designed. The core problem with the bug reported has to do with slow media. The Linux scheduler is (in?)efficient: it works hard to give you the best throughput and performance (for server workloads). I/O threads are high-priority tasks in the Linux kernel. Now map this scene to the typical Linux desktop. If you end up doing too much buffered I/O, thus exhausting all your available cache and triggering paging, you are in for some sweet experience.

Given that the kernel highly prioritizes I/O tasks, and if your underlying persistent storage device is slow (which is common if you have an external USB disk, or even an internal rotating magnetic disk), you end up blocking all your CPU cycles on the slow media. That in turn leaves no CPU cycles available for your other desktop tasks. Hence, when you do I/O at such a level, you find your desktop goes terribly sluggish.

It is not that your CPU is slow or incapable. It is just that all your CPU slices are blocked, waiting for your write() to report completion.

So what exactly changed that we don't notice that problem any more?

  1. RAM - The increase in RAM has allowed more I/O to be accommodated in cache. The best way to see this in action is to copy a large file, something almost equivalent to the amount of RAM you have (but make sure it is less than the total). For example, if you have 4 GiB of RAM, try copying a 3.5 GiB file in your graphical file manager, and at the same time, on the terminal, keep triggering the `sync` command. Check how long the `sync` takes to complete. By being able to cache a large amount of data, the Linux kernel has been better at improving the overall performance in the eyes of the user.
  2. File System - But RAM is not alone; the file system has played a very important role too. Earlier, with the ext3 file system, we had a commit interval of (5?) 30 seconds. That led to the equivalent of the above-mentioned `sync` being triggered every 30 seconds. It was a safety measure to ensure that, at worst, you lose 30 seconds' worth of data. But it did hinder performance. With ext4 came delayed allocation, which allowed write() to return immediately while the data sat in cache, deferring the actual write to the file system. This let the allocator find the best contiguous slot for the data to be written, thus improving the file system. It also brought corruption for some of the apps. :-)
  3. Solid State Drives - The file system and RAM alone aren't the sole factors that led to the drastic improvement in the overall experience of the Linux desktop. If you read through the bug report linked in this article, you'll find the core root cause to be slow persistent storage devices. Could the allocator have been improved (like Windows) to not press so hard on the Linux desktop? Maybe, yes. But that was a decision for the kernel devs, and they believed (and believe) in keeping those numbers to a minimum. Thus for I/O, as of today, you have 3 schedulers, and for CPU, just 1. What dramatically improved the overall Linux desktop performance was the general availability of solid state drives. These devices are really fast, which in effect makes write() calls return almost immediately instead of blocking the CPU.
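The cache effect described in point 1 can be seen from a terminal. A scaled-down sketch (16 MiB here; use a file close to your RAM size for the full effect):

```shell
#!/bin/sh
# The buffered write returns almost immediately: the data mostly
# lands in the page cache, not on disk.
start=$(date +%s)
dd if=/dev/zero of=cachedemo.tmp bs=1M count=16 2>/dev/null
echo "dd returned after $(( $(date +%s) - start ))s"
# sync blocks until the dirty pages actually reach the disk; with a
# near-RAM-sized file this is where the real waiting happens.
sync
echo "sync finished after $(( $(date +%s) - start ))s"
```

With a near-RAM-sized file on a slow rotating disk, the second timestamp lags far behind the first.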

So it was advancement in both hardware and software that led to better overall desktop performance. Does the above-mentioned bug still exist? Yes. It's just that it is much harder to trigger now. You'll have to ensure that you max out your cache and trigger paging, and then try to ask for some CPU cycles.

But it wasn't that back then we didn't use Linux on the desktop / laptop. It sure did suck more than, say, Windows. But hey, sometimes we have to eat our own dog food. Even then, there sure were some efforts to overcome the limitations of the time. The first and obvious one is the out-of-tree patchsets. But there were also some other efforts to improve the situation.

The first such effort that I can recollect was ulatency. With Linux adding support for control groups, multiple avenues opened up for tackling and taming the resource starvation problem. The crux of the problem was that Linux gave way too much priority to I/O tasks. I still wish Linux had a profile mechanism, where on the kernel command line we could specify which profile Linux should boot into. Anyway, with ulatency, we saw improvements in the Linux desktop experience. ulatency had built-in policies to whitelist / blacklist a set of profiles. For example, KDE was a profile: ulatency would club all KDE processes into a group and give that group higher precedence, ensuring it had its fair share of CPU cycles.
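The core trick ulatency relies on is easy to sketch by hand. This is only an illustration of the mechanism (cgroup v1 layout, a hypothetical group name, and the cgroup root parameterised so it can be tried without root), not ulatency's actual policy engine:

```shell
#!/bin/sh
# Put the current process into a "desktop" CPU group and raise its
# share above the default of 1024, so the group wins more CPU time
# when competing with I/O-heavy tasks.
CGROOT="${CGROOT:-/sys/fs/cgroup/cpu}"

setup_desktop_group() {
    mkdir -p "$CGROOT/desktop"
    echo 2048 > "$CGROOT/desktop/cpu.shares"
    echo "$$" > "$CGROOT/desktop/tasks"
}
```

Run setup_desktop_group as root on a cgroup v1 system; everything spawned from that shell afterwards inherits the group.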

Today, at almost the end of 2014, there are many more consumers of Linux's control groups. Prominent names would be LXC and systemd.

ulatency has hardly seen much development in the last year. Probably it is time for systemd to take over.

systemd is expected to bring lots of features to the Linux world, thus bridging the (many) gaps Linux has had on the desktop. It makes extensive use of control groups for a variety of (good) reasons, which has made it a Linux-only product. I think it should never have marketed itself as the init daemon; it fits better described as the System Management Daemon.

The path of the Linux desktop looks much brighter in 2015 and beyond, thanks to all the advancements that have happened so far. The other important players who should be thanked are mobile and low-latency products (Android, ChromeBook), whose engagement in productizing Linux has led to better features overall.


Lars Wirzenius: 'It Will Never Work in Theory' and software engineering research

Planet Debian - Fri, 26/12/2014 - 10:35

It Will Never Work in Theory is a web site that blogs, though slowly, about important research and findings on software development. It's one of the most interesting sites I've found recently, possibly for a long time.

I disagree with the term "software engineering" to describe the software development that happens today. I don't think it's accurate, and indeed I think the concept is too much of a fantasy for the term to be used seriously about what practicing developers do. For software development to be an engineering discipline, it needs a strong foundation based on actual research. In short, we need to know what works, what doesn't work, and preferably why in both cases. We don't have much of that.

This website is one example of how that's now changing, and that's good. As a practicing software developer, I want to know, for example, whether code review actually helps improve software quality, the speed of software development, and the total cost of a software project, and also what the limits of code review are, how it should be done well, and what kind of review doesn't work. Once I know that, I can decide whether and how to do reviews in my development teams.

The software development field is full of anecdotal evidence about these things. It's also full of people who've done something once and then want to sell books, seminars, and lectures about it. That hasn't been working too well: it makes the research mostly about fads, and that's no way to build a strong foundation.

Now I just need the time to read everything, and the brain to understand big words.


Francois Marier: Making Firefox Hello work with NoScript and RequestPolicy

Planet Debian - Fri, 26/12/2014 - 05:40

Firefox Hello is a new beta feature in Firefox 34 which gives users the ability to do plugin-free video-conferencing without leaving the browser (using WebRTC technology).

If you cannot get it to work after adding the Hello button to the toolbar, this post may help.

Preferences to check

There are a few preferences to check in about:config:

  • media.peerconnection.enabled should be true
  • network.websocket.enabled should be true
  • loop.enabled should be true
  • loop.throttled should be false
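If you prefer, the same preferences can be set from a user.js file in your profile directory. A sketch mirroring the checks above (same preference names; the file itself is an assumption about how you manage your profile):

```
// user.js - mirror of the about:config checks above
user_pref("media.peerconnection.enabled", true);
user_pref("network.websocket.enabled", true);
user_pref("loop.enabled", true);
user_pref("loop.throttled", false);
```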
NoScript

If you use the popular NoScript add-on, you will need to whitelist the following hosts:

  • firefox.com
  • loop.services.mozilla.com
  • opentok.com
  • tokbox.com
RequestPolicy

If you use the less popular but equally annoying RequestPolicy add-on, then you will need to whitelist the following destination hosts:

  • mozilla.com
  • opentok.com
  • tokbox.com

I have unfortunately not been able to find a way to restrict the above to a set of (source, destination) pairs. I suspect that the use of websockets confuses RequestPolicy.


Russ Allbery: pam-krb5 4.7

Planet Debian - Fri, 26/12/2014 - 05:01

It's been a long, long time since the last upstream release. Rather too long, as the changes to the portability and test framework were larger than the changes to the module itself. But there are a few bug fixes here and one new feature.

The new feature is a new option, no_update_user, which disables the normal update of PAM_USER for the rest of the PAM stack to the canonicalized local username. This allows users to do things like enter Kerberos principals at the login prompt and have the right thing happen, but sometimes it's important to keep the authentication credentials as originally entered and not canonicalize them, even if a local canonicalization is available. This new option allows that.
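As a sketch, a PAM configuration using the new option might look like this (a hypothetical /etc/pam.d fragment; adjust module order and other options to your own stack):

```
# /etc/pam.d/common-auth (sketch): keep PAM_USER as originally entered
auth  sufficient  pam_krb5.so minimum_uid=1000 no_update_user
auth  required    pam_unix.so try_first_pass
```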

In the bug-fix department, the module now suppresses spurious password prompts from Heimdal while using PKINIT and understands more Kerberos errors for purposes of try_first_pass support and returning better PAM errors.

The documentation now notes next to each option the version of pam-krb5 at which it was introduced with its current meaning.

You can get the latest version from the pam-krb5 distribution page.


Vasudev Kamath: Notes: LXC How-To

Planet Debian - Fri, 26/12/2014 - 03:41

LXC - Linux Containers allows us to run multiple isolated Linux systems under the same control host. This is useful for testing applications without changing our existing system.

To create an LXC container we use the lxc-create command. It accepts a template option, with which we can choose the OS we would like to run in the virtual isolated environment. On a Debian system I see the following templates supported:

[vasudev@rudra: ~/ ]% ls /usr/share/lxc/templates
lxc-alpine*    lxc-archlinux*  lxc-centos*  lxc-debian*    lxc-fedora*  lxc-openmandriva*  lxc-oracle*  lxc-sshd*    lxc-ubuntu-cloud*
lxc-altlinux*  lxc-busybox*    lxc-cirros*  lxc-download*  lxc-gentoo*  lxc-opensuse*      lxc-plamo*   lxc-ubuntu*

For my application testing I wanted to create a Debian sid container. By default, the template provided by the lxc package creates a Debian stable container. This can be changed by passing options to debootstrap after --, as shown below.

sudo MIRROR=http://localhost:9999/debian lxc-create -t debian \
     -f container.conf -n container-name -- -r sid

The -r switch is used to specify the release, and the MIRROR environment variable is used to choose the required Debian mirror. I wanted to use my own local approx installation, so I could save some bandwidth.

container.conf is the configuration file used for creating the container; in my case it contains basic information on how the container networking should be set up. The configuration is basically taken from the LXC Debian wiki:

lxc.utsname = aptoffline-dev
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0.1
lxc.network.name = eth0
lxc.network.ipv4 = 192.168.3.2/24
lxc.network.veth.pair = vethvm1

I'm using VLAN setup described in Debian wiki: LXC VLAN Networking page. Below is my interfaces file.

iface eth0.1 inet manual

iface br0.1 inet manual
    bridge_ports eth0.1
    bridge_fd 0
    bridge_maxwait 0

Before launching the container, make sure you run the commands below:

sudo ifup eth0.1
sudo ifup br0.1
# Also give ip to bridge in same subnet as lxc.network.ipv4
sudo ip addr add 192.168.3.1/24 dev br0.1

I'm giving an IP address to the bridge so that I can communicate with the container from my control host once it comes up.

Now start the container using the command below:

sudo lxc-start -n container -d -c tty8

We are starting lxc in daemon mode and attaching it to console tty8. If you want, you can drop the -d and -c options to start lxc in the foreground. But it's better to start it in the background and attach using the lxc-console command shown below.

sudo lxc-console -n container -t tty8

You can detach from the console using the Ctrl+a q combination and let lxc execute in the background.

It's also possible to simply ssh into the running container, since we have enabled networking.

Stopping the container should be done using the lxc-stop command, but without the -k (kill) switch this command never returned for me. Even with a timeout, the container is not stopped.

sudo lxc-stop -n container

-r can be used to reboot the container. Since I couldn't get a clean shutdown, I normally attach the console and issue a halt command in the container itself. Not sure if this is the right way, but it gets the thing done.
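Since a clean lxc-stop can hang, a small wrapper along these lines could automate the fallback (my own sketch, not an official LXC tool; the stop command is parameterised only so the logic can be exercised without a real container):

```shell
#!/bin/sh
# Hypothetical helper: ask for a clean shutdown with a grace period,
# and only fall back to the -k (kill) switch if that fails.
# LXC_STOP is overridable purely for testing this sketch.
LXC_STOP="${LXC_STOP:-sudo lxc-stop}"

stop_container() {
    name="$1"
    if $LXC_STOP -n "$name" -t 30; then
        return 0
    fi
    echo "clean shutdown of $name timed out; killing it" >&2
    $LXC_STOP -n "$name" -k
}
```

Usage would be stop_container container-name; the 30 second grace period is an arbitrary choice.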

I consider Linux containers a better alternative for spawning a virtual Linux environment than running a full-blown VM like VirtualBox or VMware.


Russ Allbery: rra-c-util 5.6

Planet Debian - Fri, 26/12/2014 - 01:28

rra-c-util is my personal collection of infrastructure for C and Perl packages. This release has a quite-large accumulation of small fixes, mostly from work Julien ÉLIE has done on merging it into INN and testing INN's portability on a wide variety of platforms.

The highlights:

  • Add a new module-version.t test and helper script for Perl modules that checks that all modules included in a distribution have the correct version.

  • Add a new portable/socket-unix.h portability layer that includes sys/un.h and defines SUN_LEN if the implementation does not do so. (Use in combination with the RRA_MACRO_SUN_LEN Autoconf macro.)

  • Check for integer overflow in vector_join and vector_cjoin.

  • Avoid strlcpy in the getnameinfo and setenv replacements. I'm slowly trying to eliminate strlcpy and strlcat in my code in favor of asprintf and other cleaner replacements following an extensive discussion on the glibc mailing lists.

  • Fix network_addr_match with the empty string on AIX 7.1, whose inet_aton accepts the empty string.

  • network_connect, network_connect_host, and network_client_create all accept "any" as a synonym for "all" as a source, for parallelism with other functions.

  • Add PIPE_READ and PIPE_WRITE macros to util/macros.h to name the elements of the array passed to pipe.

  • Fix Windows portability for socket functions that, on UNIX, want to return an error code of EINVAL.

  • Fix visibility of some of the utility and portability functions.

You can get the latest version from the rra-c-util distribution page.


Dirk Eddelbuettel: rfoaas 0.0.4.20141225 -- not on CRAN

Planet Debian - Thu, 25/12/2014 - 23:28

A new version of rfoaas was prepared for CRAN, but refused on the grounds of having been updated within 24 hours. Oh well.

To recap, the rfoaas package provides an interface for R to the most excellent FOAAS service -- which provides a modern, scalable and RESTful web service for the frequent need to tell someone to eff off.

And having seen the Christmas Eve (i.e. December 24) update, upstream immediately and rather kindly added a new xmas(name, from) function -- so now we could do rfoaas::xmas("CRAN Maintainers") to wish the CRAN Maintainers a very Merry Christmas.

So for once, there is no CRANberries report, as the package is not on CRAN :-/ Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Gizra.com: Yo Hedley!

Planet Drupal - Thu, 25/12/2014 - 23:00

Bingo! I think we're on to something here. It's called yo hedley - and it's one command that brings a true headless Drupal to everybody.

In my last DrupalCon BoF presentation, "Gizra - we've got your headless covered", I took the time to explain why "headless" was in fact mostly a buzzword. While I encourage you to have a look at the presentation, I'm actually more excited to tell you why I feel this is no longer the case.
Go ahead and check the live demo!

Continue reading…


Russ Allbery: C TAP Harness 3.2

Planet Debian - Thu, 25/12/2014 - 22:38

This is a fairly minor release, mostly to have a release of files that I'm updating in rra-c-util. Most of the changes are typos and other cosmetic fixes. But I hate releasing a new version of something without a little bit of new functionality, so I implemented comment support in test lists. The runtests driver now ignores comments (lines starting with #) and blank lines in test list files, and leading whitespace in front of test names.
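A test list using the new syntax might look like this (hypothetical test names; comments, blank lines, and leading whitespace before names are all ignored by runtests):

```
# Tests for the utility library.
util/messages
util/vector

# Temporarily disabled:
#util/network
  docs/pod
```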

You can get the latest version from the C TAP Harness distribution page.

