Elsewhere

Mike Hommey: Announcing git-cinnabar 0.1.1

Planet Debian - Fri, 13/03/2015 - 02:17

0.1.1 is a one-bugfix release, fixing an issue where git-cinnabar could be confused when commands were started from a subdirectory of the repository. It might be the cause of corruption leading to issues such as the impossibility to push.

If you do encounter failures, please report them. Please also paste the output of git cinnabar fsck.

Categories: Elsewhere

John Goerzen: Suggestions for visiting the UK?

Planet Debian - Fri, 13/03/2015 - 02:14

My wife and I have been thinking of visiting the UK for a while, and we’re finally starting to make some plans. I would be grateful to anyone reading this who might have some time to make suggestions on places to go, things to do, etc.

Here’s a bit of background, if it helps.

We have both traveled internationally before; this would actually be the first time either of us has visited another English-speaking country. We are content, and in fact would prefer, to venture outside the most popular touristy areas (though that doesn’t mean we’d want to miss something awesome just because it’s popular).

We will have a little less than a week, most likely. That means staying in one area, or two at the most. Some other tidbits:

  • I particularly enjoy old buildings: churches, castles, cathedrals. A 400-year-old country church off a dusty road would be as interesting to me as Westminster Abbey — I’d consider both awesome to see. Castles that are in a good state of repair would also be great. I’d enjoy getting in the countryside as well.
  • My wife would enjoy literary connections: anything related to Dickens, A. A. Milne, Jane Austen, C. S. Lewis, etc.
  • Although I know London is full of amazing sights, and that is certainly one destination we’re considering, I am also fine with things like Roman ruins in Wales, A. A. Milne’s estate, country churches, sites in Scotland, etc.
  • Ireland / Northern Ireland might be a possibility, but we are mainly focused on Great Britain so far.
  • Would we be able to accomplish this without a car? I understand it can be rather difficult for Americans to drive in the UK.
  • Are there non-conventional lodging options such as bed and breakfasts that might let us get to know our hosts a little better?
  • If it helps understand me better, some highlights from my past trips included the amazing feeling of stepping back in time at Marienkirche in Lübeck or the Museum in der Runden Ecke (a former Stasi building in Leipzig), exploring the old city of Rhodes, or becoming friends with a shop owner in Greece and attending an amateur radio club meeting with him.

Finally, we would probably not be able to go until September. Is that a reasonable time? Is part of September maybe less busy but still decent weather?

Thanks for any advice!

Categories: Elsewhere

Chen Hui Jing: Drupal 101: What I learnt from hours of troubleshooting Feeds

Planet Drupal - Fri, 13/03/2015 - 01:00

Feeds is a very useful module when it comes to importing content into your Drupal site. However, it’s not very forgiving, in that your data has to be formatted just right for the feed to take. This post will run through the basic feed importers and some key points I learnt from hours upon hours of troubleshooting. I’m pretty sure I’ve spent upwards of 50 hours dealing with feeds thus far in my life.

Before I begin, I have a short rant on the importance of content. You could skip directly to the bits on feeds, but then it’ll be less entertaining.

The heart of every website is its content. At least, most of the time. And as much...

Categories: Elsewhere

Joey Hess: 7drl 2015 day 6 must add more

Planet Debian - Fri, 13/03/2015 - 00:02

Last night I put up a telnet server and web interface to play a demo of scroll and send me playtester feedback, and I've gotten that almost solid today. Try it!

Today was a scramble to add more features to Scroll and fix bugs. The game still needs some balancing, and generally seems a little too hard, so I added a couple more spells, and a powerup feature to make it easier.

Added a way to learn new spells. Added a display of spell inventory on 'i'. For that, I had to write a quick windowing system (20 lines of code).

Added a system for ill effects from eating particular letters. Interestingly, since such a letter is immediately digested, it doesn't prevent the worm from moving forwards. So, the ill effects can be worth it in some situations. Up to the player to decide.

I'm spending a lot of time now looking at letter frequency histograms to decide which letter to use for a new feature. Since I've several times accidentally used the same letter for two different things (most amusingly, I assigned 'k' to a spell, forgetting it was movement), I refactored all the code to have a single charSet which defines every letter and what it's used for, be that movement, control, spell casting, or ill effects. I'd like to use that to further randomize which letters are used for spell components, out of a set that have around the same frequency. However, I doubt that I'll have time to do that.

In the final push tonight/tomorrow, I hope to add an additional kind of level or two, make the curses viewport scroll when necessary instead of crashing, and hopefully work on game balance/playtester feedback.

I've written ~2800 lines of code so far this week!

Categories: Elsewhere

Andrew Cater: Windows 8.1 / Debian 8 dual boot on Lenovo laptop

Planet Debian - Thu, 12/03/2015 - 23:16
Just proved this to be feasible on a new SSD. This was done using Windows bootable media and a Jessie netinst. The Debian netinst was copied onto a USB stick.

1. Switch Secure Boot to "off" in BIOS / system settings

2. Set machine to UEFI boot.

3. Install Windows 8.1 to first part of the disk in a custom install. On a 128GB disk, I gave Windows 8.1 30GB.

4. Install Jessie from USB: allow Grub efi to install.

It does work and was a useful demonstration for a friend.
Categories: Elsewhere

Lunar: Paranoia, uh?

Planet Debian - Thu, 12/03/2015 - 23:08

A couple of days ago, The Intercept released new documents provided by Edward Snowden. They show the efforts of the CIA to break the security of Apple platforms.

One of the documents introduces the Strawhorse program: Attacking the MacOS and iOS Software Development Kit:

(S//NF) Ken Thompson's gcc attack […] motivates the StrawMan work: what can be done of benefit to the US Intelligence Community (IC) if one can make an arbritrary modification to a system compiler […]? A (whacked) SDK can provide a subtle injection vector onto standalone developer networks, or it can modify any binary compiled by that SDK. In the past, we have watermarked binaries for attribution, used binaries as an exfiltration mechanism, and inserted Trojans into compiled binaries.

I knew it was a plausible hypothesis, but just reading it in black and white gives me shivers.

Reproducible builds need to become the standard.

Categories: Elsewhere

Enrico Zini: crypttab-reuse-passwords

Planet Debian - Thu, 12/03/2015 - 22:45
Reuse passwords in /etc/crypttab

Today's scenario was a laptop with an SSD and a spinning disk, and the goal was to deploy a Debian system on it so that as many things as possible are encrypted.

My preferred option is to set up one big LUKS partition on each disk, and put an LVM2 Physical Volume inside each partition. At boot, the two LUKS partitions are opened, their contents are assembled into a Volume Group, and I can have everything I want inside.

This has advantages:

  • if any of the disks breaks, the other can still be unlocked, and it should still be possible to access the LVs inside it
  • once boot has happened, any layout of LVs can be used with no further worries about encryption
  • I can use pvmove to move partitions at will between the SSD and the spinning disk, which means I can at any time renegotiate the tradeoffs between speed and disk space.

However, by default this causes cryptsetup to ask for the password once for each LUKS partition, even if the passwords are the same.

Searching for ways to mitigate this gave me unsatisfactory results, like:

  • decrypt the first disk, and use a file inside it as the keyfile to decrypt the second one. But in this case if the first disk breaks, I also lose the data in the second disk.
  • reuse the LUKS session key for the first disk in the second one. Same problem as before.
  • put a detached LUKS header in /boot and use it for both disks, then make regular backups of /boot. It is an interesting option that I have not tried.

The solution that I found was something that did not show up in any of my search results, so I'm documenting it here:

# <target name> <source device> <key file> <options>
ssd             /dev/sda2       main       luks,initramfs,discard,keyscript=decrypt_keyctl
spin            /dev/sdb1       main       luks,initramfs,keyscript=decrypt_keyctl

This caches each password for 60 seconds, so that it can be reused to unlock other devices that use it. The documentation can be found at the beginning of /lib/cryptsetup/scripts/decrypt_keyctl, beware of the leopard™.

main is an arbitrary tag used to specify which devices use the same password.

This is also useful to work easily with multiple LUKS-on-LV setups:

# <target name> <source device>           <key file> <options>
home            /dev/mapper/myvg-chome    main       luks,discard,keyscript=decrypt_keyctl
backup          /dev/mapper/myvg-cbackup  main       luks,discard,keyscript=decrypt_keyctl
swap            /dev/mapper/myvg-cswap    main       swap,discard,keyscript=decrypt_keyctl
Categories: Elsewhere

Bits from Debian: DebConf15: Call for Proposals

Planet Debian - Thu, 12/03/2015 - 22:20

The DebConf Content team is pleased to announce the Call for Proposals for the DebConf15 conference, to be held in Heidelberg, Germany from the 15th through the 22nd of August, 2015.

Submitting an Event

In order to submit an event, you must be registered as an attendee of DebConf15. If you have any questions about the registration process, please check the related information on the conference website.

Once registered, go to "Propose an event" and describe your proposal. Please note, events are not limited to traditional presentations or informal sessions (BoFs). We welcome submissions of tutorials, performances, art installations, debates, or any other format of event that you think would be beneficial to the Debian community.

Please include a short title, suitable for a compact schedule, and an engaging description of the event. You should use the field "Notes for Content Team" to provide us information such as additional speakers, scheduling restrictions, or any special requirements we should consider for your event.

Regular sessions may be either 20 or 45 minutes long (including time for questions) and will be followed by a 10- or 15-minute break, respectively. Other kinds of sessions (like workshops) could have different durations. Please make sure to choose the most suitable duration for your event and to justify any special requests.

Timeline

The first batch of accepted proposals will be announced in May. If you depend on having your proposal accepted in order to attend the conference, please submit it as soon as possible so that it can be considered during this first evaluation period.

All proposals must be submitted before June 15th, 2015 to be evaluated for the official schedule.

Topics and Tracks

Though we invite proposals on any Debian or FLOSS related subject, we will have some broad topics arranged as tracks for which we encourage people to submit proposals. The currently proposed list is:

  • Debian Packaging, Policy, and Infrastructure
  • Security, Safety, and Hacking
  • Debian System Administration, Automation and Orchestration
  • Containers and Cloud Computing with Debian
  • Debian Success Stories
  • Debian in the Social, Ethical, Legal, and Political Context
  • Blends, Subprojects, Derivatives, and Projects using Debian
  • Embedded Debian and Hardware-Level Systems

If you have ideas for further tracks, or would like to volunteer as a track coordinator, please contact content@debconf.org. In order for a track to take place during DebConf15, we must have received a sufficient amount of proposals on that specific theme. Track coordinators will play an important role in inviting people to submit events.

Video Coverage

Providing video of sessions amplifies DebConf achievements and is one of the conference goals. Unless speakers opt-out, official events will be streamed live over the Internet to promote remote participation. Recordings will be published later under the DebConf license, as well as presentation slides and papers whenever available.

Contact and Thanks to Sponsors

DebConf would not be possible without the generous support of all our sponsors, especially our platinum sponsor HP. DebConf15 is still accepting sponsors; if you are interested, please get in touch!

You are welcome to contact the Content Team with any concerns about your event, or with any ideas or questions about DebConf events in general. You can reach us at content@debconf.org.

We hope to see you all in Heidelberg!

Categories: Elsewhere

Drupal Easy: Florida DrupalCamp 2015 - It's SSSSuper

Planet Drupal - Thu, 12/03/2015 - 20:00

For the sixth year in a row, Central Florida will host the Sunshine State's largest gathering of Drupalists for two full days of learning, networking, and sharing at Florida DrupalCamp 2015. To be held Saturday and Sunday, April 11-12, 2015 at Florida Technical College in Orlando, approximately 300 people will gather for a full day of sessions and a full day of community contributions. Attendees will be provided with knowledge, food, and clothing - and maybe a surprise or two as well!



Categories: Elsewhere

Marko Lalic: solicit - an HTTP/2 Library for Rust

Planet Debian - Thu, 12/03/2015 - 18:55

For quite some time now, I've been following Rust, a new(ish) programming language by Mozilla. It's been around for a while, but recently it's been attracting a lot of buzz. Around the new year, I finally decided to take a deeper look into the language.

I find that the best way to learn a new programming language is to simply dive in and try to build something nontrivial. At that point I'd also been interested in learning more about the specifics behind the new HTTP/2 spec, so I decided to combine the two... The result is a (work in progress) client library for HTTP/2 written in Rust, which can be found at this GitHub repository: https://github.com/mlalic/solicit

What follows is a description of what I aim to achieve with the library, as well as an overview of its current design. If that doesn't interest you, you can just check out the GitHub repo for some examples or the source code itself.

Goals

Given that there are already some fairly popular HTTP libraries for Rust (hyper leading the way), I decided that I wouldn't aim to replace them. Rather, the idea is to focus on the lower levels of the HTTP/2 protocol—what is essentially another transport protocol—and expose an API that allows those libraries to use an HTTP/2 connection to send requests and receive responses. HTTP/2 doesn't change the semantics of the protocol, so once any HTTP/1.1 library gets the raw headers and body, it should be able to represent the response regardless of the fact that it came on an HTTP/2 connection. The same holds for sending requests: only the representation of the request changes, not its content.

There are some minor differences, such as HTTP/2 doing away with the reason phrases being part of the response, but those were only meant to be human readable and "MUST NOT" have had any influence on the semantics of the response.

Therefore, the primary goal for the library is to provide an API for using an HTTP/2 connection on the lowest level, but making it easy for clients to customize or augment any particular step of handling a response, without having to know the details of functionality below the layer they want to customize.

As an example, if one wanted to build a client that sends out progress notifications for downloads, it wouldn't be good if they needed to muck about with raw HTTP/2 frames—ideally, they'd only need to implement a new way to handle incoming data chunks for streams. The same applies for implementing a client that streams the download into a file (instead of saving it in memory) or one that sends the chunks to a different component for processing as soon as they're received (e.g. parsing HTML).

Since the Rust standard library only exposes blocking IO primitives, I decided to base this library on blocking IO, as well. However, the goal was to make it flexible enough so that clients that would like to allow multiple requests to be sent concurrently on the same HTTP/2 connection can be built, but without the library itself spawning any threads.

Finally, on a less conceptual level (and more of an implementation level), the code should have extensive test coverage.

Design

The library is split into several layers, each of which provides services to the one above it. A brief description of each layer follows.

Transport Layer

This layer provides the API for raw byte stream communication with the peer.

Essentially, it's a wrapper around the particular transport protocol that is used for the HTTP/2 connection. As the HTTP/2 spec allows the transport protocol to be either a cleartext TCP connection or a TLS-protected one, those are the ones that need to be implemented at this level.

The API of the layer is represented by a single trait TransportStream that simply requires io::Read and io::Write to be implemented and (for now) provides a single convenience method based on the underlying io::Read implementation.

For a clear-text TCP connection, the only thing that was required to implement it was one line of code:

impl TransportStream for TcpStream {}
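A self-contained sketch of this layer's shape, assuming only what the post states (a trait bounded by io::Read and io::Write); the mock type and the read3 helper are invented here for illustration and are not part of solicit:

```rust
use std::io::{self, Cursor, Read, Write};

// A transport is anything that can be read from and written to.
pub trait TransportStream: Read + Write {}

// An in-memory stream, handy for tests; Cursor already implements
// both Read and Write over a Vec<u8>.
struct MockStream(Cursor<Vec<u8>>);

impl Read for MockStream {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        self.0.read(buf)
    }
}

impl Write for MockStream {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        self.0.write(buf)
    }
    fn flush(&mut self) -> io::Result<()> {
        self.0.flush()
    }
}

// The empty impl from the post: any Read + Write type can opt in.
impl TransportStream for MockStream {}

// Use the transport purely through the trait object.
fn read3(stream: &mut dyn TransportStream) -> io::Result<[u8; 3]> {
    let mut buf = [0u8; 3];
    stream.read_exact(&mut buf)?;
    Ok(buf)
}

fn main() {
    let mut s = MockStream(Cursor::new(vec![1, 2, 3]));
    assert_eq!(read3(&mut s).unwrap(), [1, 2, 3]);
    println!("ok");
}
```

Because the connection layer only depends on the trait, a mock like this can stand in for a real TCP socket in tests, which is exactly the swap-ability the post describes.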
HTTP/2 Connection Layer

The connection layer provides an API for reading and writing to a raw HTTP/2 connection. This means reading and writing HTTP/2 frames, not arbitrary bytes.

Each HTTP/2 frame has to follow the same basic structure: a 9-byte header followed by a sequence of bytes representing the payload of the frame. This is represented by the RawFrame struct.

Each HTTP/2 frame type can have its own specific format for the content of the payload and assign meaning to the flags found in the header. Therefore, each frame needs to have its own implementation that can parse a RawFrame, or return an error if the raw frame does not represent a valid frame of that type. Some common methods that each frame type implementation needs to support are given by the Frame trait, which each of those implementations must implement.

Finally, the HttpConnection struct handles reading the byte stream from the underlying TransportStream instance, correctly delineating frames and parsing them into the appropriate frame type implementation (by delegating to the correct Frame implementation based on the frame type found in the header).

Obviously, varying the implementation of the transport layer is a matter of choosing the appropriate TransportStream implementation. This includes using mock implementations for testing the HttpConnection itself.
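For illustration, a hedged sketch of what delineating a raw frame could look like; the 9-byte header layout (24-bit length, 8-bit type, 8-bit flags, 31-bit stream identifier) comes from the HTTP/2 spec, but the struct fields and the parse function here are assumptions, not solicit's actual code:

```rust
// A minimal RawFrame: the common 9-byte header plus the opaque payload.
#[derive(Debug, PartialEq)]
pub struct RawFrame {
    pub length: u32,
    pub frame_type: u8,
    pub flags: u8,
    pub stream_id: u32,
    pub payload: Vec<u8>,
}

// Parse one frame from the start of a buffer, or None if incomplete.
pub fn parse_raw_frame(buf: &[u8]) -> Option<RawFrame> {
    if buf.len() < 9 {
        return None;
    }
    let length = ((buf[0] as u32) << 16) | ((buf[1] as u32) << 8) | buf[2] as u32;
    if buf.len() < 9 + length as usize {
        return None;
    }
    Some(RawFrame {
        length,
        frame_type: buf[3],
        flags: buf[4],
        // The high bit of the stream identifier is reserved; mask it off.
        stream_id: ((buf[5] as u32 & 0x7f) << 24)
            | ((buf[6] as u32) << 16)
            | ((buf[7] as u32) << 8)
            | buf[8] as u32,
        payload: buf[9..9 + length as usize].to_vec(),
    })
}

fn main() {
    // A DATA frame (type 0x0) carrying a 3-byte payload on stream 1.
    let bytes = [0, 0, 3, 0, 0, 0, 0, 0, 1, b'a', b'b', b'c'];
    let f = parse_raw_frame(&bytes).unwrap();
    assert_eq!(f.length, 3);
    assert_eq!(f.stream_id, 1);
    assert_eq!(f.payload, b"abc");
    println!("ok");
}
```

Per-frame-type implementations would then validate and interpret a RawFrame's payload and flags, which is the role the Frame trait plays in the library.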

HTTP/2 Client Connection

The ClientConnection builds upon the HttpConnection by using it to expose functionality that is specific to the client side of an HTTP/2 connection. This includes sending requests and reading responses. The struct takes care of creating the frames that need to be sent for a given request (including HPACK header encoding) and shipping them off to the server (using the underlying HttpConnection instance).

As for handling a response, the ClientConnection expects to get an instance of a struct implementing the Session trait when constructed. This trait essentially defines a number of callbacks: methods that will be called on this instance when the corresponding event occurs in the ClientConnection.

Therefore, the ClientConnection understands the semantics of the particular frames that it reads (e.g. performs the decoding of the HEADERS frames' payloads, passing the raw data received in a DATA frame to the session, etc.). However, it does not know anything about streams; it only passes the stream ID up to the session layer, without thinking about whether the stream is open, valid, etc.

HTTP/2 Session

As mentioned in the previous section, each ClientConnection requires an instance of a struct implementing the Session trait to be passed to it at construction time.

The Session trait defines a number of callbacks that are invoked at the appropriate moments during the processing of frames read on a client connection. For example, new_headers (which takes a list of headers and the stream on which they were received) or new_data_chunk (a buffer and the stream on which the data arrived).

The session is in charge of handling these events as they arise. It is up to it to decide what (if anything) should be done with the data that it receives.
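The callback idea can be sketched as follows; the method names new_headers and new_data_chunk come from the post, while the signatures and the toy session below are assumptions for illustration, not solicit's API:

```rust
type StreamId = u32;
type Header = (Vec<u8>, Vec<u8>);

// A session receives events; it decides what to do with them.
pub trait Session {
    fn new_headers(&mut self, stream_id: StreamId, headers: Vec<Header>);
    fn new_data_chunk(&mut self, stream_id: StreamId, data: &[u8]);
}

// A toy session that just accumulates whatever it is given.
#[derive(Default)]
struct CollectingSession {
    header_count: usize,
    body: Vec<u8>,
}

impl Session for CollectingSession {
    fn new_headers(&mut self, _stream_id: StreamId, headers: Vec<Header>) {
        self.header_count += headers.len();
    }
    fn new_data_chunk(&mut self, _stream_id: StreamId, data: &[u8]) {
        self.body.extend_from_slice(data);
    }
}

fn main() {
    // Simulate the connection layer invoking the callbacks.
    let mut s = CollectingSession::default();
    s.new_headers(1, vec![(b":status".to_vec(), b"200".to_vec())]);
    s.new_data_chunk(1, b"hello");
    s.new_data_chunk(1, b" world");
    assert_eq!(s.header_count, 1);
    assert_eq!(s.body, b"hello world");
    println!("ok");
}
```

The connection layer stays ignorant of what the session does with the events, which is what keeps stream handling customizable.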

There is a default implementation of the Session trait (unimaginatively named DefaultSession), which provides an implementation that tracks which streams are currently open and delegates the processing of new events on particular streams to instances of a Stream trait.

This allows for client implementations to simply vary the implementation of a Stream that they provide to the DefaultSession in order to change how a particular part of a response is handled, but let the DefaultSession take care of tracking which streams are open and valid.

Different implementations of a Session can still exist, if the particular client needs more fine-grained customization over how these events are handled, but for most purposes the default session, along with Stream customization, should probably be enough.

HTTP/2 Stream

Again, as mentioned previously, the DefaultSession implementation of the Session trait relies on a Stream trait providing the handling for events arising on individual streams.

This level should be the one that gets customized most often in alternate implementations.

A default stream implementation, DefaultStream, handles the messages by building up a response in memory. Different implementations could instead send progress notifications on an associated channel, for instance.

HTTP/2 Client

The final layer is that of a client that exposes the methods usually expected from a client HTTP library: sending requests and obtaining responses. This layer is not the primary focus, though, and it only exposes two simple implementations that showcase how the lower-level APIs can be used to build a client. The second purpose it fulfills is as a "sanity check" that the APIs actually do allow the goals to be met.

Therefore, one of the clients allows requests to be sent from concurrently running threads and the responses to those requests to be obtained asynchronously (in the form of a Future<Response>). However, the responses are basic and very raw: they are stored fully in memory (not available until they've been read to the end), and both the body and the headers are exposed as raw byte sequences.

As an example of how this client can be used (as well as the simpler single-threaded version), check out the Examples section in the GitHub repository's README file.

Future Work

Refactoring APIs

As the project was started as a Rust learning exercise, the initial APIs are slightly awkward and fairly inefficient to work with. For example, the solicit::hpack module requires headers to be represented as tuples of Vec<u8>. This means that whenever we need to encode a header list, even though the values might be statically known byte-sequence literals such as (b":method", b"GET"), we need to make a bunch of to_vec calls on the &[u8]s before passing them to the encoder. This leads not only to a really ugly and tedious way of obtaining the encoded representation of the headers, but also to a lot of unnecessary allocations.
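A small illustration of the wart described above; encode here is a stand-in that just counts bytes, not the real solicit::hpack API, so the only point is the ownership requirement on its parameter:

```rust
// Pretend encoder: demands owned Vec<u8> pairs, like the API described
// in the post. (A stand-in, NOT solicit's actual encoder.)
fn encode(headers: &[(Vec<u8>, Vec<u8>)]) -> usize {
    headers.iter().map(|(n, v)| n.len() + v.len()).sum()
}

fn main() {
    // Each literal below is a &'static [u8]; because the encoder demands
    // owned Vec<u8>s, every one needs a to_vec() allocation first.
    let headers = vec![
        (b":method".to_vec(), b"GET".to_vec()),
        (b":path".to_vec(), b"/".to_vec()),
    ];
    assert_eq!(encode(&headers), 16); // 7 + 3 + 5 + 1 bytes handed over
    println!("ok");
}
```

An API taking &[u8] (or a generic AsRef<[u8]>) would let the literals be passed directly, avoiding both the noise and the copies.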

Another API that will see some refactoring and improvement is the TransportStream, along with the HttpConnection, in order to make it possible to have separate threads perform reads and writes on the connection. Currently, the socket is owned by the HttpConnection instance and all writes and reads require a &mut reference, making this impossible. This change would be tackled by bringing Rust's new try_clone method on TcpStream into the TransportStream, and refactoring the HttpConnection to allow splitting it into a write handle and a read handle.
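The try_clone mechanism itself is plain std functionality and can be sketched independently of solicit; this toy echo round-trip shows one thread owning the write handle while another reads from the same socket:

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

// Round-trip one byte through a local echo server, writing from a
// separate thread via a try_clone'd handle to the same socket.
fn echo_roundtrip(byte: u8) -> u8 {
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();

    // Tiny echo server: accept one connection, echo one byte back.
    let server = thread::spawn(move || {
        let (mut conn, _) = listener.accept().unwrap();
        let mut b = [0u8; 1];
        conn.read_exact(&mut b).unwrap();
        conn.write_all(&b).unwrap();
    });

    let mut reader = TcpStream::connect(addr).unwrap();
    // try_clone gives a second handle to the same underlying socket,
    // so another thread can own the "write half" while this one reads.
    let mut writer = reader.try_clone().unwrap();

    let sender = thread::spawn(move || {
        writer.write_all(&[byte]).unwrap();
    });

    let mut b = [0u8; 1];
    reader.read_exact(&mut b).unwrap();
    sender.join().unwrap();
    server.join().unwrap();
    b[0]
}

fn main() {
    assert_eq!(echo_roundtrip(42), 42);
    println!("ok");
}
```

Splitting HttpConnection into read and write handles would essentially push this same split one layer up.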

HTTP/2 Features

As already mentioned, only a small subset of the full HTTP/2 spec is currently implemented, so, obviously, implementing the rest of it falls under the most important future work.

Some near- to mid-term priorities that will be tackled are:

  • TLS-backed connections: a new TransportStream implementation that wraps a TCP socket into a TLS wrapper. This will probably rely on OpenSSL bindings.
  • Support for all HTTP/2 frames: only the three vital frames are implemented so far; the other frames are a prerequisite to properly closing streams, handling closed streams, etc.
  • Signaling stream- and connection-level errors to the peer
  • Flow control: right now this aspect is completely ignored
  • Server push: the idea is that multithreaded clients could handle pushed responses as soon as they arrive, whereas single-threaded ones would be allowed to fetch a list of them every so often.

Obviously, those aren't all the features of HTTP/2, but it's a rough list of things I'm planning on working on myself as the next steps.

Integration With an (existing?) HTTP Library

I would like it if in the end the library or at least some of its pieces ended up being used in the scope of one of the fully featured HTTP libraries. If there's any interest in this, I wouldn't mind putting in some extra legwork to help make it happen.

Conclusion

While I'm certain that there can be better, more efficient, more elegant implementations, I've had a lot of fun working on this so far (learning new things is always fun!) and if nothing else I'll continue in order to gain the deepest possible understanding of HTTP/2. For anyone that might have read all the way through, thanks for reading and I'm open for any sort of feedback!

Categories: Elsewhere

Cheeky Monkey Media: Importing and Exporting Databases with Drush

Planet Drupal - Thu, 12/03/2015 - 18:25

A few weeks ago, I was pulled into a non-Drupal project. As I was configuring the site to run on my local computer, I realized that I had been taking Drupal and Drush for granted. I forgot how easy it is to import and export MySQL databases. If you're a Drupal developer and are not using Drush to import and export your databases, you should be. It will save you time, and it's easy.

Configure Settings.php

Before you attempt to import a new database, make sure you have the database configuration set up properly in settings.php. If you don't have this specified, drush...Read More
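For reference, the usual Drush commands for this workflow look roughly like the following (command names from Drush's sql-* family; the file paths and site aliases are placeholders, and all of this requires a site whose settings.php is already configured):

```shell
# Export the current site's database to a file. Drush reads the
# credentials from settings.php, so no mysqldump flags are needed.
drush sql-dump --result-file=/tmp/mysite.sql

# Import a dump: drop all existing tables, then feed the dump
# into the site's database client.
drush sql-drop -y
drush sql-cli < /tmp/mysite.sql

# Or sync straight from one site alias to another.
drush sql-sync @live @local
```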

Categories: Elsewhere

Jonathan Dowland: R.I.P. Terry Pratchett

Planet Debian - Thu, 12/03/2015 - 17:15

Pratchett and I, around 1998

Terry Pratchett dies, aged 66.

It looks like his last novel will be The Long Utopia, the fourth book in the Long Earth series, co-written with Stephen Baxter.

Categories: Elsewhere

Francesca Ciceri: RIP Terry Pratchett

Planet Debian - Thu, 12/03/2015 - 16:59

“DON’T THINK OF IT AS DYING, said Death. JUST THINK OF IT AS LEAVING EARLY TO AVOID THE RUSH.”

― Terry Pratchett, Good Omens: The Nice and Accurate Prophecies of Agnes Nutter, Witch

Thank you for everything you wrote. Each and every line was a gem, with another gem hidden inside.

Categories: Elsewhere

Erich Schubert: The sad state of sysadmin in the age of containers

Planet Debian - Thu, 12/03/2015 - 14:04
System administration is in a sad state. It is in a mess. I'm not complaining about old-school sysadmins. They know how to keep systems running, and how to manage update and upgrade paths. This rant is about containers, prebuilt VMs, and the incredible mess they cause, because their concept lacks notions of "trust" and "upgrades".

Consider for example Hadoop. Nobody seems to know how to build Hadoop from scratch. It's an incredible mess of dependencies, version requirements and build tools. None of these "fancy" tools can still be built by a traditional make command. Every tool has to come up with its own, incompatible, non-portable "method of the day" of building. And since nobody is still able to compile things from scratch, everybody just downloads precompiled binaries from random websites. Often without any authentication or signature. NSA and virus heaven. You don't need to exploit any security hole anymore. Just make an "app" or "VM" or "Docker" image, and have people load your malicious binary onto their network.

The Hadoop wiki page of Debian is a typical example. Essentially, people gave up in 2010 on being able to build Hadoop from source for Debian and offer nice packages.

To build Apache Bigtop, you apparently first have to install puppet3. Let it download magic data from the internet. Then it tries to run sudo puppet to enable the NSA backdoors (for example, it will download and install an outdated precompiled JDK, because it considers you too stupid to install Java). And then hope the gradle build doesn't throw a 200-line useless backtrace. I am not joking. It will try to execute commands such as:

/bin/bash -c "wget http://www.scala-lang.org/files/archive/scala-2.10.3.deb ; dpkg -x ./scala-2.10.3.deb /"

Note that it doesn't even install the package properly, but extracts it to your root directory. The download does not check any signature, not even SSL certificates. (Source: Bigtop puppet manifests)

Even if your build would work, it will involve Maven downloading unsigned binary code from the internet, and using that for building.

Instead of writing clean, modular architecture, everything these days morphs into a huge mess of interlocked dependencies. Last I checked, the Hadoop classpath was already over 100 jars. I bet it is now 150, without even using any of the HBaseGiraphFlumeCrunchPigHiveMahoutSolrSparkElasticsearch (or any other of the Apache chaos) mess yet. "Stack" is the new term for "I have no idea what I'm actually using". Maven, ivy and sbt are the go-to tools for having your system download unsigned binary data from the internet and run it on your computer.

And with containers, this mess gets even worse. Ever tried to security-update a container? Essentially, the Docker approach boils down to downloading an unsigned binary, running it, and hoping it doesn't contain any backdoor into your company's network. Feels like downloading Windows shareware in the 90s to me. When will the first Docker image appear which contains the Ask toolbar? The first internet worm spreading via flawed Docker images?

Back then, years ago, Linux distributions were trying to provide you with a safe operating system. With signed packages, built from a web of trust. Some even work on reproducible builds. But then, everything got Windows-ized. "Apps" were the rage, which you download and run without being concerned about security, or about the ability to upgrade the application to the next version. Because "you only live once".
Categories: Elsewhere

Hideki Yamane: localized directory name is harmful

Planet Debian - Thu, 12/03/2015 - 13:51
Summary: the xdg user directory spec is broken, and I want to fix it.

One of the annoying things in a Linux desktop environment is localized user directories (e.g. $HOME/ダウンロード instead of $HOME/Download). I know this is handled by XDG; my ~/.config/user-dirs.dirs has the settings below (by default).

XDG_DESKTOP_DIR="$HOME/デスクトップ"
XDG_DOWNLOAD_DIR="$HOME/ダウンロード"
XDG_TEMPLATES_DIR="$HOME/テンプレート"
XDG_PUBLICSHARE_DIR="$HOME/公開"
XDG_DOCUMENTS_DIR="$HOME/ドキュメント"
XDG_MUSIC_DIR="$HOME/音楽"
XDG_PICTURES_DIR="$HOME/画像"
XDG_VIDEOS_DIR="$HOME/ビデオ"
So the XDG utility program applies this setting and moves ~/Desktop to ~/デスクトップ. However, that is NOT what users want.

Because if you work in a shell, you then have to type those localized characters in your terminal. Imagine: sometimes I download files with a browser, then use them in a terminal - download some tar.xz file and extract it. Well, it was downloaded into the ~/ダウンロード directory... it's not convenient - you cannot use tab completion without an IME (input method editor), e.g. Anthy or Mozc for Japanese. And if you don't want to install one, what do you do? Use the mouse to copy&paste?

Then, some Japanese people have created webpages about "How to change localized user directory names back to English" ;)

Windows and OS X have localized Desktop, Download and Document directories, too. But the directory itself is not renamed. On Windows, you see "デスクトップ" (Desktop) in Windows Explorer, but it appears as just a "Desktop" directory in cmd.exe (I don't know how they do that trick. Maybe Windows registry magic). And on OS X, the same thing happens with Finder. Those users aren't annoyed by localized directory names; it's friendly and convenient for cmd/shell users, and also for GUI (Windows Explorer / OS X Finder) users.


Also, it seems that localized directory names are good for average users who use a file manager (Nautilus, etc). Labels in English only are not friendly to non-English natives like my mother ;) (Yes, localization is important!) But it would be _better_ for *everyone* to name the directories in English _and_ provide localized links to them (which would probably not harm anyone).

So, lazyweb, could you tell me where I should go to discuss this issue with upstream? It's not a big thing, but still annoying. I can change it with xdg-user-dirs-update in my own environment, but that's just a workaround, not a solution to this problem.
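For the record, the usual per-user workaround (a sketch of the approach only, not a fix for the underlying problem) is to regenerate the XDG user directories under an English locale and then tell xdg-user-dirs to stop renaming them on login:

```sh
# Recreate the XDG user directories with untranslated names:
LC_ALL=C xdg-user-dirs-update --force

# Move any existing content over by hand, e.g.:
#   mv ~/ダウンロード/* ~/Downloads/ && rmdir ~/ダウンロード

# Keep xdg-user-dirs from localizing the names again at next login:
echo 'enabled=False' > ~/.config/user-dirs.conf
```

This leaves the GUI labels in English too, of course - which is exactly the trade-off the post is complaining about.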
Categories: Elsewhere

Code Karate: Drupal 7 Excluding Node ID from URL

Planet Drupal - Thu, 12/03/2015 - 13:11
Episode Number: 197

In this installment of the Daily Dose of Drupal, we are looking not at a module, but rather at how to exclude a node from a view using the node/content ID.

The video explanation will put a lot more context around exactly what I mean, but the general idea is that the view will exclude the current node ID (grabbed from the URL). In other words, if you are on a page about grasshoppers, a view in the sidebar that displays other insects won't have the grasshopper listed (since you are already on that page).

Tags: DrupalBlocksContent TypesViewsDrupal 7Drupal Planet
Categories: Elsewhere

Gergely Nagy: Lost in a maze, looking for cheese

Planet Debian - Thu, 12/03/2015 - 11:18
Apology

As a twisted turn of fate, I need to start my platform with an apology. Last year, I withdrew my nomination because my life turned upside down; the effects of that are still felt today, and will continue to have a considerable footprint for the rest of my life. But this is not what I want to apologise for! I want to apologise because my nomination, my platform, and my plans are rather uncommon - or so I believe. Unlike past Project Leaders, if elected, I will not have as much time to travel as some of them did. Nor will I have the resources to do all the things one may expect from a Project Leader. But - as you'll read later - I consider this both the biggest weakness and the greatest strength of my platform.

Furthermore, I will not be available from the second half of the voting period until after the new Project Leader's term starts: I'll be getting married, and will enjoy a nice honeymoon, and as such, my email will remain unread from the 9th of April until we're back on the 21st of April.

On the mouse behind the keyboard

My name is Gergely Nagy, or algernon, for those of you who may know me by my online presence. I used to be a lot of things: a flaming youth, an application manager, package maintainer, upstream, ftp-assistant, a student, a mentor, a hacker. In the end, however, I am but a simple, albeit sometimes crazy, person. I imagine a lot of people reading this will not know me, and that is actually not only fine but, in my case, desirable.

The Role of the Project Leader

The role of the Project Leader, what people perceive about the role, and what is actually done by one single person changed dramatically over time. In some ways for the better, some ways for the worse. It should be sufficiently clear by now that if we combine all the things past Project Leaders have done, and what people expect the Project Leader to do, then not even all the time in the world would be enough to accomplish everything. On the other hand, we should not cut the responsibilities back to constitution-granted ones only.

There has been some discussion about this very topic on debian-project@, back in February, with excellent observations by both past Project Leaders and the current one. I encourage my dear readers to stop reading this platform now, and read at least Lucas's mail. I'll be right here when you get back.

Back? Good.

You may notice that, unlike in previous years, I do not have a Grand Vision - not in the same sense, at least. No matter what I envision, however noble that vision may be, it is not the Project Leader's job to save the world, so to say. While I still believe that we have serious problems with motivation, inspiration and innovation within the project, the Project Leader is the wrong person to try to solve these issues.

Apart from the constitutionally required responsibilities, the primary purpose of the Project Leader in my opinion, is to be an enabler: the Project Leader is not a front runner to lead the herd to victory, but a gentle shepherd to make them happy.

Happy

There are many ways to make people happy. One such way is to enable them to pursue their passion - and not just enable them, but encourage and fuel that passion, too! -, to remove barriers, to empower them, or in a lot of cases, to include them in the first place. My vision is a project that is greater than just an amazing distribution. More than a role model for technical excellence. I want to do away with territorial pissings (with which I don't mean to imply that this would be the general atmosphere within the project - far from it!), and would rather see a warm, welcoming culture within the project - the kind of "I'm home!" feeling I had at the DebConfs I had the privilege to attend -, on public and private media alike.

That is the vision I have, but I can't do it. At best, I can hope to enable people much better at these things to do what needs to be done. I wish to take the burden of administration and bureaucracy off their shoulders, so they can focus on what they do best. I feel that this is the most important part of being a Project Leader: to enable the project to grow. And this is why I feel that most of you not knowing my name is an advantage here: when you look at a beautiful flower, you admire the flower itself, rarely the gardener.

In the end

I wish to be the Project Leader no one remembers. I'd rather see people remember all the great things the Project - as a whole - accomplished, for there are many. My purpose is to let You pursue your passion, which in turn enables even more people to pursue their happiness, and allows the greater Free Software community to bask in the warm glow of accomplished dreams.

Categories: Elsewhere

Matthew Garrett: Vendors continue to break things

Planet Debian - Thu, 12/03/2015 - 11:03
Getting on for seven years ago, I wrote an article on why the Linux kernel responds "False" to _OSI("Linux"). This week I discovered that vendors are making use of another behavioural difference between Linux and Windows to change the behaviour of their firmware, breaking things in the process.

The ACPI spec defines the _REV object as evaluating "to the revision of the ACPI Specification that the specified \_OS implements as a DWORD. Larger values are newer revisions of the ACPI specification", ie you reference _REV and you get back the version of the spec that the OS implements. Linux returns 5 for this, because Linux (broadly) implements ACPI 5.0, and Windows returns 2 because fuck you that's why[1].

(An aside: To be fair, Windows maybe has kind of an argument here because the spec explicitly says "The revision of the ACPI Specification that the specified \_OS implements" and all modern versions of Windows still claim to be Windows NT in \_OS and eh you can kind of make an argument that NT in the form of 2000 implemented ACPI 2.0 so handwave)

This would all be fine except firmware vendors appear to earnestly believe that they should ensure that their platforms work correctly with RHEL 5 even though there aren't any drivers for anything in their hardware and so are looking for ways to identify that they're on Linux so they can just randomly break various bits of functionality. I've now found two systems (an HP and a Dell) that check the value of _REV. The HP checks whether it's 3 or 5 and, if so, behaves like an old version of Windows and reports fewer backlight values and so on. The Dell checks whether it's 5 and, if so, leaves the sound hardware in a strange partially configured state.
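The sort of check described above looks something like this in ASL (an illustrative reconstruction, not a dump of either vendor's actual DSDT; the OSVR name is made up):

```
If (LEqual (\_REV, 0x05))
{
    // OS reports ACPI 5.0, so assume Linux and take the "old OS" path:
    // fewer backlight levels, differently configured audio, etc.
    Store (0x03, OSVR)
}
```

With _REV pinned to 2 on Linux, conditions like this simply never fire.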

And so, as a result, I've posted this patch which sets _REV to 2 on X86 systems because every single more subtle alternative leaves things in a state where vendors can just find another way to break things.

[1] Verified by hacking qemu's DSDT to make _REV calls at various points and dump the output to the debug console - I haven't found a single scenario where modern Windows returns something other than "2"

comments
Categories: Elsewhere

Bits from Debian: Debian Project Leader elections 2015

Planet Debian - Thu, 12/03/2015 - 11:00

It's that time of year again for the Debian Project: the election of its Project Leader! Starting on April 1st, and during the following two weeks, the Debian Developers will vote to choose the person who will guide the project for one year. The results will be published on April 15th and the term of the new Project Leader will start on April 17th, 2015.

Lucas Nussbaum, who has held the office for the last two years, won't be seeking re-election this year, and Debian Developers will have to choose between three candidates:

Gergely Nagy and Neil McGovern have run for DPL in past years; it's the first run for Mehdi Dogguy.

The campaigning period started today and will last until March 31st. The candidates are expected to engage in debates and discussions on the debian-vote mailing list where they'll reply to questions from users and contributors.

Categories: Elsewhere

Joey Hess: 7drl 2015 day 5 type directed spell system development

Planet Debian - Wed, 11/03/2015 - 23:51

I want my 7drl game Scroll to have lots of interesting spells. So, as I'm designing its spell system, I've been looking at the types, and considering the whole universe of possible spells that fit within the constraints of the types.

My first thought was that a spell would be a function from World -> World. That allows any kind of spell that manipulates the game map. Like, for instance, a "whiteout" that projects a stream of whitespace from the player's mouth.

Since Scroll has a state monad, I quickly generalized that; making spell actions a state monad M (), which lets spells reuse other monadic actions, and affect the whole game state, including the player. Now I could write a spell like "teleport", or "grow".

But it quickly became apparent this was too limiting: while spells could change the World map, the player, and even the list of supported spells, they had no way of prompting for input.

I tried a few types of the Event -> M () variety, but they were all too limiting. Finally, I settled on this type for spell actions: M NextStep -> M NextStep.

And then I spent 3 hours exploring the universe of spells that type allows! To understand them, it helps to see what a NextStep is:

type Step = Event -> M NextStep

data NextStep = NextStep View (Maybe Step)

Since NextStep is a continuation, spells take the original continuation, and can not only modify the game state, but can return an altered continuation. Such as one that prompts for input before performing the spell, and then calls the original continuation to get on with the game.
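To make that shape concrete, here is a minimal, self-contained sketch - not Scroll's actual code; Event, View, the Int game state and promptSpell are all stand-ins - of a spell that injects a prompt before resuming the original continuation:

```haskell
import Control.Monad.State

-- Toy stand-ins for Scroll's types.
type Event = Char
type View  = String
type S     = Int          -- pretend game state
type M     = State S

type Step = Event -> M NextStep
data NextStep = NextStep View (Maybe Step)

-- A spell transforms the game's continuation. This one prepends a
-- prompt to the view, waits for one more Event, records it in the
-- state, and only then resumes the original next step.
promptSpell :: M NextStep -> M NextStep
promptSpell cont = do
    NextStep v ms <- cont
    let resume ev = do
            modify (+ fromEnum ev)   -- use the input somehow
            maybe (return (NextStep v Nothing)) ($ ev) ms
    return $ NextStep ("Cast with which ingredient? " ++ v) (Just resume)
```

Because the spell only sees and returns M NextStep, it composes with any other spell of the same type - which is exactly what makes the design below possible.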

That let me write "new", a most interesting spell, that lets the player add a new way to cast an existing spell. Spells are cast using ingredients, and so this prompts for a new ingredient to cast a spell. (I hope that "new farming" will be one mode of play to possibly win Scroll.)

And, it lets me write spells that fail in game-ending ways. (Ie, "genocide @"). A spell can cause the game to end by returning a continuation that has Nothing as its next step.

Even better, I could write spells that return a continuation that contains a forked background task, using the 66-line continuation-based threading system I built in day 3. This allows writing lots of fun spells whose effects last for a while. Things like letting the player quickly digest letters they eat, or slowing down the speed of events.

And then I thought of "dream". This spell stores the input continuation and game state, and returns a modified continuation that lets the game continue until it ends, and then restores from the point it saved. So, the player dreams they're playing, and wakes back up where they cast the spell. A wild spell, which can have different variants, like precognitive dreams where the same random numbers are used as will be used upon awaking, or dreams where knowledge carries back over to the real world in different ways. (Supports Inception too..)

Look how easy it was to implement dreaming, in this game that didn't have any notion of "save" or "restore"!

runDream :: M NextStep -> M NextStep -> (S -> S) -> M NextStep
runDream sleepcont wakecont wakeupstate = go =<< sleepcont
  where
    go (NextStep v ms) = return $ NextStep v $ Just $ maybe wake (go <=<) ms
    wake _evt = do
        modify wakeupstate
        wakecont

I imagine that, if I were not using Haskell, I'd have just made the spell be an action, that can do IO in arbitrary ways. Such a spell system can of course do everything I described above and more. But, I think that using a general IO action is so broad that it hides the interesting possibilities like "dream".

By starting with a limited type for spells, and exploring toward more featureful types, I was able to think about the range of possibilities of spells that each type allowed, be inspired with interesting ideas, and implement them quickly.

Just what I need when writing a roguelike in just 7 days!

Categories: Elsewhere
