Elsewhere

Matthew Palmer: Doing Password Complexity Wrong

Planet Debian - Tue, 08/07/2014 - 07:00

I just made an account on yet another web service. On the suggestion of my password manager, I attempted to use the password “W:9[$X*F”. It was rejected because “Password must contain at least one non-alphabet character, one lowercase letter, one uppercase letter”. OK, how about “Passw0rd”? Yep, that’s fine.

Anyone want to guess which of those two passwords is going to fall victim to a brute-force attack first? Go on, don’t be shy, take a wild shot in the dark!
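
For a sense of scale, here is a rough back-of-the-envelope sketch (the dictionary and mangling-rule sizes below are illustrative assumptions, not measurements):

<?php
// Compare brute-force search spaces for the two passwords above.
$random_space = pow(94, 8);   // 8 characters drawn from ~94 printable ASCII characters
$dictionary   = 100000;       // assumed size of a common-word dictionary
$mangles      = 1000;         // assumed leetspeak/case/digit variations tried per word
printf("Random 8-char password   : ~%.1e candidates\n", $random_space);
printf("'Passw0rd'-style password: ~%.1e candidates\n", $dictionary * $mangles);
?>

Roughly 6e15 candidates versus 1e8: the "compliant" password is the one that falls first.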


Kristian Polso: Fix Drupal Registry with Registry Rebuild

Planet Drupal - Tue, 08/07/2014 - 06:45
It has happened to all of us. You mistakenly remove a module directory or migrate your site and forget to include some necessary modules. This causes your Drupal site to show only the WSOD (white screen of death) and perhaps the following error:

Joey Hess: laptop death

Planet Debian - Tue, 08/07/2014 - 02:00

So I was at Ocracoke island, camping with family, and I brought my laptop along as I've done probably half a dozen times before. An enormous thunderstorm came up. It rained for 8 hours and thundered for 3 of those. Some lightning cracked quite close by as we crouched in the food tent, our feet up off the increasingly wet ground "just in case". The campground flooded. Luckily we were camped in the dunes and our tents mostly avoided being flooded with 2-3 inches of water. (That was just the warmup; a hurricane hit a week after we left.)

My laptop was in my tent when this started, and I got soaked to the skin just running over there and throwing it up on the thermarest to keep it out of any flooding and away from any drips. It seemed ok, so best not to try to move it to the car in that downpour.

Next time I checked, it turned out the top vent of the tent was slightly open and dripping. The laptop bag was damp. But inside it seemed ok. Rain had slackened to just heavy, so I ran it down to the car. Laptop appeared barely damp, but it was hard to tell as I had quite forgotten what "dry" was. Turned it on for 10 seconds to check the time. It was 7:30 and we still had to cook dinner in this mess. Transferred it to a dry bag.

(By the way, in some situations, discovering you have a single dry towel you didn't know you had is the best gift in the world!)

Next morning, the laptop was dead. When powered on, the fan came on full, the screen stayed black, and after a few seconds it turned itself back off.

I need this for work, so it was a crash priority to get it fixed or a replacement. Before I even got home, I had logged onto Lenovo's website to check warranty status and found 2 things:

  1. They needed some number from a sticker on the bottom of my laptop. Which was no longer there.
  2. The process required some strange login on an entirely different IBM website.

At this point, I had a premonition of how the bureaucracy would go. Reading Sesse's Blehnovo, I see I was right. I didn't even try. I ordered a replacement with priority shipping.

When I got home, I pulled the laptop apart to try to debug it. I still don't know what's wrong with it. The SSD may be damaged; it seems to cause anything I put it into to fail to work.

New laptop arrived in 2 days. Since this model is now a year old, it was a few hundred dollars cheaper this time around. And now I have an extra power supply, a replacement keyboard, a replacement fan, etc. And I've escaped the dead USB port and broken rocker switch of the old laptop too.

The only weird thing is that, while my old laptop had no problem with my Toshiba passport USB drive, this new one refuses to recognize it unless I plug it into a USB 1.0 hub. Oh well..


Steinar H. Gunderson: Blehnovo

Planet Debian - Tue, 08/07/2014 - 00:00

Here's my own little (ongoing) story about Lenovo's customer support; feel free to skip if you don't like rants. (You may remember that it took me several months to get to actually buy this laptop in the first place.) Everything within “quotes” is an actual quote from Lenovo, except where otherwise noted.

May 30th: My laptop accidentally goes into the ground, and the screen cracks. Gah. Oh well, I'll be without a laptop over the weekend, but I have this nice accident warranty and NBD thing from Lenovo, right? I go to their support web site; they recommend that I register with IBM and file a service ticket. I do so. Their site says I will receive a confirmation email within ten minutes.

Jun 1st: I realize I haven't received anything from Lenovo or IBM, despite 36 hours passing. Oh well.

Jun 2nd: The web system claims Lenovo has “successfully contacted” me several times, despite me never hearing anything from them.

Jun 3rd: I call Lenovo. They don't speak any English. They say there's an error in the “type” I've given them; seemingly “X240” is an invalid type, I needed to write “20AL”. I get it corrected.

Jun 4th: Lenovo calls. I talk to them in German and explain what happened (again). They say that I have the choice between paying €150 + parts and sending it in, or €450 + parts to have a serviceman come to me. (I am not 100% sure these numbers are correct, but they're in the right ballpark.) I say that this sounds very weird since I have accident insurance, but the guy from Lenovo seems unfazed and says they will only cover things under warranty if it's a design mistake. Eventually I say that sure, I'll pay for the serviceman; I just want my laptop fixed, fast. They ask for photos of the damage, which I send immediately.

Jun 6th: A week after the damage, and nothing has happened.

Jun 12th: Still nothing has happened. I press the “escalate” button on the web page.

Jun 18th: Still nothing has happened. I send Lenovo email asking what the heck is going on. My case now changes to “the customer will send the machine in to the depot for servicing” (not an exact quote; I don't have this text anymore), and I get an email with an address. I reply asking why on Earth this is, quoting their web page for saying “If you are entitled to Onsite Warranty, your Accidental Damage Protection claim may be repaired at your location”.

Later that day, Lenovo calls me again. It turns out they have no extended warranty or insurance registered on me. They ask me to provide “proof of purchase”, and give me a new case number (since the old one is now seemingly locked into a “will send to depot” situation). I send them the warranty email they originally sent me, including a long warranty code (20 alphanumeric digits) and a PIN. (In passing, I notice that due to a very delayed shipment, this warranty seemingly started running a month or so before I actually received the laptop, so the so-called 4-year warranty is seemingly 3 years 11 months. Oh well.)

Jun 19th: I am contacted by Lenovo. They say this information is not good enough as “proof of purchase”. They reiterate that I need to send them “proof of purchase”. I send them every single email I have ever received from them regarding my purchase.

Jun 24th: Nothing has happened. I email Lenovo asking for a status update. I get an email saying they have “forwarded all the needed information to the warranty service, so that the extended warranty will be registered”. All I can do is wait.

Jun 30th: I miss a telephone call from Lenovo. I get an email saying they'll close the case in two days. I call them, choosing English in the telephone menu. I get to a polite gentleman who speaks English well, but all the case notes are in German, so he can't make heads or tails of my case. He says he'll have the technician responsible for my case call me back.

He really does call me back the same day. He says what they have received is not valid as “proof of purchase”. I become agitated over the phone, pointing out that it should not be my problem if their internal systems are messed up; I've obviously paid 228 CHF for something. He claims to understand, but says that the systems will not work without a “proof of purchase”. He says I need to call Digital River (the company that operates shop.lenovo.ch). He gives me their telephone number. I think it looks funny, and ask him if this is really the right number; he says oh, no, that's the German one, not the Swiss one. He gives me the Swiss one. I call the number and it's for some completely different company, so I try the German one. It gives me a telephone menu, which says that for ThinkPad warranty questions, I need to call <some number>. I call that number; it's for Lenovo tech support in Germany. The tech on the other end of the line does not understand why Digital River would send me to Lenovo for warranty questions, but gives me their Swiss number and email address. The Swiss number is indeed correct, but just sends me to exactly the same menu. I send them an email.

On a whim, I check my warranty page on lenovo.com. It clearly says I have the extended warranty properly registered already! I forward a screenshot to Lenovo.

Jul 1st: I get an email from Lenovo: “Although the warranty appears on Lenovo website to be ok, please send us the proof of purchase from the extended warranty, so we can register it in our database(it appears NOT to be registered). Thank you.”

Jul 3rd: I get an email from Digital River, pointing me to a web page where I can print out some very nondescript-looking bill. I make a PDF out of it and send it to Lenovo.

Jul 7th: I still haven't heard anything from Lenovo. But! Now I am in Norway on vacation, which means I have a new trick up my sleeve: I call Lenovo Norway. I describe the case. The man says that this won't be covered by warranty, and I point out that I have accident insurance. He says (my translation/paraphrasing): “Oh, you're right, it does show up in this other system here! Don't worry, we'll fix this.” He asks me to send him an email with the screenshot of the warranty. I do so. He opens a new case, tells me that I'll have to send it in (seemingly onsite is only for warranty coverage after all?), but that it'll usually take less than a week. I receive an email with a link to DHL for ordering pickup, packaging instructions and pre-filled customs documents. It also has a form where I am supposed to briefly describe the case again (sure), say what I want them to do if the SSD is damaged (give it back to me unrepaired so I can do my own rescue; no Windows 8.1 reimaging, please) and write down all my passwords (fat chance).

So, there we are. Seven minutes with Lenovo Norway got me where 38 days of talking to Lenovo Switzerland/Germany couldn't—now let's just hope that DHL actually picks it up tomorrow and that I get it repaired and back within reasonable time.

The end? I hope.


Drupal.org Featured Case Studies: Newstica

Planet Drupal - Mon, 07/07/2014 - 22:51
Completed Drupal site or project URL: http://www.newstica.com

Newstica.com is an intelligent news reading application operated by a Canadian company. The website collects hundreds of news stories daily and creates a unique set of articles on each page view with the use of sophisticated algorithms that operate off individual users' preferences.

Key modules/theme/distribution used: Panels, Views, Zen, Feeds, Feeds XPath Parser
Team members: highvrahos

Mediacurrent: Using Sass Breakpoints Effectively

Planet Drupal - Mon, 07/07/2014 - 22:02

There have been plenty of blog posts touting the reasons to use Sass as a CSS preprocessor, and if you've been doing responsive design for a while, you're probably already using the Breakpoint gem with Sass. But there are many ways to use both of these tools, so let's talk about using breakpoints effectively. 

Start with the small screen first, then expand until it looks like sh*t. Time for a breakpoint!
- Stephen Hay.


Chuva Inc.: Entity Metathing what? -- A very brief introduction on entity_metadata_wrappers

Planet Drupal - Mon, 07/07/2014 - 21:57

Are you familiar with entity_metadata_wrappers? If you’re not, oh boy, you should be!

Entity Metadata Wrapper is the right way - and, after you get the hang of it, the easiest way - for you to manipulate anything with a field when coding your module. Sure, since the old days of CCK we are used to dealing with our fields in our nodes. Except they are a little messy.

Cleaner code!

Instead of doing this:

<?php
$first_name = '';
if (!empty($node->field_first_name)) {
  $first_name = $node->field_first_name[LANGUAGE_NONE][0]['value'];
}
?>

Let’s condense that, shall we?

<?php
$node_wrapper = entity_metadata_wrapper('node', $node);
$first_name = $node_wrapper->field_first_name->value();
?>

Sure, the name “metadata wrapper” may be a little intimidating, but it does shorten your code and make it clearer. Oh, and if you have an entity reference field, or a file field, you can just do this:

<?php
$image = $node_wrapper->field_image->value();
?>

And $image is already a loaded file object, not a useless “fid”.

Wrappers for dealing with entity reference: cleaner-er code!

Suppose you have two node types: Employee and Department. There is an Entity reference field from "Employee" to "Department" and on the "Department" node you have a field called "field_dept_phone" that stores the phone number. (for simplicity, I'm assuming that field_employee_dept is required).

If you have the $employee node, how do you fetch the phone number?

Hard way:

<?php
$phone = '';
$department = node_load($employee->field_employee_dept[LANGUAGE_NONE][0]['target_id']);
if ($department && !empty($department->field_dept_phone[LANGUAGE_NONE][0]['value'])) {
  $phone = $department->field_dept_phone[LANGUAGE_NONE][0]['value'];
}
?>

And the wrapper way:

<?php
$wrapper = entity_metadata_wrapper('node', $employee);
$phone = $wrapper->field_employee_dept->field_dept_phone->value();
?>
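
The wrapper can also write values, and it copes fine when a reference is empty. Here is a minimal sketch reusing the example field names from above, in case field_employee_dept is not guaranteed to be filled in (the value 'Alice' and the assumption that the employee node carries field_first_name are purely illustrative):

<?php
$wrapper = entity_metadata_wrapper('node', $employee);
// Guard against an empty entity reference instead of assuming the field is required.
$dept_wrapper = $wrapper->field_employee_dept;
if ($dept_wrapper->value()) {
  $phone = $dept_wrapper->field_dept_phone->value();
}
// Wrappers write as well as read: update a field and save the underlying node.
$wrapper->field_first_name->set('Alice');
$wrapper->save();
?>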

 

Now what?

Well, this post is not intended to be a full entity metadata wrapper course, so, if I have convinced you, take 15 minutes of your day and do this:

  1. Download Entity API from http://drupal.org/project/entity
  2. Read this, now: https://drupal.org/node/1021556
  3. Your quality of life will improve, proportionally to your code quality!

Photo credits: https://www.flickr.com/photos/81564552@N00/3208209972/

Tags: PHP, entity, entity_metadata_wrappers, drupal planet

Drupal governance announcements: Shared Values and the Drupal Community

Planet Drupal - Mon, 07/07/2014 - 21:50

Dries recently wrote a blog post about the challenges of fostering diversity and inclusivity in the Drupal community. This is the latest installment of a conversation that’s been going on for years.

In 2012, a group of Drupal community members worked together to draft a Code of Conduct that could be used to supplement the Drupal community’s Code of Conduct at DrupalCon and other in-person events.

This effort prompted a large (and sometimes heated) conversation that involved people from all corners of the Drupal community. This conversation was a difficult one, and many of us disagreed about many different things, but ultimately, we all agreed on several general principles:

  • We are a group of diverse people from a wide variety of ethnic, cultural, and religious backgrounds, and we embrace that.
  • Making all attendees feel welcome and included at DrupalCon is everyone’s job.
  • We treat each other with dignity and respect.
  • We take responsibility for our words and actions and the impact that they may have on others.

These principles informed the DrupalCon Code of Conduct, which was ratified by the Drupal Association in the summer of 2012 and has been used at every DrupalCon since.

At the last few DrupalCons, there have been a number of reported incidents, including groping, sexual harassment, physical assault, inappropriate comments made about female speakers, and more. While we are grateful that these incidents are being reported, even a single incident is too many.

In early 2013, the Community Working Group was chartered by Dries to uphold the Drupal Code of Conduct and to maintain a friendly and welcoming community for the Drupal project.

As a community, it’s important that we always keep our shared principles and values in mind when interacting with others, whether that be in person at DrupalCon, in the issue queues on Drupal.org, on IRC, or via social media. As the DrupalCon Code of Conduct states, the purpose is not to restrict the diversity of ideas and expression, but instead to ensure that there is a place for everyone in the Drupal community who agrees to abide by these basic principles.

Even when everyone has the best intentions, however, it’s inevitable that conflicts will occur. To ensure that these are addressed in a manner consistent with our shared values, the Community Working Group has worked with the community to develop a conflict resolution policy that lays out the process for addressing disagreements. This policy was developed by participants in the Community Summits at DrupalCons Prague and Austin, with additional review on Drupal.org.

This policy seeks to first and foremost empower individuals to resolve issues between themselves when possible, asking for help when needed, and only after that fails to escalate further. This approach gives people more control over their dispute and is the most likely to result in a positive outcome for everyone involved.

For matters that cannot or should not be resolved in any other way, the Community Working Group is available as a point of escalation. Incidents can be confidentially reported to the Community Working Group using the Incident Report Form. If the issue falls within the Community Working Group’s purview, we will then work with the individuals involved to find a remedy.

In her DrupalCon Austin keynote Erynn Petersen talked about how diversity is a key component of a healthy and productive community. While the Drupal community is one of the most diverse and welcoming communities in open source, we still have room for improvement. If you’re interested in joining us in that effort, let us know by responding to our call for volunteers or by participating in a Community Summit at an upcoming DrupalCon.

Actively supporting and maintaining a welcoming environment is something that every one of us in the Drupal community needs to be a part of, and it’s essential to the long-term health and growth of the project and community that we all love so much.


Jonathan McDowell: 2014 SPI Board election nominations open

Planet Debian - Mon, 07/07/2014 - 21:30

I put out the call for nominations for the 2014 Software in the Public Interest (SPI) Board election last week. At this point I haven't yet received any nominations, so I'm mentioning it here in the hope of reaching a slightly wider audience. Possibly not the most helpful, as I would hope readers who are interested in SPI are already reading spi-announce. There are 3 positions open this election and it would be good to see a bit more diversity in candidates this year. Nominations are open until the end of July 13th.

The primary hard and fast time commitment a board member needs to make is to attend the monthly board meetings, which are conducted publicly via IRC (#spi on the OFTC network). These take place at 20:00 UTC on the second Thursday of every month. More details, including all past agendas and minutes, can be found at http://spi-inc.org/meetings/. Most of the rest of the board communication is carried out via various mailing lists.

The ideal candidate will have an existing involvement in the Free and Open Source community, though this need not be with a project affiliated with SPI.

Software in the Public Interest (SPI, http://www.spi-inc.org/) is a non-profit organization which was founded to help organizations develop and distribute open hardware and software. We see it as our role to handle things like holding domain names and/or trademarks, and processing donations for free and open source projects, allowing them to concentrate on actual development.

Examples of projects that SPI helps include Debian, LibreOffice, OFTC and PostgreSQL. A full list can be found at http://www.spi-inc.org/projects/.


Jan Wagner: Monitoring Plugins release ahead

Planet Debian - Mon, 07/07/2014 - 15:41

It seems to be a great time for monitoring solutions. Some of you may have noticed that Icinga has released its first stable version of the completely redeveloped Icinga 2.

After several changes in the recent past, during which the team maintaining the plugins used by several monitoring solutions was busy moving everything to new infrastructure, they are now back on track. The latest development milestone has been reached and a call for testing has also been sent out.

In the meantime I have prepared the packaging for this bigger move. The packages have been moved to the source package monitoring-plugins; the full set of packaging changes can be seen in the changelog. With this new release we also have some NEWS, which might be useful to check. The same goes for the upstream NEWS.

You can give the packages a go and grab them from my 'unstable' and 'wheezy-backports' repositories at http://ftp.cyconet.org/debian/. Right after the stable release, the packages will be uploaded into Debian unstable, but they might get delayed in the NEW queue due to the new package names.


Károly Négyesi: Prejudices

Planet Drupal - Mon, 07/07/2014 - 12:23

At Szeged, I asked a female Drupal contributor in Hungarian (I'm glad she did not understand) what was up with the coffee maker, because I readily presumed she was staff.
I saw one of the female geek role models at Austin with her baby. I got confused for a second, because apparently I think the übergeek and mother roles can't overlap.
On IRC, I almost said "Wow, that's impressive from a girl.".
I do not know how I can avoid these thoughts, but I am aware of them, I am bothered by them, and I try not to act on them. I also try to point out to fellow Drupalers, when they act on thoughts like these, that it is not appropriate. I'm not sure what else I can do.
If you have good ideas on overcoming prejudice, please share!


DrupalCon Amsterdam: Convince Your Boss to Send You to DrupalCon Amsterdam

Planet Drupal - Mon, 07/07/2014 - 09:00

Attending DrupalCon is a great investment in skills, professional development and relationships. And it's also a lot of fun!

Here is your chance to make that case: we’ve developed a set of materials to help you demonstrate the value of attending DrupalCon to your employer.

Why Attend DrupalCon?
  • Learn the latest technology and grow your Drupal skills
  • Build a stronger network in the community
  • Collaborate and share your knowledge with others

Resources
  • About DrupalCon - includes program summary, demographics, budget worksheet (PDF)
  • Letter to your employer template (Word or GoogleDoc)
  • Trip report template (PDF)
  • Request a Certificate of Attendance - available following the conference

Károly Négyesi: Easier configuration development for Drupal 8

Planet Drupal - Mon, 07/07/2014 - 00:56

With config_devel, when you are editing a migration, you can just enter the name of the file being edited at admin/config/config_devel and on every request the module will check for changes and import the file into the active storage. The other direction works as well: say you are working on a contrib module and have a view. Provide the path of the file (this time in the auto export box) and on every change Drupal will automatically export. Once satisfied, just commit. Or perhaps you just want to follow what's in a config file as it's being edited -- provide sites/default/files/some.config.name.yml and it'll be right there on every save.

Both import and export are doable manually with the config module core provides. But I think the automation makes life easier and I hope the module will be popular among D8 developers. Finally, let me thank beejeebus for cooking up the module originally and handing it over to me even though he knew I would rewrite it from the ground up.
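
For reference, here is a rough sketch of the manual import/export that config_devel automates, using only core APIs (the config name and file paths are illustrative, and the Drupal 8 APIs were still in flux at the time of writing):

<?php
use Drupal\Component\Serialization\Yaml;

$name = 'views.view.frontpage';
// Import: decode the YAML file and write it into the active configuration storage.
$data = Yaml::decode(file_get_contents('sites/default/files/' . $name . '.yml'));
\Drupal::service('config.storage')->write($name, $data);
// Export: read the active configuration back out and dump it to a file.
file_put_contents('/tmp/' . $name . '.yml',
  Yaml::encode(\Drupal::service('config.storage')->read($name)));
?>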


Dominique Dumont: Status and next step on lcdproc automatic configuration upgrade with Perl and Config::Model

Planet Debian - Sun, 06/07/2014 - 18:42

Back in March, I uploaded a Debian’s version of lcdproc with a unique feature: user and maintainer configurations are merged during package upgrade: user customizations and developers enhancements are both preserved in the new configuration file. (See this blog for more details). This avoids tedious edition of the configuration LCDd.conf file after every upgrade of lcdproc package.

At the beginning of June, a new version of lcdproc (0.5.7-1) was uploaded. This triggered another round of automatic upgrades on users' systems.

According to the popcon rise of libconfig-model-lcdproc-perl, about 100 people have upgraded lcdproc on their systems. Since automatic upgrade has an opt-out feature, one cannot say for sure that 100 people are actually using automatic upgrade, but I bet a fair portion of them are.

So far, only one person has complained: a bug report was filed about the many dependencies brought in by libconfig-model-lcdproc-perl.

The next challenge for lcdproc configuration upgrade is brought by a bug reported on Ubuntu: the device file provided by the imon kernel module is a moving target. The device file created by the kernel can be /dev/lcd0 or /dev/lcd1 or even /dev/lcd2. Static configuration files and moving targets don't mix well.

The obvious solution is to provide a udev rule so that a symbolic link is created from a fixed location (/dev/lcd-imon) to the moving target. Once the udev rule is installed, the user only has to update the LCDd.conf file to use the symlink as the imon device file and we're done.

But, wait… The whole point of automatic configuration upgrade is to spare the user this kind of trouble: the upgrade must be completely automatic.

Moreover, the upgrade must work in all cases: whether udev is available (Linux) or not. If udev is not available, the value present in the configuration file must be preserved.

To know whether udev is available, the upgrade tool (aka cme) will check whether the file provided by udev (/dev/lcd-imon) is present or not. This will be done by the lcdproc postinst script (which is run automatically at the end of the lcdproc upgrade), which means that the new udev rule must also be activated in the postinst script before the upgrade is done.

In other words, the next version of lcdproc (0.5.7-2) will:

  • Install a new udev rule to provide lcd-imon symbolic link
  • Activate this rule in lcdproc postinst script before upgrading the configuration (note to udev experts: yes, the udev rule is activated with “--action=change” option)
  • Upgrade the configuration by running “cme migrate” in lcdproc postinst script.

In the lcdproc configuration model installed by libconfig-model-lcdproc-perl, the “imon device” parameter is enhanced so that running cme check lcdproc or cme migrate lcdproc issues a warning if /dev/lcd-imon exists and if imon driver is not configured to use it.

This way, the next installation of lcdproc will deliver a fix for imon and cme will fix user’s configuration file without requiring user input.

The last point is admittedly bad marketing as users will not be aware of the magic performed by Config::Model… Oh well…

In the previous section, I’ve briefly mentioned that the “imon_device” parameter is “enhanced” in the lcdproc configuration model. If you’re not already bored, let’s lift the hood and see what kind of enhancements were added.

Let’s peek into lcdproc’s configuration file, LCDd.conf, which is used to generate the lcdproc configuration model. You may remember that the formal description of all LCDd.conf parameters and their properties is generated from LCDd.conf to provide the lcdproc configuration model. The comments in LCDd.conf follow a convention so that most properties of the parameters can be extracted from the comments. In the example below, the comments show that NewFirmware is a boolean value expressed as yes or no, the latter being the default:

# Set the firmware version (New means >= 2.0) [default: no; legal: yes, no]
NewFirmware=no

Back to the moving target. In LCDd.conf, imon device file parameter is declared this way:

# Select the output device to use
Device=/dev/lcd0

This means that device is a string where the default value is /dev/lcd0.

Which is wrong once the special udev rule provided with Debian packages is activated. With this rule, the default value must be /dev/lcd-imon.

To fix this problem, a special comment is added in the Debian version of LCDd.conf to tune further the properties of the device parameter:

# select the device to use
# {%
# default~
# compute
# use_eval=1
# formula="my $l = '/dev/lcd-imon'; -e $l ? $l : '/dev/lcd0';"
# allow_override=1 -
# warn_if:not_lcd_imon
# code="my $l = '/dev/lcd-imon';defined $_ and -e $l and $_ ne $l ;"
# msg="imon device does not use /dev/lcd-imon link."
# fix="$_ = undef;"
# warn_unless:found_device_file
# code="defined $_ ? -e : 1"
# msg="missing imon device file"
# fix="$_ = undef;"
# - %}
Device=/dev/lcd0

This special comment between “{%” and “%}” follows the syntax of Config::Model::Loader. A small configuration model is declared there to enhance the model generated from LCDd.conf file.

Here are the main parts:

  • default~ suppresses the default value of the “device” parameter declared in the original LCDd.conf (i.e. “/dev/lcd0”)
  • compute and the 3 lines below compute a default value for the device file. Since “use_eval” is true, the formula is evaluated as Perl code. This code will return /dev/lcd-imon if this file is found. Otherwise, /dev/lcd0 is returned. Hence, either /dev/lcd-imon or /dev/lcd0 will be used as the default value. allow_override=1 lets the user override this computed value
  • warn_if and the 3 lines below test the configured device file with the Perl instructions provided by the code parameter. There, the device value is available in the $_ variable. This code will return true if /dev/lcd-imon exists and if the configured device does not use it. This will trigger a warning that will show the specified message.
  • Similarly, warn_unless and the 3 lines below warn the user if the configured device file is not found.

In both the warn_unless and warn_if parts, the fix code snippet is run by the cme fix lcdproc command and is used to “repair” the warning condition. In this case, the fix consists in resetting the device configuration value so the computed value above can be used.

cme fix lcdproc is triggered by the package post-install script installed by dh_cme_upgrade.

Come to think of it, generating a configuration model from a configuration file can probably be applied to other projects: for instance, php.ini and kdmrc are also shipped with detailed comments. Maybe I should make a more generic model generator from the example used to generate the lcdproc model…

Well, I will do it if people show interest. Not in the form “yeah, that would be cool”, but in the form “yes, I will use your work to generate a configuration model for project [...]”. I’ll let you fill in the blank ;-)


Tagged: Config::Model, configuration, debian, lcdproc, Perl, upgrade

Eugene V. Lyubimkin: (Finland) FUUG foundation gives money for FLOSS development

Planet Debian - Sun, 06/07/2014 - 17:46
You live in Finland? You work on a FLOSS project or a project helping FLOSS in one way or another? Apply for FUUG's limited sponsorship program! Rules and details (in Finnish): http://coss.fi/2014/06/27/fuugin-saatio-jakaa-apurahoja-avoimen-koodin-edistamiseksi/ .

Ian Campbell: Setting absolute date based Amazon S3 bucket lifecycles with curl

Planet Debian - Sun, 06/07/2014 - 12:45

For my local backup regimen I use flexbackup to create a full backup twice a year and differential/incremental backups on a weekly/monthly basis. I then upload these to a new Amazon S3 bucket for each half year (so each bucket corresponds to a full backup plus the associated differentials and incrementals).

I then set the bucket's lifecycle to archive to Glacier (cheaper offline storage) from the month after that half year has ended (reducing costs) and to delete it a year after the half year ends. It used to be possible to do this via the S3 web interface, but the absolute date based options seem to have been removed in favour of time since last update, which is not what I want. However, the UI will still display such lifecycles if they are configured and directs you to the REST API to set them up.

I had a look around but couldn't find any existing CLI tools to do this directly, so I figured it must be possible with curl. A little bit of reading later I found that it was possible, but it involved some faff calculating signatures etc. Luckily EricW has written Amazon S3 Authentication Tool for Curl (AKA s3curl) which automates the majority of that faff. The tool is "New BSD" licensed according to that page, or Apache 2.0 licensed according to the included LICENSE file and code comments.

Setup

Following the included README, set up ~/.s3curl containing your id and secret key (I called mine personal, which I then use below).

Getting the existing lifecycle

Retrieving an existing lifecycle is pretty easy. For the bucket which I used for the first half of 2014:

$ s3curl --id=personal -- --silent http://$bucket.s3.amazonaws.com/?lifecycle | xmllint --format -
<?xml version="1.0" encoding="UTF-8"?>
<LifecycleConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Rule>
    <ID>Archive and Expire</ID>
    <Prefix/>
    <Status>Enabled</Status>
    <Transition>
      <Date>2014-07-31T00:00:00.000Z</Date>
      <StorageClass>GLACIER</StorageClass>
    </Transition>
    <Expiration>
      <Date>2015-01-31T00:00:00.000Z</Date>
    </Expiration>
  </Rule>
</LifecycleConfiguration>

See GET Bucket Lifecycle for details of the XML.

Setting a new lifecycle

The desired configuration needs to be written to a file. For example to set the lifecycle for the bucket I'm going to use for the second half of 2014:

$ cat s3.lifecycle
<LifecycleConfiguration>
  <Rule>
    <ID>Archive and Expire</ID>
    <Prefix/>
    <Status>Enabled</Status>
    <Transition>
      <Date>2015-01-31T00:00:00.000Z</Date>
      <StorageClass>GLACIER</StorageClass>
    </Transition>
    <Expiration>
      <Date>2015-07-31T00:00:00.000Z</Date>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
$ s3curl --id=personal --put s3.lifecycle --calculateContentMd5 -- http://$bucket.s3.amazonaws.com/?lifecycle

See PUT Bucket Lifecycle for details of the XML.


Daniel Pocock: News team jailed, phone hacking not fixed though

Planet Debian - Sun, 06/07/2014 - 10:20

This week former News of the World executives were sentenced, most going to jail, for the British phone hacking scandal.

Noticeably absent from the trial and much of the media attention are the phone companies. Did they know their networks could be so systematically abused? Did they care?

In any case, the public has never been fully informed about how phones have been hacked. Speculation has it that phone hackers were guessing PINs for remote voicemail access, typically trying birthdates and easily guessed PINs like 0000 or 1234.

There is more to it

Those in the industry know that there are additional privacy failings in mobile networks, especially the voicemail service. It is not just in the UK either.

There are various reasons for not sharing explicit details on a blog like this and comments concerning such techniques can't be accepted.

Nonetheless, there are some points that do need to be made:

  • it is still possible for phones, especially voicemail, to be hacked on demand
  • an attacker does not need expensive equipment nor do they need to be within radio range (or even the same country) as their target
  • the attacker does not need to be an insider (phone company or spy agency employee)
Disable voicemail completely - the only way to be safe

The bottom line is that the only way to prevent voicemail hacking is to disable the phone's voicemail service completely. Voicemail is not really necessary given that most phones support email now. For those who feel they need it, consider running the voicemail service on your own private PBX using free software like Asterisk or FreeSWITCH. Some Internet telephony service providers also offer third-party voicemail solutions that are far more secure than those default services offered by mobile networks.

To disable voicemail, simply do two things:

  • send a letter to the phone company telling them you do not want any voicemail box in their network
  • in the mobile phone, select the menu option to disable all diversions, or manually disable each diversion one by one (e.g. disable forwarding when busy, disable forwarding when not answered, disable forwarding when out of range)

Russell Coker: Desktop Publishing is Wrong

Planet Debian - Sun, 06/07/2014 - 08:53

When I first started using computers a “word processor” was a program that edited text. The most common and affordable printers were dot-matrix and people who wanted good quality printing used daisy wheel printers. Text from a word processor was sent to a printer a letter at a time. The options for fancy printing were bold and italic (for dot-matrix), underlines, and the use of spaces to justify text.

It really wasn’t much good if you wanted to include pictures, graphs, or tables. But if you just wanted to write some text it worked really well.

When you were editing text it was typical that the entire screen (25 rows of 80 columns) would be filled with the text you were writing. Some word processors used 2 or 3 lines at the top or bottom of the screen to display status information.

Some time after that desktop publishing (DTP) programs became available. Initially most people had no interest in them because of the lack of suitable printers, the early LASER printers were very expensive and the graphics mode of dot matrix printers was slow to print and gave fairly low quality. Printing graphics on a cheap dot matrix printer using the thin continuous paper usually resulted in damaging the paper – a bad result that wasn’t worth the effort.

When LASER and Inkjet printers started to become common word processing programs started getting many more features and basically took over from desktop publishing programs. This made them slower and more cumbersome to use. For example Star Office/OpenOffice/LibreOffice has distinguished itself by remaining equally slow as it transitioned from running on an OS/2 system with 16M of RAM in the early 90′s to a Linux system with 256M of RAM in the late 90′s to a Linux system with 1G of RAM in more recent times. It’s nice that with the development of PCs that have AMD64 CPUs and 4G+ of RAM we have finally managed to increase PC power faster than LibreOffice can consume it. But it would be nicer if they could optimise for the common cases. LibreOffice isn’t the only culprit, it seems that every word processor that has been in continual development for that period of time has had the same feature bloat.

The DTP features that made word processing programs so much slower also required more menus to control them. So instead of just having text on the screen with maybe a couple of lines for status we have a menu bar at the top followed by a couple of lines of “toolbars”, then a line showing how much width of the screen is used for margins. At the bottom of the screen there’s a search bar and a status bar.

Screen Layout

By definition the operation of a DTP program will be based around the size of the paper to be used. The default for this is A4 (or “Letter” in the US) in a “portrait” layout (higher than it is wide). The cheapest (and therefore most common) monitors in use are designed for displaying wide-screen 16:9 ratio movies. So we have images of A4 paper with a width:height ratio of 0.707:1 displayed on a wide-screen monitor with a 1.777:1 ratio. This means that only about 40% of the screen space would be used if you don’t zoom in (but if you zoom in then you can’t see many rows of text on the screen). One of the stupid ways this is used is by companies that send around word processing documents when plain text files would do, so everyone who reads the document uses a small portion of the screen space and a large portion of the email bandwidth.

Note that this problem of wasted screen space isn’t specific to DTP programs. When I use the Google Keep website [1] to edit notes on my PC they take up a small fraction of the screen space (about 1/3 screen width and 80% screen height) for no good reason. Keep displays about 70 characters per line and 36 lines per page. Really every program that allows editing moderate amounts of text should allow more than 80 characters per line if the screen is large enough and as many lines as fit on the screen.

One way to alleviate the screen waste on DTP programs is to use a “landscape” layout for the paper. This is something that all modern printers support (AFAIK the only printers you can buy nowadays are LASER and ink-jet and it’s just a big image that gets sent to the printer). I tried to do this with LibreOffice but couldn’t figure out how. I’m sure that someone will comment and tell me I’m stupid for missing it, but I think that when someone with my experience of computers can’t easily figure out how to perform what should be a simple task then it’s unreasonably difficult for the vast majority of computer users who just want to print a document.

When trying to work out how to use landscape layout in LibreOffice I discovered the “Web Layout” option in the “View” menu which allows all the screen space to be used for text (apart from the menu bar, tool bars, etc). That also means that there are no page breaks! That means I can use LibreOffice to just write text, take advantage of the spelling and grammar correcting features, and only have screen space wasted by the tool bars and menus etc.

I never worked out how to get Google Docs to use a landscape document or a single webpage view. That’s especially disappointing given that the proportion of documents that are printed from Google Docs is probably much lower than most word processing or DTP programs.

What I Want

What I’d like to have is a word processing program that’s suitable for writing draft blog posts and magazine articles. For blog posts most of the formatting is done by the blog software and for magazine articles the editorial policy demands plain text in most situations, so there’s no possible benefit of DTP features.

The ability to edit a document on an Android phone and on a Linux PC is a good feature. While the size of a phone screen limits what can be done it does allow jotting down ideas and correcting mistakes. I previously wrote about using Google Keep on a phone for lecture notes [2]. It seems that the practical ability of Keep to edit notes on a PC is about limited to the notes for a 45 minute lecture. So while Keep works well for that task it won’t do well for anything bigger unless Google make some changes.

Google Docs is quite good for editing medium size documents on a phone if you use the Android app. Given the limitations of the device size and input capabilities it works really well. But it’s not much good for use on a PC.

I’ve seen a positive review of One Note from Microsoft [3]. But apart from the fact that it’s from Microsoft (with all the issues that involves) there’s the issue of requiring another account. Using an Android phone requires a Gmail account (in practice for almost all possible uses if not in theory) so there’s no need to get an extra account for Google Keep or Docs.

What would be ideal is an Android editor that could talk to a cloud service that I run (maybe using WebDAV) and which could use the same data as a Linux-X11 application.

Any suggestions?


Matthew Palmer: Witness the security of this fully DNSSEC-enabled zone!

Planet Debian - Sun, 06/07/2014 - 07:00

After dealing with the client side of the DNSSEC puzzle last week, I thought it behooved me to also go about getting DNSSEC going on the domains I run DNS for. Like the resolver configuration, the server side work is straightforward enough once you know how, but boy howdy are there some landmines to be aware of.

One thing that made my job a little less ordinary is that I use and love tinydns. It’s an amazingly small and simple authoritative DNS server, strong in the Unix tradition of “do one thing and do it well”. Unfortunately, DNSSEC is anything but “small and simple” and so tinydns doesn’t support DNSSEC out of the box. However, Peter Conrad has produced a patch for tinydns to do DNSSEC, and that does the trick very nicely.

A brief aside about tinydns and DNSSEC, if I may… Poor key security is probably the single biggest compromise vector for crypto. So you want to keep your keys secure. A great way to keep keys secure is to not put them on machines that run public-facing network services (like DNS servers). So, you want to keep your keys away from your public DNS servers. A really great way of doing that would be to have all of your DNS records somewhere out of the way, and when they change regenerate the zone file, re-sign it, and push it out to all your DNS servers. That happens to be exactly how tinydns works. I happen to think that tinydns fits very nicely into a DNSSEC-enabled world. Anyway, back to the story.

Once I’d patched the tinydns source and built updated packages, it was time to start DNSSEC-enabling zones. This breaks down into a few simple steps:

  1. Generate a key for each zone. This will produce a private key (which, as the name suggests, you should keep to yourself), a public key in a DNSKEY DNS record, and a DS DNS record. More on those in a minute.

    One thing to be wary of if you’re like me and don’t want or need separate “Key Signing” and “Zone Signing” keys: you must generate a “Key Signing” key – this is a key with a “flags” value of 257. Doing this wrong will result in all sorts of odd-ball problems. I wanted to just sign zones, so I generated a “Zone Signing” key, which has a “flags” value of 256. Big mistake.

    Also, the DS record is a hash of everything in the DNSKEY record, so don’t just think you can change the 256 to a 257 and everything will still work. It won’t.

  2. Add the key records to the zone data. For tinydns, this is just a matter of copying the zone records from the generated key into the zone file itself, and adding an extra pseudo record (it’s all covered in the tinydnssec howto).

  3. Publish the zone data. Reload your BIND config, run tinydns-sign and tinydns-data then rsync, or do whatever it is PowerDNS people do (kick the database until replication starts working again?).

  4. Test everything. I found the Verisign Labs DNSSEC Debugger to be very helpful. You want ticks everywhere except for where it’s looking for DS records for your zone in the higher-level zone. If there are any other freak-outs, you’ll want to fix those – because broken DNSSEC will take your domain off the Internet in no time.

  5. Tell the world about your DNSSEC keys. This is simply a matter of giving your DS record to your domain registrar, for them to add it to the zone data for your domain’s parent. Wherever you’d normally go to edit the nameservers or contact details for your domain, you probably want to go to the same place and look for something about “DS” or “Delegation Signer” records. Copy and paste the details from the DS record in your zone into there, submit, and wait a minute or two for the records to get published.

  6. Test again. Before you pat yourself on the back, make sure you’ve got a full board of green ticks in the DNSSEC Debugger. If anything’s wrong, you want to roll back immediately, because broken DNSSEC means that anyone using a DNSSEC-enabled resolver just lost the ability to see your domain.

That’s it! There’s a lot of complicated crypto going on behind the scenes, and DNSSEC seems to revel in the number of acronyms and concepts that it introduces, but the actual execution of DNSSEC-enabling your domains is quite straightforward.

