Elsewhere

Niels Thykier: Jessie is coming on 2015-04-25

Planet Debian - Tue, 31/03/2015 - 21:28

Indeed, we settled on a release date for Jessie – and pretty quick too.  I sent out a poll on the 28th of March and yesterday, it was clear that the 25th of April was our release date. :)

With that said, we still have some items left that need to be done.

  • Finishing the release notes.  This is mostly pending myself and a few others.
  • Translation of the release-notes.  I sent out a heads up earlier today about what sections I believe to be done.
  • The d-i team has another release planned as well.
  • All the RC bugs you can manage to fix before the 18th of April. :)

Filed under: Debian, Release-Team

Laura Arjona: Upgrading my home server (HP Microserver N54L G7) to Debian Jessie

Planet Debian - Tue, 31/03/2015 - 18:46

You may know the story… TL;DR

  • I wanted to self host my web services.
  • I bought a Microserver (N54L).
  • I installed Debian stable there, RAID1 (BIOS) + cryptsetup + LVM (/ and swap, /boot in another disk, unencrypted).
  • I installed GNU MediaGoblin, and it works!
  • When rebooting, the password to decrypt the disk (and then find the LVM volumes and mount the partitions) was not accepted. But it was accepted after I shut down, unplugged the power, plugged it back in, and turned the machine on.

After searching a bit for information about my problem and not finding anything helpful, I began to think that maybe upgrading to Jessie could fix it (more recent versions of the kernel and cryptsetup…). The Jessie freeze was almost there, and I also thought that making my MediaGoblin work in Jessie now, while I still hadn’t uploaded lots of content, would be a nice idea… And, I wanted to feel the adventure!

Whatever. I decided to upgrade to Jessie. This is the glory of “free software at home”: you only waste your time (and probably not, because you can learn something, at least, what not to do).

Upgrading my system to Jessie, and making it boot!

I changed sources.list, updated, did a safe-upgrade, and then upgrade. Then reboot… and the system didn’t boot.

What happened? I’m not sure; everything looked “ok” during the upgrade… But now my system was not even asking for the passphrase to unlock the encrypted disk. It was trying to access the physical volume group as if it were on an unencrypted disk, and so it failed. The boot process left me in an “initramfs” console in which I didn’t know what to do.

I asked for help from @luisgf, the system administrator of mipump.es (a Pump.io public server) and mijabber.es (an XMPP public server). We met via XMPP, and with my “thinking aloud” and his patient listening and advice, we solved the problem, as you will see:

I tried to boot my rescue system (a complete system installed in different partitions on a different disk) and it booted. I then tried to manually decrypt the encrypted disk (cryptsetup luksOpen /dev/xxx), and it worked: I could list the volume group and the volumes, activate them, and mount the partitions. Yay! My (little) data was safe.

I rebooted and in the initramfs console I tried to do the same, but cryptsetup was not present in my initramfs.

Then I tried to boot the old Squeeze kernel: it didn’t ask for the passphrase to decrypt the disk either, but in its initramfs console cryptsetup was working well. So after manually decrypting the disk, activating the volumes and mounting the partitions, I could exit the console and the system booted #oleole!
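For reference, here is a minimal sketch of those manual steps from the (initramfs) shell; the device and mapping names are assumptions, not the ones from my machine:

# Unlock the LUKS container by hand (device/mapping names assumed)
cryptsetup luksOpen /dev/sda5 sda5_crypt

# Make the LVM volume group and its volumes available
vgchange -ay

# Leave the initramfs shell; the boot scripts then retry
# finding and mounting the root filesystem
exit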

So, how to tell the boot process to ask for the encryption password?

Maybe reinstalling the kernel would be enough… I reinstalled the 3.16 kernel package. It (re)generated /boot/initrd.img-3.16.0-4-amd64, and when I restarted the system the problem was solved. It seems that the first time around the initrd image was not generated correctly, and I didn’t notice.
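If you run into the same thing, the fix amounts to regenerating the initramfs; either of the commands below does that (the package name is the one from Jessie, adjust for your kernel):

# Reinstall the kernel package (regenerates its initrd as a side effect)
apt-get install --reinstall linux-image-3.16.0-4-amd64

# Or rebuild the initramfs directly for that kernel version
update-initramfs -u -k 3.16.0-4-amd64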

Well, problem solved. My system was booting again! No other boot problems and Jessie seemed to run perfectly. Thanks @luisgf for your help!

In addition to that, since then, my password has been accepted in every reboot, so it seems that the original problem is also gone.

A note on systemd

After all the noise of the last few months, I was a bit afraid that some of the different services running on my system would not start after the migration to systemd.
I had no special tweaks, just two ‘handmade’ init scripts (for MediaGoblin and for NoIP), but I didn’t write them myself (I just searched for systemd init scripts for the corresponding services), so if there was any problem there I was not sure that I could solve it. However, everything worked fine after the migration. Thanks, Debian hackers, for making this transition as smooth as possible!

Reinstalling MediaGoblin

My MediaGoblin was not working, and I was not sure why. Maybe I just needed to tune nginx, or whatever, after the upgrade… But I was not going to spend time working out which part of the stack was the culprit, and my MediaGoblin sites were almost empty… So I decided to follow the documentation again and reinstall (maybe an update would have been enough, who knows). I reused the Debian user(s), the PostgreSQL users and databases, and the .ini files and nginx configuration files. So it was quick, and it worked.

Updating Jessie

I have updated my Jessie system several times since then (kernel updates, OpenSSL, PostgreSQL, and other security updates and RC bug fixes, with the corresponding reboots or service restarts) and I haven’t experienced the cryptsetup problem again. The system is working perfectly. I’m very happy.

Using dropbear to remotely provide the cryptsetup password

The last thing I did on my home server was to set up dropbear so I can remotely provide the encryption password, and then remotely reboot my system. I followed this guide and it worked like a charm.
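Roughly, the setup looks like this; a sketch only, with the paths used by the Debian dropbear package on Jessie, and hostname/key paths assumed:

# On the server: install dropbear (it hooks into initramfs-tools)
apt-get install dropbear

# Allow your SSH key to log into the initramfs environment
cat ~/.ssh/id_rsa.pub >> /etc/initramfs-tools/root/.ssh/authorized_keys
update-initramfs -u

# At boot time, from another machine: log in and feed the
# passphrase to cryptsetup via its fifo
ssh root@myserver "echo -n 'passphrase' > /lib/cryptsetup/passfifo"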

Some small annoyances and TODO list
  • I have some warnings at boot. I think they are not important, but I post them here anyway, and will try to figure out what they mean:
[    0.203617] [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored
[    0.214828] ACPI: Dynamic OEM Table Load:
[    0.214841] ACPI: OEMN 0xFFFF880074642000 000624 (v01 AMD    NAHP     00000001 INTL 20051117)
[    0.226879] \_SB_:_OSC evaluation returned wrong type
[    0.226883] _OSC request data:1 1f
[    0.227055] ACPI: Interpreter enabled
[    0.227062] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S1_] (20140424/hwxface-580)
[    0.227067] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S2_] (20140424/hwxface-580)
[    0.227070] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S3_] (20140424/hwxface-580)
[    0.227083] ACPI: (supports S0 S4 S5)
[    0.227085] ACPI: Using IOAPIC for interrupt routing
[    0.227298] HEST: Table parsing has been initialized.
[    0.227301] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug

And this one

[    1.635130] ERST: Failed to get Error Log Address Range.
[    1.645802] [Firmware Warn]: GHES: Poll interval is 0 for generic hardware error source: 1, disabled.
[    1.645894] GHES: APEI firmware first mode is enabled by WHEA _OSC.

And this one, about the 250GB disk (it came with the server, it’s not in the RAID):

[ 3.320913] ata6: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[ 3.321551] ata6.00: failed to enable AA (error_mask=0x1)
[ 3.321593] ata6.00: ATA-8: VB0250EAVER, HPG9, max UDMA/100
[ 3.321595] ata6.00: 488397168 sectors, multi 0: LBA48 NCQ (depth 31/32)
[ 3.322453] ata6.00: failed to enable AA (error_mask=0x1)
[ 3.322502] ata6.00: configured for UDMA/100
  • It would be nice to learn a bit about benchmarking tools and test my system with and without the nonfree Radeon VGA driver.
  • I need to setup an automated backup system…
A note about RAID

Some people commented on the benefits of software RAID (mainly, not depending on a particular, proprietary firmware: what happens if my motherboard dies and I cannot find a compatible replacement?).

Currently I have a RAID 1 (mirror) using the capabilities of the motherboard.

The problem is that, frankly, I am not sure about how to migrate the current setup (BIOS RAID + cryptsetup + LVM + partitions) to the new setup (software RAID + cryptsetup + LVM + partitions, or better other order?).

  • Would it be enough to make a Clonezilla backup of each partition, wipe my current setup, boot with the Debian installer, create the new setup (software RAID, cryptsetup, LVM and partitions), and after that, stop the installation, boot with Clonezilla and restore the partition images?
  • Or even better, can I (safely) remove the RAID in the BIOS, boot my system (let’s say, from the first disk), and create the software RAID with the 2nd disk that appears after removing the BIOS RAID? (This sounds a bit like science fiction, but who knows!)
  • Is it important “when”, or in which “layer”, I set up the software RAID? (One common layering is sketched below.)
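For what it’s worth, the layering usually recommended on Debian is software RAID at the bottom, then cryptsetup, then LVM on top. A sketch of creating that stack with mdadm follows; device and volume names are assumptions, and these commands destroy existing data:

# Build the mirror from two partitions (device names assumed)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Encrypt the array, then put LVM inside the encrypted device
cryptsetup luksFormat /dev/md0
cryptsetup luksOpen /dev/md0 md0_crypt
pvcreate /dev/mapper/md0_crypt
vgcreate vg0 /dev/mapper/md0_crypt
lvcreate -L 10G -n root vg0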

As you see, lots of things to read/think/try… I hope I can find time for my home server more often!


Filed under: My experiences and opinion Tagged: Debian, encryption, English, Free Software, libre software, MediaGoblin, Moving into free software, N54L, selfhosting, sysadmin

Advomatic: Style guides in Drupal

Planet Drupal - Tue, 31/03/2015 - 17:02
Heading into Chicago’s Midcamp, my coworker Andy and I were excited to talk to other front end developers about using style guides with Drupal. We decided to put the word out and organize a BOF (birds of a feather talk) to find our kindred front end spirits. Indeed, we found a small group of folks... Read more »

ERPAL: Agile projects for a fixed price? Yes you can!

Planet Drupal - Tue, 31/03/2015 - 16:52

In the first part of our blog series we discovered why we need "objectives" to give projects a solid base to succeed. In this blog post we will describe how to manage agile projects for a fixed price. Doesn't work? Yes it does, if you respect some rules and do detailed planning.

In general you should be careful with agile projects on a fixed-price agreement. Both parties, the vendor and the customer, should be aware of what this agreement means.

What does that mean exactly? Agile:

Changes are allowed in a running project, and they are needed. Especially in large projects, changes must be allowed so that work can continue even when the conditions behind the defined goals change. As described in the blog post about 3 rules for setting objectives in projects, a project should reach certain goals. Agile methods allow you to evaluate milestone deliveries often, validate requirements with the end users, and feed the results back into the next sprint. This ensures that the final project result really has the attributes needed for end users to accept it.

Fixed price:

The price is fixed and must not be exceeded.

And where is the problem?

The price paid always reflects the value of a service or of the result a project should deliver. The price tag is a fixed unit, and the price and the value of a project should be directly related, since prices are arbitrary otherwise. The basis for a well and fairly calculated project is the requirement description, consisting of the performance definition, the specification and the design. If this is detailed enough, realistic estimates can be made. One of the first steps in a project is therefore creating the specifications; the more detailed, the better.

If, in contrast, estimates are not realistic and not understandable, conflicts will inevitably arise at some point during the project, at the latest when additional expenses arise due to new or changing requirements. If it is not possible to prove the changed requirements objectively, conflicts may appear, as one of the project parties may feel disadvantaged. So make sure that services and prices are clearly related and are both traceable and transparent. Then changes occurring later can also be handled smoothly. This is the basis for managing agile projects for a fixed price. Since you know all the detailed requirements and their price tags or estimates, it is possible to replace existing requirements that have become obsolete with new requirements that have appeared. Priorities in the backlog of all requirements help you stick to the initial estimate; otherwise it is easy to argue about why the fixed price will be overrun.

Another problem arises when you schedule a buffer without a fixed size. You will never be able to explain when the buffer is finally exhausted if you don't have clearly estimated and specified requirements that let you recognize a real change request, because you do not know how big a change is and how much of the buffer has already been consumed by which changes. So you should offer a fixed time schedule and a fixed buffer, which can be used for change requests. These "change requests" have to be transparent and documented for your customer, so that all parties can understand why the buffer is depleted at some point. This avoids conflicts.

In the next part of the series we will turn our attention to the topic of "Specifications".


Jonathan McDowell: Shipping my belongings across the globe

Planet Debian - Tue, 31/03/2015 - 16:20

I previously wrote about tracking a ship around the world, but never followed up with the practical details involved with shipping my life from the San Francisco Bay Area back to Belfast. So here they are, in the hope they provide a useful data point for anyone considering a similar move.

Firstly, move out. I was in a one bedroom apartment in Fremont, CA. At the time I was leaving the US I didn’t have anywhere for my belongings to go - the hope was I’d be back in the Bay Area, but there was a reasonable chance I was going to end up in Belfast or somewhere in England. So on January 24th 2014 I had all of my belongings moved out and put into storage, pending some information about where I might be longer term. When I say all of my belongings I mean that; I took 2 suitcases and everything else went into storage. That means all the furniture for probably a 2 bed apartment (I’d moved out of somewhere a bit larger) - the US doesn’t really seem to go in for the concept of a furnished lease the same way as the UK does.

I had deliberately picked a moving company that could handle the move out, the storage and the (potential) shipping. They handed off to a 3rd party for the far end bit, but that was to be expected. Having only one contact to deal with throughout the process really helped.

Fast forward 8 months and on September 21st I contacted my storage company to ask about getting some sort of rough shipping quote and timescales to Belfast. The estimate came back as around a 4-6 week shipping time, which was a lot faster than I was expecting. However it turned out this was the slow option. On October 27th (delay largely due to waiting for confirmation of when I’d definitely have keys on the new place) I gave the go ahead.

Container pickup (I ended up with exclusive use of a 20ft container - not quite full, but not worth part shipment) from the storage location was originally due on November 7th. Various delays at the Port of Oakland meant this didn’t happen until November 17th. It then sat in Oakland until December 2nd. At that point the ETA into Southampton was January 8th. Various other delays, including a week off the coast of LA (yay West Coast Port Backups) meant that the ship finally arrived in Southampton on January 13th. It then had to get to Belfast and clear customs. On January 22nd 2015, 2 days shy of a year since I’d seen them, my belongings and I were reunited.

So, on the face of it, the actual time on the ship was only slightly over 6 weeks, but all of the extra bits meant that the total time from “Ship it” to “I have it” was nearly 3 months. Which to be honest is more like what I was expecting. The lesson: don’t forget to factor in delays at every stage.

The relocation cost in the region of US$8000. It was more than I’d expected, but far cheaper than the cost of buying all my furniture again (plus the fact there were various things I couldn’t easily replace that were in storage). That cost didn’t cover the initial move into storage or the storage fees - it covered taking things out, packing them up for shipment and everything after that. Including delivery to a (UK) 3rd floor apartment at the far end and insurance. It’s important to note that I’d included this detail before shipment - the quote specifically mentioned it, which was useful when the local end tried to levy an additional charge for the 3rd floor aspect. They were fine once I showed them the quote as including that detail.

Getting an entire apartment worth of things I hadn’t seen in so long really did feel a bit like a second Christmas. I’d forgotten a lot of the things I had, and it was lovely to basically get a “home in a container” delivered.


Drupal Association News: Please help us welcome Matt, Tina, and Bradley

Planet Drupal - Tue, 31/03/2015 - 15:49

After a very busy year and a half, we're nearly done with our round of new hires here at the Drupal Association. We’ve been working hard to bring in the best talent around, and are thrilled to announce our three new staff members: Matt, Tina, and Brad!

Matt Tsugawa, CFO, Finance and HR Team

Matt (mtsugawa) is joining the Association as our new CFO, where he will be responsible for Finance and HR, and will help develop and drive the strategy of the organization as a member of our leadership team. He brings a rich professional history with him: he has worked in industries across the country and around the world. Early in his career, Matt worked in Japan as a management consultant at the professional services firms, KPMG and Arthur Andersen. After spending a few years as an analyst and business development manager in New York at A&E Television Networks, Matt returned to Portland, where he was born and raised.

Most recently, Matt worked in the energy efficiency industry as the Head of Finance. He holds a BA from University of Colorado at Boulder and an MBA from Yale. When not at work, Matt enjoys “managing" his three children and overgrown puppy with his wife, and when not doing that, he is an enthusiastic, if not yet expert, practitioner of Brazilian Jiu Jitsu.

Tina Krauss, DrupalCon Coordinator, Events Team

Tina (tinakrauss) is the newest member of the DrupalCon team, and came on board in mid March. As a DrupalCon Coordinator, Tina will work with each con’s volunteers, assist in con programming and logistics, and work with website content. Tina is also focused on customer support and responds to tickets submitted to our Contact Us form related to the Cons.

A native of Germany, Tina moved to Portland, Oregon several years ago, where she currently resides. In her free time, Tina is an adventurer. She loves to travel around the world -- the farther, the better! She also enjoys outdoor activities like hiking, biking, backpacking, skiing, and more.

Bradley Fields, Content Manager, Marcomm and Membership Team

Bradley (bradleyfields) joins the Marketing and Communications team as Content Manager. He will focus on the planning, creation, and maintenance of content—across all of the Association-managed platforms—that engages and strengthens the Drupal community. For the last six years, he worked to help associations, federal agencies, and universities make their content work better for all sorts of users and audiences.

When he is not at his desk, Bradley is curating Spotify playlists, watching one of his 50+ animated Disney movies, on the hunt for great whisky, or reading Offscreen magazine. He wishes he were Batman, but his superhero powers are definitely still under development.


Drupalize.Me: Funding Core Development with D8 Accelerate

Planet Drupal - Tue, 31/03/2015 - 15:15

Drupalize.Me and Lullabot together have made a donation of $5,000 to the Drupal 8 Accelerate Fund, becoming an anchor donor of this critical funding initiative. We heartily believe in funding core development and are so excited to be a part of providing a much needed final push to a Drupal 8 stable release. Learn more about how you can be a part of accelerating the release of Drupal 8.


Dirk Eddelbuettel: R / Finance 2015 Open for Registration

Planet Debian - Tue, 31/03/2015 - 14:12

The announcement below just went to the R-SIG-Finance list. More information is as usual at the R / Finance page.

Registration for R/Finance 2015 is now open!

The conference will take place on May 29 and 30, at UIC in Chicago. Building on the success of the previous conferences in 2009-2014, we expect more than 250 attendees from around the world. R users from industry, academia, and government will be joining 30+ presenters covering all areas of finance with R.

We are very excited about the four keynote presentations given by Emanuel Derman, Louis Marascio, Alexander McNeil, and Rishi Narang.
The conference agenda (currently) includes 18 full presentations and 19 shorter "lightning talks". As in previous years, several (optional) pre-conference seminars are offered on Friday morning.

There is also an (optional) conference dinner at The Terrace at Trump Hotel. Overlooking the Chicago river and skyline, it is a perfect venue to continue conversations while dining and drinking.

Registration information and agenda details can be found on the conference website as they are being finalized.
Registration is also available directly at the registration page.

We would like to thank our 2015 sponsors for the continued support enabling us to host such an exciting conference:

International Center for Futures and Derivatives at UIC

Revolution Analytics
MS-Computational Finance and Risk Management at University of Washington

Ketchum Trading
OneMarketData
RStudio
SYMMS

On behalf of the committee and sponsors, we look forward to seeing you in Chicago!

For the program committee:
Gib Bassett, Peter Carl, Dirk Eddelbuettel, Brian Peterson,
Dale Rosenthal, Jeffrey Ryan, Joshua Ulrich

See you in Chicago in May!

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Deeson: Creative design for community websites

Planet Drupal - Tue, 31/03/2015 - 13:53

A powerful, branded online community can be a great way to promote interaction between a brand and its users.

It's often much more successful than relying solely on pre-designed social media platforms.

At Deeson we’ve built plenty of community sites and we’ve helped organisations develop and deliver their community strategies too. 

As a digital designer, I wanted to take a look at how creative design is being used across the web to help create a more engaging experience for members in online communities.

Long gone are the days when forums were just long lists of grey gradient boxes, reserved only for the most tech-savvy geeks.

In 2015 they can be bright, engaging, highly topical and definitely not to be hidden at the bottom of your website.

 

Don’t be afraid to let your brand come through

Macmillan Cancer Support is an example I like of an online community that takes a holistic approach. The charity has embraced its branding strategies from across print and advertising, successfully pulling them into one effective online resource.

Covering such a sensitive issue, and with the charity's primary purpose being to provide support to others, the brand and online forum need to be informal, approachable and familiar. Careful layout and colour palette choices help achieve this. The instantly recognisable Macmillan typeface reinforces brand trust.

Because the charity deals with cancer, the topics and emotions involved in using the resource can be very overwhelming for a user. The design here does a very good job of counteracting this by breaking each section of the community down into smaller, accessible blocks of information.

Posts of the week and featured quotes are highlighted with the bright green and blue colour palette, but do not overpower the more detailed threads of conversation, which can be scanned easily too.

Encourage user participation

Online communities can be a great tool to generate content outside the services of your own team.

Not only can this broaden topics and save money, it also makes users feel valued and involved.

The Buzzfeed Community, ‘home for awesome posts created by BuzzFeeders', is one of the most successful examples of this.

With the rise of social media came the unstoppable need to gain online credibility and profile. A leaderboard page ranking top posts and users is the perfect mechanism for BuzzFeed to successfully play on this thirst for online fame and self promotion.

Larger-than-normal profile images, a unique set of designed virtual award stickers and a delightful visual rating system of ‘cat power’ certainly provide encouragement to post valuable submissions.

The design of this online community is somewhat cringeworthy but very well considered and tailored to the audience.

Unlike some other communities, the user's main purpose here is often not to gain a specific answer from the site, but to browse freely until something sparks interest.

Large, bold story titles and on-trend icons over thumbnails are the perfect tools for free-spirited navigation.

Consider your audience

It doesn’t always have to be text heavy and not all communities need to focus around conversation.

DIY is a fantastic online community for kids that encourages them to develop skills across a huge range of subjects - its main feature is delightful illustrations.

Full of creativity, with a unique badge drawn for every skill set, it remains on trend with flat design and the merging of pastel and bright colours.

Every page is instantly visually engaging, perfect for the target audience.

The community works by allowing members to upload videos and images of them completing a certain skill task. This instantly provides a visual portfolio for each member that children can engage with.

Buttons are large and accessible and the navigation is simple to use, with another unique set of icons.

As a neat little nod to the origin of earning 'skill set' badges (from the likes of Scouts and Girl Guides), you can purchase hand-sewn patches to show off the illustrated badges you’ve earned, offline.

Consider medium

It’s not just children’s online communities that benefit from avoiding text-heavy sites. For brands like YouTube and Spotify, the main associated medium is video or sound, so it makes sense to make these the feature.

Soundcloud, an audio platform for sharing and promoting originally created sounds, is doing exactly that through creative visuals of sound.

Rather than opting for a simple play button and song title, a visualisation of each soundtrack is the feature of each post: a creative display that is both engaging and informative.

Community members' comments can also be placed at exact positions along a track, removing the need to pause and type out the time you're referring to.

This simple design idea is a great example of finding creative ways to address subject-specific user needs, and it is executed with great attention to detail.

The brand's orange is re-introduced to mark which sections have been played, and the background of the player is customisable. This takes into account that community members have their own 'brands' they may want to display.

Don’t forget that simplicity can work

Lonely Planet’s forum may not be the most groundbreaking in design, but it’s simple, clean, modern and effective. Covering a broad range of topics and conversations, the amount of content could easily become overwhelming, so in this case the design is better stripped back and decluttered.

With a clean menu to the left, the main content paragraphs sit to the right with a short, readable line length and generously spaced line height for maximum readability.

The design is minimal, with titles differentiated in the brand blue. Its only flaw would perhaps be the slightly small font size used on the navigation, which could hurt accessibility.

The online Lego community also keeps it simple, but has a creative addition to tools available within forum conversations.

Expanding on the ever popular emoticon series, now standard with most phones, Lego has devised their own set of characterised Lego heads. Bringing in this iconic shape allows users to subtly interact further with the brand.

Overall, the design of an online community should consider your existing brand, the community members and the purpose of the community.

A huge number of features is not always best, and the information in the community needs to speak louder than the members or the brand. Equally, it’s good to get creative and break away from the dated, plain, grey forum list.


Deeson: Google has a new algorithm, will your site be judged as mobile-friendly?

Planet Drupal - Tue, 31/03/2015 - 11:33

What is it and what does it mean?

Last week Google officially confirmed suspicions that they are due to release a mobile ranking algorithm. The update, due to be released on 21st April 2015, will mean that mobile-friendly websites will be given a higher ranking within mobile search results.  

When making the announcement, Google said they expect a significant change, with some sources saying it will have a bigger impact than both their Panda and Penguin updates. With 50% of all Google searches coming from a mobile device - I certainly think so too!

The new algorithm will be applied globally meaning that the update will affect mobile searches and results in all countries, across all languages at the same time - rather than being phased in country by country.

It’s real-time!

For those of you panicking about the deadline, do not fear (just yet). Google will review your site in real time, meaning that once you’ve updated your design, the next time Google crawls your site it will identify it as mobile-friendly.

With that in mind, it’s important to remember that this works both ways. If you update your site in a way that makes it unfriendly, Google’s algorithm will kick in, which may have a negative effect on your ranking.

It’s good news for those who have partly-friendly sites (e.g. most of your site is friendly but parts are not), as Google will be working on a page-by-page basis. Google will identify the friendly pages, and the unfriendly sections will not cause your entire site to be marked negatively - phew!

You’re already being labelled.

You may have already seen it, but Google has already got the ball rolling with this latest update. In an effort to help mobile searchers know which sites are mobile-friendly, they have added a text label under the URL (near the snippet) that reads “Mobile-friendly”.

Discussing the recent label additions, Google said, “Users will find it easier to get relevant, high quality search results that are optimised for their devices”. Personally, I’ll always choose a mobile-friendly search result over a non-labelled one when on the move.

As the fight for the top positions in Google becomes ever more competitive, it’s important that you identify any potential advantages over your competitors - and this certainly is one. In a recent report by Alexa (October 2014) which reviewed the top 1000 sites, just 18% of them were mobile-friendly - and these are the big guns!

Mobile users hold a grudge

Figures show that 61% of users are unlikely to return to a mobile site they’ve had trouble with. Of those, 40% said they would then go on to visit a competitor's site instead. These figures emphasise the huge importance of making your site user-friendly across all devices, particularly if you have e-commerce.

According to Nielsen, 87% of mobile users used their mobile device for shopping activities such as searching for a product or service, price comparisons, or brick & mortar address search. Having a seamless experience across all devices can have a big effect on the amount of users that finally go through to purchase.

What can I do?

There are a number of things you can do to help make your site more mobile-friendly and give yourself a greater chance of obtaining the mobile-friendly ‘label’. It all depends on what the Googlebot detects when it’s crawling your site; it’s looking for the following:

  • The site avoids software that is not common on mobile devices e.g Adobe Flash

  • The site uses text that is readable without zooming

  • The site sizes content to the screen so users don’t have to scroll horizontally or zoom

  • The site places links far enough apart so that the correct one can be easily tapped

Basic example showing a mobile-friendly layout.

If you’re unsure, there are two easy ways to check to see whether your site is mobile-friendly or not. These are:

  • The simple one: look up your site on a phone yourself and judge.

  • Use the Google Mobile-Friendly Tool to see if Google thinks you’re mobile-friendly.

Final thoughts.

  • To stay ahead of your competitors and provide a seamless experience for your users, updating your site so it’s mobile-friendly is a no-brainer. Not only will it give you a better chance of retaining customers, but it will give you more clout with the big search engines.

  • As more and more sites begin refreshing their design, it’s important to be ahead of the trend and not seen to be lagging behind your competitors. Run tests on your site to make sure that your site is mobile-friendly and meets the requirements set by Google’s new algorithm.

  • If you have any questions surrounding mobile friendly design, SEO or are looking to update your site design, just drop us a line.

Image Credit: SabianMaggy


Deeson: What makes a good digital strategist?

Planet Drupal - Tue, 31/03/2015 - 11:28

Strategy has always fascinated me.

The intellectual challenge of developing innovative yet rational responses to a unique context is something I'm really passionate about.

Digital strategy is all too often conflated with knowing which social media channels to be on or whether you should have a mobile app. But to be a good digital strategist you've got to know more than your Vines from your Meerkats.

I've been thinking about what sort of traits make someone a good digital strategist and I'm firmly convinced that the underlying strategic nous isn't something people are born with.

With focused personal development and discipline the traits of an effective strategist can be quite easily developed. 

Combining these traits with a professional curiosity and a thirst for knowledge about the world around them is what makes a good digital strategist.

So what do good digital strategists do to keep their skills sharp?

Here are my top five ways that digital strategists can stay on top of their game.

  1. Act like a counsellor

    As a strategist, you're helping a brand, organisation or person solve a problem. At the early stages of planning you should be absorbing more information than you're providing. As the Greek philosopher Epictetus wrote "We have two ears and one mouth so that we can listen twice as much as we speak."

    In a briefing session, probe the problem and interrogate the request to determine the true goal. A good strategist is like a good counsellor, reflective, intuitive and never pushy; they talk about you more than themselves.

    'Counselling' is essential to a strategist as it helps them develop the insights which are so critical to creating the final plan. 

  2. Never stop thinking 

    Whilst it's not healthy to be working constantly, a good strategist just can't help analysing campaigns she sees in her daily life.

    When watching TV ads, she wonders what led the creative team to sit around and devise that particular commercial. A good strategist has 'x-ray vision' when looking at other people's marketing activity - an ability to see through the consumer-facing proposition, right back to the brief.

  3. Show empathy

    Empathy is one of the key attributes of a good strategist; the ability to put themselves in another's shoes.

    Strategists need to be able to see across 'lines' in society, unaffected by taboo, moral issues, class or race. If a strategist is unable to put themselves in the consumer's shoes, they won't be able to successfully relay the needs of the brand or organisation to them.

  4. Be the glue

    A strategist should be the 'glue' in your campaign team, understanding the professional and personal needs of each contributor. She should be able to draw together findings using stats, creative insight and consumer research.

    The strategist must be the 'voice of reason' in collaborative sessions, listening and encouraging, whilst reining back overconfidence, even in their superiors.

  5. Quiet confidence

    A strategist is often a 'thinker', who prefers to absorb information and work alone to come up with ideas to present back to the team.

    Though they may spend a good proportion of their time in quiet contemplation, they're by no means a wallflower. When a strategist gets up to present, the client sits back and listens - they're well-informed, interesting and confident. You'll find most heads nodding as they present their insights.

In truth, a strategist is simply one of many people contributing to a problem solving expedition. 

However their unique capacity to join up insight from across a marketing team means they are essential for building a successful digital proposition.

Whilst individual disciplines focus heavily on their own area, the strategist must be able to take a holistic view of the evidence and devise a clever, effective solution.

Sounds simple in theory, yet fascinatingly complex in practice - which is probably why I enjoy it so much!

 


Konstantinos Margaritis: "Advanced Java® EE Development with WildFly" released by Packt (I was one of the reviewers!)

Planet Debian - Tue, 31/03/2015 - 11:10

For the past months I had the honour and pleasure of being one of the reviewers of "Advanced Java® EE Development with WildFly" by Deepak Vohra. Today, I'm pleased to announce that the book has just been released by Packt:

https://www.packtpub.com/application-development/advanced-java-ee-development-wildfly

It was my first time being a reviewer and it was a very interesting experience. I would like to thank the two Project Coordinators from Packt, Aboli Ambardekar and Suzanne Coutinho, who guided me through the reviewing process, so that my review would be as accurate as possible and related only to the technical aspects of the book. Looking at the process retrospectively, I now begin to understand the complexity of achieving a balance between the author's vision for the book and the scrutiny of the (many) reviewers.

And of course I would like to thank the author, Deepak Vohra, for writing the book in the first place, I'm looking forward to reading the actual physical book :)


Web Wash: How to Control Summary Text using Smart Trim in Drupal 7

Planet Drupal - Tue, 31/03/2015 - 10:36

The "Long text and summary" field has a pretty handy formatter called "Summary or trimmed". This will display a summary, if one is supplied, or Drupal will simply trim the text and display it.

The problem with this formatter is that you can't trim the summary. For example, if an editor adds three paragraphs into the summary section, then the whole summary is shown. But sometimes you may need control over how much of the summary is displayed. This is especially true if your design requires the teaser to have a consistent height.

What's the best way of offering a summary to your editors that also trims it? Enter Smart Trim.

The Smart Trim module is an improved version of the "Summary or Trimmed" formatter and a whole lot more. It does a lot of useful stuff, but the one we want to discuss is the ability to trim summaries.
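Getting the module is the usual routine; a sketch using Drush, with smart_trim being the project's machine name on drupal.org:

# Download and enable Smart Trim on a Drupal 7 site
drush dl smart_trim -y
drush en smart_trim -y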


Drupal core announcements: Drupal 7 core release on Wednesday, April 1

Planet Drupal - Tue, 31/03/2015 - 08:32
Start: 2015-04-01 (All day) America/New_York
Online meeting (e.g. IRC meeting)
Organizers: David_Rothstein

The monthly Drupal core bug fix/feature release window is this Wednesday, April 1, and since it has been a while since the last one, I plan to release Drupal 7.36 on that date.

The final patches for 7.36 have been committed and the code is frozen (excluding documentation fixes and fixes for any regressions that may be found in the next couple days). So, now is a wonderful time to update your development/staging servers to the latest 7.x code and help us catch any regressions in advance.
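If you want to test against the frozen code directly, it lives on the 7.x branch of the core Git repository; a sketch, to be used on a throwaway copy of your site rather than production:

# Check out the current 7.x development code (what will become 7.36)
git clone --branch 7.x http://git.drupal.org/project/drupal.git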

There are three relevant change records for Drupal 7.36 which are listed below. This is not the full list of changes, rather only a list of notable API additions and other changes that might affect a number of other modules, so it's a good place to start looking for any problems:

You might also be interested in the tentative CHANGELOG.txt for Drupal 7.36 and the corresponding list of important issues that will be highlighted in the Drupal 7.36 release notes.

If you do find any regressions, please report them in the issue queue. Thanks!

Upcoming release windows after this week include:

  • Wednesday, April 15 (security release window)
  • Wednesday, May 6 (bug fix/feature release window)

For more information on Drupal core release windows, see the documentation on release timing and security releases, and the discussion that led to this policy being implemented.


Chen Hui Jing: Drupal 101: Mapping with Leaflet and IP Geolocation

Planet Drupal - Tue, 31/03/2015 - 02:00

Store locators are a useful functionality for businesses who have multiple outlets. Drupal has a number of map rendering modules that allow us to provide store locator functionality. This article will cover the basics of setting up a simple store locator with proximity search functionality.

Create and setup location content type

Required modules

  1. Install the required modules: drush dl addressfield geocoder geofield geophp ctools -y
  2. Enable the required modules: drush en addressfield geocoder geofield geofield_map...

John Goerzen: ssh suddenly stops communicating with some hosts

Planet Debian - Tue, 31/03/2015 - 00:13

Here’s a puzzle I’m having trouble figuring out. This afternoon, ssh from my workstation or laptop stopped working to any of my servers (at OVH). The servers are all running wheezy, the local machines jessie. This happens on both my DSL and when tethered to my mobile phone. They had not applied any updates since the last time ssh worked. When looking at it with ssh -v, they were all hanging after:

debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr umac-64@openssh.com none
debug1: kex: client->server aes128-ctr umac-64@openssh.com none
debug1: sending SSH2_MSG_KEX_ECDH_INIT
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY

Now, I noticed that a server on my LAN — running wheezy — could successfully connect. It was a little different:

debug1: kex: server->client aes128-ctr hmac-md5 none
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: sending SSH2_MSG_KEX_ECDH_INIT
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY

And indeed, if I run ssh -o MACs=hmac-md5, it works fine.
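For anyone hitting the same hang, the workaround can also be made persistent; a sketch, with host and user names assumed:

# One-off: force a MAC the servers accept
ssh -o MACs=hmac-md5 user@server.example.com

# Persistent, in ~/.ssh/config for the affected hosts:
#   Host *.example.com
#       MACs hmac-md5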

Now, I tried rebooting machines at multiple ends of this. No change. I tried connecting from multiple networks. No change. And then, as I was writing this blog post, all of a sudden it works normally again. Supremely weird! Any ideas what I can blame here?


Carl Chenet: Verify the backups of backup-manager

Planet Debian - Tue, 31/03/2015 - 00:00

Follow me on Identi.ca  or Twitter  or Diaspora*

Backup-manager is a tool that creates backups and stores them locally. It’s really useful for keeping a regular backup of a quickly-changing tree of files (like a development environment), or for traditional backups if you have an NFS mount on your server. Backup-manager is also able to send the backups themselves to another server by FTP.

In order to verify the backups created by backup-manager, we will also use Backup Checker (stars appreciated :) ), the automated tool to verify backups. For each newly-created backup we want to check that:

  • the directory wip/data exists
  • the file wip/dump/dump.sql exists and has a size greater than 100MB
  • the file wip/config/accounts did not change and has a specific MD5 hash sum.
Installing what we need

We install backup-manager and Backup Checker. If you use Debian Wheezy, just use the following command:

apt-key adv --keyserver pgp.mit.edu --recv-keys 2B24481A \
&& echo "deb http://debian.mytux.fr wheezy main" > /etc/apt/sources.list.d/mytux.list \
&& apt-get update \
&& apt-get install backupchecker backup-manager

Backup Checker is also available for Debian Squeeze, Debian Sid, FreeBSD. Check out the documentation to install it from PyPi or from sources.

Configuring Backup-Manager

Backup-manager will ask which directory you want to back up; in our case we choose /home/joe/dev/wip

In the configuration file /etc/backup-manager.conf, you need to have the following lines:

export BM_BURNING_METHOD="none"
export BM_UPLOAD_METHOD="none"
export BM_POST_BACKUP_COMMAND="backupchecker -c /etc/backupchecker -l /var/log/backupchecker.log"

Configuring Backup Checker

In order to configure Backup Checker, use the following commands:

# mkdir /etc/backupchecker && touch /var/log/backupchecker.log

Then write the following in /etc/backupchecker/backupmanager.conf:

[main]
name=backupmanager
type=archive
path=/var/archives/laptop-home-joe-dev-wip.%Y%m%d.master.tar.gz
files_list=/etc/backupchecker/backupmanager.list

You can see we’re using placeholders for the path value, in order to match each time the latest archive. More information about Backup Checker placeholders in the official documentation.

Last step, the description of your controls on the backup:

[files]
wip/data| type|d
wip/config/accounts| md5|27c9d75ba5a755288dbbf32f35712338
wip/dump/dump.sql| >100mb

Launch Backup Manager

Just launch the following command:

# backup-manager

After Backup Manager is launched, Backup Checker runs automatically and verifies the new backup of the day in the directory where Backup Manager stores the backups.

Possible control failures

Let's say the dump does not have the expected size. It means someone may have messed up with the database! Backup Checker will warn you with the following message in /var/log/backupchecker.log:

$ cat /var/log/backupchecker.log
WARNING:root:1 file smaller than expected while checking /var/archives/laptop-home-joe-dev-wip-20150328.tar.gz:
WARNING:root:wip/dump/dump.sql size is 18. Should have been bigger than 104857600.

Other possible failures: someone created an account without asking anyone. The hash sum of the file will change. Here is the alert generated by Backup Checker:

$ cat /var/log/backupchecker.log
WARNING:root:1 file with unexpected hash while checking /var/archives/laptop-home-joe-dev-wip-20150328.tar.gz:
WARNING:root:wip/config/accounts hash is 27c9d75ba5a755288dbbf32f35712338. Should have been 27c9d75ba3a755288dbbf32f35712338.

Another possible failure: someone accidentally (or not) removed the data directory! Backup Checker will detect the missing directory and warn you:

$ cat /var/log/backupchecker.log
WARNING:root:1 file missing in /var/archives/laptop-home-joe-dev-wip-20150328.tar.gz:
WARNING:root:wip/data

Awesome, isn’t it? The power of a backup tool combined with an automated backup checker: no more surprises when you need your backups. Moreover, you save the time and effort of checking the backups yourself.

What about you? Let us know what you think of it; we would be happy to get your feedback. The project cares about its users: the outdated feature was an awesome idea from a feature request by one of the Backup Checker users. Thanks Laurent!

 



Yves-Alexis Perez: 3.2.68 Debian/grsec kernel and update on the process

Planet Debian - Mon, 30/03/2015 - 22:27

It's been a long time since I updated my repository with a recent kernel version, sorry for that. This is now done, the kernel (sources, i386 and amd64) is based on the (yet unreleased) 3.2.68-1 Debian kernel, patched with grsecurity 3.1-3.2.68-201503251805, and has the version 3.2.68-1~grsec1.

It works fine here, but as always, no warranty. If any problem occurs, try to reproduce using vanilla 3.2.68 + grsec patch before reporting here.

And now that the Jessie release approaches, the question of what to do with those Debian/grsec kernels arises again: the Jessie kernel is based on the 3.16 branch, which is not a (kernel.org) long-term branch. Actually, upstream support already ended some time ago, and the (long-term) maintenance is now provided by the Canonical Kernel Team (thus the -ckt suffix) with some help from the Debian kernel maintainers. So there's no grsecurity patch following 3.16, and there's no easy way to forward-port the 3.14 patches.

At that point, and considering the support I got the last few years on this initiative, I don't think it's really worth it to continue providing those kernels.

One initiative which might be interesting, though, is the Mempo kernels. The Mempo team works on kernel reproducible builds, but they also include the grsecurity patch. Unfortunately, it seems that building the kernel their way involves calling a bash script which calls another one, and another one. A quick look at the various repositories was only enough to confuse me about how they actually build the kernel in the end, so I'm unsure it's the perfect fit for a supposedly secure kernel. Not that the Debian way of building the kernel doesn't involve calling a lot of scripts (either bash or python), but still. After digging a bit, it seems that they're using make-kpkg (from the kernel-package package), which is not the recommended way anymore. Also, they're currently targeting Wheezy, and thus the 3.2 kernel, and I have no idea what they'll choose for Jessie.

In the end, for myself, I might just write a quick script which takes a git repository at the right version, picks the latest grsec patch for that branch, applies it, then runs make deb-pkg, and be done with it (a rough sketch of such a script follows the list below). That still leaves the problem of which branch to follow:

  • run a 3.14 kernel instead of the 3.16 one (I'm unsure how much I'd lose / not gain by going from 3.2 to 3.14 instead of 3.16);
  • run a 3.19 kernel, then upgrade when it's time, until a new LTS branch appears.
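As mentioned, such a script could look roughly like this; a sketch only, where the kernel version and patch file name are assumptions to be adjusted:

#!/bin/sh
set -e
# Fetch the stable tree at the wanted version (version assumed)
KVER=3.14.37
git clone --branch v${KVER} --depth 1 \
    git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
cd linux-stable

# Apply the matching grsecurity patch (file name assumed,
# downloaded beforehand from grsecurity.net)
patch -p1 < ../grsecurity-3.1-${KVER}-*.patch

# Reuse the running kernel's config, then build Debian packages
cp /boot/config-$(uname -r) .config
make olddefconfig
make -j$(nproc) deb-pkg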

There's also the config file question, but if I'm just using the kernels for myself and not sharing them, that's also easier; although if some people are actually interested, it's not hard to publish them.


Matthias Klumpp: Limba Project: Another progress report

Planet Debian - Mon, 30/03/2015 - 21:46

And once again, it's time for another Limba blogpost!

Limba is a solution to install 3rd-party software on Linux, without interfering with the distribution’s native package manager. It can be useful to try out different software versions, use newer software on a stable OS release or simply to obtain software which does not yet exist for your distribution.

Limba works distribution-independent, so software authors only need to publish their software once for all Linux distributions.

I recently released version 0.4, with which all of the most important features you would expect from a software manager are complete. This includes installing & removing packages, GPG-signing of packages, package repositories, package updates, etc. Using Limba is still a bit rough, but most things work pretty well already.

So, it’s time for another progress report. Since a FAQ-like list is easier to digest than a long blogpost, I’ll go with that format again. Let’s address one important general question first:

How does Limba relate to the GNOME Sandboxing approach?

(If you don’t know about GNOME’s sandboxes, take a look at the GNOME Wiki – Alexander Larsson also blogged about it recently.)

First of all: there is no rivalry here and no NIH syndrome involved. Limba and GNOME’s sandboxes (XdgApp) are different concepts, which both have their place.

The main difference between the two projects is the handling of runtimes. A runtime is the set of shared libraries and other shared resources applications use. This includes libraries like GTK+/Qt5/SDL/libpulse etc. XdgApp applications have one big runtime they can use, built with OSTree. This runtime is static and will not change; it will only receive critical security updates. A runtime in XdgApp is provided by a vendor like GNOME as a compilation of multiple single libraries.

Limba, on the other hand, generates runtimes on the target system on-the-fly out of several subcomponents with dependency-relations between them. Each component can be updated independently, as long as the dependencies are satisfied. The individual components are intended to be provided by the respective upstream projects.

Both projects have their individual upsides and downsides: while the static runtime of XdgApp makes testing simple, it is also harder to extend and more difficult to update. If something you need is not provided by the mega-runtime, you will have to provide it yourself (e.g. some applications will ship smaller shared libraries with their binaries, as they are not part of the big runtime).

Limba does not have this issue; instead, with its dynamic runtimes, it relies on upstreams behaving nicely and not breaking ABIs in security updates, so that existing applications continue to work even with newer software components.

Obviously, I like the Limba approach more, since it is incredibly flexible and even allows mimicking the behaviour of GNOME’s XdgApp by using absolute dependencies on components.

Do you have an example of a Limba-distributed application?

Yes! I recently created a set of packages for Neverball – Alexander Larsson also created an XdgApp bundle for it, and due to the low amount of stuff Neverball depends on, it was a perfect test subject.

One of the main things I want to achieve with Limba is to integrate it well with continuous integration systems, so you can automatically get a Limba package built for your application and have it tested with the current set of dependencies. Also, building packages should be very easy, and as failsafe as possible.

You can find the current Neverball test in the Limba-Neverball repository on Github. All you need (after installing Limba and the build dependencies of all components) is to run the make_all.sh script.
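Concretely, trying it boils down to the following; the repository URL is an assumption based on the repository name, and you need Limba plus the build dependencies installed:

# Clone the packaging test and build everything (URL assumed)
git clone https://github.com/ximion/limba-neverball.git
cd limba-neverball
./make_all.sh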

Later, I also want to provide helper tools to automatically build the software in a chroot environment, and to allow building against the exact version depended on in the Limba package.

Creating a Limba package is trivial: it boils down to writing a simple “control” file describing the dependencies of the package, and writing an AppStream metadata file. If you feel adventurous, you can also add automatic build instructions as a YAML file (which uses a subset of the Travis build config schema).

This is the Neverball Limba package, built on Tanglu 3, running on Fedora 21 (screenshot in the original post).

Which kernel do I need to run Limba?

The Limba build tools run on any Linux version, but to run applications installed with Limba, you need at least Linux 3.18 (for Limba 0.4.2). I plan to bump the minimum version requirement to Linux 4.0+ very soon, since this release contains some improvements in OverlayFS and a few other kernel features I am thinking about making use of.

Linux 3.18 is included in most Linux distributions released in 2015 (and of course any rolling release distribution and Fedora have it).

Building all these little Limba packages and keeping them up-to-date is annoying…

Yes indeed. I expect that we will see some “bigger” Limba packages bundling a few dependencies, but in general this is a pretty annoying property of Limba currently, since there are so few packages available to reuse. But I plan to address this. Behind the scenes, I am working on a webservice which will allow developers to upload Limba packages.

This central resource can then be used by other developers to obtain dependencies. We can also perform some QA on the received packages, match the available software against CVE databases to see if a component is vulnerable and publish that information, etc.

All of this is currently planned, and I can’t say a lot more yet. Stay tuned! (As always: If you want to help, please contact me)

Are the Limba interfaces stable? Can I use it already?

The Limba package format should be stable by now. Since Limba is still alpha software, I will, however, make breaking changes in case there is a huge flaw which makes it reasonable to break the IPK package format. I don’t think that will happen, though, as the Limba packages are designed to be easily backward- and forward-compatible.

For the Limba repository format, I might make some more changes though (less invasive, but you might need to rebuild the repository).

tl;dr: Yes! Please use Limba and report bugs, but keep in mind that Limba is still in an early stage of development, and we need bug reports!

Will there be integration into GNOME-Software and Muon?

From the GNOME-Software side, there were positive signals about that, but some technical obstacles need to be resolved first. I have not yet been in contact with the Muon crew – they are just implementing AppStream, which is a prerequisite for having any support for Limba [1].

Since PackageKit dropped support for plugins, every software manager needs to implement support for Limba.

So, thanks for reading this (again too long) blogpost! There are some more exciting things coming soon, especially regarding AppStream on Debian/Ubuntu!

 

[1]: And I should actually help with the AppStream support, but currently I cannot allocate enough time to take on that additional project as well – this might change in a few weeks. Also, Muon does pretty well already!


Drupal for Government: Governmenty forms even more governmenty with FillPDF and Rules

Planet Drupal - Mon, 30/03/2015 - 21:08

When working with government agencies, the sacred form may raise its fugly-formatted head now and again.  Despite appeals to logic ("Wouldn't an XLS spreadsheet be easier for everyone?"), it sometimes comes down to what's simpler: gettin' er done vs. doin' it right... and if no one really cares about doin' it right, gettin' er done becomes the (sloppy) way, (half)truth, and (dim) light....

So yeah - I had a form that needed to be pixel-perfect so that a state-wide agency could print the forms and store them in a manilla folder. I started working with Views PDF, which did generate PDFs, and along with Mime Mail and Rules we were sending PDFs out... but they just weren't looking the way folks wanted. FillPDF - thank you.

To use FillPDF we started by installing pdftk (apt-get install pdftk on Ubuntu) and then installing the module as per usual... here's the rest, step by step.
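The initial setup is short; a sketch, with fillpdf and rules being the project machine names on drupal.org:

# Install the PDF toolkit FillPDF shells out to (Debian/Ubuntu)
apt-get install pdftk

# Download and enable the modules on a Drupal 7 site
drush dl fillpdf rules -y
drush en fillpdf rules -y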

