Feed aggregator

Drupal Watchdog: VIDEO: DrupalCon Amsterdam Interview: Robert Vandenburg

Planet Drupal - Mon, 11/05/2015 - 18:32

Lingotek does translation management (whatever that is). As President and CEO, ROBERT VANDENBURG gets to travel to exotic locations like Austin and Amsterdam, and hobnob with the locals.

How many languages does Bob speak? We’ll ask him at DrupalCon LA. Stay tuned.

Tags: DrupalCon Amsterdam, DrupalCon Video, Video
Categories: Elsewhere

nielsdefeyter.nl: Create a View with Organic Group content

Planet Drupal - Mon, 11/05/2015 - 16:08
If you want to list Organic Group content in a View, you must do a little more than just add one reference. A default example is available when you enable Organic Groups; that View is called "OG content" (machine name: og_nodes). I put it here on the blog because our use case didn't use Organic...
Categories: Elsewhere

iterate.: Drupal Open Days 2015

Planet Drupal - Mon, 11/05/2015 - 16:03

Drupal Open Days is the annual Irish Drupal conference. This year we are presenting three talks at the event covering business strategy, UX and development.

Categories: Elsewhere

Julien Danjou: My interview about software tests and Python

Planet Debian - Mon, 11/05/2015 - 15:39

I've recently been contacted by Johannes Hubertz, who is writing a new book about Python in German called "Softwaretests mit Python", to be published by Open Source Press, Munich, this summer. His book will feature some interviews, and he was kind enough to let me write a bit about software testing. This is the interview I gave for his book; Johannes translated it into German for inclusion, and I decided to publish the original version on my blog today.

How did you come to Python?

I don't recall exactly, but around ten years ago, I saw more and more people using it and decided to take a look. Back then, I was more used to Perl. I didn't really like Perl and was not getting a good grip on its object system.

As soon as I found an idea to work on – if I remember correctly that was rebuildd – I started to code in Python, learning the language at the same time.

I liked how Python worked and how fast I was able to develop and learn it, so I decided to keep using it for my next projects. I ended up diving into Python's internals for various reasons, even briefly hacking on projects like Cython at some point, and finally ended up working on OpenStack.

OpenStack is a cloud computing platform written entirely in Python, so I've been writing Python every day since I started working on it.

That's what pushed me to write The Hacker's Guide to Python in 2013, and then to self-publish it a year later, in 2014: a book where I talk about writing smart and efficient Python.

It has been a great success and has even been translated into Chinese and Korean, so I'm currently working on a second edition of the book. It has been an amazing adventure!

Zen of Python: Which line is the most important for you and why?

I like "There should be one – and preferably only one – obvious way to do it". The opposite is probably what scared me away from languages like Perl. Having one obvious way to do it is something I tend to like in functional languages like Lisp, which are, in my humble opinion, even better at that.

For a Python newbie, what are the most difficult subjects in Python?

I haven't been a newbie for a while, so it's hard for me to say. I don't think the language is hard to learn. There are some subtleties in the language itself when you dive deeply into the internals, but for beginners most of the concepts are pretty straightforward. If I had to pick, among the language basics the most difficult thing would be generator objects (yield).
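To illustrate the point about generators (my sketch, not an example from the interview): a generator's body runs lazily, pausing at each `yield`, which is what tends to surprise beginners.

```python
def countdown(n):
    """Generator: pauses at yield, keeping its local state (n) alive."""
    while n > 0:
        yield n          # hand back one value, then suspend here
        n -= 1

gen = countdown(3)       # nothing runs yet: we only get a generator object
print(next(gen))         # 3 -- the body runs just far enough for one yield
print(list(gen))         # [2, 1] -- consuming the rest exhausts it
```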

Nowadays I think the most difficult subjects for newcomers are which version of Python to use, which libraries to rely on, and how to package and distribute projects. Fortunately, things are getting better.

When did you start using Test Driven Development and why?

I learned unit testing and TDD at school, where teachers forced me to learn Java, and I hated it. The frameworks looked complicated, and I had the impression I was wasting my time. Which I actually was, since I was writing disposable programs – that's the only thing you do at school.

Years later, when I started to write real and bigger programs (e.g. rebuildd), I quickly found myself fixing bugs… that I had already fixed. That reminded me of unit tests, and that it might be a good idea to start using them to stop fixing the same things over and over again.
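A regression test in that spirit might look like the following sketch (a hypothetical example, not code from rebuildd):

```python
import unittest

def parse_version(text):
    """Toy function with a bug history: empty input used to crash."""
    if not text:
        return (0, 0)                 # the fix we never want to lose
    major, minor = text.split(".")
    return (int(major), int(minor))

class TestParseVersion(unittest.TestCase):
    def test_normal_input(self):
        self.assertEqual(parse_version("3.11"), (3, 11))

    def test_empty_input_regression(self):
        # Pins down the old bug so it cannot silently come back.
        self.assertEqual(parse_version(""), (0, 0))

if __name__ == "__main__":
    unittest.main()
```

Once such a test is in the suite, the same bug can never be reintroduced without a red test flagging it.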

For a few years, I wrote less Python and more C code and Lua (for the awesome window manager), and I didn't use any testing. I probably lost hundreds of hours testing manually and fixing regressions – that was a good lesson. Though I had good excuses at that time – it is/was way harder to do testing in C/Lua than in Python.

Since that period, I have never stopped writing tests. When I started to hack on OpenStack, the project was adopting a "no test? no merge!" policy due to the high number of regressions it had during its first releases.

I honestly don't think I could work on any project that does not have at least minimal test coverage. It's impossible to hack efficiently on a code base that you can't test with a single simple command. It's also a real problem for newcomers in the open source world: when there are no tests, you can hack something, send a patch, and get a "you broke this" in response. Nowadays, this kind of response sounds unacceptable to me: if there is no test, then I didn't break anything!

In the end, it's just too frustrating to work on untested projects, as I demonstrated in my study of the whisper source code.

What do you think are the most common pitfalls of TDD, and how can they best be avoided?

The biggest problems are deciding when to write tests and at what level of detail.

On one hand, some people start to write very precise tests way too soon. Doing that slows you down, especially when you are prototyping an idea or concept you just had. That does not mean you should write no tests at all, but you should probably start with light coverage until you are pretty sure you're not going to rip everything out and start over. On the other hand, some people postpone writing tests forever and end up with no tests at all, or with a layer of tests so thin that the project has pretty low coverage.

Basically, your test coverage should reflect the state of your project. If it's just starting, you should build a thin layer of tests so you can hack on it easily and remodel it if needed. The more your project grows, the more you should make it solid and add more tests.

Overly detailed tests make it painful to evolve the project at the start. Too few tests make a big project painful to maintain.

Do you think TDD fits and scales well for big projects like OpenStack?

Not only do I think it fits and scales well, I also think it's just impossible not to use TDD in such big projects.

When unit and functional tests coverage was weak in OpenStack – at its beginning – it was just impossible to fix a bug or write a new feature without breaking a lot of things without even noticing. We would release version N, and a ton of old bugs present in N-2 – but fixed in N-1 – were reopened.

For big projects with a lot of different use cases, configuration options, etc., you need belt and braces. You cannot throw code into a repository assuming it's just going to work, and you can't afford to test everything manually at each commit. That's just insane.

Categories: Elsewhere

Craig Small: procps using GitLab CI

Planet Debian - Mon, 11/05/2015 - 14:26

The procps project has been hosted at Gitorious for a few years. With the announcement that Gitorious has been acquired by GitLab and that all repositories need to move there, procps moved along to GitLab. At first I thought it would just be a like-for-like thing, but then I noticed GitLab's GitLab CI feature and had to try it out.

CI here stands for Continuous Integration, a way of automatically testing your program builds using a set of test scripts. procps already has a set of tests, though their coverage has room for improvement, so it was a good candidate for CI. The way GitLab CI works is that a central control point linked to the git repo hands jobs to runners, the systems that actually compile the programs and run the tests. The runners then feed back their results, and GitLab CI shows it all in pretty red or green.
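From that description, the job a runner executes might be defined by a `.gitlab-ci.yml` along these lines (an illustrative sketch only; the image name is hypothetical and the real procps configuration may differ):

```yaml
# .gitlab-ci.yml -- hedged sketch for an autotools C project with a
# DejaGnu/TCL test suite, run inside a custom Docker image.
image: procps-build-image        # hypothetical custom image name

build-and-test:
  script:
    - ./autogen.sh               # needs autopoint from the gettext tools
    - ./configure
    - make
    - make check                 # runs the TCL test scripts
```

GitLab CI reads this file from the repository and dispatches each job to a registered runner.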

The first problem was building this runner. Most of the documentation assumes you are testing Ruby. procps is built in C with its test scripts in TCL, so some adjustments were going to be needed. I chose to use Docker containers so that there was at least some isolation between the runners and the base system. I soon found that the Docker container I used (ruby:2.6, as suggested by GitLab) didn't have autopoint, which meant autogen.sh failed, so I had no configure script and no joy there.

My second problem was that I had never used Docker before; beyond knowing it was some sort of container thing, like lightweight virtual machines, I didn't know much about it. The Docker docs are very good, and soon I had built my own custom Docker image with all the programs I need to compile and test procps. It's basically a cut-down Debian image with a few things like gettext-bin and dejagnu added in.
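An image like the one described (a cut-down Debian with the build and test tools) could be defined with a Dockerfile along these lines; this is a guess at the recipe, not the actual image definition:

```dockerfile
# Hypothetical build/test image for procps: Debian plus the compile
# toolchain, gettext tools (for autopoint) and the DejaGnu test harness.
FROM debian:jessie
RUN apt-get update && apt-get install -y \
        build-essential autoconf automake libtool autopoint \
        gettext gettext-bin dejagnu git libncurses5-dev \
    && rm -rf /var/lib/apt/lists/*
```

Building it once with `docker build -t procps-build-image .` gives every runner the same reproducible environment.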

Docker is not a full system

With all that behind me, and a few "oh, I need that too" moments (don't forget you need git), we had a working CI runner. This was a Good Thing. You then find that the assumptions in your test cases may not always be correct. This is especially noticeable when testing something like procps, which needs to work off the proc filesystem. A good example: what uses session ID 1? Generally it's init or systemd, but in Docker this is what everything runs as. A test case that assumes things about SID=1 will fail, as it did.
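The session-ID quirk is easy to observe for yourself; here is a standalone check (my illustration, not one of the procps test cases):

```python
import os

# On a normal Linux host, PID 1 is init/systemd and your own process
# runs in its own session, so getsid() will rarely return 1 here.
# Inside a Docker container, the command that started the container is
# PID 1 and the processes it spawns typically share session ID 1, which
# breaks any test assuming "SID 1 belongs to init".
print("pid:", os.getpid())
print("sid:", os.getsid(0))   # 0 means "the calling process"
```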

This probably won’t be a problem when testing a lot of “normal” programs that don’t need to dig as deep into the host system as procps does, but it is something to remember. The Docker environment looks a lot like a real system from the inside, but there are differences, so the lesson here is to write better tests (or fix the ones that failed, like I did).

The runner and polling

Now, there needs to be communication between the CI website and the runner, so the runner knows there is something for it to do.  The GitLab runner has this set up rather well, except that the runner is rather aggressive about its polling, hitting the website every 3 seconds. For someone on crummy Australian Internet with low speeds and download quotas, this can get expensive in network resources. As there is on average an update once a week or so, this seems rather excessive.

My fix is pretty manual; actually, it's totally manual. I stop the daemon; then, if I notice there are pending jobs, I start the daemon, let it do its thing, and shut it down again. This is certainly not an optimal solution, but it works for now. I will look into doing something more clever later, possibly with webhooks.

Categories: Elsewhere

Amazee Labs: Kickstart your Career with Amazee Talents

Planet Drupal - Mon, 11/05/2015 - 14:04
Kickstart your Career with Amazee Talents

Today our Group is launching Amazee Talents, a program to win bright minds like you for a three-month internship at Amazee Labs or Amazee Metrics.

If you have completed, or are about to complete your apprenticeship, higher education or university degree and happen to live in Switzerland, the European Union, the United States or South Africa, then read on.

We are looking for above-average talents in web development, web analytics or online marketing who love to tackle tech problems and want to join our hard working team.

Depending on the available positions and your domicile, you will be assigned to an internship in Zurich (Switzerland) or Austin (Texas, USA). The Amazee Group covers all travel and accommodation costs to and at your place of work.

So, if you want to learn from the best in the field of Drupal development or web analytics & online marketing and be part of an open and creative corporate culture, visit the Amazee Talent program on our Group website.

We are looking forward to your application (scroll down for the talent program)!

Categories: Elsewhere

Zlatan Todorić: Reboot to roots

Planet Debian - Mon, 11/05/2015 - 13:17

I have been working in the software development field for more than five years. During that period I have come across many other developers, the vast majority of whom can be described as terrible people. That is probably what sets the hacker mind apart from the average software engineer: people tend to think that holding the position of software developer makes them god-like and entitled to be treated as such, while hackers know who they are and tend to be Socrates-wise: "I know that I know nothing". As a student of mechanical engineering (majoring in mechatronics) I didn't have much programming, so naturally I had some personal difficulty grasping the software field, but once I got in I almost totally stopped doing my student duties, as they didn't cross paths with my love of hacking.

And now, after that many years in the software field, in which I mostly worked as a freelancer for other people and companies (where it wasn't unusual that I was hired personally by a guy who couldn't, or didn't know how to, finish his work; I handed over code which he would present as his own, but I didn't mind, as I needed the money to get by), with my work ending up proprietary, I am just fed up with all this nonsense. Someone might ask what the problem is. Well, for me it is a problem: being part of Debian and the wider Free Software ecosystem, I know that proprietary software does not foster a better society, so it made me a bit unhappy. What really annoys me are asshole so-called software engineers and, most of all, bad behavior towards users. Users should be treated as supporters of your software, and if a proprietary company treats its users like shit, then it's obvious they don't care about the users, only about the money. Proprietary software isn't the only bad thing here; the open source community has its share of terrible responses, which comes naturally given the sheer number of people in open source. I, for instance, was called a peasant and even a sheep on Debian IRC in my early days, but Debian, like all communities, has its bad seeds, and sometimes even non-Debianite trolls hop on our channels. I didn't let that put me down, because I got the chance to meet many Debian people in person, and they changed the course of my life at that point.

I am unhappy with the current state of things, worldwide and personally, but I have now decided that I am shifting towards being a technical customer support engineer, or, as I prefer to call it, a Happiness Hero. Why, one might ask, when as a software engineer you would make more? Well, firstly, I don't care that much about money (or, to be more precise, since capitalism has made a sick money-grabbing society, you can never pay me enough, as I am worth more than the entire capitalist system, so in the end I don't care that much how I am paid). Secondly, a happiness hero can earn more than a software engineer, as the recent #talkpay hashtag on Twitter showed. I have written what I think are many good pieces of code that will never be credited as mine, but I achieved no happiness, and a few months ago I suffered a rather bad burnout. My shift towards happiness hero should look like this: I am pretty well prepared technically for the position; I have experience helping users, and people in general, and I tend to be pretty good at it (though we would need to ask them); and I enjoy helping people debug their technical problems so they can spend more time on other things (read: family, creativity, hobbies). All of this, I believe, puts me in a good position to achieve happiness, so I can make time to finish my damn degree (where, to be honest, professors have shown that they can be among the worst enemies of their students; I highly doubt this generic way of schooling that has persisted for centuries). And if I get a decent paycheck out of this capitalist system, I will have the chance to sponsor a poor family with a child talented in the arts, that is, to SHARE something material for cultural and emotional gain. Take that, capitalism!

And the best part: I am starting to believe more in myself. I already have a few offers for that kind of position, but I am waiting for a few more, and I am screening companies to find the one that fits me best, one that shares my values and interacts well with its users. One of the companies that makes me want to screen more companies is Buffer. When you read this and this, you will see that their way of life and perks are really something every company should adopt; then you will understand why I want to give myself the chance I believe I have finally deserved. And no, I didn't apply to Buffer, as I don't use their service, nor did I have time to read the two books required for applying (which I think is a really great way to foster people into getting more knowledge). I can only wish them the best of luck in the future.

I am currently doing one small software development job (freelance) for the next two weeks, and after that I hope to get a good enough offer to start the happiness job and step back from software development for a good amount of time. That doesn't mean I will cease my Free Software activities; on the contrary, I think I will have more time, and even more will, to work on Free Software development in my spare time. So if anyone, or any company that works with the open source community, has a position for a happiness hero, get in touch with me and we will have a good talk.

End point: I want to grow and become who I am, not who I should be according to others.

Categories: Elsewhere

Microserve: The importance of being Open

Planet Drupal - Mon, 11/05/2015 - 10:11

Many years ago I started my working life as a trainee tech journalist. Every month the magazine that I worked on was accompanied by a CD-ROM containing dozens of (mostly useless) applications for readers to install on their PCs. As the new boy on the team I remember asking what it meant that some of the products were listed as ‘open source’. “It just means that we can put them on the disc without paying anyone” came the explanation.

I didn’t think about it much at the time, but years later I’ve come to realise that for many people ‘open source’ still just means ‘free’. No doubt that’s one of the most attractive things about open source software, but it’s only half the story. As well as the software being ‘free of charge’ the source code is freely available too, allowing anyone with the right skills to use it, customise it and improve it. Any useful developments can then be donated back to the source code for everyone to benefit from, making it a virtuous circle for those involved.

One of the best ways to illustrate the importance of open source software development is to consider the history of the internet itself. From the TCP/IP protocols, to the concept of hypertext, to the LAMP stack and (some) modern browsers, the internet as we know it relies on royalty-free technologies that have been made openly available over the years for everyone’s benefit. My guess is that the early pioneers of the web weren’t aiming to become millionaires (although some consequently did), rather they wanted to use their new discoveries to make people’s lives better. In the era of internet billionaires and an obsession with intellectual property, it’s important for everyone involved in web-related industries to occasionally remind ourselves that we’re all standing on the shoulders of these altruistic giants.

So yes, open source always means ‘free’, but it means so much more besides. It means thinking about the bigger picture rather than the quick buck. It means having the courage to let other people make your ideas better. It means developing re-usable solutions and not reinventing the wheel. All of this can only lead to better software for everyone.

But why should the principles of open source only apply to software development? If the principles work, then why couldn’t they be applied to other disciplines? When I look at the briefs that land on my desk, I see clients asking for solutions to the same types of problems again and again, which makes me think: “wouldn’t it be great if ‘open UX design’ was a thing, or ‘open business analysis’?” Organisations always think they are unique, but their requirements are often almost identical to others in the same sector.

I recently pitched to rebuild the website of a major county council, which had already done a superb job on the UX and design phase of the project, producing some very focused wireframes and prototypes. Since this great work was funded by public money, it feels right that the documentation could be made publicly available, and potentially save other local councils from spending tens of thousands of pounds to reach similar (probably worse) solutions.

I’m not suggesting one-size-fits-all solutions, rather solutions that can be adapted for each instance and which evolve as we collectively learn more about what works and what doesn’t. By sharing and collaborating more openly the evolution of ‘best practice’ will accelerate, which will benefit everyone as ever-more smart and effective solutions emerge.

And after open UX design, why not open hardware design? Or open pharmaceutical development? Or open product design? I’m pretty sure that ‘open’ movements are happening in all these industries to some extent, but it would be great to see them reaching a critical mass and getting a higher profile. Maybe there will come a time when the Dragons on 'Dragons’ Den' don’t ask “do you own the patent?” but “will you open-source it?”. The benefits for society would be huge.

At Microserve we use Drupal CMS and are very proud to be heavily involved in the Drupal community. Our developers make frequent contributions to the open source project, and we also contribute financially, attend events and are co-organising DrupalCamp Bristol 2015 (buy your tickets now). It’s our way of giving something back to the open-source movement, which has gifted us and millions of others a great way of earning a living. We hope, in our own small way, that we’re making the web, and the world, a better place.

Dan McNamara
Categories: Elsewhere


Propeople Blog: FFW: Our New Digital Agency

Planet Drupal - Mon, 11/05/2015 - 08:18

Today, I am excited to introduce you to our new digital agency: FFW. Over the past several months, we have been working at Blink Reaction and Propeople to bring the two agencies together under a single unified brand. Through the process, we have reflected on the great successes achieved by the individual agencies throughout our histories. But more importantly, we have come together to define the core vision that will drive our new joint agency into the future.

FFW is a global digital agency built on technology, driven by data, and focused on user experience. We bring together 420 people working across 19 offices in 11 countries, to form a new agency that is a part of the Intellecta Group (listed on the NASDAQ OMX).

We find ourselves in a unique position in the digital agency marketplace as recognized technology experts that also excel in data-driven digital strategy and creative work.

No other agency understands the intersection of technology, strategy and creativity as well as we do.

We are excited to begin a whole new chapter together as FFW. It is a bittersweet moment, as the individual stories of Blink Reaction and Propeople come to an end, but I absolutely can’t wait to see what the future holds for our new agency.

Tags: FFW, digital agency
Topics: Business & Strategy
Categories: Elsewhere

DrupalCon News: Registration is Open! Come on By!

Planet Drupal - Mon, 11/05/2015 - 00:13

Registration is officially open!  We will be at the Los Angeles Convention Center until 6:00pm today and will open tomorrow at 7:00am!

When you walk down South Figueroa Street, turn right on 12th Avenue to reach the entrance of the West Hall Lobby.  It should look like this:

As you get closer you will see the blue carpet, your official cue that you have arrived at DrupalCon Los Angeles!  Welcome!  Enter the doors and we will see you at registration.

Categories: Elsewhere

Harry Slaughter: Dang, it happened to me!

Planet Drupal - Sun, 10/05/2015 - 21:40

After using the new Gmail 'tabs' for a while, I began ignoring anything not in the primary tab. It turns out this included notifications from GoDaddy regarding domain renewals. I neglected to renew devbee.com and, as sure as the sun rises, it got scooped up immediately by a ... person.


Categories: Elsewhere

ThinkDrop Consulting: DevShop Workshop & Sprints at DrupalCon Los Angeles

Planet Drupal - Sun, 10/05/2015 - 19:51

I'm headed to DrupalCon on Monday morning and hope to spend most of my time recruiting users and sprinters for DevShop development.

The DrupalCon sprints are an amazing opportunity to work together with people in person. As remote-oriented as we are, there really is no replacement for face-to-face work, especially when it comes to complex projects like DevShop.

There are a number of opportunities this week to come learn about devshop.

Sprints


I'll be sprinting on DevShop on Monday afternoon and all day Wednesday, Thursday, Friday, and Saturday! There are a few sessions and other BoFs I plan on attending, but my main focus will be to help the devshop community meet, recruit developers, give demos, and assist with setup for anyone who is interested.

Keep an eye on #devshop on IRC, @opendevshop on twitter, and follow me @jonpugh for more information throughout the week on where and when sprints will be happening.

There is also a master Sprints spreadsheet available with the schedule for all sprinters on all topics. 

Add your name to the list!

https://docs.google.com/spreadsheets/d/109drAI9MUofxh_JP7GSayfYrRZWYuK_pLFLOYswoOew/edit#gid=0 

Birds of a Feather: DevShop Workshop

https://events.drupal.org/losangeles2015/bofs/devshop-workshop

I've got a BoF scheduled on Tuesday at 5pm for a DevShop Workshop. I'll be there ready to help anyone who's interested in installing, using, or developing devshop. See you in room 510!

Birds of a Feather: Aegir!

https://events.drupal.org/losangeles2015/bofs/aegir

Come join us for discussing Aegir at large at the Aegir BoF. We'll probably be mostly talking about the upcoming 3.0 release, and what the plans are for 4.0 after that.

This is an important time for Aegir, as with any new release, we've got the opportunity for improvements and major changes in the next version. Come join us to discuss what the future might look like.

Other Sessions & BoFs

There are a number of sessions and BoFs I am going to keep my eye on to learn about more techniques people are using for dev ops, continuous integration and testing, and more:

BoF: Continuous Delivery

https://events.drupal.org/losangeles2015/bofs/drupal-and-delivery-pipeline

Now that DevShop can be used as a continuous integration, testing, and delivery platform, it will be interesting to discuss the overall process in this group.

BoF: Ansible for Drupal Infrastructure and Deployments

https://events.drupal.org/losangeles2015/bofs/ansible-drupal-infrastructure-and-deployments

I am a huge fan of GeerlingGuy, even if he doesn't know it yet. We use Ansible to install devshop, and Ansible roles are at the core of the next version of devshop, which will be able to manage and deploy servers for any software in any language. I'm really looking forward to this BoF.

Session: PHP Containers at Scale: 5K Containers per Server

https://events.drupal.org/losangeles2015/sessions/php-containers-scale-5k-containers-server

David Strauss of Pantheon is going to pop the hood to show how they can handle massive amounts of sites per server using containers. DevShop doesn't use containers, yet, so I am very keen to learn as much as I can so we can get started.

BoF: Leveraging Hybrid Cloud Orchestration

https://events.drupal.org/losangeles2015/bofs/how-manage-your-cloud

This BoF is being put on by the author of the Cloud module for Drupal: https://www.drupal.org/project/cloud

This should be very interesting because there are a lot of similarities between this module and the vision for DevShop 2.0.0.

Session: 4x High Performance for Drupal - Step by Step

https://events.drupal.org/losangeles2015/sessions/4x-high-performance-drupal-step-step

I'm looking forward to this session because we want to incorporate high performance tools into devshop out of the box. I hope to learn enough from this walkthrough to start adding performance enhancements to devshop hosting immediately.

Get in Touch

I'm available all week to give demos or chat about DevShop, so if you are interested or have any questions, please feel free to get in touch via our contact form or on Twitter!

Tags: DrupalCon, Planet Drupal, devshop
Categories: Elsewhere

Eduard Sanou: Reproducible builds on Debian for GSoC 2015

Planet Debian - Sun, 10/05/2015 - 17:14

This is the first blog post of a series I will be writing about my experiences contributing to Debian for Google Summer of Code 2015.

A bit about myself

I’m a Spanish student doing a master’s in Computer Science in Barcelona. I graduated in Electrical Engineering (we call it Telecommunications here). I’ve always been interested in computing and programming and I have worked on several projects on my own using C, Python and Go. My main interests in the field are programming languages, security, cryptography, distributed and decentralized systems, and embedded systems.

I’m an advocate of free software; I try to use it as much as I’m able to on my devices and also try to convince my friends and family of its benefits. I have been using GNU/Linux for nearly ten years as my main operating system, and I have tried several *BSDs recently.

One of my latest personal projects is a Game Boy emulator written in C (still a work in progress) which already plays many games (without sound, though). You can find other minor projects on my GitHub page (I try to publish all the code I write online, under a free software licence).

After so many years of using free software and benefiting from it, I thought it was about time to contribute back! That’s why I gave GSoC a try and applied to work on the Reproducible Builds project for Debian :) And I got accepted!

Reproducible Builds

The idea behind this project is that currently many packages aren’t built in a reproducible manner; that is, they contain timestamps, building machine name, unique IDs, and results from other processes that happen differently between machines, like file ordering in compressed files. The project aims to patch all the Debian packages / the building scripts in order to generate the same binary (bit by bit) independently of the machine, timezone, etc where it is built. This way, a cryptographic hash of the built package can be distributed and many people can rebuild the package to verify that the binary in the repositories indeed corresponds to the right source code by means of comparing the hash.
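The verification step described above is deliberately simple: a rebuilder only has to hash the package they rebuilt and compare it against the digest the distribution published. A minimal Python sketch (the file name and function names here are illustrative, not part of any Debian tooling):

```python
import hashlib


def sha256_of(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_rebuild(local_package, published_digest):
    """True when a locally rebuilt package matches the digest
    published alongside the official binary."""
    return sha256_of(local_package) == published_digest
```

If many independent rebuilders run a check like this and publish their results, a single compromised build machine can no longer go unnoticed — but the check is only meaningful once builds are bit-for-bit reproducible in the first place.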

Motivation

One of the main advantages of free software is that the source code is available for peer review. This makes it easier for users to trust their software, as they can check the source to verify that the program is not doing anything bad. Even if a user doesn’t do that, they can trust the wider community with the task. But many distributions serve packages in binary form, so how do we know that the binary comes from the publicly available source code? The current solution is that the developers who build the packages sign them cryptographically; but this places all the trust in the developer and the machines used for building.

I became interested in this topic through a very nice talk given at 31C3 by Mike Perry from Tor and Seth Schoen from the EFF. They focused on reproducible builds applied to the Tor Browser Bundle, showing a small demo of how a build machine could be compromised to add hidden functionality when compiling code (so that the developer could be signing a compromised package without their knowledge).

Benefits

There are two main groups who benefit with reproducible builds:

For users

The user can be more secure when installing packages in binary form since they don’t need to trust a specific developer or building machine. Even if they don’t rebuild the package by themselves to verify it, there would be others doing so, who will easily alert the community when the binary doesn’t match the source code.

For developers

The developer no longer bears the sole responsibility of using their identity to sign the package for wide distribution, nor are they as responsible for the damage to users if their machine is compromised to alter the building process, since the community will easily detect it and alert them.

This latter point is especially useful for secure and privacy-aware software. The reason is that there are many powerful organizations around the world with an interest in having backdoors in widely used software, be it to spy on users or to target specific groups of people. Considering the amount of money these organizations have for such purposes, it’s not hard to imagine that they could try to blackmail developers into adding such backdoors to the built packages. Or they could try to compromise the building machine. With reproducible builds the developer is safer, as such an attack is no longer useful.

Reproducible Builds in Debian

The project kicked off at Debian in mid-2013, led by Lunar and soon joined by many other developers (h01ger, deki, mapreri, …). Right now about 80% of the packages in the unstable branch of Debian can be built reproducibly. The project is very active, with many developers sending patches every week.

A machine running Jenkins (which was set up at the end of 2012 for other purposes) has been used since late 2014 to continuously build packages in different settings to check whether they build reproducibly or not.

In order to analyze why packages fail to build reproducibly, a tool called debbindiff has been developed, which can output a smart diff of two builds in text or HTML form.

Another tool called strip-nondeterminism has been developed to remove non-determinism from files during the building process.
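To make the idea concrete, here is a toy sketch of the kind of normalization strip-nondeterminism performs (the function and approach are purely illustrative, not the tool’s actual code): if archive members are written in sorted order with a fixed timestamp, two builds of the same inputs become bit-for-bit identical.

```python
import io
import zipfile

# Fixed timestamp (year, month, day, hour, minute, second) applied to
# every member, so the archive no longer depends on the build time.
FIXED_DATE = (1980, 1, 1, 0, 0, 0)


def normalized_zip(files):
    """Build a zip archive from a {name: bytes} mapping with sorted
    member order and fixed timestamps, so identical inputs always
    produce identical archive bytes."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for name in sorted(files):  # deterministic member ordering
            info = zipfile.ZipInfo(name, date_time=FIXED_DATE)
            zf.writestr(info, files[name])
    return buf.getvalue()


# Two "builds" from the same inputs, supplied in different order:
build_a = normalized_zip({"a.txt": b"hello", "b.txt": b"world"})
build_b = normalized_zip({"b.txt": b"world", "a.txt": b"hello"})
assert build_a == build_b  # bit-for-bit identical
```

Without the sorting and the fixed date, the member order and embedded mtimes would vary between machines and runs, which is exactly the class of non-determinism the project patches away.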

For this GSoC I plan on helping to improve these tools (mainly debbindiff), writing patches to achieve reproducibility in more packages, and writing documentation about it. Some packages fail to build reproducibly due to specifics of their own building processes, whereas others fail due to the use of toolchains that add non-determinism. I’ll focus more on the latter in order to improve the state of more packages at once. akira will also be working on this project for this GSoC.

Finally, I just want to add that I’m looking forward to contribute to Debian, meet the community and learn more about the internals of this awesome distribution!

Categories: Elsewhere

Code Karate: Using Jquery Isotope to Display Content in Views, It’s Fancy.

Planet Drupal - Sun, 10/05/2015 - 16:08
Episode Number: 207

Taking a content type and displaying it in a Drupal View is core to any Drupal website. As you venture into views you will learn hundreds of ways to manipulate content to change the way the end user is able to interact with the content. To help enhance this, you can use the Views Isotope module. This module uses the jQuery isotope library to dynamically filter views content. As the title states, it’s pretty fancy.

To get an idea of what the library does visit the website for the library at http://isotope.metafizzy.co.

Tags: Drupal, Content Types, Views, Drupal 7, Site Building, Drupal Planet, Javascript, jQuery
Categories: Elsewhere

Petter Reinholdtsen: Norwegian citizens now required by law to give their fingerprint to the police

Planet Debian - Sun, 10/05/2015 - 16:00

Five days ago, the Norwegian Parliament decided, unanimously, that all citizens of Norway, whether suspected of anything criminal or not, are required to give fingerprints to the police (vote details from Holder de ord). The law makes it sound like this will be optional, but in a few years there will be no option any more. The ID will be required to vote, to get a bank account or a bank card, to change address at the post office, to receive an electronic ID or to get a driver's license, and for many other tasks required to function in Norway. The banks plan to stop providing their own ID on bank cards when this new national ID is introduced, and the national road authorities plan to change the driver's license so it is no longer usable as an identity card. In effect, to function as a citizen in Norway a national ID card will be required, and to get it one needs to provide one's fingerprints to the police.

In addition to handing the fingerprint to the police (who promised not to make a copy of the fingerprint image at that point in time, but say nothing about doing it later), a picture of the fingerprint will be stored on the RFID chip, along with a picture of the face and other information about the person. Some of the information will be encrypted, but the encryption will be the same system as currently used in the passports. The codes to decrypt will be available to a lot of government offices and their suppliers around the globe, but for those who do not know anyone in those circles it is good to know that the encryption is already broken. And the chips can be read from 70 meters away. This can be mitigated a bit by keeping the card in a Faraday cage (a metal box or metal wire container), but one will be required to take it out often enough to expose one's private and personal information to a lot of people who have no business getting access to it.

The new Norwegian national IDs are a vehicle for identity theft, and I feel sorry for us all having politicians accepting such invasion of privacy without any objections. So are the Norwegian passports, but it has been possible to function in Norway without those so far. That option is going away with the passing of the new law. In this, I envy the Germans, because for them it is optional how much biometric information is stored in their national ID.

And if forced collection of fingerprints was not bad enough, the information collected in the national ID card register can be handed over to foreign intelligence services and police authorities, "when extradition is not considered disproportionate".

Update 2015-05-12: For those unable to believe that the Parliament really could make such decision, I wrote a summary of the sources I have for concluding the way I do (Norwegian Only, as the sources are all in Norwegian).

Categories: Elsewhere

Ian Campbell: Qcontrol Homepage Moved to www.hellion.org.uk/qcontrol

Planet Debian - Sun, 10/05/2015 - 15:01

Since gitorious has now shutdown I've (finally!) moved the qcontrol homepage to: http://www.hellion.org.uk/qcontrol.

Source can now be found at http://git.hellion.org.uk/qcontrol.git.

Categories: Elsewhere

Deeson: Evolving our culture at Deeson - facilitation over management

Planet Drupal - Sun, 10/05/2015 - 10:33

We see our company's purpose as being to facilitate smart people in getting great things done - not telling them exactly how to do it.

Nothing limits people's motivation and productivity more than jumping through irrelevant hoops.

As we've grown we've realised that it has become more and more important to decentralise decision making and share information effectively.

Over the past six months we’ve been experimenting with some big changes to enable a more empowered and self-directed approach.

Underlying principles

We have some beliefs backing the changes:

  • our team members are intrinsically motivated and know how to get their own job done best
  • open access to information is power - use tools to make your company data useful and accessible
  • everyone must have meaningful input for a culture of continuous improvement to be effective 
  • we should focus on results not process
  • we do knowledge work and our approach needs to embrace this - measuring who is in the office at 9am is reassuringly tangible but mostly irrelevant to doing great work
  • principles used by people with good judgement are a lot more useful than lots of rules
Business benefits

There are strong commercial drivers:

  • our growth means we need to be able to recruit from a wider talent pool
  • we wanted to reduce the need to introduce middle management in our expanding team
  • our increasingly international client base means that better distributed working practices were part of delivering great client service
  • people told us that flexible working is an important part of our overall recruitment package
Start at the top

We realised that we needed to change the way that the agency was managed.

Simon and I worked to understand which of the things we both did were leadership-driven and which were management tasks that could be shared.

Through continuing review we enabled team members to take on responsibility and control as much as possible. We were confident this would increase team engagement.

The role of the leadership team rapidly changed, and our focus became to facilitate and coach our teams, rather than to manage and direct.

Our focus has been on getting out of the way of our increasingly self-organised teams delivering digital projects.

The challenge has been as much to change our own individual people management habits as it has been to evolve the company's processes.

Old habits die hard!

We're by no means the only ones thinking this way

We’d seen, heard and read a lot about organisational culture, leadership and change in other companies.

Some examples of thinking that really resonated included Ricardo Semler’s Maverick and 37signals’ Rework.

Aaron Dignan of Undercurrent also has an interesting roundup of different organisational models including Holacracy, Agile Squads and Self Organising.

We invested a lot of time in well configured and well used project and management information systems such as Xero, Harvest and Forecast, which means everyone can now see what’s going live across the agency.

Having better information helps team members make the best decisions on how to deliver their part of a project.

We’ve strengthened our specification, delivery and testing practices so that project delivery is standardised across the agency and easy to monitor.

We’ve also deployed communications tools that work seamlessly, so we don’t think about whether colleagues are on a different floor or on site in a different country. This has included adding softphones on our laptops and embracing collaboration tools such as Slack and Google Hangouts.

Results orientated working patterns
  1. Our team members can work when and where they want - as long as they are effectively collaborating with clients and colleagues
  2. We focus on and target the output - not whether you log in at 8:55am or 9:05am.
  3. Top down manager-led appraisals are a thing of the past. Our team members now undertake a personal and peer review every three months to set their own development goals.
  4. Self-led development is backed with an unlimited training budget.
  5. Every team member has their own annual budget to spend on R&D, technology, tools, equipment - anything that helps them do their job better.
How did we work out and describe the detail?

We put together a draft Google Doc 'Deeson Handbook' that everyone could contribute to and had a dedicated Slack channel for discussion.

We spent most of January at a discussion stage. There was very little debate about the principles behind the approach but through collaboration we made some pretty big changes on the details, for example:

  • contactability - making it clear that the principle was that you were easily contactable rather than exactly how
  • holidays and leave - simplifying the holiday policy to just work in days and half days

The handbook is perpetually in beta as we’re always reviewing our approach. It is the single document that explains our new approach, acts as a reference for new joiners and a guide to how things work in practice at Deeson.

How has it gone?

Three months in and we’re really pleased with the results of the changes so far.

We were expecting at least one major problem and we haven't had it (yet!)

Team engagement and satisfaction has improved and we’re seeing increased ownership and autonomy among project teams.

Most importantly we’re confident that this is leading to better digital products and services for our clients - which is something we’re very proud of.

If you’re interested in the details then feel free to ask questions or request a copy of our Handbook document.

Categories: Elsewhere

DrupalOnWindows: How to choose the best Drupal Cloud Provider

Planet Drupal - Sun, 10/05/2015 - 07:00
Language English

There are many articles out there discussing the advantages and disadvantages of one cloud provider or another. But most of them are sponsored and biased, and really none of them pinpoint what actually matters. They are distracting, focusing your attention away from what they don't want you to know.

More articles...
Categories: Elsewhere

Pages

Subscribe to jfhovinne aggregator