Elsewhere

Iztok Smolic: 4 essential tips on implementing best practices

Planet Drupal - Fri, 13/02/2015 - 13:27

The Drupal community talks a lot about best practices. When I talk about best practices I mean code-driven development, code reviews, SCRUM, automated tests… I immediately realised that introducing new ways of working was not going to be easy. So I figured, why not ask one of the smart people how to start? Amitai (CTO of Gizra) was very kind to have […]

The post 4 essential tips on implementing best practices appeared first on Iztok.


Daniel Leidert: Motion picture capturing: Debian + motion + Logitech C910 - part II

Planet Debian - Fri, 13/02/2015 - 12:40

In my recent attempt to set up a motion detection camera I was disappointed that my camera, which should be able to record at 30 fps in 720p mode, only reached 10 fps using the motion software. Now I have gotten a bit further. This seems to be an issue with the format used by motion. I've checked the output of v4l2-ctl ...

$ v4l2-ctl -d /dev/video1 --list-formats-ext
[..]
ioctl: VIDIOC_ENUM_FMT
Index : 0
Type : Video Capture
Pixel Format: 'YUYV'
Name : YUV 4:2:2 (YUYV)
[..]
Size: Discrete 1280x720
Interval: Discrete 0.100s (10.000 fps)

Interval: Discrete 0.133s (7.500 fps)
Interval: Discrete 0.200s (5.000 fps)
[..]

Index : 1
Type : Video Capture
Pixel Format: 'MJPG' (compressed)
Name : MJPEG
[..]
Size: Discrete 1280x720
Interval: Discrete 0.033s (30.000 fps)

Interval: Discrete 0.042s (24.000 fps)
Interval: Discrete 0.050s (20.000 fps)
Interval: Discrete 0.067s (15.000 fps)
Interval: Discrete 0.100s (10.000 fps)
Interval: Discrete 0.133s (7.500 fps)
Interval: Discrete 0.200s (5.000 fps)
[..]

... and motion:

$ motion
[..]
[1] [NTC] [VID] v4l2_set_pix_format: Config palette index 17 (YU12) doesn't work.
[1] [NTC] [VID] v4l2_set_pix_format: Supported palettes:
[1] [NTC] [VID] v4l2_set_pix_format: (0) YUYV (YUV 4:2:2 (YUYV))
[1] [NTC] [VID] v4l2_set_pix_format: 0 - YUV 4:2:2 (YUYV) (compressed : 0) (0x56595559)
[1] [NTC] [VID] v4l2_set_pix_format: (1) MJPG (MJPEG)
[1] [NTC] [VID] v4l2_set_pix_format: 1 - MJPEG (compressed : 1) (0x47504a4d)

[1] [NTC] [VID] v4l2_set_pix_format Selected palette YUYV
[1] [NTC] [VID] v4l2_do_set_pix_format: Testing palette YUYV (1280x720)
[1] [NTC] [VID] v4l2_do_set_pix_format: Using palette YUYV (1280x720) bytesperlines 2560 sizeimage 1843200 colorspace 00000008
[..]

Ok, so both formats YUYV and MJPG are supported and recognized, and I can choose either via the v4l2_palette configuration variable. Citing motion.conf:

# v4l2_palette allows to choose preferable palette to be use by motion
# to capture from those supported by your videodevice. (default: 17)
# E.g. if your videodevice supports both V4L2_PIX_FMT_SBGGR8 and
# V4L2_PIX_FMT_MJPEG then motion will by default use V4L2_PIX_FMT_MJPEG.
# Setting v4l2_palette to 2 forces motion to use V4L2_PIX_FMT_SBGGR8
# instead.
#
# Values :
# V4L2_PIX_FMT_SN9C10X : 0 'S910'
# V4L2_PIX_FMT_SBGGR16 : 1 'BYR2'
# V4L2_PIX_FMT_SBGGR8 : 2 'BA81'
# V4L2_PIX_FMT_SPCA561 : 3 'S561'
# V4L2_PIX_FMT_SGBRG8 : 4 'GBRG'
# V4L2_PIX_FMT_SGRBG8 : 5 'GRBG'
# V4L2_PIX_FMT_PAC207 : 6 'P207'
# V4L2_PIX_FMT_PJPG : 7 'PJPG'
# V4L2_PIX_FMT_MJPEG : 8 'MJPEG'
# V4L2_PIX_FMT_JPEG : 9 'JPEG'
# V4L2_PIX_FMT_RGB24 : 10 'RGB3'
# V4L2_PIX_FMT_SPCA501 : 11 'S501'
# V4L2_PIX_FMT_SPCA505 : 12 'S505'
# V4L2_PIX_FMT_SPCA508 : 13 'S508'
# V4L2_PIX_FMT_UYVY : 14 'UYVY'
# V4L2_PIX_FMT_YUYV : 15 'YUYV'
# V4L2_PIX_FMT_YUV422P : 16 '422P'
# V4L2_PIX_FMT_YUV420 : 17 'YU12'
#
v4l2_palette 17

Now motion uses YUYV as its default mode, as shown by its output above. So it seems that all I have to do is choose MJPEG in my motion.conf:

v4l2_palette 8

Testing again ...

$ motion
[..]
[1] [NTC] [VID] vid_v4lx_start: Using V4L2
[1] [NTC] [ALL] image_ring_resize: Resizing pre_capture buffer to 1 items
[1] [NTC] [VID] v4l2_set_control: setting control "Brightness" to 25 (ret 0 )
Corrupt JPEG data: 5 extraneous bytes before marker 0xd6
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
[1] [NTC] [VID] v4l2_set_control: setting control "Brightness" to 14 (ret 0 )
Corrupt JPEG data: 1 extraneous bytes before marker 0xd5
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
[1] [NTC] [VID] v4l2_set_control: setting control "Brightness" to 36 (ret 0 )
Corrupt JPEG data: 3 extraneous bytes before marker 0xd2
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
[1] [NTC] [VID] v4l2_set_control: setting control "Brightness" to 58 (ret 0 )
Corrupt JPEG data: 1 extraneous bytes before marker 0xd7
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
[1] [NTC] [VID] v4l2_set_control: setting control "Brightness" to 80 (ret 0 )
Corrupt JPEG data: 4 extraneous bytes before marker 0xd7
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
[1] [ERR] [ALL] motion_init: Error capturing first image
[1] [NTC] [ALL] image_ring_resize: Resizing pre_capture buffer to 16 items
Corrupt JPEG data: 4 extraneous bytes before marker 0xd1
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
Corrupt JPEG data: 11 extraneous bytes before marker 0xd1
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
Corrupt JPEG data: 3 extraneous bytes before marker 0xd4
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
Corrupt JPEG data: 7 extraneous bytes before marker 0xd1
[..]

... and another issue turns up :( The output above goes on and on, and there is no video capturing. According to $searchengine, the above happens to a lot of people. I found one often-suggested fix: pre-loading v4l2convert.so from libv4l-0:

$ LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libv4l/v4l2convert.so motion

But the problem persists and I'm out of ideas :( So at the moment it looks like I cannot use the MJPEG format and won't get 30 fps at 1280x720 pixels. While writing this post I then discovered a solution by good old trial and error: leaving the v4l2_palette variable at its default value 17 (YU12) and pre-loading v4l2convert.so makes motion use YU12, and the framerate at least rises to 24 fps:

$ LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libv4l/v4l2convert.so motion
[..]
[1] [NTC] [VID] v4l2_do_set_pix_format: Testing palette YU12 (1280x720)
[1] [NTC] [VID] v4l2_do_set_pix_format: Using palette YU12 (1280x720) bytesperlines 1280 sizeimage 1382400 colorspace 00000008
[..]
[1] [NTC] [EVT] event_new_video FPS 24
[..]

Finally! :) The results are nice. It might even be a good idea to limit the framerate a bit, e.g. to 20. So here is a tested configuration for the Logitech C910 running at a resolution of 1280x720 pixels:

v4l2_palette 17
width 1280
height 720
framerate 20
minimum_frame_time 0
pre_capture 10 # 0.5 seconds pre-recording
post_capture 50 # 2.5 seconds after-recording
auto_brightness on
ffmpeg_variable_bitrate 2 # best quality
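Because this setup relies on the LD_PRELOAD trick, a small wrapper script can make the pre-load stick when motion is not started from an interactive shell. A minimal sketch, assuming the library path shown above:

#!/bin/sh
# Hypothetical wrapper: force libv4l format conversion for motion.
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libv4l/v4l2convert.so
export LD_PRELOAD
exec /usr/bin/motion "$@"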

Now all this made me curious which framerate is possible at a resolution of 1920x1080 pixels, and what the results look like. Although I get 24 fps here too, the resulting movie suffers from jumps every few frames. So here I got pretty good results with a more conservative setting. When increasing the framerate - tested up to 15 fps with good results - pre_capture needed to be decreased accordingly, to values between 1 and 3, to minimize jumps:

v4l2_palette 17
width 1920
height 1080
framerate 12
minimum_frame_time 0
pre_capture 6 # 0.5 seconds pre-recording
post_capture 30 # 2.5 seconds after-recording
auto_brightness on
ffmpeg_variable_bitrate 2 # best quality

Both configurations lead to satisfying results. Of course the latter will easily fill your hard drive :)

TODO

I guess the results can be optimized further by playing around with ffmpeg_bps and ffmpeg_variable_bitrate. Maybe then it is possible to record without jumps at higher framerates too(?). I also didn't test the various norm settings (PAL, NTSC, etc.).
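Independent of motion, it can also help to sanity-check what the hardware itself delivers for a given format and framerate. A quick sketch using v4l2-ctl from v4l-utils (the requested values are just examples):

$ v4l2-ctl -d /dev/video1 --set-fmt-video=width=1280,height=720,pixelformat=MJPG
$ v4l2-ctl -d /dev/video1 --set-parm=30 # request 30 fps
$ v4l2-ctl -d /dev/video1 --get-fmt-video --get-parm # check what the driver actually accepted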


OpenLucius: Dependency injection in Drupal 8, an introduction.

Planet Drupal - Fri, 13/02/2015 - 10:45
Introduction

So, like a bunch of other Drupal people, we're also experimenting with Drupal 8 for our Drupal distro OpenLucius. Being 'less is more' developers, one aspect we really like is dependency injection.


Steve McIntyre: Linaro VLANd v0.2

Planet Debian - Fri, 13/02/2015 - 07:53

I've been working on this for too long without really talking about it, so let's fix that now!

VLANd is a simple (hah!) python program intended to make it easy to manage port-based VLAN setups across multiple switches in a network. It is designed to be vendor-agnostic, with a clean pluggable driver API to allow for a wide range of different switches to be controlled together.

There's more information in the README file. I've just released v0.2, with a lot of changes included since the last release:

  • Massive numbers of bugfixes and code cleanups
  • Improve how we talk to the Cisco switches - disable paging on long output
  • Switch from "print" to "logging.foo" for messages, and add logfile support
  • Improved test suite coverage, and added core test scripts for the lab environment

I've demonstrated this code today in Hong Kong at the Linaro Connect event, and now I'm going on vacation for 4 weeks. Australia here I come! :-)


Neil Williams: OpenTAC mailing list

Planet Debian - Fri, 13/02/2015 - 03:10

After the OpenTAC session at Linaro Connect, we do now have a mailing list to support any and all discussions related to OpenTAC. Thanks to Daniel Silverstone for the list.

List archive: http://listmaster.pepperfish.net/pipermail/opentac-vero-apparatus.com

More information on OpenTAC: http://wiki.vero-apparatus.com/OpenTAC


Jimmy Berry: The woes of the testbot

Planet Drupal - Fri, 13/02/2015 - 01:43

For those not familiar with me, a little research should make it clear that I am the person behind the testbot deployed in 2008 that has revolutionized Drupal core development, stability, etc. and that has been running tens of thousands of assertions with each patch submitted against core and many contributed modules for 6 years.

My intimate involvement with the testbot came to a rather abrupt and unintended end several years ago due to a number of factors (of which only a select few members of this community are clearly aware). After several potholes, detours, and bumps in the road, it became clear to me that maintaining and enhancing the testbot under the policies and constraints imposed upon me was impossible.

Five years ago we finished writing an entirely new testing system, designed to overcome the technical obstacles of the current testbot and to introduce new features that would enable an enormous improvement in resource utilization that could then be used for new and more frequent QA.

Five years ago we submitted a proposal to the Drupal Association and key members of the community for taking the testbot to the next level, built atop the new testing system. This proposal was ignored by the Association and never evaluated by the community. The latter is quite puzzling to me given:

  • the importance of the testbot
  • the pride this open source community has in openly evaluating and debating literally everything (a healthy sentiment especially in the software development world)
  • I had already freely dedicated years of my life to the project.

The remainder of this read will:

  • list some of the items included in our proposal that were dismissed with prejudice five years ago, but since have been adopted and implemented
  • compare the technical merits of the new system (ReviewDriven) with the current testbot and a recent proposal regarding "modernizing" the testbot
  • provide an indication of where the community will be in five years if it does nothing or attempts to implement the recent proposal.

This read will not cover the rude and in some cases seemingly unethical behavior that led to the original proposal being overlooked. Nor will this cover the roller coaster of events that led up to the proposal. The intent is to focus on a technical comparison and to draw attention to the obvious disparity between the systems.

About Face

Things mentioned in our proposal that have subsequently been adopted include:

paying for development primarily benefiting drupal.org instead of clinging to the obvious fallacy of "open source it and they will come"
  • paying for machine time (for workers) as EC2 is regularly utilized
  • utilizing proprietary SaaS solutions (Mollom on groups.drupal.org)
  • automatically spinning up more servers to handle load (e.g. during code sprints) which has been included in the "modernize" proposal
Comparison

The following is a rough, high-level comparison of the three systems that makes clear the superior choice. Obviously, this comparison does not cover everything.


The three systems compared: the current qa.drupal.org testbot (the baseline), the "Modernize" proposal (a backwards modernization), and ReviewDriven (a true step forward).

Status
  • Current qa.drupal.org: has been running for over 6 years.
  • "Modernize" proposal: does not exist.
  • ReviewDriven: existed 5 years ago at ReviewDriven.com.

Complexity
  • Current: custom PHP code and Drupal; does not make use of contrib code.
  • Proposal: a mish-mash of languages and environments (Ruby, Python, Bash, Java, PHP, several custom config formats, etc.); will butcher a variety of systems away from their intended purpose and attempt to have them all communicate; adds a number of extra levels of communication and points of failure.
  • ReviewDriven: minimal custom PHP code and Drupal; uses commonly understood contrib code like Views.

Maintainability
  • Current: a learning curve, but all PHP.
  • Proposal: languages and tools not common to Drupal site building or maintenance; a vast array of systems to learn, along with the unique ways in which they are hacked.
  • ReviewDriven: less code to maintain, all of it familiar to Drupal contributors.

Speed
  • Current: known; gets slower as the test suite grows due to serial execution.
  • Proposal: still serial execution, and probably slower than current since each separate system adds communication delay.
  • ReviewDriven: an order of magnitude faster thanks to concurrent execution; limited by the slowest test case (*see below).

Extensibility (plugins)
  • Current: moderately easy; does not utilize contrib code, so requires knowledge of the current system.
  • Proposal: several components, one on each system used; new plugins will have to pass data through or tweak any of the layers involved, which means writing a plugin may involve a variety of languages and systems and thus a much wider breadth of required knowledge.
  • ReviewDriven: much easier, as it heavily uses common systems like Views; plugin development is almost entirely common Drupal development (define storage: Fields; define display: Views; define execution: CTools function on worker) - and all PHP.

Security
  • Current: runs as the same user as the web process.
  • Proposal: many more surfaces for attack, which require proper configuration.
  • ReviewDriven: a daemon monitors and shuts down the job process; lends itself to Docker style with added security.

3rd party integration
  • Current: basic RSS feeds and a restricted XML-RPC client API.
  • Proposal: unknown.
  • ReviewDriven: full Services module integration for a public, versioned read API, with write access for authorized clients.

Stability
  • Current: when not disturbed, has run well for years; primary causes of instability include ill-advised changes to the code base; temporary and environment-reset problems are easily solved by using Docker containers with the current code base.
  • Proposal: unknown, but multiple systems imply more points of failure.
  • ReviewDriven: same number of components as the current system; Services versioning allows components to be updated independently; far less code, as the majority depends on very common and heavily used Drupal modules which are stable; 2-part daemon (the master can react to misbehaving jobs); a Docker image could be added with minimal effort, as the system (which predates Docker) is designed with the same goals as Docker.

Resource utilization
  • Current: the entire test suite runs on a single box and cannot utilize multiple machines for a single patch.
  • Proposal: multiple servers with unshared memory resources due to the variety of language environments; the same serial execution of test cases per patch, which does not optimally utilize resources.
  • ReviewDriven: an order of magnitude better due to concurrent execution across multiple machines; completely dynamic hardware that takes full advantage of available machines (*see below).

Human interaction
  • Current: manually spin up boxes; reduce load by turning on additional machines.
  • Proposal: intended to include automatic EC2 spin-up, but this does not yet exist; more points of failure due to multiple systems.
  • ReviewDriven: additional resources are automatically turned on and utilized.

Test itself
  • Current: tests could be run on a development setup, but not within the production testbot.
  • Proposal: unknown.
  • ReviewDriven: yes, due to a change in worker design. A testbot inside a testbot! Recursion!

API
  • Current: does the trick, but custom XML-RPC methods.
  • Proposal: unknown.
  • ReviewDriven: highly flexible input configuration, similar to systems built later like travis-ci; all entity edits are done using the Services module, which follows best practices.

3rd party code
  • Current: able to test security.drupal.org patches on a public instance.
  • Proposal: unknown, but not a stated goal.
  • ReviewDriven: supports importing VCS credentials, which allows testing of private code bases and thus supports the business aspect of providing a service and being self-sustaining; results and configuration are permissioned per user, allowing drupal.org results to be public on the same instance as private results.

Implemented plugins
  • Current: Simpletest, coder.
  • Proposal: none exist.
  • ReviewDriven: Simpletest, coder, code coverage, patch conflict detection, reroll of patch, backport of patch to a previous branch.

Interface
  • Current: well known; designed to deal with the display of several 100K distinct test results; lacks revision history; display uses a combination of custom code and Views.
  • Proposal: unknown, as it would be built from scratch and has not begun; Jenkins cannot support this interface (in Jenkins terminology, multiple 100K jobs), so it will have to be written from scratch (as the proposal confirms, and as was the reason for avoiding Jenkins in the past); Jenkins was designed for small instances within businesses or projects, not a large central interface like qa.drupal.org.
  • ReviewDriven: hierarchical results navigation from project, branch, issue, patch; context around a failed assertion (like diff -u); minimizes clutter and focuses on the results of greatest interest (e.g. failed assertions); entirely built using Views, so highly customizable; simplified to help highlight pertinent information (even icons to quickly extract status); capable of displaying partial results as they are concurrently streamed in from the various workers.

Speed and Resource Utilization

Arguably one of the most important advantages of the ReviewDriven system is concurrency. Interestingly, after seeing inside Google I can say this approach is far more similar to the system Google has in place than Jenkins or anything else.

Systems like Jenkins, and especially travis-ci, stay generic and simple by not attempting to understand the workload being performed. For example, Travis simply asks for commands to execute inside a VM and presents the output log as the result. Contrast that with the Drupal testbot, which knows the tests being run and what they are being run against. Why is this useful? Concurrency.

Instead of running all the test cases for a single patch on one machine, the test cases for a patch may be split out into separate chunks. Each chunk is processed on a different machine and the results are returned to the system. Because the system understands the results it can reassemble the chunked results in a useful way. Instead of an endlessly growing wait time as more tests are added and instead of having nine machines sitting idle while one machine runs the entire test suite all ten can be used on every patch. The wait time effectively becomes the time required to run the slowest test case. Instead of waiting 45 minutes one would only wait perhaps 1 minute. The difference becomes more exaggerated over time as more tests are added.
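To make the idea concrete, here is a purely hypothetical shell sketch, not ReviewDriven's actual code (the file names, worker hosts, and run-tests.sh invocation are all made up): split the suite into chunks, run each chunk on its own worker concurrently, then stitch the results back together.

# Split the list of test classes into 10 roughly equal chunks (GNU split).
split -n l/10 all-test-classes.txt chunk.
# Dispatch each chunk to its own worker in parallel.
for f in chunk.*; do
  ssh "worker-${f#chunk.}" "run-tests.sh --classes $(paste -sd, "$f")" > "result.$f" &
done
wait
# Reassemble the per-worker results into one report.
cat result.chunk.* > combined-results.txt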

In addition to the enormous improvement in turnaround time which enables the development workflow to process much faster you can now find new ways to use those machine resources. Like testing contrib projects against core commits, or compatibility tests between contrib modules, or retesting all patches on commit to related project, or checking what other patches a patch will break (to name a few). Can you even imagine? A Drupal sprint where the queue builds up an order of magnitude more slowly and runs through the queue 40x faster?

Now imagine having additional resources automatically started when the need arises. No need to imagine...it works (and did so 5 years ago). Dynamic spinning up of EC2 resources which could obviously be applied to other services that provide an API.

This single advantage and the world of possibility it makes available should be enough to justify the system, but there are plenty more items to consider which were all implemented and will not be present in the proposed initiative solution.

Five Years Later

Five years after the original proposal, Drupal is left with a testbot that has languished and received no feature development. Contrast that with Drupal having continued to lead the way in automated testing with a system that shares many of the successful facets of travis-ci (which was developed later) and is superior in other aspects.

As was evident five years ago the testbot cannot be supported in the way much of Drupal development is funded since the testbot is not a site building component placed in a production site. This fact drove the development of a business model that could support the testbot and has proven to be accurate since the current efforts continue to be plagued by under-resourcing. One could argue the situation is even more dire since Drupal got a "freebie" so to speak with me donating nearly full-time for a couple of years versus the two spare time contributors that exist now.

On top of lack of resources the current initiative, whose stated goal is to "modernize" the testbot, is needlessly recreating the entire system instead of just adding Docker to the existing system. None of the other components being used can be described as "modern" since most pre-date the current system. Overall, this appears to be nothing more than code churn.

Assuming the code churn is completed some time far in the future; a migration plan is created, developed, and performed; and everything goes swimmingly, Drupal will have exactly what it has now. Perhaps some of the plugins already built in the ReviewDriven system will be ported and provide a few small improvements, but nothing overarching or worth the decade it took to get there. In fact the system will needlessly require a much rarer skill set, far more interactions between disparate components, and complexity to be understood just to be maintained.

Contrast that with an existing system that can run the entire test suite against a patch across a multitude of machines, seamlessly stitch the results together, and post back the result in under a minute. Contrast that with having that system in place five years ago. Contrast that with the whole slew of improvements that could have also been completed in the four years hence by a passionate, full-time team. Contrast that with at the very least deploying that system today. Does this not bother anyone else?

Contrast that with Drupal being the envy of the open source world, having deployed a solution superior to travis-ci and years earlier.

Please post feedback in the drupal.org issue.


DrupalCon News: A place for Drupal.org at DrupalCon

Planet Drupal - Fri, 13/02/2015 - 00:19

DrupalCon Los Angeles will be the first Con where Drupal.org, home of Drupal and the Drupal community, has its very own track.

The track will feature presentations from the Drupal Association Engineering Team, where they share long and short term plans for website development, demo new and upcoming features, and gather community feedback.

A limited number of spots are available for sessions submitted from the community. That’s where you come in.


Mediacurrent: Efficient Drupal Development with Tmux and Tmuxinator

Planet Drupal - Thu, 12/02/2015 - 23:41

Have you ever wished you could just type one command and load up all of the things you need to work on for a project? Wouldn’t it be nice to have your terminal set up with the correct Drush alias, tailing the watchdog, with access to your servers just a couple keystrokes away? Sounds nice, right?


Victor Kane: Setting up a Reusable and DurableDrupal Lean Process Factory - Presentation 2/11/2015 at DrupalCon Latin America 2015

Planet Drupal - Thu, 12/02/2015 - 22:52

[for Spanish, see below]

The purpose of the presentation was to describe how to use reusable tools and processes, tailored and in constant evolution, in order to finally defeat waterfall and guarantee delivered value in the development of websites and web applications.

The following main topics were covered in depth:

  • Kanban (not Scrum)
  • Project Inception and Vision
  • Team Kickoff
  • Development Workflow with Everything in Code
  • DevOps, Server Provisioning and Deployment
  • User Validation

Links to resources:

This is a huge amount of material, based on both my successful and unsuccessful experiences, and I earnestly hope it will help other web-centered knowledge workers. If you have questions, please ask them on Twitter @victorkane with the hashtag #DurableDrupalLean. There were quite a few other fascinating and very good presentations on the subject of Process and DevOps, overlapping mine substantially, and it should be very worthwhile to share them here. I greatly appreciate having had the opportunity to present at this incredibly important, fun and well-organized DrupalCon. See you all in Los Angeles and Rio de Janeiro!

read more


Richard Hartmann: A Dance with Dragons

Planet Debian - Thu, 12/02/2015 - 22:51

Yesterday, I went to the Federal Office for Information Security (BSI) on an invitation to their "expert round-table on SDN".

While the initial mix of industry attendees was of... varied technical knowledge... I was pleasantly surprised by the level of preparation by the BSI. None of them were networkers, but they did have a clear agenda and a pretty good idea of what they wanted to know.

During the first round-table, they went through

  • This is our idea of what we think SDN is
  • Is SDN a fad or here to stay?
  • What does the industry think about SDN?
  • What are the current, future, and potential benefits of SDN?
  • What are the current, future, and potential risks of SDN?
  • How can SDN improve the security of critical infrastructure?
  • How can you ensure that the whole stack from hardware through data plane to control plane can be trusted?
  • How can critical parts of the SDN stack be developed in, or strongly influenced from, players in Germany or at least Europe?

Yes, some of those questions are rather basic and/or generic, but that was on purpose. The mix of clear expectations and open-ended questions was quite effective at getting at what they wanted to know.

During lunch, we touched on the more general topic of how to reach and interact with technical audiences, with regards to both networks and software. The obvious answer for initial contact with regard to networks was DENOG, which they didn't know about.

With software, the answer is not quite as simple. My suggestion was to engage in a positive way and thus build trust over time. Their clear advantage is that, contrary to most other services, their raison d'être is purely defensive and non-military so they can focus on audits, support of key pieces of software, and, most important of all, talk about their results. No idea if they will actually pursue this, but here's to hoping; we could all use more government players on the good side.


Morpht: How to Use Custom Markers for OpenLayers

Planet Drupal - Thu, 12/02/2015 - 22:00

OpenLayers module is a popular solution for mapping in Drupal. The biggest benefits are the ability to use different map providers, complete Feature support and, last but not least, the simplicity of creating custom markers.


DrupalCon News: Apply for a DrupalCon Grant or Scholarship

Planet Drupal - Thu, 12/02/2015 - 21:11

In 2014 we received over 200 DrupalCon grant and scholarship applications. Thanks to our generous sponsor contributions, we were able to get over 60 individuals to DrupalCon Austin and Amsterdam. This year, we hope to award even more!

If you need help getting to DrupalCon Los Angeles, and are an active Drupal contributor or community leader, we're here to help you make YOUR dreams of attending DrupalCon a reality. Apply for a Grant or Scholarship!


groups.drupal.org frontpage posts: GCI 2014 Wrap Up and GSoC 2015 Kick Off

Planet Drupal - Thu, 12/02/2015 - 20:35

Congratulations to Google Code-In Winners

Did you know Drupal recently participated in Google's Code-In contest for high school students aged 13-17, and that the students completed over one hundred tasks? For example, did you see the Drupal 8 Installation Guide @ https://www.youtube.com/watch?v=bthkQCkrH30 or the following video on how to create modules for Drupal 8 @ https://www.youtube.com/watch?v=CEIUbFoAg0I ? Maybe you plan to use this event template at your next user group meetup @ https://groups.drupal.org/node/453328. Learn more about Drupal's GCI efforts from Google @ http://www.google-melange.com/gci/homepage/google/gci2014. These students also contributed to lots of contrib modules such as FB Like Button, Login Destination, and Scroll to Top. Most importantly, it is exciting to note that Drupal gained several Drupal 8 core contributors under the age of 18.

Although we value the contributions of all the GCI participants, since this was a contest there had to be winners. We are proud to announce our grand prize winners: Getulio Valentin Sanchez Ozuna (gvso: https://www.drupal.org/u/gvso) and Tasya Aditya Rukmana (tadityar: https://www.drupal.org/u/tadityar), who'll be taking an all-expenses-paid trip to Google HQ in Mountain View, California.

Google Summer of Code 2015 Announcement

GCI was fun, but now it is time for Google Summer of Code 2015 @ http://www.google-melange.com/gsoc/homepage/google/gsoc2015. GSoC is an annual program for university students organized by Google, with projects managed by open source organization mentors such as us (Drupal!). Are you or any colleagues available to be a mentor and/or provide a project idea? Please share project ideas in our wiki @ https://groups.drupal.org/node/455978, even if you're not available to be a mentor. This is perfect timing for our community and GSoC, as Drupal 8 is almost stable, providing plenty of projects to port common modules.

Did you know each accepted organization sends two mentors on an all-expenses-paid trip to visit the Googleplex for the "Mentor Summit"? Organization applications started February 9th and we're currently working on our organization application. We'd like to apply with at least 30 solid project ideas, so if you have ideas for any project that might be suitable for GSoC, add them to our wiki @ https://groups.drupal.org/node/455978. If you are unsure whether or not your project idea will be a good fit for GSoC, have a look at the projects from GSoC 2014 @ http://www.google-melange.com/gsoc/org2/google/gsoc2014/drupal.

Feel free to contact me (Slurpee: https://www.drupal.org/u/Slurpee) or Chandan Singh (cs_shadow: https://www.drupal.org/u/cs_shadow) directly, or create nodes in https://groups.drupal.org/google-summer-code for additional information.

If you're a student, you can start by reading our getting started guide for GSoC @ https://www.drupal.org/node/2415225. Below is some useful information which may help you get selected in GSoC this year.

How to be a Drupal GSoC student in 10 Steps


  1. Register an account @ https://drupal.org
  2. Join Drupal's group for Summer of Code @ https://groups.drupal.org/google-summer-code
  3. Find a project on our ideas page @ https://groups.drupal.org/node/455978

    • Add your name as an interested student to project idea
    • Add your project idea summary (with or without a mentor)

  4. Contact mentors listed on project idea via drupal.org contact page

    • If you don't hear back after 48 hours, try creating an issue in the project's issue queue and contacting the org admins/mentor
    • Contact myself directly via drupal.org contact page @ https://www.drupal.org/u/Slurpee
    • Chat with us in real time on IRC in #drupal-google or specifically during office hours listed below

  5. Complete "Drupal Ladder for GSoC Students" @ http://drupalladder.org/ladders

    • Completing additional ladders will help your application!
    • Creating additional ladders with lessons will help too!

  6. Utilize drupalmentoring.org to find issues to work on with mentors willing to help
  7. Test and reroll patches in issue queue
  8. Write a patch that is contributed into Drupal 8 (making you a "core contributor")
  9. Become a maintainer of the project you're planning to work on by contributing code/patches/tests/documentation
  10. Hang out on IRC in #drupal-google on Freenode helping other students

10 Tips for Students Writing Applications


  1. Follow the Student application template @ https://groups.drupal.org/node/411293
  2. Treat this as a real job: would any software company actually pay you to work on this project all summer?
  3. Demonstrate your ability to contribute to Drupal and that you can immediately start producing code from day one of GSoC
  4. Create a complete project plan broken down by every week of GSoC
  5. Document and diagram the workflow of user experience by creating wireframes/mockups of UI and UX (http://codecondo.com/free-wireframe-tools/)
  6. Research and contact initiatives looking to accomplish related tasks
  7. Plan out your "support contract", do you plan to stay in the Drupal community after GSoC (example, how long will you support/update your code for the community after GSoC?)
  8. Explain your workflow for project, time, and task management (a tool such as Basecamp or Trello?)
  9. Describe your methods, tools, and frequency of communication with mentor for collaboration in a virtual environment (g+ hangout twice per week?)
  10. Request mentors and helpers in #drupal-google to review application via Google Drive with comments enabled prior to application deadline

10 Tips for Mentors Helping Students Write Applications


  1. List a project on our ideas page @ https://groups.drupal.org/node/455978
  2. Review the "Drupal Ladder for GSoC Students" to learn student prerequisites
  3. Update any of the Drupal Ladders to help students learn faster
  4. Respond to interested students that contact you via drupal.org contact page

    • Please respond within 48 hours

  5. Test and review patches from students
  6. Facilitate contact and discussion between the student and the maintainer of the module the student is interested in
  7. Create a project plan and timeline that the student agrees on, with specific deliverables, understanding you may need to fail the student at the midterm or final
  8. Review Google's guide on being a mentor in Melange (non-Drupal stuff) @ http://en.flossmanuals.net/gsocmentoring/
  9. Contact Drupal's org admins (Slurpee, slashrsm, cs_shadow) if you have any questions
  10. Hang out in #drupal-google answering student questions

Drupal's GSoC Office Hours (help in real time!)

Mentors are available on IRC in #drupal-google @Freenode for one hour, three times each weekday, from March 16th until March 27th. Join us at the scheduled times below to chat with mentors in real time, ask questions, request application reviews, or simply hang out.


  • Asia/Australia 04:00 - 05:00 UTC (IST 09:30-10:30)
  • Europe 13:00 - 14:00 UTC (CET 14:00-15:00)
  • Americas 18:00 - 19:00 UTC (PDT 11:00-12:00)

Contributing to Drupal

Did you know many successful students started with zero Drupal experience prior to GSoC? If you're new to Drupal and willing to contribute, come participate in core contribution mentoring. It helps anyone without experience get started with Drupal contribution development. Google wants to see students contributing to organizations prior to the start of their GSoC project, and this is a chance to demonstrate your skills. Office hours provide a chance for students who have problems with their patches or can't find issues to work on to seek guidance. Create an account at http://drupalmentoring.org before you participate in core mentoring. Drupal core contribution office hours are Tuesdays, 02:00 - 04:00 UTC and Wednesdays, 16:00 - 18:00 UTC. If you need help outside of office hours, join #drupal-contribute to chat with community members willing to assist 24/7.

Details about core mentoring office hours @ https://drupal.org/core-office-hours and http://drupalmentoring.org. More information about contributing to Drupal @ http://drupal.org/new-contributors and http://drupal.org/novice.

Final notes from Google to Students

We are pleased to announce that we are now accepting applications from students to participate in Google Summer of Code 2015. Please check out the FAQs [1], timeline [2], and student manual [3] if you are unfamiliar with the process. You can also read the Melange manual if you need help with Melange [4]. The deadline to apply is 27 March at 19:00 UTC [5]. Late proposals will not be accepted for any reason.

[1] - http://www.google-melange.com/gsoc/document/show/gsoc_program/google/gso...
[2] - http://www.google-melange.com/gsoc/events/google/gsoc2015
[3] - http://en.flossmanuals.net/GSoCstudentguide/
[4] - http://en.flossmanuals.net/melange/students-students-application-phase/
[5] - http://goo.gl/W5ATLA


Daniel Leidert: Motion picture capturing: Debian + motion + Logitech C910

Planet Debian - Thu, 12/02/2015 - 20:02

Winter time is a good time for some nature observation. Yesterday I had a woodpecker (picture) in front of my kitchen window. During the recent weeks there were long-tailed tits, a wren and other rarely seen birds. So I thought it might be a good idea to capture some of these events :) I still own a Logitech C910 USB camera which allows HD video capturing up to 1080p. So I checked the web for some software that would begin video capturing in case of motion detection, and found motion, which is already available for Debian users. So I gave it a try. I tested all available resolutions of the camera and compared the capture results. I found that the resulting framerate of both the live stream and the captured video is highly dependent on the resolution and a few configuration options. Below is a summary of my tests and the results I've achieved so far.

Logitech C910 HD camera

Just a bit of data regarding the camera. AFAIK it allows for smooth video streams up to 720p.


$ dmesg
[..]
usb 7-3: new high-speed USB device number 5 using ehci-pci
usb 7-3: New USB device found, idVendor=046d, idProduct=0821
usb 7-3: New USB device strings: Mfr=0, Product=0, SerialNumber=1
usb 7-3: SerialNumber: 91CF80A0
usb 7-3: current rate 0 is different from the runtime rate 16000
usb 7-3: current rate 0 is different from the runtime rate 32000
uvcvideo: Found UVC 1.00 device (046d:0821)
input: UVC Camera (046d:0821) as /devices/pci0000:00/0000:00:1a.7/usb7/7-3/7-3:1.2/input/input17

$ lsusb
[..]
Bus 007 Device 005: ID 046d:0821 Logitech, Inc. HD Webcam C910
[..]

$ v4l2-ctl -V -d /dev/video1
Format Video Capture:
Width/Height : 1280/720
Pixel Format : 'YUYV'
Field : None
Bytes per Line: 2560
Size Image : 1843200
Colorspace : SRGB

Also the uvcvideo kernel module is loaded and the user in question is part of the video group.

Installation and start

Installation of the software is as easy as always:

apt-get install motion

It is possible to run the software as a service. But for testing, I copied /etc/motion/motion.conf to ~/.motion/motion.conf, fixed its permissions (you cannot copy the file as a regular user - it's not world readable) and disabled the daemon mode.


daemon off
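For reference, the copy and permission fix above boil down to something like this (assuming sudo is available):

$ sudo cp /etc/motion/motion.conf ~/.motion/motion.conf
$ sudo chown $USER: ~/.motion/motion.conf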

Note that in my case the correct device is /dev/video1, because the laptop has a built-in camera, which is /dev/video0. Also, the target directory should be writable by my user:


videodevice /dev/video1
target_dir ~/Videos

Then running motion from the command line ...


motion
[..]
[0] [NTC] [ALL] motion_startup: Motion 3.2.12+git20140228 Started
[..]
[1] [NTC] [ALL] motion_init: Thread 1 started , motion detection Enabled
[0] [NTC] [ALL] main: Thread 1 is device: /dev/video1 input -1
[1] [NTC] [VID] v4l2_get_capability:
------------------------
cap.driver: "uvcvideo"
cap.card: "UVC Camera (046d:0821)"
cap.bus_info: "usb-0000:00:1a.7-1"
cap.capabilities=0x84000001
------------------------
[1] [NTC] [VID] v4l2_get_capability: - VIDEO_CAPTURE
[1] [NTC] [VID] v4l2_get_capability: - STREAMING
[1] [NTC] [VID] v4l2_select_input: name = "Camera 1", type 0x00000002, status 00000000
[1] [NTC] [VID] v4l2_select_input: - CAMERA
[..]
[1] [NTC] [ALL] image_ring_resize: Resizing pre_capture buffer to 1 items

... will begin to capture motion detection events and also output a live stream. CTRL+C will stop it again.

Live stream

The live stream is available by pointing the browser to localhost:8081. However, the stream seems to run at 1 fps (frame per second), and indeed it does. The stream quality improves with this configuration:


stream_motion on
stream_maxrate 100

The first option ensures that the stream only runs at one fps if there is no motion detection event; otherwise the framerate increases to its maximum value, which is either the one given by stream_maxrate or the camera limit. The quality of the stream picture can be increased a bit further by raising the stream_quality value. Because I need neither the stream nor the control feed, I disabled both:


stream_port 0
webcontrol_port 0
Picture capturing

By default there is video and picture capturing if a motion event is detected. I'm not interested in these pictures, so I turned them off:

output_pictures off

FYI: If you want good picture quality, the value of quality should very probably be increased.

Video capturing

This is the really interesting part :) Of course, if I am going to "shoot" some birds (with the camera), then a small image of, say, 320x240 pixels is not enough. The camera allows a capture resolution of up to 1920x1080 pixels (1080p) and is advertised for smooth video streams up to 720p (1280x720 pixels). So I tried the following resolutions: 320x240, 640x480, 800x600, 640x360 (360p), 1280x720 (720p) and 1920x1080 (1080p). These are easily configured via the width and height variables. For example, the following configures motion for 1280x720 pixels (720p):


width 1280
height 720

The result was really disappointing: no event is captured with more than 20 fps. At higher resolutions the framerate drops even further, and at the highest resolution of 1920x1080 pixels the framerate is only two(!) fps. Also, every created video runs much too fast, and even faster when the framerate variable is increased. Of course its default value of 2 (fps) is not enough for fluid videos. AFAIK the C910 can run at 30 fps at 1280x720 pixels, so increasing the value of framerate, the maximum framerate recorded, is a must. (If you want to test this yourself, check the log output for the value following event_new_video FPS.)

The solution to videos running too fast, however, is to increase the pre_capture value, the number of pre-captured (buffered) pictures from before motion was detected. Even small values like 3-5 result in a distinct improvement of the situation, though increasing the value further didn't have any effect. So the values below should get almost the most out of the camera and result in videos at normal speed.


framerate 100
pre_capture 5

Videos at 1280x720 pixels are still captured at 10 fps and I don't know why; running guvcview, the same camera allows 30 fps in this resolution (even 60 fps in lower resolutions). However, even if the framerate could be higher, the resulting video runs smoothly. Still, the quality is just moderate (or, to be honest, still disappointing) - it looks "pixelated". Only static pictures are sharp. It took me a while to fix this too, because at first I thought the reason was the camera or missing hardware support. It is not :) The reason is that ffmpeg is configured to produce a moderate(?)-quality video. The relevant variables are ffmpeg_bps and ffmpeg_variable_bitrate. I got the best results by just changing the latter:


ffmpeg_variable_bitrate 2

Finally, the resulting video quality is promising. I'll start with this configuration, setting up an observation camera for the bird feeding ground.

There is one more tweak for me: I got even better results by enabling the auto_brightness feature.


auto_brightness on
Complete configuration

So the complete configuration looks like this (showing only the options changed from the original config file):


daemon off
videodevice /dev/video1
width 1280
height 720
framerate 100
auto_brightness on
ffmpeg_variable_bitrate 2
target_dir /home/user/Videos
stream_port 0 #8081
stream_motion on
stream_maxrate 100
webcontrol_port 0 #8080
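To test-run motion with this user-level configuration (rather than via the system service), the config file can be passed explicitly:

$ motion -c ~/.motion/motion.conf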
Links

Chromatic: Automated Servers and Deployments with Ansible & Jenkins

Planet Drupal - Thu, 12/02/2015 - 19:47

In a previous post, Dave talked about marginal gains and how, in aggregate, they can really add up. We recently made some infrastructure improvements that I first thought would be marginal, but quickly proved to be rather significant. We started leveraging Ansible for server creation/configuration and Jenkins to automate our code deployments.

We spend a lot of time spinning up servers, configuring them and repeatedly deploying code to them. As a Drupal-focused shop, we find this process gets repetitive very quickly. The story usually goes something like this:

  1. Project begins
  2. Dev/Staging server(s) built from scratch (usually on Linode)
    1. Install Ubuntu
    2. Install Apache
    3. Install PHP
    4. Install MariaDB
    5. Install Git
    6. Install Drush
    7. Install Redis, etc.
  3. Deploy project codebase from GitHub
  4. Development happens
  5. Pull Requests opened, reviewed, merged
  6. Manually login to server via SSH, git pull
  7. More development happens
  8. Pull Requests opened, reviewed, merged
  9. Manually login to server via SSH, git pull
  10. and so on…

All of this can be visualized in a simple flowchart:

Development Workflow Visualized


This story is repeated over and over. New client, new server, new deployments. How does that old programmer’s adage go? “Don’t Repeat Yourself”? Well, we finally got around to doing something about all of this server configuration and deployment repetition nonsense. We configured a Jenkins server to automatically handle our deployments, and created Ansible roles and playbooks to easily spin up and configure new servers (specifically tuned for Drupal) at will. So now our story looks something like this:

Development Workflow Visualized w/Ansible & Jenkins


What is Ansible?

“Ansible is an IT automation tool. It can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates.”

Sounds like voodoo magic, doesn’t it? Well, I’m here to tell you it isn’t, that it works, and that you don’t have to be a certified sysadmin to use it. Though you may need one to set it all up for you. The basic premise is that you create “playbooks” to control your remote servers. These can be as complex as a set of steps to build a LAMP server up from scratch (see below), or as simple as a specific configuration that you wish to enforce. Typically, playbooks are made up of “roles”. Roles are “reusable abstractions”, as their docs page explains. You might have roles for installing Apache, adding Git, or adding a group of users’ public keys. String your roles together in a YAML file and that’s a playbook. Have a look at the official Ansible examples GitHub repo to see some real-life examples.

Automate Server Creation/Configuration with Ansible

We realized we were basically building the same Drupal-tuned servers over and over. While the various steps for this process are well documented, doing the actual work takes loads of time, is prone to error and really isn’t all that fun. Ansible to the rescue! We set out to build a playbook that would build a LAMP stack up from scratch, with all the tools we use consistently across all of our projects. Here’s an example playbook:
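The playbook we embedded here is not reproduced in this excerpt; as a stand-in, a minimal sketch of what such a LAMP playbook might look like (the role names are illustrative assumptions, not our actual repository layout):

---
# Hypothetical playbook: build a Drupal-tuned LAMP box from scratch.
- hosts: webservers
  become: yes
  roles:
    - common     # base packages, users, SSH keys
    - apache
    - php
    - mariadb
    - git
    - drush
    - redis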

Benefits:

  • Consistent server environments: Adding additional servers to your stack is a piece of cake and you can be sure each new box will have the same exact configuration.
  • Quickly roll out updates: Update your playbook and rerun against the affected servers and each will get the update. Painless.
  • Add-on components: Easily tack on custom server components like Apache Solr by adding a single line to a server’s playbook.
  • Allow your ops team to focus on real problems: Developers can quickly create servers without needing to bug your ops guys about how to compile PHP or install Drush, allowing them to focus on higher priority tasks.
What is Jenkins?

“Jenkins is an award-winning application that monitors executions of repeated jobs, such as building a software project or jobs run by cron.”

Think of Jenkins as a very well-trained, super organized, exceptionally good record-keeping ops robot. Teach Jenkins a job once and Jenkins will repeat it over and over to your heart’s content. Jenkins will keep records of everything and will let you know should things ever go awry.

Deploy Code Automatically with Jenkins

Here’s the rundown of how we’re currently using Jenkins to automatically deploy code to our servers:

  1. Jenkins listens for changes to master via the Jenkins Github Plugin
  2. When changes are detected, Jenkins automatically kicks off a deployment by SSHing into our box and executing our standard deployment commands (consolidated into the shell sketch after this list):
    1. cd /var/www/yourProject/docroot/
    2. git pull
    3. drush updb -y
    4. drush fra -y
    5. drush cc all
  3. Build statuses are reported to Slack via the Slack Notification Plugin
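Strung together, those commands make up the whole Jenkins “Execute shell” build step. A minimal sketch (the set -e guard is our own convention, not something Jenkins requires):

#!/bin/bash
set -e  # fail the build as soon as any step fails
cd /var/www/yourProject/docroot/
git pull
drush updb -y  # apply pending database updates
drush fra -y   # revert all features
drush cc all   # clear all caches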

Here’s a full view of a configuration page for a deployment job:

The biggest benefit here is saving time. No more digging for SSH credentials. No more trying to remember where the docroot is on this machine. No more of the “I can’t access that server, Bob usually handles…” nonsense. Jenkins has access to the server, Jenkins knows where the docroot is, and Jenkins runs the exact same deployment code every single time. The other huge win here, at least for me personally, is that it takes the worry out of deployments. Setting it up right the first time means a project lifetime of known workflow/deployments. No more worrying about whether pushing the button breaks all the things.

What else is great about using Jenkins to deploy your code? Here’s some quick hits:

  • Historical build data: Jenkins stores a record of every deployment. Should a deploy fail, you can see exactly when things broke down and why. Jenkins records everything that happened in a Console Output tab.
  • Empower non server admins: Jenkins users can login to Jenkins and kick off manual deployments or jobs at the push of a button. They don’t need to know how to login via ssh or even how to run a single command from the command line.
  • Enforce Consistent Workflow: By using Jenkins to deploy your code you also end up enforcing consistent workflow. In our example, drush will revert features on every single deployment. This means that devs can’t be lazy and just switch things in production. Those changes would be lost on the next deploy!
  • Status Indicators across projects: The Jenkins dashboard shows a quick overview of all of your jobs. There’s status of the last build, an aggregated “weather report” of the last few builds, last build duration, etc. Super useful.
  • Slack Integration: You can easily configure jobs to report statuses back to Slack. We have ours set to report to each project channel when a build begins and when it succeeds or fails. Great visibility for everyone on the project.
Other possible automations with Jenkins
  • Automate scheduled tasks (like Drupal’s cron, mass emailing, report generation, etc.)
  • Run automated tests

Both of these tools have done wonders for our workflow. While there was certainly some up-front investment to get these built out, the gains on the back end have been tremendous. We’ve gained control of our environments and their creation. We’ve taken the worry and the repetition out of our deployments. We’ve freed up our developers to focus on the work at hand. Our clients are getting their code sooner. Our team members are interrupted less often. Win after win after win. If your team is facing similar challenges, consider implementing one or both of these tools. You’re sure to see similar results.


Raphaël Hertzog: Freexian’s report about Debian Long Term Support, January 2015

Planet Debian - Thu, 12/02/2015 - 19:24

As with every month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In January, 48 work hours have been equally split among 4 paid contributors. Their reports are available:

Evolution of the situation

During the last month, the number of paid work hours has made a noticeable jump: we’re now at 58 hours per month. At this rate, we would need 3 more months to reach our minimal goal of funding the equivalent of a half-time position. Unfortunately, the number of new sponsors actually in the process is not likely to be enough to have a similar raise next month.

So, as usual, we are looking for more sponsors.

In terms of security updates waiting to be handled, the situation looks a bit worse than last month: the dla-needed.txt file lists 37 packages awaiting an update (7 more than last month), the list of open vulnerabilities in Squeeze shows about 63 affected packages in total (7 more than last month).

The increase is not too worrying, but the waiting time before an issue is dealt with is sometimes more problematic. To be able to deal with all incoming issues in a timely manner, the LTS team needs more resources: some months will have more issues than usual, some issues will be longer to handle than others, etc.

Thanks to our sponsors

The new sponsors of the month are in bold.



Drupal Watchdog: Contributing to Open Source Projects

Planet Drupal - Thu, 12/02/2015 - 17:54

Drupal is one of the largest and most successful open source projects, and much of our success is due to the vibrant and thriving community of contributors who make the platform what it is – the individuals who help put on Drupal Conferences and events, the documentation writers, the designers and usability experts, the developers who help write the software, and countless others.

Participating in open source communities is a rewarding experience that will help you advance and develop, both personally and professionally. Through participation, you gain an opportunity to learn from your peers. You are constantly challenged and exposed to new and interesting ideas, perspectives, and opinions. You are not only learning the current best practices, you are also helping develop innovative new solutions, which will improve the tools in your arsenal and take your career to the next level – not to mention contributing to your personal growth. (One of the five Drupal core committers for Drupal 8, Angie Byron got her start only a few years ago – as a student in the Google Summer of Code – and has rapidly advanced her skills and career through open source participation.)

Participation gives you significantly better insight and awareness. By attending Drupal events and engaging online, you place yourself in a better position to understand and leverage the solutions that are already available, know where and how to find those solutions, and have a clearer sense of how you can leverage them to achieve your goals. With this knowledge and experience you become capable of executing faster and more efficiently than your peers who don’t engage.


Sven Hoexter: Out of the comfort zone: What I learned from my net-snmp backport on CentOS 5 and 6

Planet Debian - Thu, 12/02/2015 - 16:19

This is a kind of roundup of the things I learned after I rolled out the stuff I wrote about here.

Bugs and fighting with multiarch rpm/yum

One oversight led me to not special-case the perl-devel dependency of the net-snmp-devel package for CentOS 5 to depend on perl instead. That was easily fixed, but afterwards a yum install net-snmp-devel still failed, because it tried to install the stock CentOS 5 net-snmp-devel package and its dependencies. Closer investigation showed that it did so because I had only provided x86_64 packages in our internal repository, but it wanted to install both i386 and x86_64 packages.

Looking around, this issue is documented in the Fedora Wiki. So the modern way to deal with it is to make the dependency architecture-dependent with the

%{?_isa}

macro. That is evaluated at build time of the src.rpm, and the dependency is then hardcoded together with the architecture. Compare

$ rpm -qRp net-snmp-devel-5.7.2-18.el6.x86_64.rpm | grep perl
perl-devel(x86-64)

to

$ rpm -qRp net-snmp-devel-5.7.2-19.el5.centos.x86_64.rpm | grep perl
perl

As you can see, it's missing for the el5 package, and that's because it's too old - or more specifically, rpm is too old to know about it.
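In the spec file this boils down to a conditional dependency; a sketch of how it might be declared (illustrative, not the literal net-snmp spec):

# net-snmp.spec (sketch)
%if 0%{?rhel} >= 6
Requires: perl-devel%{?_isa}
%else
Requires: perl
%endif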

The workaround we use for now is to explicitly install only the x86_64 version of net-snmp-devel on CentOS 5 when required. That's a

yum install net-snmp-devel.x86_64

Another possible workaround is to remove i386 from your repository or just blacklist it in your yum configuration. I've read that's possible but did not try it.
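Going the blacklist route would presumably be a one-line exclude in the yum configuration (untested, as said):

# /etc/yum.conf
[main]
exclude=*.i386 *.i686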

steal time still missing on CentOS 6

One of the motivations for this backport was support for steal time reporting via SNMP. For CentOS 5 that's solved with our net-snmp backport, but on CentOS 6 we still do not see steal time. A short look around showed that it's also missing in top, vmstat and friends. Could it be a kernel issue?

Since we're already running on the CentOS Xen hypervisor, we gave the Linux 3.10.x packages a spin in a domU, and now we also have steal time reporting. Yes, a kernel issue with the stock CentOS 6/RHEL 6 kernel. I'm not sure where and how to file it, since Red Hat moved to KVM.


Sven Hoexter: Comparing apples and oranges: IPv6 adoption vs HTTP/2 adoption

Planet Debian - Thu, 12/02/2015 - 15:34

I have not yet read the HTTP/2 draft, but I read the SPDY draft about 2.5 years ago. So I might be wrong here in my assumptions.

We're now about 17 years into the IPv6 migration and still we do not have widespread IPv6 adoption on the client side. That's mostly an issue of CPE device rotation at the customer side and updating provisioning solutions. For the server side we're more or less done. Operating system support is there, commercial firewall and router vendors picked it up in the last ten years, datacenter providers are ready, so we're waiting for the users to put some pressure on providing a service via IPv6.

Looking at HTTP/2, it's also a different protocol. The gap might be somewhat smaller than the one between IPv4 and IPv6, but it's still huge. Now I'd bet that in the case of HTTP/2 we'll also see a really slow adoption, but this time it's not the client side that's holding back the adoption. For HTTP/2 I have no doubts about fast adoption on the client side. Google and Mozilla nowadays provide some kind of continuous delivery of new features to the end user on a monthly basis (surprisingly, that also works for mobile devices!). So the web browser near you will soon have an HTTP/2 implementation. Even Microsoft is switching to a new development model of rolling updates with Windows 10 and the Internet Explorer successor. But looking at the server side, I doubt we'll have widespread HTTP/2 support within the next 5 years. Maybe in 10. But I doubt even that.

With all those reverse proxies, interception devices for header enrichment at mobile carriers, application servers, self written HTTP implementations, load balancers and so on I doubt we'll have a fast and smooth migration ahead. But maybe we're all lucky and I'm wrong. I'd really love to be wrong here.

Maybe we can provide working IPv4-IPv6 dual-stack setups for everyone in the meantime.


Code Karate: Drupal 7 Options Element: A quicker way to add radio and checkbox options

Planet Drupal - Thu, 12/02/2015 - 15:20
Episode Number: 192

We kept things simple for this episode of the DDoD. The options element module uses Javascript to provide an easier way to create radio button and checkbox options for fields on a Drupal content type. Before this module you had to add a key|value pair for each option you wanted. With this module, the key and value are broken down into two fields, making it easier to distinguish between them.
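For context, the core allowed-values text area expects one key|label pair per line, e.g.:

small|Small
medium|Medium
large|Large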

Tags: Drupal, Content Types, Fields, Drupal 7, Drupal Planet, UI/Design, Javascript
