David Moreno: I quit

Planet Debian - Sun, 14/08/2016 - 18:53

I just recently quit my job at the startup company I had been working at for almost five years. In startup terms, such a long time might be a whole lifetime, but in my case I grew to like it more and more as the years went by. I evolved from being just another engineer to leading a team of seven great developers, with decision-making duties and strategy planning for our technical infrastructure. It’s been such a great, long, instructive journey that I’m nothing but pleased with my own performance, the lessons and skills I learned, and everything I gave to and received from the project.

Leaving a city like New York is not an easy task. You have it all there, you start making a life and suddenly, before you know it, you already have a bunch of ties to the place: people, leases, important dates, all kinds of shit. Seriously, all kinds of crazy ass shit start to fill up your baggage. You wake up every day to get into the subway and commute surrounded by all of these people that are just like you: so similar yet so immensely different. No, leaving the city is not an easy task, it’s not something to take lightly. You know how people just say “my cycle has ended in this place” as a euphemism so as not to end on bad terms with anyone? Well, ending a cycle is indeed a reality. I got to a point where I felt like I needed to head in a different direction, take on new challenges and, overall, peace out and hope the best for everyone, especially myself.

This was me, on my last day at work, last Friday of June:

(Some) people seem to be anxious to know what I’m doing next, and my answer is, go mind your own fucking business. However, life is short and I would love to do any of the following:

  • Go back to Brazil, now as a blue belt in Brazilian jiujitsu, and train non-stop in Rio, this time as a local. I happened to visit Rio last November (as a four stripe white belt) and it was a great experience, with the Connection Rio guys. I kind of regretted not staying longer, as a lot of people do, maybe three months. You don’t get to do anything but train and roll with black belts on a daily basis, eat the healthy good stuff that a wonderful country like Brazil has to offer, hang out with amazing people and chill the fuck out all day long.

  • Make a road trip through Central America and get to know all of those countries I’ve never been to, even though I’ve travelled extensively around them over the last few years. I would head to the southernmost tip of Mexico and then take a bus to backpack through the cities all the way to Panama. Beer all along, a lot of swimming, plenty of heaven.

  • Head to any Russian consulate so I can get an entry visa for their amazing country and travel to a chess club in any of its big cities. Or maybe Hungary (do I need a visa to visit it?). Staying in small hostels where all I could use is a few good chess books and a chessboard, absorbing myself in chess, sounds like a dream come true.

  • Stop procrastinating and write all the good Perl stuff I’ve been wanting to do on my own time. All of those good projects I always thought of and only had the opportunity to try at work, but not in a giving-back-to-the-community kind of way.

Decisions, decisions…

For the time being, I’m chilling with my people, friends and family in beautiful Mexico City. I’ve been doing so for the entire month of July and I couldn’t be more content. August will see my 28th birthday and as I approach thirty, I believe I need to continue moving forward.

This stupid world is a tiny place and our lives are short; I, for one, will definitely try to take the bull by the horns.

Thanks for reading, more updates soon. Peace.

Categories: Elsewhere

Jaminy Prabaharan: GSoC-2016 Journey (In brief)

Planet Debian - Sun, 14/08/2016 - 18:21

Three months of coding are about to end. The program officially began on May 23rd and we are nearing the final submission deadline on August 15th. The following is a timeline of what I have gone through over these three months.


You can check out my Debian wiki page to know more about me.

I have worked on improving voice, video and chat communication (Real Time Communication) with free software, RTC project for Debian (The Universal OS).

My mentors are Iain R. Learmonth and Daniel Pocock. Both of them were dedicated and I learned many new things from them within these three months. I contacted my mentors through personal mail, the Debian outreach mailing list and IRC (#debian-data and #debian-soc). They were very responsive to my queries. Thank you Iain and Daniel for guiding and enlightening me.

My initial task was email mining. I had to allow the client to log in to their mail using IMAP, extract the “To”, “From” and “CC” headers of every email message in the folder, and then scan the body of each message for phone numbers, SIP addresses and XMPP addresses. These extracted details should also be written to a CSV file. The extracted phone numbers, SIP addresses, XMPP addresses and ham call signs should be made into clickable links using Flask.
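A minimal sketch of that pipeline in Python, using only the standard library (the function names and the deliberately loose regular expressions are my own illustration, not the project’s actual code):

```python
import email
import re

# Deliberately loose illustrative patterns -- not the project's actual
# expressions; real scanning needs stricter rules.
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{6,}\d")
SIP_RE = re.compile(r"\bsip:[\w.+-]+@[\w.-]+")
XMPP_RE = re.compile(r"\bxmpp:[\w.+-]+@[\w.-]+")

def extract_contacts(body):
    """Scan a message body for phone numbers, SIP and XMPP addresses."""
    return {
        "phones": PHONE_RE.findall(body),
        "sip": SIP_RE.findall(body),
        "xmpp": XMPP_RE.findall(body),
    }

def mine_message(raw_bytes):
    """Build one CSV row: To/From/CC headers plus contacts found in the body."""
    msg = email.message_from_bytes(raw_bytes)
    body = ""
    for part in msg.walk():
        if part.get_content_type() == "text/plain":
            payload = part.get_payload(decode=True)
            if payload:
                body += payload.decode("utf-8", errors="replace")
    row = {"to": msg.get("To", ""), "from": msg.get("From", ""),
           "cc": msg.get("Cc", "")}
    row.update(extract_contacts(body))
    return row
```

An imaplib.IMAP4_SSL client would fetch each message with "(RFC822)" and feed the raw bytes to mine_message(); csv.DictWriter can then write the rows out.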

I also attended DebConf16 (the conference of Debian developers, contributors and users) in Cape Town in the middle of the three months (from July 2nd to July 9th). I gave a talk on my progressing GSoC project. I learnt many new things about Debian, its projects and its packages apart from my GSoC work. I also met some of my fellow GSoC students.

I have written previous blog posts related to GSoC-2016 at the following links.

GSoC-Journey till Mid term

Weekly Report for GSoC16-week 1 and week2

Weekly Report for GSoC16-Community bonding period

I have also sent my weekly reports up to the last week (i.e. week 11) to the debian-outreach@lists.debian.org mailing list.

Email-Mining is the repository I have created on GitHub to work on my project.

I divided the work into tasks and coded each one individually before combining them. The snippets folder in the repository contains the code for each task.

Following are the commits I have made in the repository.


My tasks were extended to add a gravatar to the page listing details for each email address, maintain a counter in the scraper for each hour of the day for each mail, show a list of other people that have been involved in email conversations, and make the contact information on the detail pages machine readable.
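For the gravatar task, Gravatar’s documented scheme identifies an avatar by the MD5 hex digest of the trimmed, lower-cased email address; a minimal sketch (the function name is my own):

```python
import hashlib

def gravatar_url(address, size=80):
    """Return the Gravatar image URL for an email address.

    Gravatar identifies an avatar by the MD5 hex digest of the
    address, trimmed and lower-cased; `s` selects the pixel size.
    """
    digest = hashlib.md5(address.strip().lower().encode("utf-8")).hexdigest()
    return "https://www.gravatar.com/avatar/%s?s=%d" % (digest, size)
```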


My mentor suggested that I work on at least three of these issues before the final submission. I have worked on each of them individually in the snippets folder, except the last one, which I will be working on after GSoC.

The Mailmine.py script contains the final code, which combines all of the snippets into one.

Three pull requests are waiting to be merged, pending confirmation from my mentor.

This is a summary of what I have done within these three months.

It was an amazing and thrilling coding ride.

Stay tuned for more detailed blog posts, with my DebConf experience and much more.



Categories: Elsewhere

David Moreno: One year

Planet Debian - Sun, 14/08/2016 - 16:52

A year ago today, I started working for Booking.com. I wish I could say that the time has flown by, but it really hasn’t. It has been one hell of a ride on all fronts: work, learning experiences, friends, but specially at home, since working for this company didn’t come without a complete life change. So far so good, and for that I’m grateful :)

Categories: Elsewhere

David Moreno: Twitter feed for Planet Perl Iron Man

Planet Debian - Sun, 14/08/2016 - 16:13

I like to read Planet Perl Iron Man, but since I’m less and less of a user of Google Reader these days, I prefer to follow interesting feeds on Twitter instead: I don’t have to digest all of the content on my timeline, only when I’m in the mood to see what’s going on out there. So, if you wanna follow the account, find it as @perlironman. If interested, you can also follow me. That is all.

Categories: Elsewhere

Jamie McClelland: Noam use Gnome

Planet Debian - Sun, 14/08/2016 - 02:41

I don't quite remember when I read John Goerzen's post about teaching a 4 year old to use the Linux command line with audio on Planet Debian. According to the byline it was published nearly 2 years before Noam was born, but I seem to remember reading it in the weeks after his birth, when I was both thrilled at the prospect of teaching my kid to use the command line and, in my sleepless stupor, not entirely convinced he would ever be old enough.

Well, the time came this morning. He found an old USB keyboard and discovered that a green light came on when he plugged it in. He was happily hitting the keys when Meredith suggested we turn on the monitor and open a program so he could see the letters appear on the screen and try to spell his name.

After 10 minutes in LibreOffice I remembered John's blog post and was inspired to start writing a bash script in my head (I would have had to stop the fun with LibreOffice to write it, so the pressure was on...). In the end it took only a few minutes and I came up with:

#!/bin/bash
while [ 1 ]; do
    read -p "What shall I say? "
    espeak "$REPLY"
done

It was a hit. He said what he wanted to hear and hit the keys, my job was to spell for him.

Oh, also: he discovered key combinations that did things that were unsurprising to me (like taking the screen grab above) and also things that I'm still scratching my head about (like causing a prompt on the screen that said: "Downloading shockwave plugin." No thanks. And how did he do that?).

Categories: Elsewhere

Russ Allbery: git-pbuilder 1.42

Planet Debian - Sat, 13/08/2016 - 23:31

A minor update to my glue script for building software with pdebuild and git-buildpackage. (Yes, still needs to get rewritten in Python.)

This release stops using the old backport location for oldstable builds since oldstable is now wheezy, which merged the backports archive into the regular archive location. The old location is still there for squeeze just in case anyone needs it.

It also adds a new environment variable, GIT_PBUILDER_PDEBUILDOPTIONS, that can be used to pass options directly to pdebuild. Previously, there was only a way to pass options to the builder, via pdebuild, but not to configure pdebuild itself. There are some times when that's useful, such as to pass --use-pdebuild-internal. This was based on a patch from Rafał Długołęcki.

You can get the latest version of git-pbuilder from my scripts page.

Categories: Elsewhere

Dariusz Dwornikowski: Automatic PostgreSQL config with Ansible

Planet Debian - Sat, 13/08/2016 - 18:18

If for some reason you can’t use a dedicated DBaaS for your PostgreSQL (like AWS RDS) then you need to run your database server on a cloud instance. In this kind of setup, when you scale your instance size up or down, you need to adjust the PostgreSQL parameters according to the changing RAM size. There are several parameters in PostgreSQL that highly depend on RAM size. An example is shared_buffers, for which a rule of thumb says that it should be set to 0.25*RAM.

In DBaaS, when you scale the DB instance up or down, parameters are adjusted for you by the cloud provider, e.g. AWS RDS uses parameter groups for that reason, where particular parameters are defined depending on the size of the RAM of the RDS instance.

So what can you do when you do not have RDS or any other DBaaS? You can always keep several configuration files on your instance, one for each memory size; you can rewrite your config every time you change the size of the instance… or you can use an Ansible role for that.

Our Ansible role will be very simple: it will have two tasks. One will update the PostgreSQL config, and the second one will restart the database server:

---
- name: Update PostgreSQL config
  template: src=postgresql.conf.j2 dest=/etc/postgresql/9.5/main/postgresql.conf
  register: pgconf

- name: Restart postgresql
  service: name=postgresql state=restarted
  when: pgconf.changed

Now we need the template, where the calculations take place. The RAM size will be taken from Ansible’s fact called ansible_memtotal_mb. Since it returns the RAM size in MB, we will stick to MB. We will define the following parameters; you can adjust them to your needs:

  • shared_buffers, as 0.25*RAM size,
  • work_mem, as shared_buffers/max_connections,
  • maintenance_work_mem, as RAM GBs times 64MB,
  • effective_cache_size, as 0.75*RAM size.

For max_connections we will define a default role variable of 100, but we will allow it to be overridden at runtime. The relevant parts of postgresql.conf.j2 are below:

max_connections = {{ max_connections }}
shared_buffers = {{ (((ansible_memtotal_mb/1024.0)|round|int)*0.25)|int*1024 }}MB
work_mem = {{ ((((ansible_memtotal_mb/1024.0)|round|int)*0.25)/max_connections*1024)|round|int }}MB
maintenance_work_mem = {{ ((ansible_memtotal_mb/1024.0)|round|int)*64 }}MB
effective_cache_size = {{ (((ansible_memtotal_mb/1024.0)|round|int)*0.75)|int*1024 }}MB
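The template’s arithmetic can be sanity-checked in plain Python. The helper below is only an illustration of the sizing rules, not part of the role:

```python
def pg_params(ram_mb, max_connections=100):
    """Apply the sizing rules above; all results are in MB.

    shared_buffers       = 25% of RAM
    work_mem             = shared_buffers / max_connections
    maintenance_work_mem = 64MB per GB of RAM
    effective_cache_size = 75% of RAM
    """
    ram_gb = int(round(ram_mb / 1024.0))
    shared_buffers = int(ram_gb * 0.25 * 1024)
    return {
        "shared_buffers": shared_buffers,
        "work_mem": int(round(shared_buffers / float(max_connections))),
        "maintenance_work_mem": ram_gb * 64,
        "effective_cache_size": int(ram_gb * 0.75 * 1024),
    }
```

For an 8GB instance this yields shared_buffers of 2048MB and work_mem of 20MB with the default 100 connections, which matches what the Jinja expressions render.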

You can now run the role every time you change the instance size, and the config will be adjusted according to the RAM size. You can extend the role, maybe add other constraints, and change max_connections to your specific needs. An example playbook could look like:

---
- hosts: my_postgres
  roles:
    - postgres-config
  vars:
    max_connections: 300

And run it:

$ ansible-playbook playbook.yml

The complete role can be found in my GitHub repo.

Categories: Elsewhere

Russell Coker: SSD and M.2

Planet Debian - Sat, 13/08/2016 - 16:35
The Need for Speed

One of my clients has an important server running ZFS. They need a filesystem that detects corruption: while regular RAID is good for the case where a disk gives read errors, it doesn’t cover the case where a disk returns bad data and claims it to be good (which I’ve witnessed on BTRFS and ZFS systems). BTRFS is good for the case of a single disk or a RAID-1 array, but I believe that the RAID-5 code for BTRFS is not sufficiently tested for business use. ZFS doesn’t perform very well because the checksums on data and metadata require multiple writes for a single change, which also causes more fragmentation. This isn’t a criticism of ZFS, it’s just an engineering trade-off for the data integrity features.

ZFS supports read-caching on a SSD (the L2ARC) and write-back caching (ZIL). To get the best benefit of L2ARC and ZIL you need fast SSD storage. So now with my client investigating 10 gigabit Ethernet I have to investigate SSD.

For some time SSDs have been in the same price range as hard drives, starting at prices well below $100. Now there are some SSDs on sale for as little as $50. One issue with SATA for server use is that SATA 3.0 (which was released in 2009 and is most commonly used nowadays) is limited to 600MB/s. That isn’t nearly adequate if you want to serve files over 10 gigabit Ethernet. SATA 3.2 was released in 2013 and supports 1969MB/s but I doubt that there’s much hardware supporting that. See the SATA Wikipedia page for more information.

Another problem with SATA is getting the devices physically installed. My client has a new Dell server that has plenty of spare PCIe slots but no spare SATA connectors or SATA power connectors. I could have removed the DVD drive (as I did for some tests before deploying the server) but that’s ugly and only gives 1 device while you need 2 devices in a RAID-1 configuration for ZIL.


M.2 is a new standard for expansion cards; it supports SATA and PCIe interfaces (and USB, but that isn’t useful at this time). The Wikipedia page for M.2 is interesting to read for background knowledge but isn’t helpful if you are about to buy hardware.

The first M.2 card I bought had a SATA interface, but then I was unable to find a local company that could sell me a SATA M.2 host adapter. So I bought an M.2 to SATA adapter which made it work like a regular 2.5″ SATA device. That’s working well in one of my home PCs but isn’t what I wanted. Apparently systems that have an M.2 socket on the motherboard will usually take either SATA or NVMe devices.

The most important thing I learned is to buy the SSD storage device and the host adapter from the same place; that way you are entitled to a refund if they don’t work together.

The alternative to the SATA (AHCI) interface on an M.2 device is known as NVMe (Non-Volatile Memory Express), see the Wikipedia page for NVMe for details. NVMe not only gives a higher throughput but it gives more command queues and more commands per queue which should give significant performance benefits for a device with multiple banks of NVRAM. This is what you want for server use.

Eventually I got a M.2 NVMe device and a PCIe card for it. A quick test showed sustained transfer speeds of around 1500MB/s which should permit saturating a 10 gigabit Ethernet link in some situations.

One annoyance is that M.2 devices have a different naming convention to regular hard drives. I have devices /dev/nvme0n1 and /dev/nvme1n1; the n1 part refers to an NVMe namespace, which is how one NVMe controller can expose multiple storage devices. Partitions have device names like /dev/nvme0n1p1 and /dev/nvme0n1p2.

Power Use

I recently upgraded my Thinkpad T420 from a 320G hard drive to a 500G SSD which made it faster but also surprisingly quieter – you never realise how noisy hard drives are until they go away. My laptop seemed to feel cooler, but that might be my imagination.

The i5-2520M CPU in my Thinkpad has a TDP of 35W but uses a lot less than that as I almost never have 4 cores in use. The z7k320 320G hard drive is listed as having 0.8W “low power idle” and 1.8W for read-write, maybe Linux wasn’t putting it in the “low power idle” mode. The Samsung 500G 850 EVO SSD is listed as taking 0.4W when idle and up to 3.5W when active (which would not be sustained for long on a laptop). If my CPU is taking an average of 10W then replacing the hard drive with a SSD might have reduced the power use of the non-screen part by 10%, but I doubt that I could notice such a small difference.
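The rough arithmetic behind that estimate, comparing the drive’s read-write figure against the SSD’s idle figure using the spec-sheet numbers above (these are assumptions, not measurements):

```python
# Spec-sheet numbers quoted in the paragraph above.
hdd_watts = 1.8    # z7k320 read-write power (assuming no low power idle)
ssd_watts = 0.4    # Samsung 850 EVO idle power
cpu_watts = 10.0   # assumed average CPU draw

before = cpu_watts + hdd_watts
after = cpu_watts + ssd_watts
saving = (before - after) / before  # fraction of non-screen power saved
print("%.0f%%" % (saving * 100))    # -> 12%, in the same ballpark as 10%
```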

I’ve read some articles about power use on the net which can be summarised as “SSDs can draw more power than laptop hard drives but if you do the same amount of work then the SSD will be idle most of the time and not use much power”.

I wonder if the SSD being slightly thicker than the HDD it replaced has affected the airflow inside my Thinkpad.

From reading some of the reviews it seems that there are M.2 storage devices drawing over 7W! That’s going to create some cooling issues in desktop PCs but should be OK in a server. For laptop use, vendors will hopefully release M.2 devices designed for low power consumption.

The Future

M.2 is an ideal format for laptops due to being much smaller and lighter than 2.5″ SSDs. Spinning media doesn’t belong in a modern laptop and using a SATA SSD is an ugly hack when compared to M.2 support on the motherboard.

Intel has released the X99 chipset with M.2 support (see the Wikipedia page for Intel X99) so it should be commonly available on desktops in the near future. For most desktop systems an M.2 device would provide all the storage that is needed (or 2*M.2 in a RAID-1 configuration for a workstation). That would give all the benefits of reduced noise and increased performance that regular SSDs provide, but with better performance and fewer cables inside the PC.

For a corporate desktop PC I think the ideal design would have only M.2 internal storage and no support for 3.5″ disks or a DVD drive. That would allow a design that is much smaller than a current SFF PC.

Related posts:

  1. Breaking SATA Connectors I’ve just broken my second SATA connector. This isn’t a...
  2. How I Partition Disks Having had a number of hard drives fail over the...
  3. Swap Space and SSD In 2007 I wrote a blog post about swap space...
Categories: Elsewhere

Drupal core announcements: Drupal core security release window on Wednesday, August 17, 2016

Planet Drupal - Sat, 13/08/2016 - 15:07
Start: 2016-08-17 00:00 - 23:30 UTC
Organizers: xjm, catch, mlhess, David_Rothstein, stefan.r
Event type: Online meeting (e.g. IRC meeting)

The monthly security release window for Drupal 8 and 7 core will take place on Wednesday, August 17.

This does not mean that a Drupal core security release will necessarily take place on that date for any of the Drupal 8 or 7 branches, only that you should watch for one (and be ready to update your Drupal sites in the event that the Drupal security team decides to make a release).

There will be no bug fix or feature release on this date. The next window for a Drupal core patch (bug fix) release for all branches is Wednesday, September 07. The next scheduled minor (feature) release for Drupal 8 will be on Wednesday, October 5.

Drupal 6 is end-of-life and will not receive further security releases.

For more information on Drupal core release windows, see the documentation on release timing and security releases, and the discussion that led to this policy being implemented.

Categories: Elsewhere

Hook 42: Iterative Web Design: A Sneak Peek at Hook 42's Drupal 8 Redesign

Planet Drupal - Sat, 13/08/2016 - 05:34

It’s always a good idea to give your website a new coat of proverbial paint every so often. With the release of Drupal 8, we took the opportunity to not only upgrade our technology, but to completely redesign the look and feel along with the Drupal architecture.

Categories: Elsewhere

ImageX Media: How to Reduce Your Bounce Rate and Optimize Your Site's Content Experiences

Planet Drupal - Sat, 13/08/2016 - 02:03

Data-collection platforms like Google Analytics (GA) and Google Search Console give site owners, admins, and marketers the kind of data needed to make confident user experience decisions. They give you a data-rich view of your site's performance -- but not all data is collected accurately.

Categories: Elsewhere

myDropWizard.com: YouTube videos stop working on your Drupal 6 site? Here's the fix!

Planet Drupal - Fri, 12/08/2016 - 23:47

If you have a Drupal 6 site that uses Embedded Media Field and the Media: YouTube module to embed the YouTube player on your site, you may have noticed it stopped working in the last couple days.

While we can't seem to find any announcement from Google, it appears that the old YouTube embed code which those modules use has stopped working.

Luckily, it's pretty easy to fix!

Read more to find out how...

Categories: Elsewhere

Jonathan Dowland: Lush (and friends)

Planet Debian - Fri, 12/08/2016 - 18:20

The gig poster

On July 31st a friend and I went to see Maxïmo Park and support at a mini-festival day in Times Square, Newcastle. The key attraction for me to this gig was the top support band, Lush who are back after a nearly 20 year hiatus.

Nano Kino 7"

I first heard of Lush quite recently, from the excellent BBC documentary Girl in a Band: Tales from the Rock 'n' Roll Front Line. They were excellent: the set was quite heavy on material from their more dreampop/shoegaze albums, which is to my taste.

Maxïmo 7"s

I also particularly enjoyed Warm Digits, motorik instrumental dance stuff that reminded me of Lemon Jelly mixed with Soulwax, who had two releases very reasonably priced on the merch stand; Nano Kino in the adjacent "Other Rooms", also channelling dreampop/shoegaze; and finally Maxïmo Park themselves. I was there for Lush really, but I still really enjoyed the headliners. I've seen them several times, though I've lost track of what they've been up to in recent years. Both their earliest material and several newer songs were well received by the home crowd, and the atmosphere in the enclosed Times Square was excellent.

Categories: Elsewhere

Markus Koschany: My Free Software Activities in July 2016

Planet Debian - Fri, 12/08/2016 - 16:52

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian.

Debian Android

Debian Games
  • This month GCC-6 bugs became release critical. I fixed and triaged those kinds of bugs in games like supertransball2, berusky2, freeorion, bloboats, armagetronad and megaglest.
  • I packaged new upstream releases of scorched3d, bzflag, spring, springlobby, freeorion, freeciv and extremetuxracer.
  • Freeciv, one of the best strategy games ever by the way, also got a new binary package, freeciv-client-gtk3. This package will eventually become the default client for playing the game. You are welcome to test it.
  • I packaged a new upstream release of adonthell and adonthell-data. This game is built with Python 3 and SDL 2 now and also uses the latest version of swig to generate its sources. We will probably see only one other future upstream release of adonthell because the main developer has decided to move on after more than 15 years of development.
  • I fixed another RC bug in minetest, updated whichwayisup for this release cycle and moved the package to Git.
Debian Java

Debian LTS

This was my sixth month as a paid contributor and I have been paid to work 14.7 hours on Debian LTS. In that time I did the following:

  • DLA-554-1. I spent most of the time this month on completing my work on libarchive. I issued DLA-554-1 and fixed 18 CVE plus another issue which was later assigned CVE-2016-6250.
  • DLA-555-1. Issued a security update for python-django fixing 1 CVE.
  • DLA-561-1. Issued a security update for uclibc fixing 3 CVE.
  • DLA-562-1. Issued a security update for gosa fixing 1 CVE. I could triage another open CVE as not-affected after confirming that the issue had already been fixed two years ago.
  • DLA-568-1. Issued a security update for wordpress fixing 6 CVE. I decided to go ahead with this update because I could not find any regressions. Unfortunately this wasn’t true for my intended fix for CVE-2015-8834. The database upgrade did not succeed hence I decided to postpone the fix for CVE-2015-8834 until we can narrow down the issue.
  • DLA-576-1. Issued a security update for libdbd-mysql-perl fixing 2 CVE.
  • From 4 July to 10 July I was in charge of our LTS frontdesk. I triaged CVEs in librsvg, bind9, trn, pdns and drupal7 and answered questions on the debian-lts mailing list.
Misc and QA
  • I fixed another GCC-6 bug in wbar, a light and fast launch bar.
  • Childsplay and gvrng were orphaned last month. I updated both of them, fixed the RC-bug in childsplay (non-free font) and moved the packages to the Debian QA Group.
Categories: Elsewhere

