Zlatan Todorić: Defcon24

Planet Debian - Fri, 19/08/2016 - 03:15

I went to Defcon24 as a Purism representative. It was (as usual) held in Las Vegas, the city of sin. In the same manner as with DebConf, here we go with the good, the bad and the ugly.


Badges are really cool. You can find good hackers here and there (but a very small number compared to the total number of attendees). Some talks are good, and the workshop + village idea looks good (although I didn't manage to attend any workshop as there was space for 1100 and there were 22000 attendees). The movie night idea is cool, and the Arcade space (where you can play old arcade games, relax, hack and also listen to some cool music) is really lovely. You also have a camp/village for kids learning things such as electronics, soldering etc, but you need to pay attention that they don't see too much of the twisted folks that also gather at this con. And that's it. Oh, yeah, Dark Tangent actually appears to be a cool dude.


One does not simply hold a so-called hacker conference in Las Vegas. Having a conference inside a hotel/casino where you mix with gamblers and casino workers (for good or for bad) is simply not in the hacker spirit and certainly brings all kinds of people to the same place. Also, there was simply not enough space for 22000 Defcon attendees, and you don't get to be proud of having lines that are ONLY 40 minutes long on average. You get to be proud if you don't have lines! Organization is not the strongest part of Defcon.

The huge majority of attendees are not hackers. They are script kiddies, hacker wannabes, comic con people, a few totally lost souls, etc. That simply brings the quality of the conference down. Yes, it is cool to have a mix of many diverse people, but not for the sake of just having people.


They lack a Code of Conduct (everyone knows I am not in favor of written rules on how people should behave, but after Defcon I clearly see the need for one). Actually, tbh, they do have one, but no one gives a damn about it. And you should report violations to Goons; more about them below. Sexism is huge here. I remember and hear stories about sexual harassment in the IT industry, but Debian somehow mitigated that before I entered its domains, so I never experienced it. The sheer amount of sexist behavior at Defcon is tremendous. It appears to me that those people had lonely childhoods and now act like spoiled 6-year-olds: they need to yell to make their point, they make low and stupid sexist jokes, and they simply think that is cool.

The majority of Goons (their coordinators or whatever) are simply idiots. I don't know if they feel they have some superpowers, or are drunk, or just stupid, but yelling at people, throwing low jokes at people, more yelling, cursing all the time, more yelling - it simply doesn't work for me. So now you can see the irony of a CoC at Defcon. They even like to say, hey, we are old farts, let our con be as we want it to be. So no real diversity there. It is either their way - and god forbid you try to change something for the better and make them stop cursing or throwing sexist jokes ("squeeze, people. together, touch each other, trust me it will feel good") - or the highway.

Also, it appears that for a huge number of vocal people, the word "fuck" has some fetish meaning. Either it needs to show how "fucking awesome this con or they are" or to "fucking tell few things about random fucking stuff". Thank you, but no thank you.

So what did I do during the con? I attended a few talks, had some discussions with people, went to one party (great DJs; again people doing stupid things, like breaking inventory, to name just one) and had so much time (read: "I was bored") that I bought a domain, brought up a server on which I configured nginx and cp'ed this blog to blog.zlatan.tech (yes, recently I added letsencrypt because it is, let me be in Defcon mood, FUCKING AWESOME GRRR UGH) and now I have even made a .onion domain for it. What can boredom do to people, right?

So the ultimate question is - would I go to Defcon again? I am strongly leaning towards no, but it is in my nature to give second chances, and now I have more experience (and I also have thick skin, so I guess I can play it calm for one more round).

Categories: Elsewhere

Cocomore: The Central Data Hub of VDMA - Tango REST Interface (TRI)

Planet Drupal - Fri, 19/08/2016 - 00:00

On the VDMA website (Association of German Machinery and Plant Engineering) various professional associations are specifically listed with their individual information. To provide each page with information from the Tango Backend, a specific interface has been developed: The so-called Tango REST interface. In the seventh part of our series “The Central Data Hub of VDMA” we will introduce this interface, its technical realization and its functions. 


Zivtech: Staff Augmentation and Outsourced Training: Do You Need It?

Planet Drupal - Thu, 18/08/2016 - 21:53
The goal of any company is to reduce costs and increase profit, especially when it comes to online and IT projects. When an IT undertaking is a transitional effort, it makes sense to consider staff augmentation and outsourcing.

Consider the marketing efforts of one worldwide corporation. Until recently, each brand and global region built and hosted its own websites independently, often without a unified coding and branding standard. The result was a disparate collection of high-maintenance, costly brand websites.

A Thousand Sites: One Goal
The organization has created nearly a thousand sites in total, but those sites were not developed at the same time or with the same goals. That’s a pain point. To solve this problem, the company decided to standardize all of its websites onto a single reference architecture, built on Drupal.

The objectives of the new proprietary platform include universal standards, a single platform that can accommodate regional feature sets, automated testing, and sufficient features to cover 95% of use cases for the company's websites globally.

While building a custom platform is a great step forward, it must then be implemented, and staff needs to be brought up to speed. To train staff on technical skills and platforms, often the best solution is to outsource the training to experts who step in, take over training and propel the effort forward quickly.

As part of an embedded team, an outsourced trainer is an adjunct team member, attending all of the scrum meetings, with a hand in the future development of the training materials.

Train Diverse Audiences
A company may invest a lot of money into developing custom features, and trainers become a voice for the company, showing people how easy it is to implement, how much it is going to help, and how to achieve complex tasks such as activation processes. The goal is to get people to adopt the features and platform. Classroom-style training allows for exercises on live sites and familiarity with specific features.

The Training Workflow
Trainers work closely with the business or feature owner to build a curriculum. It’s important to determine the business needs that inspired the change or addition.

Starting with an initial outline, trainers and owners work together. Following feedback, more information gets added to flesh it out. This first phase can take four to five sessions to get the training exactly right for the business owner. For features that follow, the process becomes streamlined. It's more intuitive because the trainer has gotten through all the steps and heard the pain points, but it's important to always consult the product owner. Once there is a plan, the trainers rehearse the curriculum to see what works, what doesn't work, what's too long, and where they need to cut things.

Training Now & Future
Training sessions may be onsite or remote. It is up to the business to decide if attendance is mandatory. Some staffers may wish to attend just to keep up with where the business is going.

Sessions are usually two hours with a lot of time for Q&A. With trainings that are hands-on, it’s important to factor in time for technical difficulties and different levels of digital competence.

Remote trainings resemble webinars. Trainers also create videos to enable on demand trainings. They may be as simple as screencasts with a voiceover, but others have a little more work involved. Some include animations to demo tasks in a friendlier way before introducing a more static backend form. It is the job of the trainer to tease out what’s relevant to a wide net of audiences.

The training becomes its own product that can live on. The recorded sessions are valuable to onboard and train up future employees. Trainers add more value to existing products and satisfy management goals.

Chromatic: Migrating (away) from the Body Field

Planet Drupal - Thu, 18/08/2016 - 19:46

As we move towards an ever more structured digital world of APIs, metatags, structured data, etc., and as the need for content to take on many forms across many platforms continues to grow, the humble “body” field is struggling to keep up. No longer can authors simply paste a word processing document into the body field, fix some formatting issues and call content complete. Unfortunately, that was the case for many years and consequently there is a lot of valuable data locked up in body fields all over the web. Finding tools to convert that content into useful structured data without requiring editors to manually rework countless pieces of content is essential if we are to move forward efficiently and accurately.

Here at Chromatic, we recently tackled this very problem. We leveraged the Drupal Migrate module to transform the content from unstructured body fields into re-usable entities. The following is a walkthrough.


On this particular site, thousands of articles from multiple sources were all being migrated into Drupal. Each article had a title and body field with all of the images in each piece of content embedded into the body as img tags. However, our new data model stored images as separate entities along with the associated metadata. Manually downloading all of the images, creating new image entities, and altering the image tags to point to the new image paths, clearly was not a viable or practical option. Additionally, we wanted to convert all of our images to lazy loaded images, so having programmatic control over the image markup during page rendering was going to be essential. We needed to automate these tasks during our migration.

Our Solution

Since we were already migrating content into Drupal, adapting Migrate to both migrate the content in and fully transform it all in one repeatable step was going to be the best solution. The Migrate module offers many great source classes, but none can use img elements within a string of HTML as a source. We quickly realized we would need to create a custom source class.

A quick overview of the steps we’d be taking:

  1. Building a new source class to find img tags and provide them as a migration source.
  2. Creating a migration to import all of the images found by our new source class.
  3. Constructing a callback for our content migration to translate the img tags into tokens that reference the newly created image entities.
Building the source class

Migrate source classes work by finding all potential source elements and offering them to the migration class in an iterative fashion. So we need to find all of the potential image sources and put them into an array that can be used as a source for a migration. Source classes also need a unique key for each potential source element. During a migration, the getNextRow() method is repeatedly called from the parent MigrateSource class until it returns FALSE. So let's start there and work our way back to the logic that will identify the potential image sources.

/**
 * Fetch the next row of data, returning it as an object.
 *
 * @return object|bool
 *   An object representing the image or FALSE when there is no more data
 *   available.
 */
public function getNextRow() {
  // Since our data source isn't iterative by nature, we need to trigger our
  // importContent() method that builds a source data array and counts the
  // number of source records found during the first call to this method.
  $this->importContent();
  if ($this->matchesCurrent < $this->computeCount()) {
    $row = new stdClass();
    // Add all of the values found in @see findMatches().
    $match = array_shift(array_slice($this->matches, $this->matchesCurrent, 1));
    foreach ($match as $key => $value) {
      $row->{$key} = $value;
    }
    // Increment the current match counter.
    $this->matchesCurrent++;
    return $row;
  }
  else {
    return FALSE;
  }
}

Next let's explore our importContent() method called above. First, it verifies that it hasn't already been executed, and if it has not, it calls an additional method called buildContent().

/**
 * Find and parse the source data if it hasn't already been done.
 */
private function importContent() {
  if (!$this->contentImported) {
    // Build the content string to parse for images.
    $this->buildContent();
    // Find the images in the string and populate the matches array.
    $this->findImages();
    // Note that the import has been completed and does not need to be
    // performed again.
    $this->contentImported = TRUE;
  }
}

The buildContent() method calls our contentQuery() method which allows us to define a custom database query object. This will supply us with the data to parse through. Then back in the buildContent() method we loop through the results and build the content property that will be parsed for image tags.

/**
 * Get all of the HTML that needs to be filtered for image tags and tokens.
 */
private function buildContent() {
  $query = $this->contentQuery();
  $content = $query->execute()->fetchAll();
  if (!empty($content)) {
    foreach ($content as $item) {
      // This builds one long string for parsing operations that can be done
      // on long strings without using too much memory. Here, we add fields
      // 'foo' and 'bar' from the query.
      $this->content .= $item->foo;
      $this->content .= $item->bar;
      // This builds an array of content for parsing operations that need to
      // be performed on smaller chunks of the source data to avoid memory
      // issues. This is only required if you run into parsing issues,
      // otherwise it can be removed.
      $this->contentArray[] = array(
        'title' => $item->post_title,
        'content' => $item->post_content,
        'id' => $item->id,
      );
    }
  }
}

Now we have the logic set up to iteratively return row data from our source. Great, but we still need to build an array of source data from a string of markup. To do that, we call our custom findImages() method from importContent(), which does some basic checks and then calls all of the image locating methods.

We found it is best to create a method for each potential source variation, as image tags often store data in multiple formats. Some examples are pre-existing tokens, full paths to CDN assets, relative paths to images, etc. Each often requires unique logic to parse properly, so separate methods make the most sense.

/**
 * Finds the desired elements in the markup.
 */
private function findImages() {
  // Verify that content was found.
  if (empty($this->content)) {
    $message = 'No html content with image tags to download could be found.';
    watchdog('example_migrate', $message, array(), WATCHDOG_NOTICE, 'link');
    return FALSE;
  }
  // Find images where the entire source content string can be parsed at once.
  $this->findImageMethodOne();
  // Find images where the source content must be parsed in chunks.
  foreach ($this->contentArray as $id => $post) {
    $this->findImageMethodTwo($post);
  }
}

This example uses a regular expression to find the desired data, but you could also use PHP Simple HTML DOM Parser or the library of your choice. It should be noted that I opted for a regex example here to keep library-specific code out of my code sample. However, we would highly recommend using a DOM parsing library instead.
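To illustrate the recommended alternative, here is a minimal sketch of an image-finding step built on PHP's built-in DOMDocument instead of a regex. The function name, the URL-standardizing rule, and the field names are assumptions for illustration, not code from our migration:

```php
<?php

/**
 * Sketch: find img tags with DOMDocument (hypothetical helper, not the
 * actual migration code). Returns an array keyed by a standardized ID,
 * mirroring the structure of the matches property used above.
 */
function example_find_image_sources($html) {
  $matches = array();
  $doc = new DOMDocument();
  // Suppress warnings triggered by malformed legacy markup.
  @$doc->loadHTML($html);
  foreach ($doc->getElementsByTagName('img') as $img) {
    $src = $img->getAttribute('src');
    if ($src === '') {
      continue;
    }
    // Standardize the URL so that /foo/bar/image.jpg and
    // http://example.com/foo/bar/image.jpg do not create duplicate rows.
    $id = preg_replace('#^(?:https?:)?//[^/]+#', '', $src);
    $matches[$id] = array(
      'url' => $src,
      'alt' => $img->getAttribute('alt'),
      'title' => $img->getAttribute('title'),
      'id' => $id,
      'filename' => basename($id),
    );
  }
  return $matches;
}
```

A DOM parser tolerates attribute reordering and whitespace that tend to break regexes, at the cost of loading the markup into memory.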

/**
 * This is an example of an image finding method.
 */
private function findImageMethodOne() {
  // Create a regex to look through the content.
  $matches = array();
  $regex = '/regex/to/find/images/';
  preg_match_all($regex, $this->content, $matches, PREG_SET_ORDER);
  // Set a unique row identifier from some captured pattern of the regex -
  // this would likely be the full path to the image. You might need to
  // perform cleanup on this value to standardize it, as the paths
  // /foo/bar/image.jpg, example.com/foo/bar/image.jpg, and
  // http://example.com/foo/bar/image.jpg should not create three unique
  // source records. Standardizing the URL is key not just to avoid creating
  // duplicate source records: the URL is also the ID value you will use in
  // your destination class mapping callback that looks up the resulting
  // image entity ID from the data it finds in the body field.
  $id = 'http://example.com/foo/bar/image.jpg';
  // Add to the list of matches after performing more custom logic to find
  // all of the correct chunks of data we need. Be sure to set every value
  // here that you will need when constructing your entity later.
  $this->matches[$id] = array(
    'url' => $src,
    'alt' => $alttext,
    'title' => $description,
    'credit' => $credit,
    'id' => $id,
    'filename' => $filename,
    'custom_thing' => $custom_thing,
  );
}

Importing the images

Now that we have our source class complete, let's import all of the image files into image entities.

/**
 * Import images.
 */
class ExampleImageMigration extends ExampleMigration {

  /**
   * {@inheritdoc}
   */
  public function __construct($arguments) {
    parent::__construct($arguments);
    $this->description = t('Creates image entities.');
    // Set the source.
    $this->source = new ExampleMigrateSourceImage();
    ...

The rest of the ExampleImageMigration is available in a Gist, but it has been omitted here for brevity. It is just a standard migration class that maps the array keys we put into the matches property of the source class to the fields of our image entity.

Transforming the image tags in the body

With our image entities created and the associated migration added as a dependency, we can begin sorting out how we will convert all of the image tags to tokens. This obviously assumes you are using tokens, but hopefully this will shed light on the general approach, which can then be adapted to your specific needs.

Inside our article migration (or whatever you happen to be migrating that has the image tags in the body field) we implement the callbacks() method on the body field mapping.

// Body.
$this->addFieldMapping('body', 'post_content')
  ->callbacks(array($this, 'replaceImageMarkup'));

Now let's explore the logic that replaces the image tags with our new entity tokens. Each of the patterns referenced below will likely correspond to a unique method in the ExampleMigrateSourceImage class that finds images based upon a unique pattern.

/**
 * Converts images into image tokens.
 *
 * @param string $body
 *   The body HTML.
 *
 * @return string
 *   The body HTML with converted image tokens.
 */
protected function replaceImageMarkup($body) {
  // Convert image tags that follow a given pattern.
  $body = preg_replace_callback(self::IMAGE_REGEX_FOO, 'fooCallbackFunction', $body);
  // Convert image tags that follow a different pattern.
  $body = preg_replace_callback(self::IMAGE_REGEX_BAR, 'barCallbackFunction', $body);
  return $body;
}

In the various callback functions we need to do several things:

  1. Alter the source string following the same logic we used when we constructed our potential sources in our source class. This ensures that the value passed in the $source_id variable below matches a value in the mapping table created by the image migration.
  2. Next we call the handleSourceMigration() method with the altered source value, which will find the destination id associated with the source id.
  3. We then use the returned image entity id to construct the token and replace the image markup in the body data.
$image_entity_id = self::handleSourceMigration('ExampleImageMigration', $source_id);

Implementation Details

Astute observers will notice that we called self::handleSourceMigration(), not $this->handleSourceMigration. This is due to the fact that the handleSourceMigration() method defined in the Migrate class is not static and uses $this within the body of the method. Callback functions are called statically, so the reference to $this is lost. Additionally, we can't instantiate a new Migration class object to get around this, as the Migrate class is an abstract class. You also cannot pass the current Migrate object into the callback function, due to the Migrate class not supporting additional arguments for the callbacks() method.

Thus, we are stuck trying to implement a singleton or global variable that stores the current Migrate object, or duplicating the handleSourceMigration() method and making it work statically. We weren’t a fan of either option, but we went with the latter. Other ideas or reasons to choose the alternate route are welcome!

If you go the route we chose, these are the lines you should remove from the handleSourceMigration method in the Migrate class when you duplicate it into one of your custom classes.

- // If no destination ID was found, give each source migration a chance to
- // create a stub.
- if (!$destids) {
-   foreach ($source_migrations as $source_migration) {
-     // Is this a self reference?
-     if ($source_migration->machineName == $this->machineName) {
-       if (!array_diff($source_key, $this->currentSourceKey())) {
-         $destids = array();
-         $this->needsUpdate = MigrateMap::STATUS_NEEDS_UPDATE;
-         break;
-       }
-     }
-     // Break out of the loop if a stub was successfully created.
-     if ($destids = $source_migration->createStubWrapper($source_key, $migration)) {
-       break;
-     }
-   }
- }

Before we continue, let's do a quick recap of the steps along the way.

  1. We made an iterative source of all images from a source data string by creating the ExampleMigrateSourceImage class that extends the MigrateSource class.
  2. We then used ExampleMigrateSourceImage as the migration source class in the ExampleImageMigration class to import all of the images as new structured entities.
  3. Finally, we built our "actual" content migration and used the callbacks() method on the body field mapping in conjunction with the handleSourceMigration() method to convert the existing image markup to entity based tokens.
The end result

With all of this in place, you simply sit back and watch your migrations run! Of course, before that, you get the joy of running them countless times and facing edge cases with malformed image paths, broken markup, new image sources you were never told about, etc. Then at the end of the day you are left with shiny new image entities full of metadata that can be searched, sorted, filtered, and re-used! Thanks to token rendering (if you go that route), you also gain full control over how your img tags are rendered, which greatly simplifies the implementation of lazy-loading or responsive images. Most importantly, you have applied structure to your data, and you are now ready to transform and adapt your content as needed for any challenge that is thrown your way!


Jeff Geerling's Blog: Increase the Guzzle HTTP Client request timeout in Drupal 8

Planet Drupal - Thu, 18/08/2016 - 18:56

During some migration operations on a Drupal 8 site, I needed to make an HTTP request that took > 30 seconds to return all the data... and when I ran the migration, I'd end up with exceptions like:

Migration failed with source plugin exception: Error message: cURL error 28: Operation timed out after 29992 milliseconds with 2031262 out of 2262702 bytes received (see http://curl.haxx.se/libcurl/c/libcurl-errors.html).

The solution, it turns out, is pretty simple! Drupal's \Drupal\Core\Http\ClientFactory is the default way that plugins like Migrate's HTTP fetching plugin get a Guzzle client to make HTTP requests (though you could swap things out if you want via services.yml), and in the code for that factory, there's a line after the defaults (where the 'timeout' => 30 is defined) like:
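Since ClientFactory merges the http_client_config setting over those defaults, the timeout can be raised from settings.php. A minimal sketch (the value 60 is an arbitrary example):

```php
// In sites/default/settings.php.
// \Drupal\Core\Http\ClientFactory merges this array over its defaults,
// overriding the built-in 'timeout' => 30.
$settings['http_client_config']['timeout'] = 60;
```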


Simon Désaulniers: [GSOC] Week 10&11&12 Report

Planet Debian - Thu, 18/08/2016 - 17:09
Week 10 & 11

During these two weeks, I’ve worked hard on paginating values on the DHT.

Value pagination

As explained on my post on data persistence, we’ve had network traffic issues. The solution we have found for this is to use the queries (see also this) to filter data on the remote peer we’re communicating with. The queries let us select fields of a value instead of fetching whole values. This way, we can fetch values with unique ids. The pagination is the process of first selecting all value ids for a given hash, then making a separate “get” request packet for each of the values.

This feature makes the DHT more UDP-friendly. In fact, UDP packets can be dropped when they are larger than the UDP MTU. Paginating values helps with this, as each UDP packet will now contain only one value.

Week 12

I’ve been working on making the “put” request lighter, again using queries. This is a key feature which will make it possible to enable data persistence. In fact, it enables us to send values to a peer only if it doesn’t already have the value we’re announcing. This will substantially reduce the overall traffic. This feature is still being tested. The last thing I have to do is to demonstrate the reduction of network traffic.


Valuebound: Get your Drupal8 Development platform ready with Drush8!

Planet Drupal - Thu, 18/08/2016 - 15:17

As we all know, we need Drush8 for our Drupal8 development platform. I have tried installing Drush 8 using composer, but sometimes it turns out to be a disaster, especially when you try to install Drush 8 on the Digital Ocean Droplet having Ubuntu 16.04.

I have faced this same issue over the last few months while getting the Drupal8 development platform ready with Drush8. So I decided to find a solution to fix it forever. Well, I finally found one, which is the following sequence of commands.

cd ~
php -r "readfile('http://files.drush.org/drush.phar');" > drush
chmod +x drush
sudo mv drush /usr/bin
drush init

If you are…


Drupal core announcements: We can add big new things to Drupal 8, but how do we decide what to add?

Planet Drupal - Thu, 18/08/2016 - 14:48

Drupal 8 introduced the use of Semantic Versioning, which practically means the use of three levels of version numbers. The current release is Drupal 8.1.8. While increments of the last number are done for bugfixes, the middle number is incremented when we add new features in a backwards compatible way. That allows us to add big new things to Drupal 8 while it is still compatible with all your themes and modules. We successfully added some new modules like BigPipe, Place Block, etc.

But how do we decide what will get in core? Should people come up with ideas, build them, and once they are done, they are either added in core or not? No. Looking for feedback at the end is a huge waste of time, because maybe the idea is not a good fit for core, or it clashes with another improvement in the works. But how would one go about getting feedback earlier?

We held two well attended core conversations at the last DrupalCon in New Orleans titled The potential in Drupal 8.x and how to realize it and Approaches for UX changes big and small both of which discussed a more agile approach to avoid wasting time.

The proposal is to separate the ideation and prototyping process from implementation. Within implementation, the use of experimental modules helps make the perfecting process more granular for modules. We are already actively using that approach for implementation. On the other hand, the ideation process is still to be better defined. That is where we need your feedback now.

See https://www.drupal.org/node/2785735 for the issue to discuss this. Looking forward to your feedback there.


Mediacurrent: How Drupal won an SEO game without really trying

Planet Drupal - Thu, 18/08/2016 - 14:23

At Mediacurrent we architected and built a Drupal site for a department of a prominent U.S. university several years ago. As part of maintaining and supporting the site over the years, we have observed how well it has performed in search engine rankings, often out-performing other sites across campus built on other platforms.


KnackForge: Drupal Commerce - PayPal payment was successful but order not completed

Planet Drupal - Thu, 18/08/2016 - 12:00
Drupal Commerce - PayPal payment was successful but order not completed

Most of us use PayPal as a payment gateway for our eCommerce sites. Zero upfront cost, no maintenance fee, API availability and documentation make it easy for anyone to get started. At times, online references offer out-dated documentation or documentation that doesn't apply to us due to the account type (Business / Individual), the country of the account holder, etc. We had a tough time when we wanted to set up Auto Return to a Drupal website.

Thu, 08/18/2016 - 15:30 - Tags: Drupal planet, Drupal 7, DropThemes.in, drupal-commerce

Zlatan Todorić: DebConf16 - new age in Debian community gathering

Planet Debian - Thu, 18/08/2016 - 11:19


Finally got some time to write this blog post. DebConf for me is always something special, a family gathering of a weird combination of geeks (or is weird a default geek state?). To be honest, I can finally compare DebConf as a hacker conference to other so-called hacker conferences. With that hat on, I can say that DebConf is by far the most organized and highest quality conference. Maybe I am biased, but I don't care too much about that. I simply love Debian and that is no secret. So let's dive into my view of DebConf16, which was held in Cape Town, South Africa.

Cape Town

This was the first time we had the conference on the African continent (and I now see for the first time a DebConf bid for Asia, which leaves only Australia and the beautiful Pacific islands still to start a bid). Cape Town itself is a pretty much Europe-like city. That was kind of a bummer for me on the first day, especially as we were hosted at the University of Cape Town (which is quite a beautiful uni) and the surrounding neighborhood was very European. Almost right after the first day I was fine, because I started exploring the huge city. Cape Town is really huge; officially it has ~4 million people, and unofficially ~6 million. Certainly a lot to explore, and I hope one day to be back there (actually, as soon as possible).

The good, bad and ugly

I will start with bad and ugly as I want to finish with good notes.

Racism down there is still HUGE. You don't have signs on the road saying that, but there is a clear separation between white and black people. The houses near the uni all had fences on their walls (most of them even electric ones with sharp blades on top) and bars on the windows. That just brings tension and certainly doesn't improve anything. To be honest, if someone wants to break in they still can do so easily; the fences are maybe meant to intimidate, but they actually only bring tension (my personal view). Also, many houses have a sign for Armed Force Response (something along those lines), meaning that in case someone starts breaking in, armed forces will come to protect the home.

Also, in the workforce, white people appear to hold most of the profit/big business positions and fields, while black people are street workers, bar workers etc. On the street you can feel the tension between people from time to time. Going out to bars also showed the separation - they were either almost exclusively white or exclusively black. A very sad state to see. Sharing love and mixing is something that pushes us forward, and here I saw clear blockades to such things.

The bad part of Cape Town - and this is not special to Cape Town but common to almost all major cities - is that petty crime is widespread. Pickpocketing is something you must pay attention to here. Nothing happened to me personally, but I heard a lot of stories from friends on whom such activities were attempted (although I am not sure whether the criminals succeeded).

Enough of the bad - my blog post will not change any of it, and it is a topic for debate and active involvement, which I unfortunately can't take on at this moment.


I met so many great local people! As I mentioned, I want to visit this city again and again and again. If you aren't put off by those bad things, the city has great local cuisine, a lot of great people, an awesome artistic soul, and they dance with heart (I guess when you live through rough times, you make the best of your free time). There was a difference between white and black bars/clubs: white ones were almost like standard European ones, a lot of drinking and not much dancing, while black ones had a lot of dancing and not much drinking (maybe economic power has something to do with it, but I certainly felt more love in the black bars).

Cape Town has an awesome mountain, Table Mountain. I went hiking with my friends, and I must say (again, to myself): do the damn hiking as much as possible. After every hike I feel so inspired that I start to hate myself for not doing it more often! The view from Table Mountain is just majestic (you can even see the Cape of Good Hope). The WOW moments just keep firing up in you.

Now let's move to DebConf itself. As always, the organization was at quite a high level. I loved the badge design; it had a map and a nice amount of information on it. The place we stayed in wasn't that great, but considering those are old student dorms (and we were all in a female student dorm :D ), it is pretty fancy in its own right. The talks were nearby, which is always good. The general layout of the talk venues and the front desk was perfect in my opinion - all in one place, basically.

The Wine and Cheese party this year was kind of a funny story because of the cheese import restrictions, but the Cheese cabal managed to pull it off. It was actually very well organized. I met some new people during the party/ceremony, which always makes me grow as a person. The cultural mix at DebConf is just fantastic: not only do you learn a lot about Debian and hacking on it, but the sheer cultural diversity makes this small conference such a vibrant place and a home to many.

The Debian Dinner happened at the aquarium, where I had a nice dinner and chat with my old friends. The aquarium itself is a place you can visit to see a lot of strange creatures that live on this third rock from the Sun.

Speaking of old friends: I love that Apollo rejoined us again (after missing DebConf15), and that I saw Joel again (he finally visited Banja Luka afterwards!), as well as mbiebl, ah, moray, Milan, santiago and tons of others. Of course we always miss a few, such as zack and vorlon this year (but they had pretty okay-ish reasons, I would say).

Speaking of new friends, I made a few local friends, which makes me happy, and at least one Indian/Hindu friend. Why do I mention this separately? Well, we had an incident during the Group Photo (btw, where is our Lithuanian, nowadays Germany-based, photographer?!) in which 3 laptops of our GSoC students were stolen :( . I was lucky enough, on behalf of Purism, to donate a Librem 11 prototype to one of them, who ended up being the Indian friend. She is working on real-time communications, which is also of interest to Purism for our future projects.

As for the Debian Day Trip, Joel and I opted out and went on our own adventure through Cape Town, in pursuit of meeting and talking to local people and finding out interesting things, which proved to be a great decision. We found out about their First Thursday of the month festival, and about the Mama Africa restaurant. That restaurant is going into my special memories (me playing drums with a local band must always be a special memory, right?!).

Huh, to be honest, writing about DebConf properly would need a book of its own, and I always try to keep my posts as short as possible, so I will stop here (maybe I'll write a few more bits about it in the future, but probably not).

Now the final notes. Although I saw the racial segregation, I also saw hope. These things need time. I come from a country torn apart by nationalism and religious hate, so I understand that this issue is hard and deep on many levels. While tensions are high, I see people trying to talk about it and trying to find solutions, and I feel it is slowly transforming into an open society, where we will realize that there is only one race on this planet and it is called the HUMAN RACE. We are all earthlings, and the sooner we realize that, the sooner we will be on the path to really building society up, instead of faking things that actually enslave our minds.

In the end I just want to say: thank you, DebConf, thank you, Debian - everyone could learn from this community as a model (which can be improved!) for future societies.

Categories: Elsewhere

Unimity Solutions Drupal Blog: Video Annotations: A Powerful and Innovative Tool for Education

Planet Drupal - Thu, 18/08/2016 - 08:51

According to John J. Medina, a famous molecular biologist, “Vision trumps all other senses.” The human mind is more likely to remember dynamic pictures than spoken words or long texts. Advances in multimedia have enabled teachers to bring visual representations of content into the classroom.


Drupalize.Me: Learn by Mentoring at DrupalCon

Planet Drupal - Thu, 18/08/2016 - 08:37

DrupalCon is a great opportunity to learn all kinds of new skills and grow professionally. During the 3 days of the main conference in Dublin (September 27–29) there will be sessions on just about everything related to Drupal that you could want. One amazing opportunity you may not be aware of, though, is the Mentored Sprint on Friday, September 30th. It is a great place for newcomers to learn the ropes of our community and how to contribute back. What is less talked about is the chance to be a mentor.


Norbert Tretkowski: No MariaDB MaxScale in Debian

Planet Debian - Thu, 18/08/2016 - 08:00

Last weekend I started working on a MariaDB MaxScale package for Debian, of course with the intention to upload it into the official Debian repository.

Today I was pointed to an article by Michael "Monty" Widenius, published two days ago. It explains the recent license change of MaxScale from GPL to BSL with the release of the MaxScale 2.0 beta. Justin Swanhart summarized the situation, and I could not agree more.

Looks like we will not see MaxScale 2.0 in Debian any time soon...


Roy Scholten: Vetting Drupal product ideas

Planet Drupal - Wed, 17/08/2016 - 23:57

We’ve made big strides since DrupalCon New Orleans in how we add new features to Drupal core. The concept of experimental modules has already helped us introduce features like a new way to add blocks to a page, content moderation and workflow tools, and a whole new approach for editing all the things on a page while staying on that page.

In New Orleans we started to define the process for making these kinds of big changes. Probably the most important and defining aspect of it is that we’re (finally!) enabling a clearer separation: vetting ideas first, implementation second.

True to form we specified and detailed the latter part first :-)

So, on to that first part, vetting Drupal product ideas. In my core conversation I outlined the need for making bigger UX changes, faster, and suggested possible approaches for how to design and develop those, borrowing heavily from the Lean UX method.

Since then, we’ve been reminded that we really do need a clearly defined space to discuss the strategic value of proposed new features. A place to decide if a given idea is desirable and viable as an addition to core.

The point being: the core product manager, with help from Drupal UX team members, wrote up a proposal for how to propose core product ideas and what’s needed to turn a good idea into an actionable plan.

It needs your feedback. Please read and share your thoughts.

Tags: drupalux, process, drupalplanet
Subtitle: Agree on why and what before figuring out the how

Gunnar Wolf: Talking about the Debian keyring in Investigaciones Nucleares, UNAM

Planet Debian - Wed, 17/08/2016 - 20:47

For the readers of my blog that happen to be in Mexico City, I was invited to give a talk at Instituto de Ciencias Nucleares, Ciudad Universitaria, UNAM.

I will be at Auditorio Marcos Moshinsky, on August 26, starting at 13:00. Auditorio Marcos Moshinsky is where we met for the early (~1996-1997) Mexico Linux User Group meetings. And... Wow. I'm amazed to realize it's been twenty years since I arrived there, young and innocent, the newest member of what looked like a sect obsessed with world domination and a penguin fetish.


Mediacurrent: DrupalCon NOLA: The People Behind the Usernames

Planet Drupal - Wed, 17/08/2016 - 20:33

As we work every day on our own projects, with our own deadlines and priorities, it is often too easy to forget about the entire community of others using Drupal in much the same way. When we're working with Drupal in our various capacities, there is no shortage of methods to interact with the community and contribute back, but those aren't the focus of this post.


myDropWizard.com: Drupal 6 security updates for Panels!

Planet Drupal - Wed, 17/08/2016 - 20:16

As you may know, Drupal 6 has reached End-of-Life (EOL), which means the Drupal Security Team is no longer issuing Security Advisories or working on security patches for Drupal 6 core or contrib modules - but the Drupal 6 LTS vendors are, and we're one of them!

Today there is a critical security release for the Panels module, fixing multiple access bypass vulnerabilities.

The first vulnerability allows anonymous users to use AJAX callbacks to pull content and configuration from Panels, which lets them access private data. The second allows authenticated users with permission to use the Panels IPE to modify the Panels display for pages that they don't otherwise have permission to edit.

See the security advisory for Drupal 7 for more information.

Here you can download the patch for 6.x-3.x!
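Applying a patch like this by hand follows the standard `patch(1)` workflow. Below is a minimal, self-contained sketch of that workflow; the file names, directory layout, and hunk contents are invented for illustration, so substitute the actual downloaded patch and your site's real `sites/all/modules/panels` path.

```shell
# Self-contained demo of the generic patch workflow (names are made up).
workdir=$(mktemp -d)
cd "$workdir"
mkdir panels panels.fixed
printf '/* vulnerable access check */\n' > panels/panels.module
printf '/* fixed access check */\n'      > panels.fixed/panels.module
# diff exits with status 1 when files differ, so tolerate that
diff -ur panels panels.fixed > access-bypass.patch || true
cd panels
patch -p1 --dry-run < ../access-bypass.patch  # first verify it applies cleanly
patch -p1 < ../access-bypass.patch            # then apply it for real
grep fixed panels.module                      # the module now contains the fix
```

Running with `--dry-run` first is worthwhile on a live site: if the module directory has local modifications, the dry run reports rejected hunks before anything is touched.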

If you have a Drupal 6 site using the Panels module, we recommend you update immediately! We have already deployed the patch for all of our Drupal 6 Long-Term Support clients. :-)

If you'd like all your Drupal 6 modules to receive security updates and have the fixes deployed the same day they're released, please check out our D6LTS plans.

Note: if you use the myDropWizard module (totally free!), you'll be alerted to these and any future security updates, and will be able to use drush to install them (even though they won't necessarily have a release on Drupal.org).


Rakesh's DSoC 16 blog: Last week - GSoC16 - Detected and Solved Merge Conflicts in Drupal8.

Planet Drupal - Wed, 17/08/2016 - 20:04

Miloš Bovan: The final week of Google Summer of Code 2016

Planet Drupal - Wed, 17/08/2016 - 19:56
The final week of Google Summer of Code 2016

It’s been more than 13 warm weeks since I started working on my Google Summer of Code 2016 project, Mailhandler. During the past few weeks, I finished the coding and worked on the last improvements, related to documentation.

In the past week, the project documentation was updated, which was the last step before merging the GitHub repository and the sandbox project into Mailhandler. The Drupal 8 release section was updated and summarizes the features of the D8 version. If you are wondering where you can use the code I developed this summer, feel free to read about real Mailhandler use cases.

This week, I am doing the last cleanups and preparing the project for the final evaluation. It is the second evaluation, following the midterm evaluation I submitted. In case you missed the previous posts in the Google Summer of Code series, I provide a retrospect of the project below.

Project Retrospect

In early February, in an ordinary conversation, a friend of mine told me about this year’s Google Summer of Code program. I got interested, since I took part in GHOP (the Google Highly Open Participation Contest, later replaced by Google Code-in) during high school.

In March, I found out that Drupal would be one of the participating organizations, and since I did a Drupal internship last year, this seemed to be a perfect opportunity to continue contributing to open source.

Among many interesting project ideas, I decided on “Port Mailhandler to Drupal 8”. Miro Dietiker proposed the project, and during the proposal discussions Primoz Hmeljak joined the mentoring team too.

The great news came in April: I was one of the 11 students chosen to contribute to Drupal this summer! First Yay moment!

The project's progress could be followed on this blog, so I will not go deeper into it here.

Three months of work passed quickly, and at this point I can confirm that I learned a ton of new things. I improved not only my coding skills, but my communication and project management skills too.

In my opinion, we reached all the goals we put on paper in April. All the features defined in the proposal were developed, tested and documented.

Google Summer of Code is a great program, and I would sincerely recommend that all students consider participating in it.

Future plans

The future plans for the module are related to its maintenance. Inmail is still under development, and it seems it will be ready for an alpha release very soon. In the following days, I am going to create an issue about nice-to-have features for Mailhandler. This issue could serve as a place for the Drupal community to discuss possible directions for future Mailhandler development.

Thank you note

Last but not least, I would like to give huge thanks to my mentors (Miro Dietiker and Primoz Hmeljak) for being so supportive, helpful and flexible during this summer. Their mentoring helped me in many ways - from the proposal re-definition in April/May, through endless code iterations and discussions of different ideas, to blog post reviews, including this one :).

I would also like to thank my company for allowing me to contribute to Drupal via this program, and Google for giving me the opportunity to participate in this great program.



Tags: Drupal, Google Summer of Code, Open source, Drupal Planet

