Elsewhere

Olivier Grégoire: Community bonding + 2 weeks at GSoC!

Planet Debian - Mon, 06/06/2016 - 18:57

Welcome to my first and second report!
The community bonding period was really good: I talked with a lot of people from all around the world. All these projects are just awesome and I am happy to be part of this.
For this first week, I put together a list of all the information I need to pull into my client (this list may change a little bit):
-Call ID
-Resolution of the camera of everyone connected to the call
-Percentage of lost frames
-Bandwidth (upload + download)
-Codecs used in the conversation
-Time to contact the other person
-The security level
-Resources used by Ring (RAM + CPU)

I am also trying to figure out how the Ring project works:
-Understanding the external exchanges by using Wireshark to capture some important packets.
-Understanding the internal exchanges between the daemon and the clients over D-Bus by using Bustle.

I created the architecture of my program across the daemon and D-Bus. [1] You can call the method launchSmartInfo(int x) over D-Bus (using D-Feet, for example). That triggers the SmartInfo signal every x ms. The signal can only push an int for the moment, but it will eventually push all the information we want to the clients.
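For reference, here is a minimal sketch of poking at this from Python with dbus-python. The bus name, object path and interface below are assumptions, not the daemon's real names; check the introspection data with D-Feet or Bustle first.

import dbus
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

DBusGMainLoop(set_as_default=True)   # needed to receive signals
bus = dbus.SessionBus()

proxy = bus.get_object('cx.ring.Ring',               # assumed bus name
                       '/cx/ring/Ring/CallManager')  # assumed object path
iface = dbus.Interface(proxy, 'cx.ring.Ring.CallManager')  # assumed interface

iface.launchSmartInfo(500)  # ask for a SmartInfo signal every 500 ms

def on_smart_info(value):
    print('SmartInfo:', value)

bus.add_signal_receiver(on_smart_info, signal_name='SmartInfo')
GLib.MainLoop().run()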

I am currently working on the Ring GNU/Linux client, so I am learning how Qt and GTK+ work.

[1] https://github.com/Gasuleg/Smartlnfo-Ring (I will stop updating this repository because I will push my code to the Gerrit draft of Savoir-faire Linux. It is easier for my team to comment on that platform, and it is free software.)

Categories: Elsewhere

FFW Agency: Go Camping with Drupal This Summer

Planet Drupal - Mon, 06/06/2016 - 18:21
Ray Saltini Mon, 06/06/2016 - 16:21

Why spend all your time at the beach when you can be learning even more about Drupal? Here are just a few of the camps our staff will be participating in this summer. We hope to see you there.

DrupalNorth

Drupal North in Montreal, June 16 - 19, is a great regional event. FFW Manager of Learning and Contributions David Hernandez is presenting Managing CSS and JavaScript files in Drupal 8 with Libraries.

GovCon

Join us at GovCon in Bethesda July 20-22 where we’re sponsoring a full day training with Drupal Console author and FFW Drupal 8 Solutions Engineer Jesus Olivas on Building Custom Drupal 8 features and modules. FFW Center of Excellence Director Ray Saltini and FFW Manager of Learning and Contributions David Hernandez will be there presenting on Personalization Strategies for Government Websites and Managing CSS and JavaScript files in Drupal 8 with Libraries.

NYC Camp

NYC Camp is back at the United Nations this year, July 8 - 11. There’s too much learning going on to list it all here. Make sure you catch FFW Center of Excellence Director Ray Saltini’s presentation Radical Digital Transformation or Die.

Twin Cities Drupal Camp

FFW Drupal 8 Solutions Engineer and Drupal Console project lead Jesus Olivas is giving a full day training at Twin Cities June 16 - 19 on Drupal 8 Module Building and presenting Improving Your Drupal 8 Development Workflow. Make sure you catch him and FFW Developer Tess Flynn who’s presenting Ride the Whale! Docker for Drupalists.

Categories: Elsewhere

Dries Buytaert: Gotthard tunnel website using Drupal

Planet Drupal - Mon, 06/06/2016 - 18:17

The Gotthard Base Tunnel, under construction for the last 17 years, was officially opened last week. This is the world's longest and deepest railroad tunnel, spanning 57 kilometers from Erstfeld to Bodio, Switzerland, underneath the Swiss Alps. To mark its opening, Switzerland launched a multilingual multimedia website celebrating the project's completion. I was excited to see they chose to build their site on Drupal 8! The site is a fitting digital tribute to an incredible project and launch event. Congratulations to the Gotthard Base Tunnel team!

Categories: Elsewhere

Four Kitchens: Launch Announcement: WOOD Magazine

Planet Drupal - Mon, 06/06/2016 - 18:14

We’re pleased to announce the site launch of woodmagazine.com, the online presence of WOOD Magazine, “The World’s Leading Woodworking Resource.” The new site includes online-only content, free downloadable plans for home woodworking projects, an index of articles in the print magazine, community forums, and subscription management.

Categories: Elsewhere

Reproducible builds folks: Reprotest has a preliminary CLI and configuration file handling

Planet Debian - Mon, 06/06/2016 - 17:08

Author: ceridwen

This is the first draft of reprotest's interface, and I welcome comments on how to improve it. At the moment, reprotest's CLI takes two mandatory arguments, the build command to run and the build artifact file to test after running the build. If the build command or build artifact have spaces, they have to be passed as strings, e.g. "debuild -b -uc -us". For optional arguments, it has --variations, which accepts a list of possible build variations to test, one or more of 'captures_environment', 'domain_host', 'filesystem', 'home', 'kernel', 'locales', 'path', 'shell', 'time', 'timezone', 'umask', and 'user_group' (see variations for more information); --dont_vary, which makes reprotest not test any variations in the given list (the default is to run all variations); --source_root, which accepts a directory to run the build command in and defaults to the current working directory; and --verbose, which will eventually enable more detailed logging. To get help for the CLI, run reprotest -h or reprotest --help.

The config file has one section, basics, and the same options as the CLI, except there's no dont_vary option, and there are build_command and artifact options. If build_command and/or artifact are set in the config file, reprotest can be run without passing those as command-line arguments. Command-line arguments always override config file options. Reprotest currently searches the working directory for the config file, but it will also eventually search the user's home directory. A sample config file is below.

[basics]
build_command = setup.py sdist
artifact = dist/reprotest-0.1.tar.gz
source_root = reprotest/
variations = captures_environment domain_host filesystem home host kernel locales path shell time timezone umask user_group
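The precedence rule ("command-line arguments always override config file options") could be implemented along these lines. This is an illustrative sketch, not reprotest's actual code, and the config file name used here is made up:

import argparse
import configparser

parser = argparse.ArgumentParser()
parser.add_argument('build_command', nargs='?')
parser.add_argument('artifact', nargs='?')
parser.add_argument('--source_root')
parser.add_argument('--variations')
cli = vars(parser.parse_args())

config = configparser.ConfigParser()
config.read('.reprotestrc')  # hypothetical file name
file_options = dict(config['basics']) if 'basics' in config else {}

# Start from the config file, then let any explicitly given CLI value win.
options = dict(file_options)
options.update({k: v for k, v in cli.items() if v is not None})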

At the moment, the only build variations that reprotest actually tests are the environment variable variations: captures_environment, home, locales, and timezone. Over the next week, I plan to add the rest of the basic variations and accompanying tests. I also need to write tests for the CLI and the configuration file. After that, I intend to work on getting (s)chroot communication working, which will involve integrating autopkgtest code.

Some of the variations require specific other packages to be installed: for instance, the locales variation currently requires the fr_CH.UTF-8 locale. Locales are a particular problem because I don't know of a way in Debian to specify that a given locale must be installed. For other packages, it's unclear to me whether I should specify them as depends or recommends: they aren't dependencies in a strict sense, but marking them as dependencies will make it easier to install a fully-functional reprotest. When reprotest runs with variations enabled that it can't test because it doesn't have the correct packages installed, I intend to have it print a warning but continue to run.

tests.reproducible-builds.org also has different settings, such as different locales, for different architectures. I'm not clear on why this is. I'd prefer to avoid having to generate a giant list of variations based on architecture, but if necessary, I can do that. The prebuilder script contains variations specific to Linux, to Debian, and to pbuilder/cowbuilder. I'm not including Debian-specific variations until I get much more of the basic functionality implemented, and I'm not sure I'm going to include pbuilder-specific variations ever, because it's probably better for extensibility to other OSes, e.g. BSD, to add support for plugins or more complicated configurations.

I implemented the variations by creating a function for each variation. Each function takes as input two build commands, two source trees, and two sets of environment variables, and returns the same. At the moment, I'm using dictionaries for the environment variables, mutating them in place and passing the references forward; I'm probably going to replace those at some point with an immutable mapping. While reprotest currently only builds on the existing system, when I start extending it to other build environments this will require double dispatch, because the code that needs to be executed will depend on both the variation to be tested and the environment being built on. I'm probably going to implement this with a dictionary with tuple keys of (build_environment, variation), or with nested dictionaries. If it's necessary for code to depend on OS or architecture too, this could end up becoming a triple or quadruple dispatch.
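A rough sketch of that design, with made-up variation functions and a dispatch table keyed on (build environment, variation). This is illustrative only, not reprotest's actual implementation, and the environment values are examples:

# Each variation function takes the control/experiment build commands,
# source trees and environment dictionaries, and returns the same shape.
def vary_timezone(commands, trees, envs):
    control_env, experiment_env = envs
    control_env['TZ'] = 'GMT+12'      # example values only
    experiment_env['TZ'] = 'GMT-14'
    return commands, trees, (control_env, experiment_env)

def vary_home(commands, trees, envs):
    control_env, experiment_env = envs
    control_env['HOME'] = '/nonexistent/first-build'
    experiment_env['HOME'] = '/nonexistent/second-build'
    return commands, trees, (control_env, experiment_env)

# Double dispatch on (build environment, variation); only a "local"
# environment exists here, but a chroot entry could be added later.
VARIATIONS = {
    ('local', 'timezone'): vary_timezone,
    ('local', 'home'): vary_home,
}

def apply_variations(environment, wanted, commands, trees, envs):
    for name in wanted:
        func = VARIATIONS.get((environment, name))
        if func is not None:  # skip combinations that aren't implemented
            commands, trees, envs = func(commands, trees, envs)
    return commands, trees, envs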

Categories: Elsewhere

John Goerzen: How git-annex replaces Dropbox + encfs with untrusted providers

Planet Debian - Mon, 06/06/2016 - 16:38

git-annex has been around for a long time, but I just recently stumbled across some of the work Joey has been doing to it. This post isn't about its traditional roots in git or all the features it has for partial copies of large data sets, but rather about its live syncing capabilities, like Dropbox. It takes a bit to wrap your head around, because git-annex is just a little different from everything else. It's sort of like a different-colored smell.

The git-annex wiki has a lot of great information — both low-level reference and a high-level 10-minute screencast showing how easy it is to set up. I found I had to sort of piece together the architecture between those levels, so I’m writing this all down hoping it will benefit others that are curious.

If you just want to use it, you don't need to know all this. But I like to understand how my tools work.

Overview

git-annex lets you set up a live syncing solution that requires no central provider at all, or can be used with a completely untrusted central provider. Depending on your usage pattern, this central provider could require only a few MBs of space even for repositories containing gigabytes or terabytes of data that is kept in sync.

Let’s take a look at the high-level architecture of the tool. Then I’ll illustrate how it works with some scenarios.

Three Layers

Fundamentally, git-annex takes layers that are all combined in Dropbox and separates them out. There is the storage layer, which stores the literal data bytes that you are interested in. git-annex indexes the data in storage by a hash. There is metadata, which is for things like a filename-to-hash mapping and revision history. And then there is an optional layer, which is live signaling used to drive the real-time syncing.

git-annex has several modes of operation, and the one that enables live syncing is called the git-annex assistant. It runs as a daemon, and is available for Linux/POSIX platforms, Windows, Mac, and Android. I’ll be covering it here.

The storage layer

The storage layer simply is blobs of data. These blobs are indexed by a hash, and can be optionally encrypted at rest at remote backends. git-annex has a large number of storage backends; some examples include rsync, a remote machine with git-annex on it that has ssh installed, WebDAV, S3, Amazon Glacier, removable USB drive, etc. There’s a huge list.

One of the git-annex features is that each client knows the state of each storage repository, as well as the capability set of each storage repository. So let’s say you have a workstation at home and a laptop you take with you to work or the coffee shop. You’d like changes on one to be instantly recognized on another. With something like Dropbox or OwnCloud, every file in the set you want synchronized has to reside on a server in the cloud. With git-annex, it can be configured such that the server in the cloud only contains a copy of a file until every client has synced it up, at which point it gets removed. Think about it – that is often what you want anyhow, so why maintain an unnecessary copy after it’s synced everywhere? (This behavior is, of course, configurable.) git-annex can also avoid storing in the cloud entirely if the machines are able to reach each other directly at least some of the time.

The metadata layer

Metadata about your files includes a mapping from the file names to the storage location (based on hashes), change history, and information about the status of each machine that participates in the syncing. On your clients, git-annex stores this using git. This detail is very useful to some, and irrelevant to others.

Some of the git-annex storage backends can support only storage (S3, for instance). Some can support both storage and metadata (rsync, ssh, local drives, etc.) You can even configure a backend to support only metadata (more on why that may be useful in a bit). When you are working with a git-backed repository for git-annex, it can hold data, metadata, or both.

So, to have a working sync system, you must have a way to transport both the data and the metadata. The transport for the metadata is generally rsync or git, but it can also be XMPP in which Git changesets are basically wrapped up in XMPP presence messages. Joey says, however, that there are some known issues with XMPP servers sometimes dropping or reordering some XMPP messages, so he doesn’t encourage that method currently.

The live signaling layer

So once you have your data and metadata, you can already do syncs via git annex sync --contents. But the real killer feature here will be automatic detection of changes, both on the local and the remote. To do that, you need some way of live signaling. git-annex supports two methods.

The first requires ssh access to a remote machine where git-annex is installed. In this mode of operation, when the git-annex assistant fires up, it opens up a persistent ssh connection to the remote and runs the git-annex-shell over there, which notifies it of changes to the git metadata repository. When a change is detected, a sync is initiated. This is considered ideal.

A substitute can be XMPP, and git-annex actually converts git commits into a form that can be sent over XMPP. As I mentioned above, there are some known reliability issues with this and it is not the recommended option.

Encryption

When it comes to encryption, you generally are concerned about all three layers. In an ideal scenario, the encryption and decryption happens entirely on the client side, so no service provider ever has any details about your data.

The live signaling layer is encrypted pretty trivially; the ssh sessions are, of course, encrypted and TLS support in XMPP is pervasive these days. However, this is not end-to-end encryption; those messages are decrypted by the service provider, so a service provider could theoretically spy on metadata, which may include change times and filenames, though not the contents of files themselves.

The data layer also can be encrypted very trivially. In the case of the “dumb” backends like S3, git-annex can use symmetric encryption or a gpg keypair and all that ever shows up on the server are arbitrarily-named buckets.

You can also use a gcrypt-based git repository. This can cover both data and metadata — and, if the target also has git-annex installed, the live signalling layer. Using a gcrypt-based git repository for the metadata and live signalling is the only way to accomplish live syncing with 100% client-side encryption.

All of these methods are implemented in terms of gpg, and can support symmetric or public-key encryption.

It should be noted here that the current release versions of git-annex need a one-character patch in order to fix live syncing with a remote using gcrypt. For those of you running jessie, I recommend the version in jessie-backports, which is presently 5.20151208. For your convenience, I have compiled an amd64 binary that can drop in over /usr/bin/git-annex if you have this version. You can download it and a gpg signature for it. Note that you only need this binary on the clients; the server can use the version from jessie-backports without issue.

Putting the pieces together: some scenarios

Now that I’ve explained the layers, let’s look at how they fit together.

Scenario 1: Central server

In this scenario, you might have a workstation and a laptop that sync up with each other by way of a central server that also has a full copy of the data. This is the scenario that most closely resembles Dropbox, box, or OwnCloud.

Here you would basically follow the steps in the git-assistant screencast: install git-annex on a server somewhere, and point your clients to it. If you want full end-to-end encryption, I would recommend letting git-annex generate a gpg keypair for you, which you would then need to copy to both your laptop and workstation (but not the server).

Every change you make locally will be synced to the server, and then from the server to your other PC. All three systems would be configured in the “client” transfer group.

Scenario 1a: Central server without a full copy of the data

In this scenario, everything is configured the same except the central server is configured with the “transfer” transfer group. This means that the actual data synced to it is deleted after it has been propagated to all clients. Since git-annex can verify which repository has received a copy of which data, it can easily enough delete the actual file content from the central server after it has been copied to all the clients. Many people use something like Dropbox or OwnCloud as a multi-PC syncing solution anyhow, so once the files have been synced everywhere, it makes sense to remove them from the central server.

This is often ideal for people. There are some obvious downsides that are sometimes relevant. For instance, to add a third sync client, it must be able to initially copy down from one of the existing clients. Or, if you intend to access the data from a device such as a cell phone, where you don't intend for it to have a copy of all data all the time, you won't have as convenient a way to download your data.

Scenario 1b: Split data/metadata central servers

Imagine that you have a shell or rsync account on some remote system where you can run git-annex, but don’t have much storage space. Maybe you have a cheap VPS or shell account somewhere, but it’s just not big enough to hold your data.

The answer to this would be to use this shell or rsync account for the metadata, but put the data elsewhere. You could, for instance, store the data in Amazon S3 or Amazon Glacier. These backends aren’t capable of storing the git-annex metadata, so all you need is a shell or rsync account somewhere to sync up the metadata. (Or, as below, you might even combine a fully distributed approach with this.) Then you can have your encrypted data pushed up to S3 or some such service, which presumably will grow to whatever size you need.

Scenario 2: Fully distributed

Like git itself, git-annex does not actually need a central server at all. If your different clients can reach each other directly at least some of the time, that is good enough. Of course, a given client will not be able to do fully automatic live sync unless it can reach at least one other client, so changes may not propagate as quickly.

You can simply set this up by making ssh connections available between your clients. git-annex assistant can automatically generate appropriate ~/.ssh/authorized_keys entries for you.

Scenario 2a: Fully distributed with multiple disconnected branches

You can even have a graph of connections available. For instance, you might have a couple machines at home and a couple machines at work with no ability to have a direct connection between them (due to, say, firewalls). The two machines at home could sync with each other in real-time, as could the two machines at work. git-annex also supports things like USB drives as a transport mechanism, so you could throw a USB drive in your pocket each morning, pop it into one client at work, and poof – both clients are synced up over there. Repeat when you get home in the evening, and you're synced there. The USB drive's repository can, of course, be of the "transfer" type so data is automatically deleted from it once it's been synced everywhere.

Scenario 3: Hybrid

git-annex can support LAN sync even if you have a central server. If your laptop, say, travels around but is sometimes on the same LAN as your PC, git-annex can easily sync directly between the two when they are reachable, saving a round-trip to the server. You can assign a cost to each remote, and git-annex will always try to sync first to the lowest-cost path that is available.

Drawbacks of git-annex

There are some scenarios where git-annex with the assistant won’t be as useful as one of the more traditional instant-sync systems.

The first and most obvious one is if you want to access the files without the git-annex client. For instance, many of the other tools let you generate a URL that you can email to people, and then they can download files without any special client software. This is not directly possible with git-annex. You could, of course, make something like a public_html directory be managed with git-annex, but it wouldn’t provide things like obfuscated URLs, password-protected sharing, time-limited sharing, etc. that you get with other systems. While you can share your repositories with others that have git-annex, you can’t share individual subdirectories; for a given repository, it is all or nothing.

The Android client for git-annex is a pretty interesting thing: it is mostly a small POSIX environment, providing a terminal, git, gpg, and the same web interface that you get on a standalone machine. This means that the git-annex Android client is fully functional compared to a desktop one. It also has a quick setup process for syncing off your photos/videos. On the other hand, the integration with the Android ecosystem is poor compared to most other tools.

Other git-annex features

git-annex has a lot to offer besides the git-annex assistant. Besides the things I’ve already mentioned, any given git-annex repository — including your client repository — can have a partial copy of the full content. Say, for instance, that you set up a git-annex repository for your music collection, which is quite large. You want some music on your netbook, but don’t have room for it all. You can tell git-annex to get or drop files from the netbook’s repository without deleting them remotely. git-annex has quite a few ways to automate and configure this, including making sure that at least a certain number of copies of a file exist in your git-annex ecosystem.

Conclusion

I initially started looking at git-annex due to the security issues with encfs, and the difficulty with setting up ecryptfs in this way. (I had been layering encfs atop OwnCloud). git-annex certainly ticks the box for me security-wise, and obviously anything encrypted with encfs wasn’t going to be shared with others anyhow. I’ll be using git-annex more in the future, I’m sure.

Categories: Elsewhere

Petter Reinholdtsen: The new "best" multimedia player in Debian?

Planet Debian - Mon, 06/06/2016 - 12:50

When I set out a few weeks ago to figure out which multimedia player in Debian claimed to support most file formats / MIME types, I was a bit surprised by how much the sets of MIME types the various players claimed to support varied. The range was from 55 to 130 MIME types. I suspect most media formats are supported by all players, but this is not really reflected in the MimeTypes values in their desktop files. There are probably also some bogus MIME types listed, but it is hard to identify which ones they are.

Anyway, in the meantime I got in touch with upstream for some of the players, suggesting that they add more MIME types to their desktop files, and decided to spend some time myself improving the situation for my favorite media player, VLC. The fixes for VLC entered Debian unstable yesterday. The complete list of MIME types can be seen on the Multimedia player MIME type support status Debian wiki page.

The new "best" multimedia player in Debian? It is VLC, followed by totem, parole, kplayer, gnome-mpv, mpv, smplayer, mplayer-gui and kmplayer. I am sure some of the other players desktop files support several of the formats currently listed as working only with vlc, toten and parole.

A sad observation is that only 14 MIME types are listed as supported by all the tested multimedia players in Debian in their desktop files: audio/mpeg, audio/vnd.rn-realaudio, audio/x-mpegurl, audio/x-ms-wma, audio/x-scpls, audio/x-wav, video/mp4, video/mpeg, video/quicktime, video/vnd.rn-realvideo, video/x-matroska, video/x-ms-asf, video/x-ms-wmv and video/x-msvideo. Personally I find it sad that video/ogg and video/webm are not supported by all the media players in Debian. As far as I can tell, all of them can handle both formats.

Categories: Elsewhere

Alessio Treglia: Why can children use their imagination better than we do?

Planet Debian - Mon, 06/06/2016 - 11:19

 

Children can use their imagination better than we do because they are (still) immediately in contact with the Whole and represent the most pristine prototype of the human being. From birth and for the first years of life, the child is the mirror of our species, carrying in himself the primary elements and the roots of evolution, without conditions or interference.

Then, when education begins, especially school, his imagination is restrained and limited; everything is done to concentrate his interests only on what is ‘real’ and to make him leave the world of fantasy. In the first drawing exercises to which children are subjected at school, their imagination and the way they perceive some elements of nature are discarded; the drawing that best fits a photographic vision of reality is rewarded, inhibiting their own imaginative potential from the very beginning, in favour of a more reassuring homologation…

<Read More…[by Fabio Marzocca]>

Categories: Elsewhere

Dries Buytaert: Advancing Drupal's web services

Planet Drupal - Mon, 06/06/2016 - 09:24

In an earlier blog post, I looked at the web services solutions available in Drupal 8 and compared their strengths and weaknesses. That blog post was intended to help developers choose between different solutions when building Drupal 8 sites. In this blog post, I want to talk about how to advance Drupal's web services beyond Drupal 8.1 for the benefit of Drupal core contributors, module creators and technical decision-makers.

I believe it is really important to continue advancing Drupal's web services support. There are powerful market trends that oblige us to keep focused on this: integration with diverse systems having their own APIs, the proliferation of new devices, the expanding Internet of Things (IoT), and the widening adoption of JavaScript frameworks. All of these depend to some degree on robust web services.

Moreover, newer headless content-as-a-service solutions (e.g. Contentful, Prismic.io, Backand and CloudCMS) have entered the market and represent a widening interest in content repositories enabling more flexible content delivery. They provide content modeling tools, easy-to-use tools to construct REST APIs, and SDKs for different programming languages and client-side frameworks.

In my view, we need to do the following, which I summarize in each of the following sections: (1) facilitate a single robust REST module in core; (2) add functionality to help web services modules more easily query and manipulate Drupal's entity graph; (3) incorporate GraphQL and JSON API out of the box; and (4) add SDKs enabling easy integration with Drupal. Though I shared some of this in my DrupalCon New Orleans keynote, I wanted to provide more details in this blog post. I'm hoping to discuss this and revise it based on feedback from you.

One great REST module in core

While core REST can be enabled with only a few configuration changes, the full extent of possibilities in Drupal is only unlocked either when leveraging modules which add to or work alongside core REST's functionality, such as Services or RELAXed, or when augmenting core REST's capabilities with additional resources to interact with (by providing corresponding plugins) or using other custom code.
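For orientation, fetching a node through core REST from a client looks roughly like the sketch below. The site URL and node ID are placeholders, and it assumes the REST module is configured to expose nodes as JSON to the requesting user:

import requests

# Placeholder site and node ID; assumes core REST exposes nodes as JSON.
response = requests.get('https://example.com/node/2',
                        params={'_format': 'json'})
response.raise_for_status()
node = response.json()

# Core REST wraps every field in a list of value objects (see the response
# comparison later in this post), so even the title needs unwrapping.
print(node['title'][0]['value'])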

Having such disparate REST modules complicates the experience. These REST modules have overlapping or conflicting feature sets, which are shown in the following table.

| Feature | Core REST | RELAXed | Services | Ideal core REST |
|---|---|---|---|---|
| Content entity CRUD | Yes | Yes | Yes | Yes |
| Configuration entity CRUD | Create resource plugin (issue) | Create resource plugin | Yes | Yes |
| Custom resources | Create resource plugin | Create resource plugin | Create Services plugin | Possible without code |
| Custom routes | Create resource plugin or Views REST export (GET) | Create resource plugin | Configurable route prefixes | Possible without code |
| Translations | Not yet (issue) | Yes | Create Services plugin | Yes |
| Revisions | Create resource plugin | Yes | Create Services plugin | Yes |
| File attachments | Create resource plugin | Yes | Create Services plugin | Yes |
| Authenticated user resources (log in/out, password reset) | Not yet (issue) | No | User login and logout | Yes |

I would like to see a convergence where all of these can be achieved in Drupal core with minimal configuration and minimal code.

Working with Drupal's entity graph

Recently, a discussion at DrupalCon New Orleans with key contributors to the core REST modules, maintainers of important contributed web services modules, and external observers led to a proposed path forward for all of Drupal's web services.

A visual example of an entity graph in Drupal.

Buried inside Drupal is an "entity graph" over which different API approaches like traditional REST, JSON API, and GraphQL can be layered. These varied approaches all traverse and manipulate Drupal's entity graph, with differences solely in the syntax and features made possible by that syntax. Unlike core's REST API which only returns a single level (single entity or lists of entities), GraphQL and JSON API can return multiple levels of nested entities as the result of a single query. To better understand what this means, have a look at the GraphQL demo video I shared in my DrupalCon Barcelona keynote.

What we concluded at DrupalCon New Orleans is that Drupal's GraphQL and JSON API implementations require a substantial amount of custom code to traverse and manipulate Drupal's entity graph, that there was a lot of duplication in that code, and that there is an opportunity to provide more flexibility and simplicity. Therefore, it was agreed that we should first focus on building an "entity graph iterator" that can be reused by JSON API, GraphQL, and other modules.

This entity graph iterator would also enable manipulation of the graph, e.g. for aliasing fields in the graph or simplifying the structure. For example, the difference between Drupal's "base fields" and "configured fields" is irrelevant to an application developer using Drupal's web services API, but Drupal's responses leak this internal distinction by prefixing configured fields with field_ (see the left column in the table below). By the same token, all fields, even if they carry single values, expose the verbosity of Drupal's typed data system by being presented as arrays (see the left column in the table below). While there are both advantages and disadvantages to exposing single-value fields as arrays, many developers prefer more control over the output or the ability to opt into simpler outputs.

A good Drupal entity graph iterator would simplify the development of Drupal web service APIs, provide more flexibility over naming and structure, and eliminate duplicate code.
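To make that concrete, the kind of simplification such an iterator could enable might look roughly like the sketch below. This is illustrative only, not a proposed API; it mirrors the comparison that follows:

# Flatten core REST's verbose field structure: strip the field_ prefix and
# unwrap single-value fields whose only content is a {"value": ...} wrapper.
def simplify(entity):
    simplified = {}
    for name, value in entity.items():
        name = name[len('field_'):] if name.startswith('field_') else name
        if isinstance(value, list) and len(value) == 1:
            item = value[0]
            if isinstance(item, dict) and list(item) == ['value']:
                item = item['value']
            value = item
        simplified[name] = value
    return simplified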

Current core REST (shortened response):

{
  "nid": [ { "value": "2" } ],
  "title": [ { "value": "Lorem ipsum" } ],
  "field_product_number": [ { "value": "35" } ],
  "field_image": [ {
    "target_id": "2",
    "alt": "Image",
    "title": "Hover text",
    "width": "210",
    "height": "281",
    "url": "http://site.com/x.jpg"
  } ]
}

Ideal core REST (shortened response):

{
  "nid": "2",
  "title": "Lorem ipsum",
  "product_number": { "value": 35 },
  "image": {
    "target_id": 2,
    "alt": "Image",
    "title": "Hover text",
    "width": 210,
    "height": 281,
    "url": "http://site.com/x.jpg"
  }
}

GraphQL and JSON API in core

We should acknowledge simultaneously that the wider JavaScript community is beginning to embrace different approaches, like JSON API and GraphQL, which both enable complex relational queries that require fewer requests between Drupal and the client (thanks to the ability to follow relationships, as mentioned in the section concerning the entity graph).

While both JSON API and GraphQL are preferred over traditional REST due to their ability to provide nested entity relationships, GraphQL goes a step further than JSON API by facilitating explicitly client-driven queries, in which the client dictates its data requirements.

I believe that GraphQL and JSON API in core would be a big win for those building decoupled applications with Drupal, and these modules can use existing foundations in Drupal 8 such as the Serialization module. Furthermore, Drupal's own built-in JavaScript-driven UIs could benefit tremendously from GraphQL and JSON API. I'd love to see them in core rather than as contributed modules, as we could leverage them when building decoupled applications backed by Drupal or exchanging data with other server-side implementations. We could also "eat our own dog food" by using them to power JavaScript-driven UIs for block placement, media management, and other administrative interfaces. I can even see a future where Views and GraphQL are closely integrated.
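As an illustration of what "client-driven" means, a query in the GraphQL style might look like the sketch below. The /graphql endpoint and the field names are placeholders for illustration, not the actual schema exposed by Drupal's GraphQL module:

import requests

# Hypothetical query: the client names exactly the fields it wants, and the
# nested relationships are resolved in a single round trip.
query = '''
{
  article(id: 2) {
    title
    image { url alt }
    author { name }
  }
}
'''
response = requests.post('https://example.com/graphql', json={'query': query})
print(response.json())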

A comparison of different API approaches for Drupal 8, with amended and simplified payloads for illustrative purposes.

SDKs to consume web services

While a unified REST API and support for GraphQL and JSON API would dramatically improve Drupal as a web services back end, we need to be attentive to the needs of consumers of those web services as well by providing SDKs and helper libraries for developers new to Drupal.

An SDK could make it easy to retrieve an article node, modify a field, and send it back without having to learn the details of Drupal's particular REST API implementation or the structure of Drupal's underlying data storage. For example, this would allow front-end developers to not have to deal with the details of single- versus multi-value fields, optional vs required fields, validation errors, and so on. As an additional example, incorporating user account creation and password change requests into decoupled applications would empower front-end developers building these forms on a decoupled front end such that they would not need to know anything about how Drupal performs user authentication.

As starting points for JavaScript applications, native mobile applications, and even other back-end applications, these SDKs could handle authenticating against the API and juggling the correct routes to resources without the front-end developer needing an understanding of those nuances.

In fact, at Acquia we're now in the early stages of building the first of several SDKs for consuming and manipulating data via Drupal 8's REST API. Hydrant, a new generic helper library intended for JavaScript developers building applications backed by Drupal, is the work of Acquia's Matt Grill and Preston So, and it is already seeing community contributions. We're eager to share our work more widely and welcome new contributors.

Conclusion

I believe that it is important to have first-class web services in Drupal out of the box in order to enable top-notch APIs and continue our evolution to become API-first.

In parallel with our ongoing work on shoring up our REST module in core, we should provide the underpinnings for even richer web services solutions in the future. With reusable helper functionality that operates on Drupal's entity graph available in core, we open the door to GraphQL, JSON API, and even our current core REST implementation eventually relying on the same robust foundation. Both GraphQL and JSON API could also be promising modules in core. Last but not least, SDKs like Hydrant that empower developers to work with Drupal without learning its complexities will further advance our web services.

Collectively, these tracks of work will make Drupal uniquely compelling for application developers within our own community and well beyond.

Special thanks to Preston So for contributions to this blog post and to Moshe Weitzman, Kyle Browning, Kris Vanderwater, Wim Leers, Sebastian Siemssen, Tim Millwood, Ted Bowman, and Mateu Aguiló Bosch for their feedback during its writing.

Categories: Elsewhere

KnackForge: Review of Drupal professional themes

Planet Drupal - Mon, 06/06/2016 - 07:34

DropThemes.in is one of the best Drupal professional theme-selling sites. With its range of free themes and the quality of its paid themes, DropThemes.in has earned its place in the list of professional theme-selling sites. All our themes are responsive and catered specifically to Drupal sites. A good theme gives your website a distinct feel that sends out a positive impression to your visitors. In this post, we would like to review the top themes on DropThemes.in to help you select the best.

Vamsi Mon, 06/06/2016 - 11:04
Categories: Elsewhere

Norbert Preining: TeX Live 2016 released

Planet Debian - Mon, 06/06/2016 - 06:57

After long months of testing and waiting for the DVD production, we have released TeX Live 2016 today!

Detailed changes can be found here, the most important ones are:

  • LuaTeX is updated to 0.95 with a sweeping change of primitives. Most documents and classes need to be adapted!
  • Metafont got lua-hooks, mflua and mfluajit
  • SOURCE_DATE_EPOCH support for all engines except LuaTeX and original TeX
  • pdfTeX, XeTeX: some new primitives
  • new programs: gregorio, upmendex
  • tlmgr: system level configuration support
  • installer, tlmgr: cryptographic verification (if gpg is available)

CTAN mirrors are working on getting the latest releases, but in a day or two all the servers should be updated.

Thanks to all the developers, package writers, documentation writers, testers, and especially Karl Berry for his perfect organization.

Now get the champagne and write some nice documents!

Categories: Elsewhere

Jeff Geerling's Blog: Speeding up Composer-based Drupal installation

Planet Drupal - Mon, 06/06/2016 - 05:07

Drupal VM is one of the most flexible and powerful local development environments for Drupal, but one of the main goals of the project is to build a fully-functional Drupal 8 site quickly and easily without doing much setup work. The ideal would be to install Vagrant, clone or download the project, then run vagrant up. A few minutes later, you'd have a Drupal 8 site ready for hacking on!

In the past, you always had to do a couple of extra steps in between, configuring a drupal.make.yml file and a config.yml file. Recently, thanks in huge part to Oskar Schöldström's herculean efforts, we achieved that ideal by switching from a Drush make-based workflow to a Composer-based workflow by default (this will come in the 3.1.0 release, very soon!). But it wasn't without trial and tribulation!

Categories: Elsewhere

Ingo Juergensmann: Request for Adoption: Buildd.Net project

Planet Debian - Sun, 05/06/2016 - 18:54

I've been running Buildd.Net for quite a long time. Buildd.Net is a project that focusses on the autobuilders, not the packages. It started back when the m68k port had a small website running on kullervo, a m68k buildd. Kullervo was too loaded to deal with the increased load of that website, so together with Stephen Marenka we moved the page from kullervo to my server under the domain m68k.bluespice.org. Over time I got many requests asking whether that page could do the same for other archs as well, so I started to hack the code to be able to deal with different archs: Buildd.Net was born.

Since then many years passed by and Buildd.Net evolved into a rather complex project, being capable to deal with different archs and different releases, such as unstable, backports, non-free, etc. Sadly the wanna-build output changed over the years as well, but I didn't have the time anymore to keep up with the changes.

Buildd.Net is based on: 

  • some Bash scripts
  • some Python scripts
  • a PostgreSQL database
  • gnuplot for some graphs
  • some small Perl scripts
  • ... and maybe more...

As long as I was more deeply involved with the m68k autobuilders and others, I found Buildd.Net quite informative, as I could get a quick overview of how all of the buildds were performing. Based on the PostgreSQL database we could easily spot if a package was stuck on one of the buildds without directly watching the buildd logs.

Storing the information from the buildds about the built packages in a SQL database can give you some benefit. Originally my plan was to use that kind of information for a better autobuilder setup. In the past it happened that large packages were built by buildds with, let's say, 64 MB of RAM and smaller packages were built on the buildds with 128 MB of RAM. Eventually this led to failed builds or excessive build times. Or m68k buildds like Apple Centris boxes or so suffered from slow disk I/O, while some Amiga buildds had reasonable disk speeds (consider 160 kB/s vs. 2 MB/s). 

As you can see there is/was a lot of room for optimization of how packages can be distributed between buildds. This could have been done by analyzing the statistics and some scripting, but was never implemented because of missing skills and time on my side.

The lack of time to keep up with the changes of the official wanna-build output (like new package states) is the main reason why I want to put Buildd.Net into good hands. If you are interested in this project, please contact me! I still believe that Buildd.Net can be beneficial to the Debian project. :-)

Kategorie: Debian | Tags: Debian, Buildd.net, Software
Categories: Elsewhere

Iustin Pop: Short trip to Opio en Provence

Planet Debian - Sun, 05/06/2016 - 18:25

I had a short work-related trip this week to Opio en Provence. It was not a working trip, but rather a team event, which means almost a vacation!

Getting there and back

I dislike taking the plane for very short flights (and Zürich-Nice is indeed only around one hour), as that means you spend 3× as much time going to the airport, at the airport, waiting to take off, waiting to get off the plane, and then going from the airport to the actual destination. So I took the opportunity to drive there, since I've never driven that way, and on the map the route seemed reasonably interesting. Not that it's a shorter trip by any measure, but it seemed more interesting.

Leaving Zürich I went over San Bernardino pass, as I never did that before. On the north side, the pass is actually much less suited to traffic than the Gotthard pass (also on the north side), as you basically climb around 300m in a very short distance, with very sharp hairpins. There was still snow on the top, and the small lake had lots of slush/ice floating on it. As to the south side, it looked much more driveable, but I'm not sure as I made the mistake of re-joining the highway, so instead of driving reasonably nice on the empty pass road, I spent half an hour in a slow moving line. Lesson learned…

Entering Italy was the usual Como-Milan route, but as opposed to my other trips, this time it was around Milan on the west (A50) and then south on the A7 until it meets the A26 and then down to the coast. From here, along the E80 (Italian A10, French A8) until somewhere near Nice, and then exiting the highway system to get on the small local roads towards Opio.

What I read in advance on the internet was that the coastal highway is very nice, and has better views of the sea than the actual seaside drive (which goes through towns and is much slower). I should know better than trust the internet ☺, and I should read maps instead, which would have shown me the fact that the Alps are reaching to the sea in this region, so… The road was OK, but it definitely didn't feel like a highway: maximum allowed speed was usually either 90km/h or 110km/h, and half the time you're in a short tunnel, so it's sun, tunnel/dark, sun, dark, and your eyes get quite tired from this continuous switching. The few glimpses of the sea were nice, but the road required enough concentration (both due to traffic and the amount of curves) that one couldn't look left or right.

So that was a semi-failure; I expected a nice drive, but instead it was a challenge drive ☺ If I had even more time to spend, going back via the Rhone valley (Grenoble, Geneva, Zürich) would have been a more interesting alternative.

France

Going to France always feels strange for me. I learned (some) French way before German, so the French language feels much more familiar to me, even without ever actually having used it on a day-to-day basis; so going to France feels like getting back to somewhere where I never lived. Somewhat similar with Italian due to the language closeness between Romanian and Italian, but not the same feeling as I didn't actually hear or learn Italian in childhood.

So I go to France, and I start to partially understand what I hear, and I can somewhat talk/communicate. Very weird, while I still struggle with German in my daily life in Zürich. For example, I would hesitate before asking for directions in German, but not so in French, unrelated to my actual understanding of either language. The brain is funny…

The hotel

We stayed at Club Med Opio-en-Provence, which was interesting. Much bigger than I thought from quick looks on the internet (this internet seems quite unreliable), but also better than I expected from a family-oriented, all-inclusive hotel.

The biggest problem was the food - French Pâtisserie is one of my weaknesses, and I failed to resist. I mean, it was much better than I expected, and I indulged a bit too much. I'll have to pay that back on the bike or running.

The other interesting part of the hotel was the wide range of activities. Again, this being a family hotel, I thought the organised activities would be pretty mild; but at least for our group, they weren't. The mountain bike ride included an easy single-trail section, but while easy it was single-trail and rocky, so complete beginners might have had a small surprise. Overall it was about 50 minutes, 13.5km, with 230m altitude gain, which again for sedentary people might be unusual. I probably burned off during the ride one of the desserts I ate later that day. The "hike" they organised for another sub-group was also interesting, involving going through old tunnels and something with broken water pipes that caused people to either get their feet wet or monkey-spider along the walls. Fun!

After the bike ride, on the same afternoon, while walking around the hotel, we found the Ecole de Trapèze volant open, which looked way too exciting not to try. I tried and failed to do things right, but nevertheless it was excellent and unexpected fun. I'll have to do that again some day when I'm more fit!

Plus, the hotel itself had a very nice location and olive garden, so short runs in the morning were very pleasant. Only one cookie each, though…

Back home

… and then it was over; short, but quite good. The Provence area is nice, and I'd like to be back again someday, for a proper vacation—longer and more relaxed. And do the trapèze thing again, properly this time.

Categories: Elsewhere

Simon McVittie: Flatpak in Debian

Planet Debian - Sun, 05/06/2016 - 13:24

Quite a lot has happened in xdg-app since last time I blogged about it. Most noticeably, it isn't called xdg-app any more, having been renamed to Flatpak. It is now available in Debian experimental under that name, and the xdg-app package that was briefly there has been removed. I'm currently in the process of updating Flatpak to the latest version 0.6.4.

The privileged part has also spun off into a separate project, Bubblewrap, which recently had its first release (0.1.0). This is intended as a common component with which unprivileged users can start a container in a way that won't let them escalate privileges, like a more flexible version of linux-user-chroot.

Bubblewrap has also been made available in Debian, maintained by Laszlo Boszormenyi (also maintainer of linux-user-chroot). Yesterday I sent a patch to update Laszlo's packaging for 0.1.0. I'm hoping to become a co-maintainer to upload that myself, since I suspect Flatpak and Bubblewrap might need to track each other quite closely. For the moment, Flatpak still uses its own internal copy of Bubblewrap, but I consider that to be a bug and I'd like to be able to fix it soon.

At some point I also want to experiment with using Bubblewrap to sandbox some of the game engines that are packaged in Debian: networked games are a large attack surface, and typically consist of the sort of speed-optimized C or C++ code that is an ideal home for security vulnerabilities. I've already made some progress on jailing game engines with AppArmor, but making sensitive files completely invisible to the game engine seems even better than preventing them from being opened.

Next weekend I'm going to be heading to Toronto for the GTK Hackfest, primarily to talk to GNOME and Flatpak developers about their plans for sandboxing, portals and Flatpak. Hopefully we can make some good progress there: the more I know about the state of software security, the less happy I am with random applications all being equally privileged. Newer display technologies like Wayland and Mir represent an opportunity to plug one of the largest holes in typical application containerization, which is a major step in bringing sandboxes like Flatpak and Snap from proof-of-concept to a practical improvement in security.

Other next steps for Flatpak in Debian:

  • To get into the next stable release (Debian 9), Flatpak needs to move from experimental into unstable and testing. I've taken the first step towards that by uploading libgsystem to unstable. Before Flatpak can follow, OSTree also needs to move.
  • Now that it's in Debian, please report bugs in the usual Debian way or send patches to fix bugs: Flatpak, OSTree, libgsystem.
  • In particular, there are some OSTree bugs tagged help. I'd appreciate contributions to the OSTree packaging from people who are interested in using it to deploy dpkg-based operating systems - I'm primarily looking at it from the Flatpak perspective, so the boot/OS side of it isn't so well tested. Red Hat have rpm-ostree, and I believe Endless do something analogous to build OS images with dpkg, but I haven't had a chance to look into that in detail yet.
  • Co-maintainers for Flatpak, OSTree, libgsystem would also be very welcome.
Categories: Elsewhere

Petter Reinholdtsen: A program should be able to open its own files on Linux

Planet Debian - Sun, 05/06/2016 - 08:30

Many years ago, when koffice was fresh and with few users, I decided to test its presentation tool when making the slides for a talk I was giving for NUUG on Japhar, a free Java virtual machine. I wrote the first draft of the slides, saved the result and went to bed the day before I would give the talk. The next day I took a plane to the location where the meeting should take place, and on the plane I started up koffice again to polish the talk a bit, only to discover that kpresenter refused to load its own data file. I cursed a bit and started making the slides again from memory, to have something to present when I arrived. I tested that the saved files could be loaded, and the day seemed to be rescued. I continued to polish the slides until I suddenly discovered that the saved file could no longer be loaded into kpresenter. In the end I had to rewrite the slides three times, condensing the content until the talk became shorter and shorter. After the talk I was able to pinpoint the problem – kpresenter wrote inline images in a way it itself could not understand. Eventually that bug was fixed and kpresenter ended up being a great program to make slides. The point I'm trying to make is that we expect a program to be able to load its own data files, and it is embarrassing to its developers if it can't.

Did you ever experience a program failing to load its own data files from the desktop file browser? It is not an uncommon problem. A while back I discovered that the screencast recorder gtk-recordmydesktop would save an Ogg Theora video file the KDE file browser would refuse to open. No video player claimed to understand such a file. I tracked the cause down to file --mime-type returning the application/ogg MIME type, which no video player I had installed listed as a MIME type they would understand. I asked for file to change its behaviour and use the MIME type video/ogg instead. I also asked several video players to add video/ogg to their desktop files, to give the file browser an idea what to do about Ogg Theora files. After a while, the desktop file browsers in Debian started to handle the output from gtk-recordmydesktop properly.

But history repeats itself. A few days ago I tested the music system Rosegarden again, and I discovered that the KDE and xfce file browsers did not know what to do with the Rosegarden project files (*.rg). I've reported the rosegarden problem to the BTS and a fix is committed to git and will be included in the next upload. To increase the chance of me remembering how to fix the problem next time some program fails to load its files from the file browser, here are some notes on how to fix it.

The file browsers in Debian in general operate on MIME types. There are two sources for the MIME type of a given file: the output from file --mime-type mentioned above, and the content of the shared MIME type registry (under /usr/share/mime/). The file MIME type is mapped to programs supporting the MIME type, and this information is collected from the desktop files available in /usr/share/applications/. If there is one desktop file claiming support for the MIME type of the file, it is activated when asking to open a given file. If there are more, one can normally select which one to use by right-clicking on the file and selecting the wanted one using 'Open with' or similar. In general this works well. But it depends on each program picking a good MIME type (preferably a MIME type registered with IANA), on file and/or the shared MIME registry recognizing the file, and on the desktop file listing the MIME type in its list of supported MIME types.

The /usr/share/mime/packages/rosegarden.xml entry for the Shared MIME database look like this:

<?xml version="1.0" encoding="UTF-8"?>
<mime-info xmlns="http://www.freedesktop.org/standards/shared-mime-info">
  <mime-type type="audio/x-rosegarden">
    <sub-class-of type="application/x-gzip"/>
    <comment>Rosegarden project file</comment>
    <glob pattern="*.rg"/>
  </mime-type>
</mime-info>

This states that audio/x-rosegarden is a kind of application/x-gzip (it is a gzipped XML file). Note, it is much better to use an official MIME type registered with IANA than it is to make up one's own unofficial ones like the x-rosegarden type used by rosegarden.
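
The sub-class relationship is easy to verify by hand. A small sketch, with song.rg as a made-up file name:

% file --mime-type song.rg   # made-up file name; libmagic only sees the gzip container
% zcat song.rg | head -n 3   # the XML document inside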

The desktop file of the rosegarden program failed to list audio/x-rosegarden among its supported MIME types, leaving the file browsers with no idea what to do with *.rg files:

% grep Mime /usr/share/applications/rosegarden.desktop
MimeType=audio/x-rosegarden-composition;audio/x-rosegarden-device;audio/x-rosegarden-project;audio/x-rosegarden-template;audio/midi;
X-KDE-NativeMimeType=audio/x-rosegarden-composition
%

The fix was to add "audio/x-rosegarden;" at the end of the MimeType= line.
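
With the fix applied, the line ends up looking like this. If you make such an edit locally instead of waiting for the package upload, the desktop and MIME caches usually need a refresh as well (standard Debian paths assumed):

MimeType=audio/x-rosegarden-composition;audio/x-rosegarden-device;audio/x-rosegarden-project;audio/x-rosegarden-template;audio/midi;audio/x-rosegarden;

% sudo update-desktop-database /usr/share/applications
% sudo update-mime-database /usr/share/mime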

If you run into a file which fails to open the correct program when selected from the file browser, please check the output from file --mime-type for the file, ensure the file ending and MIME type are registered somewhere under /usr/share/mime/, and check that some desktop file under /usr/share/applications/ claims support for this MIME type. If not, please report a bug to have it fixed. :)
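
For the Rosegarden case, that whole checklist boils down to three commands (project.rg is a stand-in file name):

% file --mime-type project.rg                                     # stand-in name; source 1: libmagic
% grep -rl 'audio/x-rosegarden' /usr/share/mime/packages/         # source 2: shared MIME registry
% grep -l 'audio/x-rosegarden' /usr/share/applications/*.desktop  # which desktop files claim to handle it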

Categories: Elsewhere

Jamie McClelland: Signal and Mobile XMPP Update

Planet Debian - Sun, 05/06/2016 - 05:03

First, many thanks to Planet Debian readers for your thoughtful and constructive feedback to my Signal and Mobile Instant Messaging blogs. I learned a lot.

Particularly useful was the comment directing me to Daniel Gultsch's The State of Mobile in 2016 post.

I had previously listed the outstanding technical challenges as:

  • Implement end-to-end encryption
  • Receive messages the moment they are sent without draining the battery

I am now fairly convinced that both problems are well solved on Android via the Conversations app and a well-tuned XMPP server (I had no idea how easy it is to install your own Prosody modules -- the client state indication module is only 22 lines of Lua code!)

I think the current technical challenges could be better summarized as: adding iOS (iPhone) support. Both end-to-end encryption and receiving messages consistently seem to be hurdles there. However, it seems that Chris Ballinger and the Chat Secure team are well on their way toward solving the push issue, while facing funder skittishness on the encryption front. Nonetheless, both seem to be progressing.

With the obvious technical hurdles in progress, we have the luxury of talking about the less obvious ones - particularly the ones requiring trade-offs.

In particular: Signal replaces your SMS client. It looks and feels like an SMS client, automatically sends unencrypted messages to everyone in your address book who is not on Signal, and sends encrypted messages to those who are.

The significance of this feature is hard to overstate. It is what differentiates tools built by and for technically minded people from those designed for a mass audience.

When I convince people to use Conversations, in contrast, I have to teach them to:

  • Create an entirely new address book by entering XMPP addresses for their friends, addresses they don't already have
  • Use a new and different app for sending encrypted messages

For most people who don't (yet) have their friends' XMPP addresses, or for people who don't have any friends who use XMPP, it means that they will install it, send me a few messages and then never use it again.

The price Signal pays for this convenience is steep: Signal seems to synchronize your entire address book to their servers so they can keep a map of cell phone numbers to Signal users. It's not only creepy (I get a text message every time someone in my address book joins Signal), it also flies in the face of expectations for a privacy-minded application.

How could we take advantage of this feature, without the privacy problems?

What if...

  • Our app could send both XMPP messages and SMS messages
  • Every time you added a new XMPP contact, it added the contact to your address book with a new XMPP field
  • Any time you sent a message to a contact with the XMPP field filled in, it would go via XMPP; otherwise it would send a normal SMS message

The main downside (which Signal faces as well) is that you have to contend with the complexities of sending SMS messages on top of the work needed to write a well-functioning XMPP client. As I mentioned in my Signal blog, there is no shortage of MMS bugs filed against Signal. Nobody wants that headache.

Additionally, we would still lose one Signal feature: with Signal, when a user joins, everyone automatically starts sending them encrypted messages. With this proposed app, each user would have to add the XMPP address manually and would have no way of knowing when one of their friends gets an XMPP address.

Any other ideas?

Categories: Elsewhere

Bear Coder: 4 Steps To Getting Drupal's Codebase With PHP

Planet Drupal - Sat, 04/06/2016 - 18:00
4 Steps To Getting Drupal's Codebase With PHP BearCoder Sat, 06/04/2016 - 11:00
Categories: Elsewhere

DrupalEasy: Drupal 8 Debugging: Kareful Klicking in Kint

Planet Drupal - Sat, 04/06/2016 - 17:14

Drupal 8's new theming system is a thing of beauty. As part of the massive changes to the Drupal 8 (front- and back-end) developer experience, the Devel module for Drupal 8 comes with a new variable inspector. Say goodbye to Krumo, and say hello to Kint. Like its predecessor, when you install the Devel project on a Drupal 8 local environment, you automatically get the Kint module as well (like Krumo, there are no additional downloads). Using Kint is similar to using Krumo, where in Drupal 7, any dsm($variable_name) or dpm($variable_name) call automatically used Krumo to display variables on the page in a way that made it easy(ier?) to dive into the many Drupal PHP arrays and objects. In Drupal 8, kint($variable_name) can be used to output any variable - this works in template files as well via {{ kint(variable_name) }}.

It took me a few weeks to get comfortable with Kint, mainly due to one small interface thing; clicking on the "+" icon in a Kint output recursively opens all the arrays and objects. Depending on the variable you're Kint-ing, this could result in a lot of output to sort through (and, depending on your machine, browser, and site configuration, it could take more than a few seconds to fully render).

After a few weeks of instinctively (and incorrectly) clicking on the "+" button every time I used Kint, I've now retrained myself to use it in a much more efficient manner.

  1. I almost never click on the "+" anymore. Rather, I click anywhere on the output other than the "+" to open just that portion of it.

  2. I download and install the Search Kint module whenever I download and install the Devel module for a local environment. This provides an almost-too-convenient-to-believe search box with each Kint output that makes finding things almost trivial (a quick install sketch follows this list).
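
For a local Drupal 8 site, getting both modules in place is typically just a couple of commands. This is only a sketch, assuming a Drush 8 setup and assuming the machine names are devel, kint (the Devel submodule) and search_kint; check the project pages if the names differ:

% drush dl devel search_kint           # download the projects (names assumed, see above)
% drush en devel kint search_kint -y   # enable Devel, its Kint submodule and Search Kint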

Using Kint efficiently is one of the skills that every Drupal 8 developer should have. Combined with an interactive debugger, there's virtually nothing that can't be easily discovered when a developer can wield both of these tools.

Learn more about Drupal 8 module and theme development debugging by attending a DrupalEasy workshop!

Categories: Elsewhere
