Elsewhere

Drupal @ Penn State: A Significant Event

Planet Drupal - Wed, 21/10/2015 - 22:36

Yesterday something significant occurred: http://brandywine.psu.edu launched on the new Polaris 2 Drupal platform. Soon the Abington Campus web site will move to the same platform, and perhaps many more will follow.

Categories: Elsewhere

Chapter Three: Giving Back

Planet Drupal - Wed, 21/10/2015 - 21:30

A couple years ago, we decided to make a substantial investment in Drupal by employing a Drupal 8 Core Developer. The investment has paid off in ways we never anticipated, transforming our company for the better.



How it began

Drupal 8 development began in 2011. It plugged along for a couple of years, and as it got closer to becoming a reality, it became clear that the Drupal community would need to adapt its skill sets to accommodate the changes inherent in the new platform architecture. This was daunting. We knew the cost of getting our team ramped up on Drupal 8 was significant and that there could be a steep learning curve.

Categories: Elsewhere

Drupal.org frontpage posts for the Drupal planet: Drupal 7.41 released

Planet Drupal - Wed, 21/10/2015 - 21:25

Drupal 7.41, a maintenance release which contains fixes for security vulnerabilities, is now available for download. See the Drupal 7.41 release notes for further information.

Download Drupal 7.41

Upgrading your existing Drupal 7 sites is strongly recommended. There are no new features or non-security-related bug fixes in this release. For more information about the Drupal 7.x release series, consult the Drupal 7.0 release announcement.

Security information

We have a security announcement mailing list and a history of all security advisories, as well as an RSS feed with the most recent security advisories. We strongly advise Drupal administrators to sign up for the list.

Drupal 7 includes the built-in Update Manager module, which informs you about important updates to your modules and themes.

Bug reports

Drupal 7.x is being maintained, so given enough bug fixes (not just bug reports), more maintenance releases will be made available, according to our monthly release cycle.

Changelog

Drupal 7.41 is a security release only. For more details, see the 7.41 release notes. A complete list of all bug fixes in the stable 7.x branch can be found in the git commit log.

Security vulnerabilities

Drupal 7.41 was released in response to the discovery of critical security vulnerabilities. Details can be found in the official security advisory:

To fix the security problem, please upgrade to Drupal 7.41.

Known issues

None.

Front page news: Planet Drupal
Drupal version: Drupal 7.x
Categories: Elsewhere

Daniel Pocock: A mission statement for free real-time communications

Planet Debian - Wed, 21/10/2015 - 19:51

At FOSDEM 2013, the RTC panel in the main track used the tag line "Can we finally replace Skype, Viber, Twitter and Facebook?"

Does replacing something else have the right ring to it though? Or does such a goal create a negative impression? Even worse, does the term Skype replacement fall short of being a mission statement that gives people direction and sets expectations of what we would actually like to replace it with?

Towards a clearer definition

Let's consider what a positive statement might look like:

Making it as easy as possible to make calls to other people and to receive calls from other people for somebody who chooses only to use genuinely Free Software, open standards, a free choice of service providers and a credible standard of privacy.

If you agree with this or if you feel you can propose a more precise statement, please come and share your thoughts on the Free RTC email list.

The value of a mission statement should not be underestimated. With the right mission statement, it should be possible to envision what the future will look like if we succeed and also if we don't. With the vision of success in mind, it should be easier for developers and the wider free software community to identify the steps that must be taken to make progress.

Categories: Elsewhere

Drupal core announcements: Drupal 7 core updates for October 2015

Planet Drupal - Wed, 21/10/2015 - 19:20

Welcome to the first in an occasional series of posts about recent happenings in Drupal 7 core.

This is modeled after the Drupal 8 posts that have been going on for a while; you can find all Drupal 8 and Drupal 7 core updates on the Drupal Core Updates page going forward.

Drupal 7.40 released

Drupal 7.40 was released last week! This is a maintenance release containing bug fixes, performance improvements, and a few new features. Now is a good time to test it out and upgrade your sites. If you discover any problems introduced by this release, report them in the issue queue.

Drupal 7.40 contains no security fixes, but Drupal 7.39 (released in August) does, so make sure your sites are updated to at least that version. Note that there is a security release window coming up today (October 21) also.

New features in Drupal 7 core

Many people have the misconception that Drupal 8 will be the first version of Drupal that allows new features to be added after the initial stable release, but in fact that's been the policy for Drupal 7 since early 2012.

Several features have been added to Drupal 7 since it was released, but awareness is not as high as it could be, which is one of the reasons we haven't added more. One goal of this series of posts is to highlight features that have been added recently as well as new ones that are up for consideration. A sample is below.

For site builders and developers:
  • Want a convenient theme debugging tool? There's one built into Drupal core (since Drupal 7.33). Just add the indicated line to the settings.php file in your local testing environment (or, since Drupal 7.40, uncomment the line that's already in the default settings.php file) and debug away (a short sketch follows this list).
  • Do you like PHP traits? You can now easily use them because the Drupal 7 autoloader supports them (since Drupal 7.40). Just be aware that they are a PHP 5.4 feature, so if your module relies on them then sites running on earlier versions of PHP won't be able to use your module.
  • When specifying dependencies in your module's .info file, you can now (and are encouraged to) also specify the project that your module depends on (since Drupal 7.40) - for example, dependencies[] = views:views_ui to declare a dependency on the Views UI module within the Views project, rather than just dependencies[] = views_ui. This extra information helps resolve ambiguities and can potentially be used by other tools in the future, for example to let the drupal.org testbot or Drush automatically download dependencies that currently can't be fetched automatically.
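As a quick illustration of the theme debugging item above, here is a minimal settings.php sketch. The theme_debug variable is the one Drupal 7.33+ reads for this; enable it only in a local testing environment.

<?php
// In sites/default/settings.php of a local development copy:
// Drupal will wrap every rendered template in HTML comments showing
// which .tpl.php file produced the markup and which name suggestions apply.
$conf['theme_debug'] = TRUE;
?>

With this enabled, viewing the page source shows the template file paths and candidate template names next to each block of rendered markup.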

If you have ideas for other new Drupal 7 features you'd like to work on, get started in the issue queue (but keep in mind that they still must be subject to the backwards-compatibility policy).

Proposed Drupal 7 changes that need your feedback

Here are some possible Drupal 7 changes on the horizon that need feedback from Drupal 7 site builders or developers to make sure we don't cause problems for existing sites and existing code (and some of them need developers to work on the patches also):

  • We'd like to add an "administer fields" permission to core which would be required to use the field UI (in addition to whatever permission is required by each entity type the fields are attached to). This should benefit site builders and allow for more secure permission setups, but it could have a minor impact on existing sites and a moderate impact on contributed module automated tests; see the linked issue for details.
  • Should we turn on page caching and CSS/JavaScript aggregation in the Standard install profile in Drupal 7 (similar to what Drupal 8 did)? See the linked issue for details and to provide your feedback.
  • How about simplifying the Modules page, in particular to hide the less-useful dependency information by default? This could potentially be a big user-experience win for Drupal 7 sites. Discussion (and potentially patches) are needed on the linked issue.
  • Would your Drupal 7 sites be impacted if we added a default clickjacking defense to Drupal 7 core? This is a security improvement which would involve preventing Drupal 7 sites from being shown inside an iframe on a site with a different domain. For sites that need to be shown inside such an iframe, the protection could be turned off, but we need to make sure we get this right before adding it to core.
  • We'd like to allow the "Limit allowed HTML tags" filter to also restrict HTML attributes in Drupal 7, to make Drupal's text filtering options a bit more flexible. Help in particular is needed with backporting the Drupal 8 patch on that issue.

Please leave constructive feedback on any of these topics in the individual issues linked above.

Proposed new release schedule for Drupal 7

We're considering adopting a release schedule for Drupal 7 similar to the one Drupal 8 will be using (i.e., a six-month feature release schedule, with pseudo-semantic versioning). See the linked issue for details and to participate in the discussion.

And that's a wrap

Thanks for reading; hopefully you found this post useful!

If you'd like to see more of these posts in the future (or if you'd like to help write them) or if you have any general feedback, feel free to leave your feedback in the comments. But if you have feedback on specific issues mentioned above, please leave it on the relevant issue instead. Thank you.

Categories: Elsewhere

Dries Buytaert: Announcing Acquia Content Hub

Planet Drupal - Wed, 21/10/2015 - 19:11

The explosion of content continues. With more and more organizations managing multiple sites and digital channels, the distribution of content is increasingly difficult to manage. Content can easily become siloed in different sites or platforms. Different data models make it challenging to access, update, and replicate content changes across your sites.

Today, we're excited to announce Acquia Content Hub, a cloud-based content distribution and discovery service. Content Hub serves as a central content repository or hub that allows for bidirectional distribution of content between different platforms. Content Hub lets authors and site owners reuse content from other sites, commerce platforms, and more. To facilitate sharing between all these different systems, we normalize the content, and provide centralized tools to search and discover content within your network of sites. In addition, Content Hub can automatically keep content consistent across different sites (publish-subscribe), mitigating the risk of out-of-date information, all while respecting workflow rules on the local destination site.

I'm excited about the launch of Content Hub because I believe it will become a critical building block for creating digital experiences that are smart, personal, contextual, predictive, and continuous across digital touch-points in our lives (see Big Reverse of the Web). It's an ambitious vision that will require organizations to better leverage all of their content and data. This means that eventually all data has to get linked: from textual, audio and video data, to customer information and customer support data, to sensory and contextual customer information. To process that amount of data, we will have to build smart systems on top of it to create better digital experiences for the customer. Last year we launched Acquia Lift, and now 12 months later we're launching Content Hub -- both are important steps towards that vision.

Categories: Elsewhere

OSTraining: How to Create a Popular Articles View in Drupal

Planet Drupal - Wed, 21/10/2015 - 18:15

Many sites want to showcase their most popular content to visitors.

However, creating a "Popular Articles" list in Drupal is harder than it might seem.

In this video, Robert shows you how to use the Statistics module (part of the Drupal core) to organize your articles by popularity.

This video is taken from our Advanced Views class.

Categories: Elsewhere

Red Crackle: Reducing Drupal's page load times using Fastly CDN

Planet Drupal - Wed, 21/10/2015 - 17:54
If you have ever optimized a Drupal site, you must have heard that you should serve static assets using a Content Delivery Network (CDN). In this post, we'll go over how CDNs help reduce page load times. We'll cover one CDN, Fastly, in detail: in addition to caching static assets, it caches dynamic HTML content as well, and since it's built on top of Varnish, it integrates with Drupal seamlessly.
Categories: Elsewhere

Palantir: Building a Better Drupal

Planet Drupal - Wed, 21/10/2015 - 17:52

In his semi-annual “Driesnote” at DrupalCon Barcelona last month, Drupal founder and project lead Dries Buytaert spoke openly and frankly about some of the challenges facing the Drupal project and ecosystem. One of those challenges is the increasing amount of time it’s taken to develop each release as Drupal has grown larger and more complex.

In my session the next day, Architecting Drupal Businesses that Are Built to Last, I discussed how building complex software is in many ways like building a large building. Much as the way that we’ve approached construction has changed over time, so has the way that we approach building websites.

For most of recorded human history our largest structures were made simply by piling huge stones on top of each other. The Great Pyramid of Giza was built by thousands of laborers over a period of 10-20 years and completed around 2560 BC. For nearly four thousand years, it was the largest structure in the world at about 481 feet.

This is also pretty much how we built websites back before content management systems became widespread; there was a vision of what the end product needed to look like, and we’d just keep piling on more code and markup until it was done. It was usually a very labor-intensive process that took a long time to complete, and you didn’t have a lot of flexibility with the end product, but it got the job done.

The Great Pyramid was surpassed in height only in the 14th century by the Lincoln Cathedral in the United Kingdom's East Midlands. At 525 feet tall, it was reputedly the tallest building in the world from about the year 1300 until 1549, when its central spire collapsed and was never rebuilt.

Cathedral construction in the Middle Ages was a community-funded and supported effort, frequently taking decades or even centuries to complete. When building a cathedral, the general approach was to get something fully functional up first, usually the chancel, which is where the high altar sits and the choir sings. Once this “minimum viable product” was completed, you could then extend outward as time and money became available, all the while maintaining a working place of worship. Because these buildings were built in stages over long periods of time, they also could evolve during construction and there was a fair amount of architectural experimentation and innovation as a result.

And this is what building a website with a content management system can get you; you can get a basic site up and running fairly quickly, then build on top of it and innovate over time without having to take it down or start over from scratch. Like the cathedrals of the Middle Ages, open source content management systems like Drupal, WordPress, and Joomla! are community projects that are supported and maintained by a wide variety of agencies, paid developers, and volunteers.

The Washington Monument, which was completed in 1884, is the world’s tallest stone structure even today at 555 feet tall. It’s very large and impressive, but like the pyramids, cathedrals and other monuments that came before it, the Washington Monument is primarily not functional in nature, but ornamental. Despite their tremendous diversity of shapes and sizes, the kinds of tall buildings that were constructed up until the late 19th century were designed as places of worship and monuments to kings, emperors, and presidents, not as places that you could live or work in every day.

And this is also the limitation of many websites that are designed as completely bespoke solutions. They’re great expressions of the business needs that existed at the time and place in which they were built, but all too often they aren’t built to adapt as those needs change over time.

In 1884, the same year that the Washington Monument was completed, a Chicago architect named William Le Baron Jenney made a huge innovation with the Home Insurance Building, which was built using a load-bearing structural steel frame that supported the entire weight of the walls, instead of load-bearing walls that carried the weight of the building. This was called the Chicago Skeleton method. For the first time, it was possible to have a tall building with usable space all the way up to the top floor.

Steel replaced stone architecturally, and the age of the skyscraper began. Its pinnacle was reached in 1931 with the construction of the Empire State Building, which is 1,250 feet in height. These skyscrapers were a huge leap forward, but they were ultimately limited by both the cost of building them and the inflexibility of the skeleton method; you could pretty much just build a tower straight up. The Empire State Building remained the tallest building in the world for 40 years.

This is the point that we have reached with Drupal 7. While the raw size and functionality of the sites we can build has scaled greatly, we’ve also hit the limits of Drupal’s cost effectiveness, technical overhead, and flexibility. Drupal 7 as a project has had difficulty adapting to meet some of the needs of today’s Web. It’s often cheaper and easier to use a framework than it is to use Drupal, and while that’s true for sites of all sizes, it’s particularly true for the largest customers, who often choose to create their own bespoke platforms. It’s been very clear for some time that something needs to change.

In 1969, Skidmore Owings and Merrill (SOM) were hired to create a building with 3 million square feet of office space for thousands of Chicago-based employees of the Sears Corporation. To meet this challenge, SOM architect Bruce Graham and engineer Fazlur Khan designed an approach that combined nine individual "tubes", each of which was essentially an independent building, clustered in a three-by-three matrix. The Sears Tower was completed in 1974, and reached a height of 1,729 feet.

The advantage of the bundled tube structural system is that it provides great economic efficiency, because it requires less steel, and offers greater flexibility for buildings to take on more shapes. The structural innovations introduced by the Sears Tower four decades ago brought about a renaissance in skyscraper construction that continues to this day.

This kind of renaissance is also what Drupal 8 offers with its object-oriented approach, configuration management, improved APIs and native Web services support. Drupal 8’s more modular architecture makes it much easier to create so-called “headless” or “decoupled” websites that play nicely with a wide variety of frameworks and Web services.

When you have this kind of flexibility, all sorts of things are possible. This is the direction Drupal needs to go in order to remain competitive with other platforms like WordPress, Typo3, Adobe Experience Manager, and Sitecore.

Drupal answers the increasing call for skyscraper websites. It can and does support some of the largest, most trafficked, most complex sites on the planet, and with Drupal 8 we’ll be even better equipped to do so.

Images in this post are used under Creative Commons licenses. In order:
https://flic.kr/p/7BUH63 - CC BY-SA 2.0
https://flic.kr/p/7sxMP9 - CC BY-SA 2.0
https://flic.kr/p/dERZT6 - CC BY 2.0
https://flic.kr/p/mZU9d2 - CC BY 2.0
https://flic.kr/p/o8AnpS - CC BY-ND 2.0

Categories: Elsewhere

Drupal Commerce: Commerce 2.x Stories: Update

Planet Drupal - Wed, 21/10/2015 - 16:50

Drupal 8 is now ready for production use, with RC1 released just recently. This also means that work on D8 contrib is accelerating, and everyone is curious about when they can start their first eCommerce sites.

A production-ready Commerce 2.x beta won't happen for another two months. However, we're ready to start releasing related modules, tag Commerce alphas, write documentation, and track our progress more publicly. Starting today we'll do weekly blog posts showing the current state of Commerce 2.x. And for the people who want to build the future instead of just anticipating it, we're holding IRC meetings every Wednesday at 3 PM GMT+2 on #drupal-commerce.

So, what have we been up to?

Categories: Elsewhere

Wunderkraut blog: The Wunderkraut Gotcha collection volume 3

Planet Drupal - Wed, 21/10/2015 - 15:28

The developers of Wunderkraut have worked hard on digging up fun and exciting surprises about Drupal since the last edition was published. Now that Drupal 8 is just around the corner, I expect that this will be the last edition of gotchas about Drupal 7 - brace yourself for new ones in Drupal 8!

Thanks to my great colleagues who have helped me gather this list: Georg, Cristian, Jani, Aleksi, Juho, Mikael, Olli, Federico, Janne, Pauli, Florian, Andreas, Tuomas, Jim, Peeter, Miķelis, Greg

Token translatability

Georg writes: 

This annoyed me for two days, and it took another colleague of mine almost a day as well, until I accidentally came across the solution and could help him out:

You configure breadcrumbs, metatags or you-name-it in this great dynamic way, like "Read all about [node:field-headline]", which is replaced with, say, "Read all about Munich".

Everything is just fine.

Until you are starting with translation stuff.

The token simply resists being translated, no matter what.

You put it through t() and duly find it in the translation interface, you translate it to German as "Lies alles über [node:field-headline]", and you end up with a German/English mixture: "Lies alles über Munich" (while you would expect to see the German name for Munich here: München).

Lots of blood, sweat and tears later, you find out that tokens with underscores work just fine:

Put "Read all about [node:field_headline]", translate it to "Lies alles über [node:field_headline]" and you are done.

Disabling e-mails sent out to new users by Drupal

You know, sometimes you don’t want Drupal to send people all those emails when they are signing up to your site. The slightly annoying problem is that the Drupal interface to choose which emails get sent out is a bit hidden. Deep down somewhere: 

[11:14] Cristian: I just need to disable that single email ... and I'm not finding a contrib module for that
[11:17] Sampo: I think there should be a variable for each of those notifications, but not all of them are exposed to the UI. maybe
[12:39] Cristian: @sampo, you were right. I've added the following to settings.php and all is right in the world
[12:39] Cristian: $conf['user_mail_register_no_approval_required_notify'] = FALSE; Here is a list of possible values (check the $notify = variable_get('user_mail_' . $op . '_notify', $default_notify); line): https://api.drupal.org/api/drupal/modules!user!user.module/function/_user_mail_notify/7
[12:50] Jani: @Bernt This _should_ provide a UI for those: https://www.drupal.org/project/mailcontrol

Panels and Display Suite and region names with 3-col are not friends

Aleksi wrote this to me: 

Hi! Juho asked me to give this link to you for your next blog post: http://g.recordit.co/LDi4RcRgrx.gif

It's an issue where Panels layout doesn't work with Display Suite and the issue occurs because I used names like 3-col in the region. 

Time zone handling with Tokens

Mikael writes: Note that when displaying date/time values through tokens, Drupal will use the default time zone handling, even if you have configured the field differently (like no time zone handling, etc.).

Another inconsistency - with the date module

[10:23] Olli: Ehhhh... Nice gotcha from the drupal.org page:

<?php
// These two lines ARE NOT EQUIVALENT:
$start_date = $wrap_node->field_my_data->value()['value']; // date string
$start_date = $wrap_node->field_my_data->value->value();   // timestamp
?>

 

The above two methods of accessing the date value ARE NOT EQUIVALENT. The first one returns a date string, and the second a timestamp! I think it should be noted on the page.

Olli also provides an example as to why that value->value() is there: 

Since for date fields:

->value()['value']; ->value()['value2'];

is pretty much the same as

value->value(); value2->value();

Just slightly different output, for whatever reason. 
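To make the difference concrete, here is a minimal sketch using the Entity API module's entity_metadata_wrapper(); the field name field_my_data follows the chat above, and the sample values are illustrative:

<?php
$wrap_node = entity_metadata_wrapper('node', $node);

// Returns the raw date string stored by the date field,
// e.g. "2015-10-21 00:00:00".
$start_string = $wrap_node->field_my_data->value()['value'];

// Returns a Unix timestamp, e.g. 1445385600.
$start_timestamp = $wrap_node->field_my_data->value->value();
?>

(Note that dereferencing an array returned from a function, as in the first call, requires PHP 5.4 or later.)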

Block caching - a favourite that has gotten me too

Federico wrote about a strange case: 

I have a block that allows adding products to the cart right under the cart page (/cart). It is a form with method POST that submits to /cart.
I can't figure out why it sometimes works - then I first get a 302 response and then another GET request of /cart. Other times I get a 200 OK response instead and that's it, no product is added.


Why is this happening? I see no logic to it...

And then a bit later: 

Wow, @janne fixed it! It was a block being cached, damned little riot!

Following up after this: 

Jani: So the resolution was DRUPAL_NO_CACHE instead of the default DRUPAL_CACHE_PER_ROLE?
Pauli: One could say that you should always disable caching for any block that contains a form. 
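For reference, here is a minimal hook_block_info() sketch (module and block names are hypothetical) declaring a block with caching disabled:

<?php
/**
 * Implements hook_block_info().
 */
function mymodule_block_info() {
  $blocks['add_to_cart'] = array(
    'info' => t('Add to cart form'),
    // The default is DRUPAL_CACHE_PER_ROLE; a block that renders
    // a form should opt out of block caching entirely.
    'cache' => DRUPAL_NO_CACHE,
  );
  return $blocks;
}
?>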

Caching form token is a bad idea

Note: This is not a problem for anonymous users

This one I picked up from our favourite friend Drupal Answers: 

http://drupal.stackexchange.com/questions/74775/how-to-cache-forms-with-a-reverse-proxy-and-deal-with-stale-form-tokens

"When Form API generates a form, it also generates a token that is passed out with the form in a hidden field, and expected to be returned back. If it is, the form is processed.

If a rendered form were ever to be cached, say, by Varnish, this mechanism breaks. The first user submitting the form will consume the token, and following attempts to use the form will be rejected." writes user Letharion
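One common workaround, for forms that have no side effects and don't need CSRF protection (a search form, say), is to disable the token via the Form API's #token property. A minimal sketch, using core's search block form as the example:

<?php
/**
 * Implements hook_form_alter().
 */
function mymodule_form_alter(&$form, &$form_state, $form_id) {
  if ($form_id == 'search_block_form') {
    // Disable the CSRF token so a reverse proxy can safely cache the
    // rendered form. Only do this for forms without side effects.
    $form['#token'] = FALSE;
  }
}
?>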

Libraries API surprise

And here is one gotcha that I stumbled over in a blog post from Garrett Dawson at Aten Design: 

One gotcha that seriously gotcha'd me is the necessity of providing a value for either version argument or version callback. The documentation for hook_libraries_info() says that both are optional, but if at least one isn't provided, the library isn't available to load. If you're not concerned about the version of the library, you can use a short-circuit function for the version callback. 

See the original blog post for more
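Here is a minimal sketch of that advice; the module, library and callback names are hypothetical, and the short-circuit callback simply returns TRUE so Libraries API considers the library loadable without a real version check:

<?php
/**
 * Implements hook_libraries_info().
 */
function mymodule_libraries_info() {
  return array(
    'mylib' => array(
      'name' => 'My Library',
      'vendor url' => 'http://example.com/mylib',
      'download url' => 'http://example.com/mylib/download',
      // Documented as optional, but without 'version' or
      // 'version callback' the library will never load.
      'version callback' => 'mymodule_mylib_version',
      'files' => array('js' => array('mylib.js')),
    ),
  );
}

/**
 * Short-circuit version callback: we don't care about the version.
 */
function mymodule_mylib_version($library) {
  return TRUE;
}
?>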

Rewrite Results with Path results in doubled language prefix

Drupal user Ursula posted an issue where she reported a problem with views and rewriting of paths on a multilingual site at https://www.drupal.org/node/1491742

To "Fields", I added "Content:Path" and another field (Content:Performance Title). I then, under "Rewrite results" of the second field, I check "Output this field as a link", and set [path] as the Link path. I tried this with "Use absolute path" and without.
The resulting link includes a doubled language prefix like:
http://example.com/en/en/linkednode

dawehner had a great solution at hand: https://www.drupal.org/node/1491742#comment-5802404

Well, the problem is that the node path field is not the best one to use here. A good way is to add a hidden nid field, and then use that as a token when outputting the link, like this: node/[nid].

Display Suite preprocessed fields are a can of worms

It is easy to start liking Display Suite's preprocessed fields, as they are quick to get working. But then you might want to put one inside a Field Group - and boom, it won't work, as this issue states: https://www.drupal.org/node/1452198

I got some good advice from Juho:

I also got smacked in the face a few times in one project by the "inside field group" issue, and after that I learned that it's usually better to favour hook_ds_fields_info() over DS preprocessed fields.

Creating a Views Bulk Operation based on the example module

It is really easy to implement custom actions for VBO: just create a normal Drupal action. But if you want to edit entities, read this Drupal Answers article BEFORE you start working, and note especially what it says about 'behaviour':

http://drupal.stackexchange.com/questions/30910/how-to-create-custom-vbo-action

I found this information AFTER I had gotten it to work properly - I put the 'behaviour' key in there because it made sense according to the example module, but my action didn't seem to work because VBO would overwrite my changes. Thank you, VBO, for 7 new grey hairs. I went through this so that you don't have to!

Interesting way to fill up your database

Yet another tip from Florian:

Be careful with what you put into the t() function. If you put a variable in there, there's a good chance you'll fill your translation table very quickly...
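A minimal sketch of the difference ($name stands for any runtime value):

<?php
// BAD: every distinct $name creates a new row in the locales_source table.
$greeting = t("Welcome back, $name");

// GOOD: one translatable string; the value is substituted at display time.
$greeting = t('Welcome back, @name', array('@name' => $name));
?>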

Watch out for this when using views and menu_get_object for taxonomy pages

[16:34] Andreas: @Daniel #drupal-gotcha: menu_get_object('taxonomy_term' ..) does not return the expected result, when the taxonomy pages are overridden with views
[16:35] Olli: Yep, need to menu_get_object('views_args', ...) or something similar
[16:37] Andreas: @Olli yeah, and even worse, when the view is placed on a panel page, those things really bite you in the @ßß when the code relies on the configuration remaining unchanged :(
[16:40] Andreas: And it is much easier for debugging, when we do not use the common drupalism to assign the function to a var inside an if test: if ($term = menu_get_object(...)) gets stepped over in a blink of an eye by the debugger

Views query alter documentation has important information

kiamlaluno from Drupal Answers pointed out this important information:

If you look at the documentation for views_plugin_query_default::add_where_expression(), you will notice the following note. (The emphasis is mine.)

The caller is responsible for ensuring that all fields are fully qualified (TABLE.FIELD) and that the table already exists in the query. Internally the dbtng method "where" is used.

Batch runs on load balanced environments can give headaches

Tuomas reported this: 

[13:42] Tuomas: If you ever experience issues with batched exports on load balanced environments, this is useful:

# These should be pointed to only one front.
if (req.url ~ "^/batch") {
  set req.backend = web1;
} else {
  set req.backend = default_director;
}

It took a while to debug why not all the items that should be in the export were there. It seems some chunks got passed to a different server and were not included in the export.

Note from Janne: An even better approach would be to use the client director instead of round robin. That way you don't have to worry about forgetting some path, you still have load-balanced traffic for all users, and it's even failsafe in that if web1 goes AWOL it is still able to direct traffic to web2 instead.

Date and views together

Something tells me this one has struck many of us:

[11:51] Cristian: Hi. Has anybody @here encountered this before? I have a field of type "date" which I use to collect the date when a certain node was updated (user inputs the data). I want to create a view where I filter all those nodes which have not been updated in the past 3 months. The views filter does not recognize the field as a "date" field but rather as a text field. Any ideas on what could be wrong?
[11:51] Jim: is views date enabled @Cristian?
[11:52] Federico: u're missing views date or date_views maybe? yep the right module is date_views :)
[11:53] Cristian: I'll go sit in the corner now ... thanks guys
[11:53] Peeter: pshh.. someone take a pic of it.
[11:55] Miķelis: ahh, the good old drupal. solve any problem by adding a module. :)
[11:56] Federico: :p well sometimes brainfarts happen @Cristian :)
[11:56] Cristian: @Bernt, there's another gotcha for you

Weird error messages and crashing

I am pretty sure a lot of frontend developers have been hit by this one: a lot of core dumps, with error messages that don't really make any sense.

After digging and digging, you ask a colleague, and he asks whether you're using npm in your project. Oh yes - and then you realise that php-fpm core dumps because Drupal didn't understand the .info files from npm-installed packages in its node_modules folders.

Issue here: https://www.drupal.org/node/2329453

Thanks for reporting, Greg

Bonus: The problem that fixed itself

This is a nice WTF moment: you spend a while fiddling with a module that doesn't work right, you check the documentation, you Google it a bit, maybe you even open up the debugger and step through the code to see what's going on. You find the bug, and then, only when you've fixed it and are ready to create an issue and submit a patch, you check the issue queue - only to see that somebody beat you to it.

A year ago. 

The moral of the story is, of course, to check the issue queue first, and to check patches that don't necessarily seem to be related.

And if you know a module maintainer who has such lingering patches, it doesn't hurt to remind them over lunch.

Categories: Elsewhere

DebConf team: DebConf15 dates are set, come and join us! (Posted by DebConf15 team)

Planet Debian - Wed, 21/10/2015 - 14:34

At DebConf14 in Portland, Oregon, USA, next year's DebConf team presented their conference plans and announced the conference dates: DebConf15 will take place from 15 to 22 August 2015 in Heidelberg, Germany. On the Open Weekend of 15/16 August, we invite members of the public to participate in our wide offering of content and events, before we dive into the more technical part of the conference during the following week. DebConf15 will also be preceded by DebCamp, a time and place for teams to gather for intensive collaboration.

A set of slides from a quick show-case during the DebConf14 closing ceremony provide a quick overview of what you can expect next year. For more in-depth information, we invite you to watch the video recording of the full session, in which the team provides detailed information on the preparations so far, location and transportation to the venue at Heidelberg, the different rooms and areas at the Youth Hostel (for accommodation, hacking, talks, and social activities), details about the infrastructure that are being worked on, and the plans around the conference schedule.

We invite everyone to join us in organising this conference. There are different areas where your help could be very valuable, and we are always looking forward to your ideas. Have a look at our wiki page, join our IRC channels and subscribe to our mailing lists.

We are also contacting potential sponsors from all around the globe. If you know any organisation that could be interested, please consider handing them our sponsorship brochure or contact the fundraising team with any leads.

Let’s work together, as every year, on making the best DebConf ever!

Categories: Elsewhere

Dries Buytaert: WPP-Acquia Alliance: a milestone for Drupal

Planet Drupal - Wed, 21/10/2015 - 14:23

Today Acquia announces the WPP-Acquia Alliance, a global partnership with the world's largest communications services company. This isn't just a milestone for Acquia -- I believe it to be significant for the Drupal community as well, so let me tell you a bit more about it.

WPP is a marketing company. A very, very large marketing company. With more than 188,000 people in 112 countries, WPP's billings are nearly $76 billion USD and its revenues approach $19 billion USD.

The reason the WPP-Acquia Alliance is interesting for Drupal is that WPP's primary clients are Chief Marketing Officers (CMOs). The influence of the CMO has been on the rise; their responsibility has evolved from being "the one responsible for advertising" to having a critical role in designing the customer experience across all the customer touchpoints (including the websites). The CMO often has a deep understanding of how to use technology to deliver an integrated, system-wide customer experience. This is one of Drupal's strengths, and bringing organizations like WPP into the Drupal fold will help bring Drupal into the office of the CMO, grow the adoption of Drupal, and expand the opportunity for everyone in our community. If you believe, as I do, that the CMO is important, then I can't think of a better company to work with than WPP.

WPP will connect its Drupal developers from several agencies under one umbrella, creating a Drupal center of excellence, and the world's largest Acquia-certified Drupal practice. Globant, Hogarth, Mirum, Possible, Rockfish, VML and Wunderman are some of the agencies who'll be contributing to the WPP-Acquia Alliance, and building innovative Drupal applications for global clients. Acquia will provide WPP its open cloud platform, solutions for multi-site management, personalization tools, and more.

Categories: Elsewhere

Darryl Norris's Blog: Pokémon and Drupal with Pokéapi

Planet Drupal - Wed, 21/10/2015 - 11:16

I have been working on an (un)official Pokémon fan website, and as part of this project we need to create pages with Pokémon data. I found Pokéapi, which is a Pokémon RESTful API. Pokéapi has all the data (and even more) that I need in order to generate my nodes. So, I wrote a module that takes data from Pokéapi and generates nodes from it. I decided to upload that module to Drupal.org just in case someone wants to build another Pokémon website.

Initially, I wrote a Drupal 7 module, but since it is a very simple module I decided to port it to Drupal 8 as well. The D8 version does not require any extra contrib modules thanks to CMI. Here is the project page: https://www....Read more
Categories: Elsewhere

Russell Coker: LUV Server Upgrade to Jessie

Planet Debian - Wed, 21/10/2015 - 07:08

On Sunday night I started the process of upgrading the LUV server to Debian/Jessie from Debian/Wheezy. My initial plan was to just upgrade Apache first, but dependencies required upgrading systemd too.

One problem I’ve encountered in the past is that the Wheezy version of systemd will often hang on an upgrade to a newer version. Generally the solution to this is to run “systemctl daemon-reexec” from another terminal. The problem in this case was that not all the libraries needed for systemd had been installed, so systemd could re-exec itself but immediately aborted. The kernel really doesn’t like it when process 1 aborts repeatedly and apparently immediately hanging is the result. At the time I didn’t know this, all I knew was that my session died and the server stopped responding to pings immediately after I requested a reexec.

The LUV server is hosted at VPAC for free. As their staff have actual work to do they couldn't spend a lot of time working on the LUV server. They told me that the screen was flickering and suspected a VGA cable. I got to the VPAC server room with the spare LUV server (LUV had been given 3 almost identical Sun servers from Barwon Water) at 16:30. By 17:30 I had fixed the core problem (boot with "init=/bin/bash", mount the root filesystem rw, finish the upgrade of systemd and its dependencies, and then reboot normally). That got it into a state where the Xen server for Wikimedia Au was working but most LUV functionality wasn't.

By 23:00 on Monday I had the full list server functionality working for users; this is the main feature that users want when it's not near a meeting time. I can't remember whether it was Monday night or Tuesday morning when I got the Drupal site going (the main LUV web site). Last night at midnight I got the last of the Mailman administrative interface going. I admit I could have got it going a bit earlier by putting SE Linux in permissive mode, but I don't think that the members would have benefited from that (I'll upload a SE Linux policy package that gets Mailman working on Jessie soon).

Now it’s Wednesday and I’m still fixing some cron jobs. Along the way I noticed some problems with excessive disk space use that I’m fixing now and I’ve also removed some Wikimedia related configuration files that were obsolete and would have prevented anyone from using a wikimedia.org.au address to subscribe to the LUV mailing lists.

Now I believe that everything is working correctly and generally working better than before.

Lessons Learned

While Sunday night wasn’t a bad time to start the upgrade it wasn’t the best. If I had started the upgrade on Monday morning there would have been less down-time. Another possibility might be to do the upgrade while near the VPAC office during business hours, I could have started the upgrade while at a nearby cafe and then visited the server room immediately if something went wrong.

Doing the upgrade on a day with no meeting within a week was a good choice. It wasn't really a conscious choice, as I'm usually doing other LUV work near the meeting day, which precludes other LUV work that doesn't need to be done soon. But in future it would be best to consciously plan upgrades for a date when users aren't going to need the service much.

While the Wheezy systemd bug is unlikely to ever be fixed there are work-arounds that shouldn’t result in a broken server. At the moment it seems that the best option would be to kill -9 the systemctl processes that hang until the packages that systemd depends on are installed. The problem is that the upgrade hangs while the new systemctl tries to tell the old systemd to restart daemons. If we can get past that to the stage where the shared objects are installed then it should be ok.

The Apache upgrade from 2.2.x to 2.4.x changed the operation of some access control directives and it took me some time to work out how to fix that. Doing a Google search on the differences between those would have led me to the Apache document about upgrading from 2.2 to 2.4 [1]. That wouldn’t have prevented some down-time of the web sites but would have allowed me to prepare for it and to more quickly fix the problems when they became apparent. Also the rather confusing configuration of the LUV server (supporting many web sites that are no longer used) didn’t help things. I think that removing cruft from an installation before an upgrade would be better than waiting until after things break.

Next time I do an upgrade of such a server I’ll write notes about it while I go. That will give a better blog post about it if it becomes newsworthy enough to be blogged about and also more opportunities to learn better ways of doing it.

Sorry for the inconvenience.

Categories: Elsewhere

Antoine Beaupré: Proprietary VDSL2 Linux routers adventures

Planet Debian - Wed, 21/10/2015 - 06:40

I recently bought a wireless / phone adapter / VDSL modem from my Internet Service Provider (ISP) during my last outage. It generally works fine as a VDSL modem, but unfortunately, I can't seem to get used to configuring the device through their clickety web user interface. Furthermore, I am worried that I can't back up the config in a meaningful way; that is, if the device fails, I will probably not find the same model again, and because they run a custom Linux distribution, the chances of the backup being restorable on another machine are basically zero. There is no way I will waste my time configuring this black box. So I started looking at running a distribution like OpenWRT on it.

(Unfortunately, I don't even dare hope to run a decent operating system like Debian on those devices, if only because of the exotic chipsets that require all sorts of nasty hacks to run...)

The machine is a SmartRG SR630n (specs). I am linking to a third-party site, because the SmartRG site doesn't seem to know about their own product (!). I paid extra for this device to get one that would do both Wifi and VoIP, so I could replace two machines: my current Soekris net5501 router and a Cisco ATA 186 phone adapter that seems to mysteriously defy the challenges of time. (I don't remember when I got that thing, but it's at least from 2006.)

Unfortunately, it seems that SmartRG are running a custom, proprietary Linux distribution. According to my ISP, init is a complete rewrite that reads an XML config file (and indeed that's the format of the backup files) and does the configuration through a shared memory scheme (!?). According to DSL reports, the device seems to be running a Broadcom 63168 SOC (system on a chip) that is unsupported in Linux. There are some efforts to write drivers for those from scratch, but they have been basically stalled for years now.

Here are more details on the sucker:

Now the next step would logically be to "simply" build a new image with OpenWRT and install it in place. Then I would need to figure out a way to load the binary blobs into the OpenWRT kernel and run all the ADSL utilities as well. It's basically impossible: the odds of the binary modules being compatible with another arbitrary release of the Linux kernel are near zero. Furthermore, the userland tools are most likely custom as well. And worst of all: it seems that Bell Canada deployed a custom "Lucent Stinger" DSLAM which requires a custom binary firmware in the modem. This could be why the SmartRG is so bizarre in the first place. As long as the other end is non-standard, we are all screwed. And those Stinger DSLAMs will stick around for a long time, thanks to Bell.

See this other good explanation of Stinger.

Which means this machine is now yet another closed box sitting on the internet without firmware upgrades, totally handicapped. I will probably end up selling it back for another machine that has OpenWRT support for its VDSL modem. But there are very few such machines, and for a lot of those, VDSL support is often marked as "spotty" or "in progress". Some machines are supported but are basically impossible to find. The Draytek modems are also interesting because, apparently, some models run OpenWRT out of the box, which is a huge benefit. This is because they use the more open Lantiq SOC. Which are probably not going to support Stinger lines.

Still, there are some very interesting projects out there... The Omnia is one I am definitely interested in right now. I really like their approach... But then they don't have a VDSL chipset in there (I asked for one, actually). And the connectors are only mini-PCIe, which makes it impossible to connect a VDSL PCI card into it.

I could find only a single VDSL2 PCI card online, and it could be supported, but only annex B is available, not annex A, and it seems the network is using annex A according to the ADSL stats I had in 2015-05-28-anarcat-back-again. With such a card, I could use my existing Soekris net5501 router, slam a DSL card into it, and just use the SmartRG as a dumb wifi router/phone adapter. Then it would remain to be seen how well supported those VDSL cards are in FreeBSD (they provide Linux source code, so that's cool). And of course, all this assumes the card works with the "Stinger" mode, which is probably not the case anyway. Besides, I have VDSL2 here, not the lowly ADSL2+.

By the way, Soekris keeps on pushing new interesting products out: their net6501, with 2 extra Gig-E cards could be a really interesting high-end switch, all working with free software tools.

A friend has a SmartRG 505n modem, which looks quite similar, except without the ATA connectors. And those modems are the ones that Teksavvy recommends ("You may use a Cellpipe 7130 or Sagemcom F@ST 2864 in lieu of our SmartRG SR505N for our DSL 15/10, DSL 25 or DSL 50 services."). Furthermore, Teksavvy provides a firmware update for the 505n - again, no idea if it works with the 630n. Of course, the 505n doesn't run OpenWRT either.

So, long story short, again I got screwed by my ISP: I thought I would get a pretty hackable device, one "running Linux" as my ISP said over the phone. I got weeks of downtime and no refund, and while I got a better line (more reliable, higher bandwidth), my costs doubled. And I have yet another computing device to worry about: instead of simplifying and reducing waste, I actually just added crap on top of my already cluttered desk.

Next time, maybe I'll tell you about how my ISP overbilled me, broke IPv6 and drops large packets to the floor. I haven't had a response from them in months now... hopefully they will either answer and fix all of this (doubtful) or I'll switch to some other provider, probably Teksavvy.

Many thanks to the numerous people in the DSL reports Teksavvy forum that have amazing expertise. They are even building a map of Bell COs... Thanks also to Taggart for helping me figure out how the firmware images work and encouraging me to figure out how my machine works overall.

Note: all the information shared here is presented in the spirit of the fair use conditions of copyright law.

Categories: Elsewhere

Ritesh Raj Sarraf: Controlling ill-behaving applications with Linux Cgroups

Planet Debian - Tue, 20/10/2015 - 22:23

For some time, I have been wanting to read more on Linux Cgroups to explore the possibility of using them to control ill-behaving applications. Being stuck in travel at the moment has given me some time to look into it.

In our Free Software world, most things are a do-ocracy, i.e. when your use case is not the common one, it is typically you who has to explore possible solutions. It could be bugs, feature requests or, as in my case, performance issues. But that is not to say that we do not have better quality software in the Free Software world. In fact, in my opinion, some of the tools available are far better than the competition in terms of features, and to add a sweetener (or nutritional facts) to it, there is the fact that Free Software liberates the user.

One of my favorite tools, for photo management, is Digikam. Digikam is a big project, very featureful, with some functionality that may not be available in the competition. But as with most Free Software projects, Digikam is a tool which underneath consumes many more subprojects from the Free Software ecosystem.

Anyone who has used Digikam may know some of the bugs that surface in it. Not necessarily a bug in Digikam itself, but maybe in one of the underlying libraries/tools that it consumes (Exiv, libkface, Marble, OpenCV, libPGF etc.). But the bottom line is that the overall Digikam experience (and if I may say, the overall GNU/Linux experience) takes a hit.

Digikam has pretty powerful features for annotation, tagging, facial recognition. These features, together with Digikam, make it a compelling product. But the problem is that many of these projects are independent. Thus tight integration is a challenge. And at times, bugs can be hard to find, root cause and fix.

Let's take a real example here. If you were to use Digikam today (version 4.13.0) with annotation, tagging and facial recognition as some of the core features for your use case, you might run into a frustrating overall experience. Not just that, the bugs would also affect your overall GNU/Linux experience.

The facial recognition feature, if triggered, will eat up all your memory, thus leading you to uncover Linux's long-standing memory bug.

The tagging feature, if triggered, will again lead to frequent I/O, and thus again to a stalled Linux system because of blocked CPU cycles, for nothing.

So one of the items on my TODO list was to explore Linux Cgroups and see if it was cleanly possible to tame a process within a confinement, so that even if it is ill-behaving (for whatever reason), your machine does not take the beating.

And now that the cgroups consumer dust has somewhat settled, systemd was my first obvious choice to look at. systemd provides a helper utility, systemd-run, for such tasks. With systemd-run, you can apply all the resource-controller logic to a given process, typically cpu, memory and blkio, and restrict it to a certain set. You can also define which user to run the service as.

rrs@learner:/var/tmp/Debian-Build/Result$ systemd-run -p BlockIOWeight=10 find /
Running as unit run-23805.service.

rrs@learner:/var/tmp/Debian-Build/Result$ systemctl status -l run-23805.service
● run-23805.service - /usr/bin/find /
   Loaded: loaded
  Drop-In: /run/systemd/system/run-23805.service.d
           └─50-BlockIOWeight.conf, 50-Description.conf, 50-ExecStart.conf
   Active: active (running) since Tue 2015-10-20 21:37:44 CEST; 6s ago
 Main PID: 23814 (find)
   Memory: 12.2M
      CPU: 502ms
   CGroup: /system.slice/run-23805.service
           └─23814 /usr/bin/find /

Oct 20 21:37:45 learner find[23814]: /proc/3/net/raw6
Oct 20 21:37:45 learner find[23814]: /proc/3/net/snmp
Oct 20 21:37:45 learner find[23814]: /proc/3/net/stat
Oct 20 21:37:45 learner find[23814]: /proc/3/net/stat/rt_cache
Oct 20 21:37:45 learner find[23814]: /proc/3/net/stat/arp_cache
Oct 20 21:37:45 learner find[23814]: /proc/3/net/stat/ndisc_cache
Oct 20 21:37:45 learner find[23814]: /proc/3/net/stat/ip_conntrack
Oct 20 21:37:45 learner find[23814]: /proc/3/net/stat/nf_conntrack
Oct 20 21:37:45 learner find[23814]: /proc/3/net/tcp6
Oct 20 21:37:45 learner find[23814]: /proc/3/net/udp6

But, out of the box, graphical applications do not work. I haven't looked into why, but it should be doable by giving it the correct environment details.

Underneath, systemd is using the same Linux Control Groups to limit resources for individual applications. So, in cases where you have such a requirement and do not have systemd, or you want to make use of cgroups directly, it can easily be done with basic cgroups tools like cgroup-tools.

With cgroup-tools, I now have a simple cgroups hierarchy set up for my current use case, i.e. Digikam:

rrs@learner:/var/tmp/Debian-Build/Result$ ls /sys/fs/cgroup/memory/rrs_customCG/
cgroup.clone_children       memory.kmem.tcp.limit_in_bytes      memory.numa_stat
cgroup.event_control        memory.kmem.tcp.max_usage_in_bytes  memory.oom_control
cgroup.procs                memory.kmem.tcp.usage_in_bytes      memory.pressure_level
digikam/                    memory.kmem.usage_in_bytes          memory.soft_limit_in_bytes
memory.failcnt              memory.limit_in_bytes               memory.stat
memory.force_empty          memory.max_usage_in_bytes           memory.swappiness
memory.kmem.failcnt         memory.memsw.failcnt                memory.usage_in_bytes
memory.kmem.limit_in_bytes  memory.memsw.limit_in_bytes         memory.use_hierarchy
memory.kmem.max_usage_in_bytes  memory.memsw.max_usage_in_bytes notify_on_release
memory.kmem.slabinfo        memory.memsw.usage_in_bytes         tasks
memory.kmem.tcp.failcnt     memory.move_charge_at_immigrate

rrs@learner:/var/tmp/Debian-Build/Result$ ls /sys/fs/cgroup/memory/rrs_customCG/digikam/
(the same memory controller files as above, minus the digikam/ subdirectory)

 

rrs@learner:/var/tmp/Debian-Build/Result$ cat /sys/fs/cgroup/cpu/rrs_customCG/cpu.shares
1024
rrs@learner:/var/tmp/Debian-Build/Result$ cat /sys/fs/cgroup/cpu/rrs_customCG/digikam/cpu.shares
512

 

rrs@learner:/var/tmp/Debian-Build/Result$ cat /sys/fs/cgroup/memory/rrs_customCG/memory.limit_in_bytes
9223372036854771712
rrs@learner:/var/tmp/Debian-Build/Result$ cat /sys/fs/cgroup/memory/rrs_customCG/digikam/memory.limit_in_bytes
2764369920

rrs@learner:/var/tmp/Debian-Build/Result$ cat /sys/fs/cgroup/blkio/rrs_customCG/blkio.weight
500
rrs@learner:/var/tmp/Debian-Build/Result$ cat /sys/fs/cgroup/blkio/rrs_customCG/digikam/blkio.weight
10

The base group, $USER_customCG, needs super admin privileges to set up. Once it is set appropriately, it allows the user to further self-define sub-groups, and users can then also define separate limits per sub-group.

With the resource limitations set in place, my overall experience on very recent hardware (Intel Haswell Core i7, 8 GiB RAM, 500 GB SSHD, 128 GB SSD) has improved considerably. It still is not perfect, but it definitely is a huge improvement over what I had to go through earlier: a stalled machine for hours.

top - 21:54:38 up 1 day, 6:46, 1 user, load average: 7.22, 7.51, 7.37
Tasks: 299 total, 1 running, 298 sleeping, 0 stopped, 0 zombie
%Cpu0 : 7.1 us, 3.0 sy, 1.0 ni, 11.1 id, 77.8 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu1 : 6.0 us, 4.0 sy, 2.0 ni, 49.0 id, 39.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu2 : 5.0 us, 2.0 sy, 0.0 ni, 24.8 id, 68.3 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu3 : 5.9 us, 5.0 sy, 0.0 ni, 21.8 id, 67.3 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 7908.926 total, 96.449 free, 4634.922 used, 3177.555 buff/cache
MiB Swap: 8579.996 total, 3454.746 free, 5125.250 used. 2753.324 avail Mem
PID to signal/kill [default pid = 8879]
  PID  PPID nTH USER PR  NI S %CPU %MEM    TIME+ COMMAND          UID
 8879  8868  18 rrs  20   0 S  8.2 31.2 37:44.64 digikam         1000
10255  9960   4 rrs  39  19 S  1.0  0.8 19:47.73 tracker-miner-f 1000
10157  9960   7 rrs  20   0 S  0.5  3.0 32:29.76 gnome-shell     1000
    7     2   1 root 20   0 S  0.2       0:53.48 rcu_sched          0
  401     1   1 root 20   0 S  0.2  1.3  0:54.93 systemd-journal    0
10269  9937   4 rrs  20   0 S  0.2  0.4  2:34.50 gnome-terminal- 1000
15316     1  14 rrs  20   0 S  0.2  3.7 30:05.96 evolution       1000
23777     2   1 root 20   0 S  0.2       0:05.73 kworker/u16:0      0
23814     1   1 root 20   0 D  0.2  0.0  0:02.00 find               0
24049     2   1 root 20   0 S  0.2       0:01.29 kworker/u16:3      0
24052     2   1 root 20   0 S  0.2       0:02.94 kworker/u16:4      0
    1     0   1 root 20   0 S  0.1       0:18.24 systemd            0

The reporting tools may not be accurate here, because from what is being reported above, I should have a stalled machine, heavily paging, while the kernel scans its list of processes to find the best process to kill.

From this approach of jailing processes, the major side effect I can see is that the process (Digikam) is now starved of resources and will take much, much more time than it usually would. But in the usual case, it takes up everything and ends up starving (and getting killed) for consuming all available resources.

So I guess it is better to be on a balanced resource diet. :-)

Categories: Elsewhere

Dirk Eddelbuettel: Rblpapi 0.3.1

Planet Debian - Tue, 20/10/2015 - 14:01

The first update to the Rblpapi package since the initial CRAN upload in August is now available.

Rblpapi connects R to the Bloomberg system, giving access to a truly vast array of time series data and custom calculations.

This release brings a new beqs() function to access the results of Bloomberg EQS queries, improves the build system, and corrects a bug in the retrieval of multiple tick series. The changes are detailed below.

Changes in Rblpapi version 0.3.1 (2015-10-19)
  • Added beqs() to run Bloomberg Equity Screen Search, based on initial PR #79 by Rademeyer Vermaak, reworked in PRs #83 and #84 by Dirk; closes tickets #63 and #80.

  • Bloomberg header and library files are now cached locally instead of being re-downloaded for every build (PR #78 by Dirk addressing issue #65).

  • For multiple ticks, time-stamps are unique-yfied before merging (PR #77 by Dirk addressing issue #76).

  • A compiler warning under new g++ versions is now suppressed (PR #69 by John, fixing #68).

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the Rblpapi page. Questions, comments etc. should go to the issue tickets system at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: Elsewhere
