Feed aggregator

Petter Reinholdtsen: MPEG LA on "Internet Broadcast AVC Video" licensing and non-private use

Planet Debian - 5 hours 20 min ago

After asking the Norwegian Broadcasting Company (NRK) why they can broadcast and stream H.264 video without an agreement with the MPEG LA, I was wiser, but still confused. So I asked MPEG LA if their understanding matched that of NRK. As far as I can tell, it does not.

I started by asking for more information about the various licensing classes and what exactly is covered by the "Internet Broadcast AVC Video" class that NRK pointed me at to explain why NRK did not need a license for streaming H.264 video:

According to an MPEG LA press release dated 2010-02-02, there is no charge for using MPEG AVC/H.264 according to the terms of "Internet Broadcast AVC Video". I am trying to understand exactly what the terms of "Internet Broadcast AVC Video" are, and wondered if you could help me. What exactly is covered by these terms, and what is not?

The only source of more information I have been able to find is a PDF named AVC Patent Portfolio License Briefing, which states this about the fees:

  • Where End User pays for AVC Video
    • Subscription (not limited by title):
      • 100,000 or fewer subscribers/yr = no royalty
      • >100,000 to 250,000 subscribers/yr = $25,000
      • >250,000 to 500,000 subscribers/yr = $50,000
      • >500,000 to 1M subscribers/yr = $75,000
      • >1M subscribers/yr = $100,000
    • Title-by-Title:
      • 12 minutes or less = no royalty
      • >12 minutes in length = lower of (a) 2% or (b) $0.02 per title
  • Where remuneration is from other sources
    • Free Television: (a) one-time $2,500 per transmission encoder, or (b) annual fee starting at $2,500 for >100,000 HH and rising to a maximum of $10,000 for >1,000,000 HH
    • Internet Broadcast AVC Video (not title-by-title, not subscription): no royalty for life of the AVC Patent Portfolio License

Am I correct in assuming that the four categories listed are the categories used when selecting licensing terms, and that "Internet Broadcast AVC Video" is the category for things that do not fall into one of the other three categories? Can you point me to a good source explaining what is meant by "title-by-title" and "Free Television" in the license terms for AVC/H.264?

Will a web service providing H.264 encoded video content in a "video on demand" fashion similar to YouTube and Vimeo, where no subscription is required and no payment is required from end users to get access to the videos, fall under the terms of the "Internet Broadcast AVC Video", i.e. no royalty for the life of the AVC Patent Portfolio License? Does it matter if some users are subscribed to get access to personalized services?

Note, this request and all answers will be published on the Internet.

The answer came quickly from Benjamin J. Myers, Licensing Associate with the MPEG LA:

Thank you for your message and for your interest in MPEG LA. We appreciate hearing from you and I will be happy to assist you.

As you are aware, MPEG LA offers our AVC Patent Portfolio License which provides coverage under patents that are essential for use of the AVC/H.264 Standard (MPEG-4 Part 10). Specifically, coverage is provided for end products and video content that make use of AVC/H.264 technology. Accordingly, the party offering such end products and video to End Users concludes the AVC License and is responsible for paying the applicable royalties.

Regarding Internet Broadcast AVC Video, the AVC License generally defines such content to be video that is distributed to End Users over the Internet free-of-charge. Therefore, if a party offers a service which allows users to upload AVC/H.264 video to its website, and such AVC Video is delivered to End Users for free, then such video would receive coverage under the sublicense for Internet Broadcast AVC Video, which is not subject to any royalties for the life of the AVC License. This would also apply in the scenario where a user creates a free online account in order to receive a customized offering of free AVC Video content. In other words, as long as the End User is given access to or views AVC Video content at no cost to the End User, then no royalties would be payable under our AVC License.

On the other hand, if End Users pay for access to AVC Video for a specific period of time (e.g., one month, one year, etc.), then such video would constitute Subscription AVC Video. In cases where AVC Video is delivered to End Users on a pay-per-view basis, then such content would constitute Title-by-Title AVC Video. If a party offers Subscription or Title-by-Title AVC Video to End Users, then they would be responsible for paying the applicable royalties you noted below.

Finally, in the case where AVC Video is distributed for free through an "over-the-air, satellite and/or cable transmission", then such content would constitute Free Television AVC Video and would be subject to the applicable royalties.

For your reference, I have attached a .pdf copy of the AVC License. You will find the relevant sublicense information regarding AVC Video in Sections 2.2 through 2.5, and the corresponding royalties in Section 3.1.2 through 3.1.4. You will also find the definitions of Title-by-Title AVC Video, Subscription AVC Video, Free Television AVC Video, and Internet Broadcast AVC Video in Section 1 of the License. Please note that the electronic copy is provided for informational purposes only and cannot be used for execution.

I hope the above information is helpful. If you have additional questions or need further assistance with the AVC License, please feel free to contact me directly.

Having a fresh copy of the license text was useful, and knowing that the definition of Title-by-Title required payment per title made me aware that my earlier understanding of that phrase had been wrong. But I still had a few questions:

I have a small follow-up question. Would it be possible for me to get a license with MPEG LA even if there are no royalties to be paid? The reason I ask is that some video-related products have a copyright clause limiting their use without a license from MPEG LA. The clauses typically look similar to this:

This product is licensed under the AVC patent portfolio license for the personal and non-commercial use of a consumer to (a) encode video in compliance with the AVC standard ("AVC video") and/or (b) decode AVC video that was encoded by a consumer engaged in a personal and non-commercial activity and/or AVC video that was obtained from a video provider licensed to provide AVC video. No license is granted or shall be implied for any other use. Additional information may be obtained from MPEG LA L.L.C.

It is unclear to me whether this clause means that I need to enter into an agreement with MPEG LA to use the product in question, even if there are no royalties to be paid to MPEG LA. I suspect the answer will differ depending on the jurisdiction, and mine is Norway. What is MPEG LA's view on this?

According to the answer, MPEG LA believes that those using such tools for non-personal or commercial use need a license with them:

With regard to the Notice to Customers, I would like to begin by clarifying that the Notice from Section 7.1 of the AVC License reads:

THIS PRODUCT IS LICENSED UNDER THE AVC PATENT PORTFOLIO LICENSE FOR THE PERSONAL USE OF A CONSUMER OR OTHER USES IN WHICH IT DOES NOT RECEIVE REMUNERATION TO (i) ENCODE VIDEO IN COMPLIANCE WITH THE AVC STANDARD ("AVC VIDEO") AND/OR (ii) DECODE AVC VIDEO THAT WAS ENCODED BY A CONSUMER ENGAGED IN A PERSONAL ACTIVITY AND/OR WAS OBTAINED FROM A VIDEO PROVIDER LICENSED TO PROVIDE AVC VIDEO. NO LICENSE IS GRANTED OR SHALL BE IMPLIED FOR ANY OTHER USE. ADDITIONAL INFORMATION MAY BE OBTAINED FROM MPEG LA, L.L.C. SEE HTTP://WWW.MPEGLA.COM

The Notice to Customers is intended to inform End Users of the personal usage rights (for example, to watch video content) included with the product they purchased, and to encourage any party using the product for commercial purposes to contact MPEG LA in order to become licensed for such use (for example, when they use an AVC Product to deliver Title-by-Title, Subscription, Free Television or Internet Broadcast AVC Video to End Users, or to re-Sell a third party's AVC Product as their own branded AVC Product).

Therefore, if a party is to be licensed for its use of an AVC Product to Sell AVC Video on a Title-by-Title, Subscription, Free Television or Internet Broadcast basis, that party would need to conclude the AVC License, even in the case where no royalties were payable under the License. On the other hand, if that party (either a Consumer or business customer) simply uses an AVC Product for their own internal purposes and not for the commercial purposes referenced above, then such use would be included in the royalty paid for the AVC Products by the licensed supplier.

Finally, I note that our AVC License provides worldwide coverage in countries that have AVC Patent Portfolio Patents, including Norway.

I hope this clarification is helpful. If I may be of any further assistance, just let me know.

The mention of Norwegian patents confused me a bit, so I asked for more information:

But one minor question at the end. If I understand you correctly, you state in the quote above that there are patents in the AVC Patent Portfolio that are valid in Norway. This makes me believe I read the list available from <URL: http://www.mpegla.com/main/programs/AVC/Pages/PatentList.aspx > incorrectly, as I believed the "NO" prefix in front of a patent indicated a Norwegian patent, and the only one I could find under Mitsubishi Electric Corporation expired in 2012. Which patents are you referring to that are relevant for Norway?

Again, the answer came quickly and explained how to read that list of patents:

Your understanding is correct that the last AVC Patent Portfolio Patent in Norway expired on 21 October 2012. Therefore, where AVC Video is both made and Sold in Norway after that date, then no royalties would be payable for such AVC Video under the AVC License. With that said, our AVC License provides historic coverage for AVC Products and AVC Video that may have been manufactured or Sold before the last Norwegian AVC patent expired. I would also like to clarify that coverage is provided for the country of manufacture and the country of Sale that has active AVC Patent Portfolio Patents.

Therefore, if a party offers AVC Products or AVC Video for Sale in a country with active AVC Patent Portfolio Patents (for example, Sweden, Denmark, Finland, etc.), then that party would still need coverage under the AVC License even if such products or video are initially made in a country without active AVC Patent Portfolio Patents (for example, Norway). Similarly, a party would need to conclude the AVC License if they make AVC Products or AVC Video in a country with active AVC Patent Portfolio Patents, but eventually Sell such AVC Products or AVC Video in a country without active AVC Patent Portfolio Patents.

As far as I understand it, MPEG LA believes that anyone using Adobe Premiere and other video-related software with an H.264 distribution license needs a license agreement with MPEG LA to use such tools for anything non-private or commercial, while it is OK to set up a YouTube-like service as long as no one pays to get access to the content. I still have no clear idea how this applies to Norway, where none of the patents MPEG LA is licensing are valid. Will the copyright terms take precedence, or can those terms be ignored because the patents are not valid in Norway?

Categories: Elsewhere

Russ Allbery: INN 2.6.0 release candidate

Planet Debian - 9 hours 47 min ago

In more INN-related news (and catching up on my substantial backlog), a second release candidate for the INN 2.6.0 release is now available. (The first one was only circulated on the inn-workers mailing list.)

INN 2.6.0 is the next major release of INN, with lots of improvements to the build system, protocol support, and the test suite, among many other things. Changes have been accumulating slowly for quite some time.

There are a lot of changes, so I won't go into all the details here. If you're curious, take a look at the NEWS file. You can get the release candidate from ftp.isc.org. (Note that this link will be removed once INN 2.6.0 is released.)

As always, thanks to Julien ÉLIE for preparing this release and doing most of the maintenance work on INN!

For more information about INN, see the official ISC download page or my personal INN pages. The latter also has links to the full changelog and the other INN documentation.

Categories: Elsewhere

Matthew Palmer: It's 10pm, do you know where your SSL certificates are?

Planet Debian - 13 hours 10 min ago

The Internet is going encrypted. Revelations of mass surveillance of Internet traffic have given the Internet community the motivation to roll out encrypted services – the biggest of which is undoubtedly HTTP.

The weak point, though, is SSL Certification Authorities. These are “trusted third parties” who are supposed to validate that a person requesting a certificate for a domain is authorised to have a certificate for that domain. It is no secret that these companies have failed to do the job entrusted to them, again, and again, and again. Oh, and another one.

However, at this point, doing away with CAs and finding some other mechanism isn’t feasible. There is no clear alternative, and the inertia in the current system is overwhelming, to the point where it would take a decade or more to migrate away from the CA-backed SSL certificate ecosystem, even if there was something that was widely acknowledged to be superior in every possible way.

This is where Certificate Transparency comes in. This protocol, which works as part of the existing CA ecosystem, requires CAs to publish every certificate they issue in order for the certificate to be considered “valid” by browsers and other user agents. While it doesn’t guarantee the prevention of misissuance, it does mean that a CA can’t cover up or try to minimise the impact of a breach or other screwup – their actions are fully public, for everyone to see.

Much of Certificate Transparency’s power, however, is diminished if nobody is looking at the certificates which are being published. That is why I have launched sslaware.com, a site for searching the database of logged certificates. At present it is rather minimalist; however, I intend to add more features, such as real-time notifications (if a new cert for your domain or organisation is logged, you’ll get an e-mail about it) and more advanced searching capabilities.

If you care about the security of your website, you should check out SSL Aware and see what certificates have been issued for your site. You may be unpleasantly surprised.

Categories: Elsewhere

Matthew Garrett: Anti Evil Maid 2 Turbo Edition

Planet Debian - Mon, 06/07/2015 - 19:39
The Evil Maid attack has been discussed for some time - in short, it's the idea that most security mechanisms on your laptop can be subverted if an attacker is able to gain physical access to your system (for instance, by pretending to be the maid in a hotel). Most disk encryption systems will fall prey to the attacker replacing the initial boot code of your system with something that records and then exfiltrates your decryption passphrase the next time you type it, at which point the attacker can simply steal your laptop the next day and get hold of all your data.

There are a couple of ways to protect against this, and they both involve the TPM. Trusted Platform Modules are small cryptographic devices on the system motherboard[1]. They have a bunch of Platform Configuration Registers (PCRs) that are cleared on power cycle but otherwise have slightly strange write semantics - attempting to write a new value to a PCR will append the new value to the existing value, take the SHA-1 of that and then store this SHA-1 in the register. During a normal boot, each stage of the boot process will take a SHA-1 of the next stage of the boot process and push that into the TPM, a process called "measurement". Each component is measured into a separate PCR - PCR0 contains the SHA-1 of the firmware itself, PCR1 contains the SHA-1 of the firmware configuration, PCR2 contains the SHA-1 of any option ROMs, PCR5 contains the SHA-1 of the bootloader and so on.

If any component is modified, the previous component will come up with a different measurement and the PCR value will be different. Because you can't directly modify PCR values[2], this modified code will only be able to set the PCR back to the "correct" value if it's able to generate a sequence of writes that will hash back to that value. SHA-1 isn't yet sufficiently broken for that to be practical, so we can probably ignore that. The neat bit here is that you can then use the TPM to encrypt small quantities of data[3] and ask it to only decrypt that data if the PCR values match. If you change the PCR values (by modifying the firmware, bootloader, kernel and so on), the TPM will refuse to decrypt the material.
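To make those write semantics concrete, here is a rough shell sketch of the extend operation; the all-zero starting value and the file being measured are illustrative assumptions, not what any particular firmware does:

# PCR extend: new_pcr = SHA1(old_pcr || measurement), both 20-byte values.
old_pcr=0000000000000000000000000000000000000000
measurement=$(sha1sum /boot/vmlinuz | cut -d' ' -f1)   # example component
# Decode the concatenated hex digits to binary, then hash the 40-byte blob.
new_pcr=$(printf '%s%s' "$old_pcr" "$measurement" | xxd -r -p | sha1sum | cut -d' ' -f1)
echo "$new_pcr"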

Bitlocker uses this to encrypt the disk encryption key with the TPM. If the boot process has been tampered with, the TPM will refuse to hand over the key and your disk remains encrypted. This is an effective technical mechanism for protecting against people taking images of your hard drive, but it does have one fairly significant issue - in the default mode, your disk is decrypted automatically. You can add a password, but the obvious attack is then to modify the boot process such that a fake password prompt is presented and the malware exfiltrates the data. The TPM won't hand over the secret, so the malware flashes up a message saying that the system must be rebooted in order to finish installing updates, removes itself and leaves anyone except the most paranoid of users with the impression that nothing bad just happened. It's an improvement over the state of the art, but it's not a perfect one.

Joanna Rutkowska came up with the idea of Anti Evil Maid. This can take two slightly different forms. In both, a secret phrase is generated and encrypted with the TPM. In the first form, this is then stored on a USB stick. If the user suspects that their system has been tampered with, they boot from the USB stick. If the PCR values are good, the secret will be successfully decrypted and printed on the screen. The user verifies that the secret phrase is correct and reboots, satisfied that their system hasn't been tampered with. The downside to this approach is that most boots will not perform this verification, and so you rely on the user being able to make a reasonable judgement about whether it's necessary on a specific boot.

The second approach is to do this on every boot. The obvious problem here is that an attacker can simply boot your system, copy down the secret, modify your system and then print the correct secret. To avoid this, the TPM can have a password set. If the user fails to enter the correct password, the TPM will refuse to decrypt the data. This can be attacked in a similar way to Bitlocker, but can be avoided with sufficient training: if the system reboots without the user seeing the secret, the user must assume that their system has been compromised and that an attacker now has a copy of their TPM password.

This isn't entirely great from a usability perspective. I think I've come up with something slightly nicer, and certainly more Web 2.0[4]. Anti Evil Maid relies on having a static secret because expecting a user to remember a dynamic one is pretty unreasonable. But most security-conscious people rely on dynamic secret generation daily - it's the basis of most two-factor authentication systems. TOTP is an algorithm that takes a seed, the time of day and some reasonably clever calculations and comes up with (usually) a six-digit number. The secret is known by the device that you're authenticating against, and also by some other device that you possess (typically a phone). You type in the value that your phone gives you, the remote site confirms that it's the value it expected and you've just proven that you possess the secret. Because the secret depends on the time of day, someone copying that value won't be able to use it later.

But instead of using your phone to identify yourself to a remote computer, we can use the same technique to ensure that your computer possesses the same secret as your phone. If the PCR states are valid, the computer will be able to decrypt the TOTP secret and calculate the current value. This can then be printed on the screen and the user can compare it against their phone. If the values match, the PCR values are valid. If not, the system has been compromised. Because the value changes over time, merely booting your computer gives your attacker nothing - printing an old value won't fool the user[5]. This allows verification to be a normal part of every boot, without forcing the user to type in an additional password.
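As a rough illustration of the calculation (not the prototype's actual code), the oathtool utility from oath-toolkit derives the same six digits from a shared seed; the base32 seed below is a made-up example:

# Compute the current six-digit TOTP value from a base32 seed (-b = base32).
# Feeding the same seed to a phone authenticator app yields matching digits.
SECRET=GEZDGNBVGY3TQOJQ    # made-up example seed
oathtool --totp -b "$SECRET"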

I've written a prototype implementation of this and uploaded it here. Do pay attention to the list of limitations - without a bootloader that measures your kernel and initrd, you're still open to compromise. Adding TPM support to grub is on my list of things to do. There are also various potential issues, like an attacker being able to use external DMA-capable devices to obtain the secret, especially since most Linux distributions still ship kernels that don't enable the IOMMU by default. And, of course, if your firmware is inherently untrustworthy there are multiple ways it can subvert all of this. So treat this very much like a research project rather than something you can depend on right now. There's a fair amount of work to do to turn this into a meaningful improvement in security.

[1] I wrote about them in more detail here, including a discussion of whether they can be used for general purpose DRM (answer: not really)

[2] In theory, anyway. In practice, TPMs are embedded devices running their own firmware, so who knows what bugs they're hiding.

[3] On the order of 128 bytes or so. If you want to encrypt larger things with a TPM, the usual way to do it is to generate an AES key, encrypt your material with that and then encrypt the AES key with the TPM.

[4] Is that even a thing these days? What do we say instead?

[5] Assuming that the user is sufficiently diligent in checking the value, anyway

Categories: Elsewhere

Matthew Garrett: Internet abuse culture is a tech industry problem

Planet Debian - Mon, 06/07/2015 - 19:37
After Jesse Frazelle blogged about the online abuse she receives, a common reaction in various forums[1] was "This isn't a tech industry problem - this is what being on the internet is like"[2]. And yes, they're right. Abuse of women on the internet isn't limited to people in the tech industry. But the severity of a problem is a product of two separate factors: its prevalence and what impact it has on people.

Much of the modern tech industry relies on our ability to work with people outside our company. It relies on us interacting with a broader community of contributors, people from a range of backgrounds, people who may be upstream on a project we use, people who may be employed by competitors, people who may be spending their spare time on this. It means listening to your users, hearing their concerns, responding to their feedback. And, distressingly, there's significant overlap between that wider community and the people engaging in the abuse. This abuse is often partly technical in nature. It demonstrates understanding of the subject matter. Sometimes it can be directly tied back to people actively involved in related fields. It's from people who might be at conferences you attend. It's from people who are participating in your mailing lists. It's from people who are reading your blog and using the advice you give in their daily jobs. The abuse is coming from inside the industry.

Cutting yourself off from that community impairs your ability to do work. It restricts meeting people who can help you fix problems that you might not be able to fix yourself. It results in you missing career opportunities. Much of the work being done to combat online abuse relies on protecting the victim, giving them the tools to cut themselves off from the flow of abuse. But that risks restricting their ability to engage in the way they need to in order to do their job. It means missing meaningful feedback. It means passing up speaking opportunities. It means losing out on the community building that goes on at in-person events, the career progression that arises as a result. People are forced to choose between putting up with abuse or compromising their career.

The abuse that women receive on the internet is unacceptable in every case, but we can't ignore the effects of it on our industry simply because it happens elsewhere. The development model we've created over the past couple of decades is just too vulnerable to this kind of disruption, and if we do nothing about it we'll allow a large number of valuable members to be driven away. We owe it to them to make things better.

[1] Including Hacker News, which then decided to flag the story off the front page because masculinity is fragile

[2] Another common reaction was "But men get abused as well", which I'm not even going to dignify with a response

Categories: Elsewhere

Ben Hutchings: Debian LTS work, June 2015

Planet Debian - Mon, 06/07/2015 - 15:03

This was my seventh month working on Debian LTS. I was assigned 14.75 hours of work by Freexian's Debian LTS initiative.

p7zip

I did not receive any feedback from upstream for my proposed fix for CVE-2015-1038 mentioned last month, so I went ahead and uploaded it based on my own testing. (I also uploaded the fix to wheezy-security, jessie-security and sid.)

Afterwards, I received a request from upstream for a patch against their latest release (even the version in sid is quite a long way behind that), so I ported the fix forward to that.

linux-2.6

I backported further security fixes, but had to give up on one (CVE-2014-8172, AIO soft lockup) as the fix depends on wide-ranging changes. For CVE-2015-1805 (pipe iovec overrun leading to memory corruption), the upstream fix was also not applicable, but this looked so serious that we needed to fix it anyway. Red Hat had already fixed this in their 2.6.32-based kernel and they didn't have overlapping changes to the pipe implementation, so I was able to extract this fix from their source tarball. I uploaded and issued DLA-246-1.

Unfortunately, I failed to notice that Linux 2.6.32.66 had introduced two regressions that were fixed in 2.6.32.67. While these didn't appear in my testing, one of them did affect several users who were quick to upgrade. I applied the upstream fixes, made a second upload and issued DLA-246-2.

I also triaged the issues that are still unfixed, and I spent some time working on a fix for CVE-2015-1350 (unprivileged chown removes setcap attribute), but I haven't yet completed the backport to 2.6.32 or tested it.

openssl

I looked at OpenSSL, which is still marked as affected by CVE-2015-4000 (encryption downgrade aka Logjam). After discussion with the LTS team I made a note of the current situation, which is that a full fix (rejecting Diffie-Hellman keys shorter than 1024 bits) must wait until more servers have been upgraded.
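If you want to check what a particular server currently negotiates, one rough way (assuming OpenSSL 1.0.2 or later, whose s_client reports the ephemeral key; the host name is an example) is:

# Force an ephemeral-DH cipher suite and inspect the key size the server sends;
# a "Server Temp Key: DH, 512 bits" or similar short value is the Logjam-era problem.
openssl s_client -connect example.org:443 -cipher 'EDH' < /dev/null 2>/dev/null | grep 'Server Temp Key'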

Categories: Elsewhere

Mike Gabriel: My FLOSS activities in June 2015

Planet Debian - Mon, 06/07/2015 - 14:54

June 2015 has been mainly dedicated to these five fields of endeavour:

  • first uploads of MATE 1.10 to Debian experimental (still work in progress)
  • development of nx-libs (3.6.x branch)
  • meeting other nx-libs developers at X2Go: The Gathering 2015 at Linuxhotel in Essen, Germany
  • contribution to Debian and Debian LTS
  • production deployment of Ganeti and Ganeti Manager (a web frontend for Ganeti)
Received Sponsorship

Last month's contributions of mine (8h) to the Debian LTS project were again contracted by Freexian [1]. Thanks to Raphael Hertzog for having me on the team. Thanks to all the people and companies sponsoring the Debian LTS Team's work.

Also a big thanks to people from Hetzner GmbH for sponsoring my stay at X2Go: The Gathering 2015 @ Linuxhotel (in Essen, Germany).

MATE 1.10 entering Debian experimental

Together with Martin Wimpress from Ubuntu MATE and other people in the Debian MATE Packaging Team I managed to upload a great portion of the MATE 1.10 packages to Debian experimental.

Please note that this is still work in progress. Not all MATE 1.10 packages have been uploaded yet, and several packages from the MATE 1.10 series in Debian still have grave bugs (mostly packaging and installation issues).

The plan is to make the complete MATE 1.10 stack available in Debian experimental by the end of July and also get all the open kinks fixed by then.

Development of nx-libs 3.6.x

In June 2015, I have looked at various aspects of nx-libs development:

read more

Categories: Elsewhere

Annertech: Your Connected Website

Planet Drupal - Mon, 06/07/2015 - 14:07
Your Connected Website

Modern websites talk. They talk through great content to the visitors who come to read them, but they also talk through APIs (Application Programming Interfaces) to other systems. Does yours? Should it? Could integrating your site with other systems bring you greater return on investment?

Categories: Elsewhere

Ritesh Raj Sarraf: Apport Integration with Debian - GSoC Update

Planet Debian - Mon, 06/07/2015 - 09:49

For this year's Google Summer of Code, I have been mentoring Yuru Roy Shao on integrating Apport with Debian. Yuru is a CS student at the University of Michigan, USA, completing his PhD.

For more than two years, Apport has been packaged for Debian, but it remained in experimental. While we did have a separate (Debian BTS aware) crashdb, general concerns (bug spam, too many duplicates etc.) were the reason we held back its inclusion.

With this GSoC, Yuru has been adding some of the missing Debian integration pieces. For example, we now use debianbts to talk to the Debian BTS server and fetch bug reports for the user.
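As a rough illustration of the kind of BTS query involved (shown here with the bts tool from devscripts, which talks to the same SOAP service; this is not apport's actual code path):

# List the numbers of open bugs filed against the apport source package.
bts select source:apport status:open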

While apport's bug report data collection is itself very comprehensive, for Debian it will have the option to use the native collector as well as reportbug. This will allow us to keep using the many hooks people have integrated with reportbug so far. Both bug report data collectors will be available.

Yuru has blogged about his GSoC progress so far, here. Please do have a read and let us know your views. If the travel formalities work out well, I intend to attend DebConf this year and can talk about it in more detail.

Categories: Elsewhere

Russ Allbery: INN 2.5.5

Planet Debian - Mon, 06/07/2015 - 07:13

(This release has actually been ready for a while, but there were a few technical difficulties with getting it copied up to the right places, and then I got very distracted by various life stuff.)

This is the first new release of INN in about a year, and hopefully the last in the 2.5.x series. A beta release of INN 2.6.0 will be announced shortly (probably tomorrow).

As is typical for bug-fix releases, this release rolls up a bunch of small bug fixes that have been made over the past year. The most notable changes include new inn.conf parameters to fine-tune the SSL/TLS configuration for nnrpd (generally to tighten it over the OpenSSL defaults), a few new flags to various utilities, multiple improvements to pullnews, and support for properly stopping cnfsstat and innwatch if INN is started and then quickly stopped.
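As a taste of what that TLS tuning looks like, here is a hypothetical inn.conf fragment; the parameter names and values are my assumptions, so consult the inn.conf documentation shipped with the release before copying anything:

# Hypothetical TLS tightening for nnrpd (names and values are assumptions):
tlsprotocols:            [ TLSv1 TLSv1.1 TLSv1.2 ]
tlsciphers:              HIGH:!aNULL
tlspreferserverciphers:  true
tlscompression:          false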

As always, thanks to Julien ÉLIE for preparing this release and doing most of the maintenance work on INN!

You can get the latest version from the official ISC download page or from my personal INN pages. The latter also has links to the full changelog and the other INN documentation.

Categories: Elsewhere

LevelTen Interactive: How to display an RSS feed in a Drupal block

Planet Drupal - Mon, 06/07/2015 - 07:00

If you have a standard RSS feed that you'd like to display in a block on your Drupal website, you've come to the right place. For this example, we will be using a sample feed at http://www.feedforall.com/sample.xml. ... Read more

Categories: Elsewhere

Ben Armstrong: BLT Bike Trail – Early Summer 2015

Planet Debian - Sun, 05/07/2015 - 22:26

This is one of my regular walking routes, from home to Five Island Lake and back. It’s about 15 km. I usually walk too briskly to capture the many visual delights of this route. Today on the trip out, I stopped and took several photos to share with you.

An early morning walk up the BLT bike trail to Five Island Lake (pictured here) and back.

The walk starts from our subdivision. It’s cool and clear when I leave.

Photos along the way: Saskatoon berries, dew on leaves, pitcher plants, an alder under attack (woolly aphids, maybe?), wild strawberries, daisies, vetch, water lilies, sensitive fern, a squirrel, and Cranberry Lake.

Categories: Elsewhere

Thorsten Alteholz: My Debian Activities in June 2015

Planet Debian - Sun, 05/07/2015 - 21:51

FTP assistant

This month I marked 539 packages for accept, rejected 61 of them and had to send 24 emails to maintainers. This is a new personal record. Even in the month before the Jessie freeze I accepted only 407 packages. So, very well done (self-laudation has to happen from time to time).

Another record was broken as well. After 19 months of doing this kind of work, I got my first insulting email. I would prefer to wait another 19 months before I get the next one …

Squeeze LTS

This was my twelfth month of work for the Squeeze LTS initiative, started by Raphael Hertzog at Freexian.

This month I was assigned a workload of only 14.5h and spent most of it working on a new upload of php5. Unfortunately, so many CVEs kept coming in that I haven't done the upload yet.

Other stuff I uploaded was

  • [DLA 258-1] jqueryui security update
  • [DLA 262-1] libcrypto++ security update

This month I also had my first one and a half weeks of frontdesk work. As introduced in this email, every member of the LTS team should do some LTS CVE triage. Up to now it was mainly done by Raphael, and he wants to share this task with everybody else. So I answered questions on the IRC channel and on the LTS list, and looked for CVEs that are important for Squeeze LTS or could be ignored.

Other stuff

This month I also uploaded a new version of harminv and wondered why the package didn't move to testing. Of course there is a document describing how to do a transition of a library properly. But hey, it is me, I know everything better, and of course I can use a shortcut. Oh boy, I was wrong. So I also uploaded new versions of meep, meep-lam4, meep-openmpi, meep-mpi-default and meep-mpich2.

And the moral of the story: if you don't understand why something should be done in a specific way, you shouldn't try to do it differently.

Donations

Again, thanks a lot to all donors. I really appreciate it and hope that everybody is pleased with my commitment. Don't hesitate to make suggestions for improvements.

Categories: Elsewhere

Petter Reinholdtsen: New laptop - some more clues and ideas based on feedback

Planet Debian - Sun, 05/07/2015 - 21:40

Several people contacted me after my previous blog post about my need for a new laptop and provided very useful feedback. I wish to thank every one of them. Several pointed me to the possibility of fixing my X230, and I am already in the process of getting Lenovo to do so, thanks to the on-site, next-day support contract covering the machine. But the battery is almost useless (I expect to replace it with a non-official battery) and I do not expect the machine to live for many more years, so it is time to plan its replacement. If I did not have a support contract, it was suggested that I find replacement parts using FrancEcrans, but that might present a language barrier as I do not understand French.

One tip I got was to use the Skinflint web service to compare laptop models. It seems to have more models available than prisjakt.no. Another tip, from someone I know who has similar keyboard preferences, was that the HP EliteBook 840 keyboard is not very good, and this matches my experience with earlier EliteBook keyboards I have tested. Because of this, I will not consider it any further.

When I wrote my blog post, I was not aware of the Thinkpad X250, the newest Thinkpad X model. The keyboard reintroduces mouse buttons (which are missing from the X240), and it works fairly well with Debian Sid/Unstable according to Corsac.net. The reports I got on the keyboard quality are not consistent. Some say the keyboard is good, others say it is OK, while others say it is not very good. Those with experience from the X41 and X60 agree that the X250 keyboard is not as good as those trusty old laptops, and suggest I keep and fix my X230 instead of upgrading, or get a used X230 to replace it. I'm also told that the X250 lacks LEDs for caps lock, disk activity and battery status, which are very convenient on my X230, and that the CPU fan runs very often, making it a bit noisy. In any case, the X250 does not work out of the box with Debian Stable/Jessie, one of my requirements.

I have also gotten a few vendor proposals: one was Pro-Star, another was Libreboot. The latter looks very attractive to me.

Again, thank you all for the very useful feedback. It helps a lot as I keep looking for a replacement.

Categories: Elsewhere

Sjoerd Simons: Debian Jessie on Raspberry Pi 2

Planet Debian - Sun, 05/07/2015 - 20:06

Apart from being somewhat slow, one of the downsides of the original Raspberry Pi SoC was that it had an old ARM11 core implementing the ARMv6 architecture. This was particularly unfortunate, as most common distributions (Debian, Ubuntu, Fedora, etc.) standardized on the ARMv7-A architecture as a minimum for their ARM hardfloat ports, which is one of the reasons for Raspbian and the various other RPI-specific distributions.

Happily, with the new Raspberry Pi 2 using Cortex-A7 cores (which implement the ARMv7-A architecture), this issue is out of the way, which means that a standard Debian hardfloat userland will run just fine. So the obvious first thing to do when an RPI 2 appeared on my desk was to put together a quick Debian Jessie image for it.

The result of which can be found at: https://images.collabora.co.uk/rpi2/

Log in as root with password debian (obviously, do change the password and create a normal user after booting). The image is 3G, so it should fit on any SD card marketed as 4G or bigger. Using bmap-tools for flashing is recommended; otherwise you'll be waiting for 2.5G of zeros to be written to the card, which tends to be rather boring. Note that the image is really basic and will just get you to a login prompt on either serial or HDMI; batteries are very much not included, but can be apt-getted :).
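For example, flashing with bmap-tools looks roughly like this; the image name and target device are assumptions, and do double-check the device, since the wrong one will destroy data:

# Only blocks actually mapped in the image get written, skipping the zeros;
# bmaptool picks up a matching .bmap file next to the image if one is published.
sudo bmaptool copy jessie-rpi2.img /dev/mmcblk0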

Technically, this image is simply a Debian Jessie debootstrap with a few extra packages for hardware support. Unlike Raspbian, the first partition (which contains the firmware & kernel files to boot the system) is mounted on /boot/firmware rather than on /boot. This is because the VideoCore expects the first partition to be a FAT filesystem, but mounting FAT on /boot really doesn't work right on Debian systems, as it contains files managed by dpkg (e.g. the kernel package) which require a POSIX-compatible filesystem. Essentially the same reason why Debian uses /boot/efi for the ESP partition on Intel systems rather than mounting it on /boot directly.
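In fstab terms, the layout described above looks something like this (device paths are illustrative; the image's actual /etc/fstab may differ):

# Example /etc/fstab for this layout -- device paths are assumptions:
/dev/mmcblk0p1  /boot/firmware  vfat  defaults           0  2
/dev/mmcblk0p2  /               ext4  errors=remount-ro  0  1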

For reference, the RPI2 specific packages in this image are from https://repositories.collabora.co.uk/debian/ in the jessie distribution and rpi2 component (this repository is enabled by default on the image). The relevant packages there are:

  • linux: current 3.18-based package from Debian experimental (3.18.5-1~exp1 at the time of this writing) with a stack of patches on top from the raspberrypi GitHub repository, tweaked to build an rpi2 flavour as the patchset isn't multiplatform-capable
  • raspberrypi-firmware-nokernel: firmware package and misc libraries packages taken from Raspbian, with a slight tweak to install into /boot/firmware rather than /boot.
  • flash-kernel: current flash-kernel package from Debian experimental, with a small addition to detect the RPI 2 and "flash" the kernel to /boot/firmware/kernel7.img (which is what the GPU will try to boot on this board).

For the future, it would be nice to see Raspberry Pi 2 support out of the box in Debian. For that to happen, the most important thing would be some mainline kernel support for this board (supporting multiplatform!) so it can be built as part of Debian's armmp kernel flavour. And ideally, the firmware would load a bootloader (such as u-boot) rather than a kernel directly, to allow for a much more flexible boot sequence and support for using an initramfs (u-boot has some support for the original Raspberry Pi, so adding Raspberry Pi 2 support should hopefully not be too tricky).

Update: An updated image (20150705) is available with the latest packages from Jessie and a GPG key that's not expired :).

Categories: Elsewhere

Dominique Dumont: Major bug fix for cme update copyright command

Planet Debian - Sun, 05/07/2015 - 11:46

Hello

The previous version of libconfig-model-dpkg-perl had two bugs related to the copyright update command:

  • Too many directory paragraphs (like src/foo/*) were removed during update.
  • Some file paragraphs were not merged, leading to needless paragraphs in the debian/copyright file. This bug is less severe, as no information is lost.

Version 2.067 of libconfig-model-dpkg-perl fixes both issues. This version is available in unstable.

To use cme update dpkg-copyright command, the following packages are required:

All the best


Categories: Elsewhere

KatteKrab: CCR at OSCON

Planet Drupal - Sun, 05/07/2015 - 03:11
Sunday, July 5, 2015 - 11:11

I've given a "Constructive Conflict Resolution" talk twice now. First at DrupalCon Amsterdam, and again at DrupalCon Los Angeles. It's something I've been thinking about since joining the Drupal community working group a couple of years ago. I'm giving the talk again at OSCON in a couple of weeks. But this time, it will be different. Very different. Here's why.

After seeing tweets about Gina Likins' keynote at ApacheCon earlier this year, I reached out to her to ask if she'd be willing to collaborate with me on conflict resolution in open source, and ended up inviting her to co-present with me at OSCON. We've been working together over the past couple of weeks. It's been a joy, and a learning experience! I'm really excited about where the talk is heading now. If you're going to be at OSCON, please come along. If you're interested, please follow our tweets tagged #osconCCR.

Jen Krieger from Opensource.com interviewed Gina and me about our talk - here's the article: Teaching open source communities about conflict resolution

In the meantime, do you have stories of conflict in Open Source Communities to share?

  • How were they resolved?
  • Were they intractable?
  • Do the wounds still fester?
  • Was positive change an end result?
  • Do you have resources for dealing with conflict?

Tweet your thoughts to me @kattekrab

Categories: Elsewhere

Robert Edmonds: Git packaging workflow for py-lmdb

Planet Debian - Sun, 05/07/2015 - 02:56

Recently, I packaged py-lmdb, the Python binding for the LMDB database library. This package is going to be team-maintained by the pkg-db group, which is responsible for maintaining the BerkeleyDB and LMDB packages. Below are my notes on (re-)Debianizing this package and how the Git repository for the source package is laid out.

The upstream py-lmdb developer has a Git-centric workflow. Development is done on the master branch, with regular releases done as fast-forward merges to the release branch. Release tags of the form py-lmdb_X.YZ are provided. The only tarballs provided are the ones that GitHub automatically generates from tags. Since these tarballs are synthetic and their content matches the content of the corresponding tag, we will ignore them in favor of using the release tags directly. (The --git-pristine-tar-commit option to gbp buildpackage will be used so that .orig.tar.gz files can be replicated and the Debian archive will accept subsequent uploads, but tarballs are otherwise irrelevant to our workflow.)

To make it clear that the release tags come from upstream's repository, they should be prefixed with upstream/, preferably resulting in a DEP-14 compliant scheme. (Unfortunately, since upstream's release tags begin with py-lmdb_, this doesn't quite match the pattern that DEP-14 recommends.)

Here is how the local packaging repository is initialized. Note that git clone isn't used, so that we can customize how the tags are fetched. Instead, we create an empty Git repository and add the upstream repository as the upstream remote. The --no-tags option is used, so that git fetch does not import the remote's tags. However, we also add a custom fetch refspec refs/tags/*:refs/tags/upstream/* so that the remote's tags are explicitly fetched, but with the upstream/ prefix.

$ mkdir py-lmdb
$ cd py-lmdb
$ git init
Initialized empty Git repository in /home/edmonds/debian/py-lmdb/.git/
$ git remote add --no-tags upstream https://github.com/dw/py-lmdb
$ git config --add remote.upstream.fetch 'refs/tags/*:refs/tags/upstream/*'
$ git fetch upstream
remote: Counting objects: 3336, done.
remote: Total 3336 (delta 0), reused 0 (delta 0), pack-reused 3336
Receiving objects: 100% (3336/3336), 2.15 MiB | 0 bytes/s, done.
Resolving deltas: 100% (1958/1958), done.
From https://github.com/dw/py-lmdb
 * [new branch]  master -> upstream/master
 * [new branch]  release -> upstream/release
 * [new branch]  win32-sparse-patch -> upstream/win32-sparse-patch
 * [new tag]     last-cython-version -> upstream/last-cython-version
 * [new tag]     py-lmdb_0.1 -> upstream/py-lmdb_0.1
 * [new tag]     py-lmdb_0.2 -> upstream/py-lmdb_0.2
 * [new tag]     py-lmdb_0.3 -> upstream/py-lmdb_0.3
 * [new tag]     py-lmdb_0.4 -> upstream/py-lmdb_0.4
 * [new tag]     py-lmdb_0.5 -> upstream/py-lmdb_0.5
 * [new tag]     py-lmdb_0.51 -> upstream/py-lmdb_0.51
 * [new tag]     py-lmdb_0.52 -> upstream/py-lmdb_0.52
 * [new tag]     py-lmdb_0.53 -> upstream/py-lmdb_0.53
 * [new tag]     py-lmdb_0.54 -> upstream/py-lmdb_0.54
 * [new tag]     py-lmdb_0.56 -> upstream/py-lmdb_0.56
 * [new tag]     py-lmdb_0.57 -> upstream/py-lmdb_0.57
 * [new tag]     py-lmdb_0.58 -> upstream/py-lmdb_0.58
 * [new tag]     py-lmdb_0.59 -> upstream/py-lmdb_0.59
 * [new tag]     py-lmdb_0.60 -> upstream/py-lmdb_0.60
 * [new tag]     py-lmdb_0.61 -> upstream/py-lmdb_0.61
 * [new tag]     py-lmdb_0.62 -> upstream/py-lmdb_0.62
 * [new tag]     py-lmdb_0.63 -> upstream/py-lmdb_0.63
 * [new tag]     py-lmdb_0.64 -> upstream/py-lmdb_0.64
 * [new tag]     py-lmdb_0.65 -> upstream/py-lmdb_0.65
 * [new tag]     py-lmdb_0.66 -> upstream/py-lmdb_0.66
 * [new tag]     py-lmdb_0.67 -> upstream/py-lmdb_0.67
 * [new tag]     py-lmdb_0.68 -> upstream/py-lmdb_0.68
 * [new tag]     py-lmdb_0.69 -> upstream/py-lmdb_0.69
 * [new tag]     py-lmdb_0.70 -> upstream/py-lmdb_0.70
 * [new tag]     py-lmdb_0.71 -> upstream/py-lmdb_0.71
 * [new tag]     py-lmdb_0.72 -> upstream/py-lmdb_0.72
 * [new tag]     py-lmdb_0.73 -> upstream/py-lmdb_0.73
 * [new tag]     py-lmdb_0.74 -> upstream/py-lmdb_0.74
 * [new tag]     py-lmdb_0.75 -> upstream/py-lmdb_0.75
 * [new tag]     py-lmdb_0.76 -> upstream/py-lmdb_0.76
 * [new tag]     py-lmdb_0.77 -> upstream/py-lmdb_0.77
 * [new tag]     py-lmdb_0.78 -> upstream/py-lmdb_0.78
 * [new tag]     py-lmdb_0.79 -> upstream/py-lmdb_0.79
 * [new tag]     py-lmdb_0.80 -> upstream/py-lmdb_0.80
 * [new tag]     py-lmdb_0.81 -> upstream/py-lmdb_0.81
 * [new tag]     py-lmdb_0.82 -> upstream/py-lmdb_0.82
 * [new tag]     py-lmdb_0.83 -> upstream/py-lmdb_0.83
 * [new tag]     py-lmdb_0.84 -> upstream/py-lmdb_0.84
 * [new tag]     py-lmdb_0.85 -> upstream/py-lmdb_0.85
 * [new tag]     py-lmdb_0.86 -> upstream/py-lmdb_0.86
$

Note that at this point we have content from the upstream remote in our local repository, but we don't have any local branches:

$ git status
On branch master

Initial commit

nothing to commit (create/copy files and use "git add" to track)
$ git branch -a
  remotes/upstream/master
  remotes/upstream/release
  remotes/upstream/win32-sparse-patch
$

We will use the DEP-14 naming scheme for the packaging branches, so the branch for packages targeted at unstable will be called debian/sid. Since I already made an initial 0.84-1 upload, we need to start the debian/sid branch from the upstream 0.84 tag and import the original packaging content from that upload. The --no-track flag is passed to git checkout initially so that Git doesn't consider the upstream release tag upstream/py-lmdb_0.84 to be the upstream branch for our packaging branch.

$ git checkout --no-track -b debian/sid upstream/py-lmdb_0.84
Switched to a new branch 'debian/sid'
$

At this point I imported the original packaging content for 0.84-1 with git am. Then, I signed the debian/0.84-1 tag:

$ git tag -s -m 'Debian release 0.84-1' debian/0.84-1
$ git verify-tag debian/0.84-1
gpg: Signature made Sat 04 Jul 2015 02:49:42 PM EDT using RSA key ID AAF6CDAE
gpg: Good signature from "Robert Edmonds <edmonds@mycre.ws>" [ultimate]
gpg:                 aka "Robert Edmonds <edmonds@fsi.io>" [ultimate]
gpg:                 aka "Robert Edmonds <edmonds@debian.org>" [ultimate]
$

New upstream releases are integrated by fetching new upstream tags and non-fast-forward merging into the packaging branch. The latest release is 0.86, so we merge from the upstream/py-lmdb_0.86 tag.

$ git fetch upstream --dry-run
[...]
$ git fetch upstream
[...]
$ git checkout debian/sid
Already on 'debian/sid'
$ git merge --no-ff --no-edit upstream/py-lmdb_0.86
Merge made by the 'recursive' strategy.
 ChangeLog                     |  46 ++++++++++++++
 docs/index.rst                |  46 +++++++++++++-
 docs/themes/acid/layout.html  |   4 +-
 examples/dirtybench-gdbm.py   |   6 ++
 examples/dirtybench.py        |  19 ++++
 examples/nastybench.py        |  18 ++--
 examples/parabench.py         |   6 ++
 lib/lmdb.h                    |  37 ++++-----
 lib/mdb.c                     | 281 ++++++++++++++++++++++++++++---------------------
 lib/midl.c                    |   2 +-
 lib/midl.h                    |   2 +-
 lib/py-lmdb/preload.h         |  48 ++++++++++++
 lmdb/__init__.py              |   2 +-
 lmdb/cffi.py                  | 120 ++++++++++++++-----------
 lmdb/cpython.c                |  86 +++++++++++++------
 lmdb/tool.py                  |   5 +-
 misc/gdb.commands             |  21 ++++++
 misc/runtests-travisci.sh     |   3 +-
 misc/runtests-ubuntu-12-04.sh |  28 ++++----
 setup.py                      |   2 +
 tests/crash_test.py           |  22 +++++++
 tests/cursor_test.py          |  37 +++++++++++
 tests/env_test.py             |  73 +++++++++++++++++++
 tests/testlib.py              |  14 +++-
 tests/txn_test.py             |  20 ++++++
 25 files changed, 773 insertions(+), 175 deletions(-)
 create mode 100644 lib/py-lmdb/preload.h
 create mode 100644 misc/gdb.commands
$

Here I did some additional development work like editing the debian/gbp.conf file and applying a fix for #790738 to make the package build reproducibly. The package is now ready for an 0.86-1 upload, so I ran the following gbp dch command:

$ gbp dch --release --auto --new-version=0.86-1 --commit
gbp:info: Found tag for topmost changelog version '6bdbb56c04571fe2d5d22aa0287ab0dc83959de5'
gbp:info: Continuing from commit '6bdbb56c04571fe2d5d22aa0287ab0dc83959de5'
gbp:info: Changelog has been committed for version 0.86-1
$

This automatically generates a changelog entry for 0.86-1, but it includes commit summaries for all of the upstream commits since the last release, which I had to edit out.

Then, I used gbp buildpackage with BUILDER=pbuilder to build the package in a clean, up-to-date sid chroot.
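One plausible spelling of that invocation, using the git-pbuilder helper that ships with git-buildpackage (the exact flags here are my assumption, not a transcript of the original session):

$ BUILDER=pbuilder gbp buildpackage --git-pbuilder --git-pristine-tar-commit

After checking the result, I signed the debian/0.86-1 tag: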

$ git tag -s -m 'Debian release 0.86-1' debian/0.86-1
$

The package is now ready to be pushed to git.debian.org. First, a bare repository is initialized:

$ ssh git.debian.org
edmonds@moszumanska:~$ cd /srv/git.debian.org/git/pkg-db/
edmonds@moszumanska:/srv/git.debian.org/git/pkg-db$ umask 002
edmonds@moszumanska:/srv/git.debian.org/git/pkg-db$ mkdir py-lmdb.git
edmonds@moszumanska:/srv/git.debian.org/git/pkg-db$ cd py-lmdb.git/
edmonds@moszumanska:/srv/git.debian.org/git/pkg-db/py-lmdb.git$ git --bare init --shared
Initialized empty shared Git repository in /srv/git.debian.org/git/pkg-db/py-lmdb.git/
edmonds@moszumanska:/srv/git.debian.org/git/pkg-db/py-lmdb.git$ echo 'py-lmdb Debian packaging' > description
edmonds@moszumanska:/srv/git.debian.org/git/pkg-db/py-lmdb.git$ mv hooks/post-update.sample hooks/post-update
edmonds@moszumanska:/srv/git.debian.org/git/pkg-db/py-lmdb.git$ chmod a+x hooks/post-update
edmonds@moszumanska:/srv/git.debian.org/git/pkg-db/py-lmdb.git$ logout
Shared connection to git.debian.org closed.

Then, we add a new debian remote to our local packaging repository. Per our repository conventions, we need to ensure that only branch names matching debian/* and pristine-tar, and tag names matching debian/* and upstream/*, are pushed to the debian remote when we run git push debian, so we add a set of remote.debian.push refspecs that correspond to these conventions. We also add an explicit remote.debian.fetch refspec to fetch tags.

$ git remote add debian ssh://git.debian.org/git/pkg-db/py-lmdb.git
$ git config --add remote.debian.push 'refs/tags/debian/*'
$ git config --add remote.debian.push 'refs/tags/upstream/*'
$ git config --add remote.debian.push 'refs/heads/debian/*'
$ git config --add remote.debian.push 'refs/heads/pristine-tar'
$ git config --add remote.debian.fetch 'refs/tags/*:refs/tags/*'

We now run the initial push to the remote Git repository. The --set-upstream option is used so that our local branches will be configured to track the corresponding remote branches. Also note that the debian/* and upstream/* tags are pushed as well.

$ git push debian --set-upstream
Counting objects: 3333, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (1083/1083), done.
Writing objects: 100% (3333/3333), 1.37 MiB | 0 bytes/s, done.
Total 3333 (delta 2231), reused 3314 (delta 2218)
To ssh://git.debian.org/git/pkg-db/py-lmdb.git
 * [new branch]  pristine-tar -> pristine-tar
 * [new branch]  debian/sid -> debian/sid
 * [new tag]     debian/0.84-1 -> debian/0.84-1
 * [new tag]     debian/0.86-1 -> debian/0.86-1
 * [new tag]     upstream/last-cython-version -> upstream/last-cython-version
 * [new tag]     upstream/py-lmdb_0.1 -> upstream/py-lmdb_0.1
 * [new tag]     upstream/py-lmdb_0.2 -> upstream/py-lmdb_0.2
 * [new tag]     upstream/py-lmdb_0.3 -> upstream/py-lmdb_0.3
 * [new tag]     upstream/py-lmdb_0.4 -> upstream/py-lmdb_0.4
 * [new tag]     upstream/py-lmdb_0.5 -> upstream/py-lmdb_0.5
 * [new tag]     upstream/py-lmdb_0.51 -> upstream/py-lmdb_0.51
 * [new tag]     upstream/py-lmdb_0.52 -> upstream/py-lmdb_0.52
 * [new tag]     upstream/py-lmdb_0.53 -> upstream/py-lmdb_0.53
 * [new tag]     upstream/py-lmdb_0.54 -> upstream/py-lmdb_0.54
 * [new tag]     upstream/py-lmdb_0.56 -> upstream/py-lmdb_0.56
 * [new tag]     upstream/py-lmdb_0.57 -> upstream/py-lmdb_0.57
 * [new tag]     upstream/py-lmdb_0.58 -> upstream/py-lmdb_0.58
 * [new tag]     upstream/py-lmdb_0.59 -> upstream/py-lmdb_0.59
 * [new tag]     upstream/py-lmdb_0.60 -> upstream/py-lmdb_0.60
 * [new tag]     upstream/py-lmdb_0.61 -> upstream/py-lmdb_0.61
 * [new tag]     upstream/py-lmdb_0.62 -> upstream/py-lmdb_0.62
 * [new tag]     upstream/py-lmdb_0.63 -> upstream/py-lmdb_0.63
 * [new tag]     upstream/py-lmdb_0.64 -> upstream/py-lmdb_0.64
 * [new tag]     upstream/py-lmdb_0.65 -> upstream/py-lmdb_0.65
 * [new tag]     upstream/py-lmdb_0.66 -> upstream/py-lmdb_0.66
 * [new tag]     upstream/py-lmdb_0.67 -> upstream/py-lmdb_0.67
 * [new tag]     upstream/py-lmdb_0.68 -> upstream/py-lmdb_0.68
 * [new tag]     upstream/py-lmdb_0.69 -> upstream/py-lmdb_0.69
 * [new tag]     upstream/py-lmdb_0.70 -> upstream/py-lmdb_0.70
 * [new tag]     upstream/py-lmdb_0.71 -> upstream/py-lmdb_0.71
 * [new tag]     upstream/py-lmdb_0.72 -> upstream/py-lmdb_0.72
 * [new tag]     upstream/py-lmdb_0.73 -> upstream/py-lmdb_0.73
 * [new tag]     upstream/py-lmdb_0.74 -> upstream/py-lmdb_0.74
 * [new tag]     upstream/py-lmdb_0.75 -> upstream/py-lmdb_0.75
 * [new tag]     upstream/py-lmdb_0.76 -> upstream/py-lmdb_0.76
 * [new tag]     upstream/py-lmdb_0.77 -> upstream/py-lmdb_0.77
 * [new tag]     upstream/py-lmdb_0.78 -> upstream/py-lmdb_0.78
 * [new tag]     upstream/py-lmdb_0.79 -> upstream/py-lmdb_0.79
 * [new tag]     upstream/py-lmdb_0.80 -> upstream/py-lmdb_0.80
 * [new tag]     upstream/py-lmdb_0.81 -> upstream/py-lmdb_0.81
 * [new tag]     upstream/py-lmdb_0.82 -> upstream/py-lmdb_0.82
 * [new tag]     upstream/py-lmdb_0.83 -> upstream/py-lmdb_0.83
 * [new tag]     upstream/py-lmdb_0.84 -> upstream/py-lmdb_0.84
 * [new tag]     upstream/py-lmdb_0.85 -> upstream/py-lmdb_0.85
 * [new tag]     upstream/py-lmdb_0.86 -> upstream/py-lmdb_0.86
Branch pristine-tar set up to track remote branch pristine-tar from debian.
Branch debian/sid set up to track remote branch debian/sid from debian.
$

After the initial push, we need to configure the remote repository so that clones will checkout the debian/sid branch by default:

$ ssh git.debian.org
edmonds@moszumanska:~$ cd /srv/git.debian.org/git/pkg-db/py-lmdb.git/
edmonds@moszumanska:/srv/git.debian.org/git/pkg-db/py-lmdb.git$ git symbolic-ref HEAD refs/heads/debian/sid
edmonds@moszumanska:/srv/git.debian.org/git/pkg-db/py-lmdb.git$ logout
Shared connection to git.debian.org closed.

We can check if there are any updates in upstream's Git repository with the following command:

$ git fetch upstream --dry-run -v
From https://github.com/dw/py-lmdb
 = [up to date]      master -> upstream/master
 = [up to date]      release -> upstream/release
 = [up to date]      win32-sparse-patch -> upstream/win32-sparse-patch
 = [up to date]      last-cython-version -> upstream/last-cython-version
 = [up to date]      py-lmdb_0.1 -> upstream/py-lmdb_0.1
 = [up to date]      py-lmdb_0.2 -> upstream/py-lmdb_0.2
 = [up to date]      py-lmdb_0.3 -> upstream/py-lmdb_0.3
 = [up to date]      py-lmdb_0.4 -> upstream/py-lmdb_0.4
 = [up to date]      py-lmdb_0.5 -> upstream/py-lmdb_0.5
 = [up to date]      py-lmdb_0.51 -> upstream/py-lmdb_0.51
 = [up to date]      py-lmdb_0.52 -> upstream/py-lmdb_0.52
 = [up to date]      py-lmdb_0.53 -> upstream/py-lmdb_0.53
 = [up to date]      py-lmdb_0.54 -> upstream/py-lmdb_0.54
 = [up to date]      py-lmdb_0.56 -> upstream/py-lmdb_0.56
 = [up to date]      py-lmdb_0.57 -> upstream/py-lmdb_0.57
 = [up to date]      py-lmdb_0.58 -> upstream/py-lmdb_0.58
 = [up to date]      py-lmdb_0.59 -> upstream/py-lmdb_0.59
 = [up to date]      py-lmdb_0.60 -> upstream/py-lmdb_0.60
 = [up to date]      py-lmdb_0.61 -> upstream/py-lmdb_0.61
 = [up to date]      py-lmdb_0.62 -> upstream/py-lmdb_0.62
 = [up to date]      py-lmdb_0.63 -> upstream/py-lmdb_0.63
 = [up to date]      py-lmdb_0.64 -> upstream/py-lmdb_0.64
 = [up to date]      py-lmdb_0.65 -> upstream/py-lmdb_0.65
 = [up to date]      py-lmdb_0.66 -> upstream/py-lmdb_0.66
 = [up to date]      py-lmdb_0.67 -> upstream/py-lmdb_0.67
 = [up to date]      py-lmdb_0.68 -> upstream/py-lmdb_0.68
 = [up to date]      py-lmdb_0.69 -> upstream/py-lmdb_0.69
 = [up to date]      py-lmdb_0.70 -> upstream/py-lmdb_0.70
 = [up to date]      py-lmdb_0.71 -> upstream/py-lmdb_0.71
 = [up to date]      py-lmdb_0.72 -> upstream/py-lmdb_0.72
 = [up to date]      py-lmdb_0.73 -> upstream/py-lmdb_0.73
 = [up to date]      py-lmdb_0.74 -> upstream/py-lmdb_0.74
 = [up to date]      py-lmdb_0.75 -> upstream/py-lmdb_0.75
 = [up to date]      py-lmdb_0.76 -> upstream/py-lmdb_0.76
 = [up to date]      py-lmdb_0.77 -> upstream/py-lmdb_0.77
 = [up to date]      py-lmdb_0.78 -> upstream/py-lmdb_0.78
 = [up to date]      py-lmdb_0.79 -> upstream/py-lmdb_0.79
 = [up to date]      py-lmdb_0.80 -> upstream/py-lmdb_0.80
 = [up to date]      py-lmdb_0.81 -> upstream/py-lmdb_0.81
 = [up to date]      py-lmdb_0.82 -> upstream/py-lmdb_0.82
 = [up to date]      py-lmdb_0.83 -> upstream/py-lmdb_0.83
 = [up to date]      py-lmdb_0.84 -> upstream/py-lmdb_0.84
 = [up to date]      py-lmdb_0.85 -> upstream/py-lmdb_0.85
 = [up to date]      py-lmdb_0.86 -> upstream/py-lmdb_0.86

We can check if any co-maintainers have pushed updates to the git.debian.org repository with the following command:

$ git fetch debian --dry-run -v
From ssh://git.debian.org/git/pkg-db/py-lmdb
 = [up to date]      debian/sid -> debian/debian/sid
 = [up to date]      pristine-tar -> debian/pristine-tar
 = [up to date]      debian/0.84-1 -> debian/0.84-1
 = [up to date]      debian/0.86-1 -> debian/0.86-1
 = [up to date]      upstream/last-cython-version -> upstream/last-cython-version
 = [up to date]      upstream/py-lmdb_0.1 -> upstream/py-lmdb_0.1
 = [up to date]      upstream/py-lmdb_0.2 -> upstream/py-lmdb_0.2
 = [up to date]      upstream/py-lmdb_0.3 -> upstream/py-lmdb_0.3
 = [up to date]      upstream/py-lmdb_0.4 -> upstream/py-lmdb_0.4
 = [up to date]      upstream/py-lmdb_0.5 -> upstream/py-lmdb_0.5
 = [up to date]      upstream/py-lmdb_0.51 -> upstream/py-lmdb_0.51
 = [up to date]      upstream/py-lmdb_0.52 -> upstream/py-lmdb_0.52
 = [up to date]      upstream/py-lmdb_0.53 -> upstream/py-lmdb_0.53
 = [up to date]      upstream/py-lmdb_0.54 -> upstream/py-lmdb_0.54
 = [up to date]      upstream/py-lmdb_0.56 -> upstream/py-lmdb_0.56
 = [up to date]      upstream/py-lmdb_0.57 -> upstream/py-lmdb_0.57
 = [up to date]      upstream/py-lmdb_0.58 -> upstream/py-lmdb_0.58
 = [up to date]      upstream/py-lmdb_0.59 -> upstream/py-lmdb_0.59
 = [up to date]      upstream/py-lmdb_0.60 -> upstream/py-lmdb_0.60
 = [up to date]      upstream/py-lmdb_0.61 -> upstream/py-lmdb_0.61
 = [up to date]      upstream/py-lmdb_0.62 -> upstream/py-lmdb_0.62
 = [up to date]      upstream/py-lmdb_0.63 -> upstream/py-lmdb_0.63
 = [up to date]      upstream/py-lmdb_0.64 -> upstream/py-lmdb_0.64
 = [up to date]      upstream/py-lmdb_0.65 -> upstream/py-lmdb_0.65
 = [up to date]      upstream/py-lmdb_0.66 -> upstream/py-lmdb_0.66
 = [up to date]      upstream/py-lmdb_0.67 -> upstream/py-lmdb_0.67
 = [up to date]      upstream/py-lmdb_0.68 -> upstream/py-lmdb_0.68
 = [up to date]      upstream/py-lmdb_0.69 -> upstream/py-lmdb_0.69
 = [up to date]      upstream/py-lmdb_0.70 -> upstream/py-lmdb_0.70
 = [up to date]      upstream/py-lmdb_0.71 -> upstream/py-lmdb_0.71
 = [up to date]      upstream/py-lmdb_0.72 -> upstream/py-lmdb_0.72
 = [up to date]      upstream/py-lmdb_0.73 -> upstream/py-lmdb_0.73
 = [up to date]      upstream/py-lmdb_0.74 -> upstream/py-lmdb_0.74
 = [up to date]      upstream/py-lmdb_0.75 -> upstream/py-lmdb_0.75
 = [up to date]      upstream/py-lmdb_0.76 -> upstream/py-lmdb_0.76
 = [up to date]      upstream/py-lmdb_0.77 -> upstream/py-lmdb_0.77
 = [up to date]      upstream/py-lmdb_0.78 -> upstream/py-lmdb_0.78
 = [up to date]      upstream/py-lmdb_0.79 -> upstream/py-lmdb_0.79
 = [up to date]      upstream/py-lmdb_0.80 -> upstream/py-lmdb_0.80
 = [up to date]      upstream/py-lmdb_0.81 -> upstream/py-lmdb_0.81
 = [up to date]      upstream/py-lmdb_0.82 -> upstream/py-lmdb_0.82
 = [up to date]      upstream/py-lmdb_0.83 -> upstream/py-lmdb_0.83
 = [up to date]      upstream/py-lmdb_0.84 -> upstream/py-lmdb_0.84
 = [up to date]      upstream/py-lmdb_0.85 -> upstream/py-lmdb_0.85
 = [up to date]      upstream/py-lmdb_0.86 -> upstream/py-lmdb_0.86
$

We can check if anything needs to be pushed from our local repository to the git.debian.org repository with the following command:

$ git push debian --dry-run -v
Pushing to ssh://git.debian.org/git/pkg-db/py-lmdb.git
To ssh://git.debian.org/git/pkg-db/py-lmdb.git
 = [up to date]      debian/sid -> debian/sid
 = [up to date]      pristine-tar -> pristine-tar
 = [up to date]      debian/0.84-1 -> debian/0.84-1
 = [up to date]      debian/0.86-1 -> debian/0.86-1
 = [up to date]      upstream/last-cython-version -> upstream/last-cython-version
 = [up to date]      upstream/py-lmdb_0.1 -> upstream/py-lmdb_0.1
 = [up to date]      upstream/py-lmdb_0.2 -> upstream/py-lmdb_0.2
 = [up to date]      upstream/py-lmdb_0.3 -> upstream/py-lmdb_0.3
 = [up to date]      upstream/py-lmdb_0.4 -> upstream/py-lmdb_0.4
 = [up to date]      upstream/py-lmdb_0.5 -> upstream/py-lmdb_0.5
 = [up to date]      upstream/py-lmdb_0.51 -> upstream/py-lmdb_0.51
 = [up to date]      upstream/py-lmdb_0.52 -> upstream/py-lmdb_0.52
 = [up to date]      upstream/py-lmdb_0.53 -> upstream/py-lmdb_0.53
 = [up to date]      upstream/py-lmdb_0.54 -> upstream/py-lmdb_0.54
 = [up to date]      upstream/py-lmdb_0.56 -> upstream/py-lmdb_0.56
 = [up to date]      upstream/py-lmdb_0.57 -> upstream/py-lmdb_0.57
 = [up to date]      upstream/py-lmdb_0.58 -> upstream/py-lmdb_0.58
 = [up to date]      upstream/py-lmdb_0.59 -> upstream/py-lmdb_0.59
 = [up to date]      upstream/py-lmdb_0.60 -> upstream/py-lmdb_0.60
 = [up to date]      upstream/py-lmdb_0.61 -> upstream/py-lmdb_0.61
 = [up to date]      upstream/py-lmdb_0.62 -> upstream/py-lmdb_0.62
 = [up to date]      upstream/py-lmdb_0.63 -> upstream/py-lmdb_0.63
 = [up to date]      upstream/py-lmdb_0.64 -> upstream/py-lmdb_0.64
 = [up to date]      upstream/py-lmdb_0.65 -> upstream/py-lmdb_0.65
 = [up to date]      upstream/py-lmdb_0.66 -> upstream/py-lmdb_0.66
 = [up to date]      upstream/py-lmdb_0.67 -> upstream/py-lmdb_0.67
 = [up to date]      upstream/py-lmdb_0.68 -> upstream/py-lmdb_0.68
 = [up to date]      upstream/py-lmdb_0.69 -> upstream/py-lmdb_0.69
 = [up to date]      upstream/py-lmdb_0.70 -> upstream/py-lmdb_0.70
 = [up to date]      upstream/py-lmdb_0.71 -> upstream/py-lmdb_0.71
 = [up to date]      upstream/py-lmdb_0.72 -> upstream/py-lmdb_0.72
 = [up to date]      upstream/py-lmdb_0.73 -> upstream/py-lmdb_0.73
 = [up to date]      upstream/py-lmdb_0.74 -> upstream/py-lmdb_0.74
 = [up to date]      upstream/py-lmdb_0.75 -> upstream/py-lmdb_0.75
 = [up to date]      upstream/py-lmdb_0.76 -> upstream/py-lmdb_0.76
 = [up to date]      upstream/py-lmdb_0.77 -> upstream/py-lmdb_0.77
 = [up to date]      upstream/py-lmdb_0.78 -> upstream/py-lmdb_0.78
 = [up to date]      upstream/py-lmdb_0.79 -> upstream/py-lmdb_0.79
 = [up to date]      upstream/py-lmdb_0.80 -> upstream/py-lmdb_0.80
 = [up to date]      upstream/py-lmdb_0.81 -> upstream/py-lmdb_0.81
 = [up to date]      upstream/py-lmdb_0.82 -> upstream/py-lmdb_0.82
 = [up to date]      upstream/py-lmdb_0.83 -> upstream/py-lmdb_0.83
 = [up to date]      upstream/py-lmdb_0.84 -> upstream/py-lmdb_0.84
 = [up to date]      upstream/py-lmdb_0.85 -> upstream/py-lmdb_0.85
 = [up to date]      upstream/py-lmdb_0.86 -> upstream/py-lmdb_0.86
Everything up-to-date

Finally, in order to set up a fresh local clone of the git.debian.org repository that's configured like the local repository created above, we have to do the following:

$ git clone --origin debian ssh://git.debian.org/git/pkg-db/py-lmdb.git
Cloning into 'py-lmdb'...
remote: Counting objects: 3333, done.
remote: Compressing objects: 100% (1070/1070), done.
remote: Total 3333 (delta 2231), reused 3333 (delta 2231)
Receiving objects: 100% (3333/3333), 1.37 MiB | 1.11 MiB/s, done.
Resolving deltas: 100% (2231/2231), done.
Checking connectivity... done.
$ cd py-lmdb
$ git remote add --no-tags upstream https://github.com/dw/py-lmdb
$ git config --add remote.upstream.fetch 'refs/tags/*:refs/tags/upstream/*'
$ git fetch upstream
remote: Counting objects: 56, done.
remote: Total 56 (delta 25), reused 25 (delta 25), pack-reused 31
Unpacking objects: 100% (56/56), done.
From https://github.com/dw/py-lmdb
 * [new branch]      master -> upstream/master
 * [new branch]      release -> upstream/release
 * [new branch]      win32-sparse-patch -> upstream/win32-sparse-patch
$ git branch --track pristine-tar debian/pristine-tar
Branch pristine-tar set up to track remote branch pristine-tar from debian.
$ git config --add remote.debian.push 'refs/tags/debian/*'
$ git config --add remote.debian.push 'refs/tags/upstream/*'
$ git config --add remote.debian.push 'refs/heads/debian/*'
$ git config --add remote.debian.push 'refs/heads/pristine-tar'
$ git config --add remote.debian.fetch 'refs/tags/*:refs/tags/*'
$

This is a fair amount of effort beyond a simple git clone, though, so I wonder if anything can be done to optimize this.
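One low-tech option is to capture the steps above in a small shell script. The sketch below is not part of any packaging tool; it simply replays the exact commands from this post, so a co-maintainer gets an identically configured clone in one step:

#!/bin/sh
# Hypothetical convenience script: reproduces the clone-and-configure
# steps described above for the py-lmdb packaging repository.
set -e

git clone --origin debian ssh://git.debian.org/git/pkg-db/py-lmdb.git
cd py-lmdb
git remote add --no-tags upstream https://github.com/dw/py-lmdb
git config --add remote.upstream.fetch 'refs/tags/*:refs/tags/upstream/*'
git fetch upstream
git branch --track pristine-tar debian/pristine-tar
# Push and fetch refspecs for the debian remote, as configured earlier.
git config --add remote.debian.push 'refs/tags/debian/*'
git config --add remote.debian.push 'refs/tags/upstream/*'
git config --add remote.debian.push 'refs/heads/debian/*'
git config --add remote.debian.push 'refs/heads/pristine-tar'
git config --add remote.debian.fetch 'refs/tags/*:refs/tags/*'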


Wuinfo: How We Came Up with the CYouTube Module

Planet Drupal - Sat, 04/07/2015 - 14:07

Recently, I was working with a team on a project for Country Music Television (CMT) at Corus Entertainment. We needed to synchronize videos from two different sources, one of which is YouTube. There are over 200 YouTube channels on CMT, and we needed to pull videos from all of them regularly. That way, videos published on YouTube become available on CMT automatically, and the in-house editors do not need to spend time uploading them.

We store videos as file entities. The videos come from different sources, but all imported videos behave the same across the site; only the MIME type differs between videos from different sources. Each imported video is a file entity on CMT, and the front end consists of Views pages and blocks built on those video file entities.

Some videos are imported from another source, MPX thePlatform. We used "Media: thePlatform mpx" as the main module to import and update those videos, and we contributed a companion module, "Media: thePlatform MPX entity fields sync", to handle the customized video fields alongside the main module.

Having finished with the MPX thePlatform videos, we had experience importing video content. So how would we import the YouTube videos from over 200 channels?

At first, we planned to use the Feeds module. Since Google had just deprecated YouTube API v2.0, the old RSS channel feeds no longer worked. Thanks to the community, we found the feeds_youtube module, whose 7.x-3.x branch works with the latest YouTube API v3.0; that gave us a Feeds parser. We still needed a Feeds processor, and thanks to the community again, we found the feeds_files module. We installed those modules and their dependencies and spent a couple of days configuring Feeds. It did work, but in the end we gave up on that approach and decided to build a lightweight custom module that does everything from downloading the YouTube JSON data to creating the local file entities. Each video imported from a channel gets a link back to an artist or a hosting show.

What did we want from the module? It should check multiple YouTube channels, and whenever a new video is uploaded to a channel, create an associated file entity on CMT. In the created entity, we save some of the metadata from YouTube to entity fields, along with local metadata such as show and artist information. The module should handle over 200 channels, and perhaps thousands in the future, and it should do so gracefully, without burning out the system when importing thousands of videos. That sounds intimidating, but it is not, really.

We ended up building a module and contributing it to Drupal.org: Youtube Channel Videos Sync V3. Here is how we came up with it.

First of all, we gather the list of channel names, along with relevant local metadata such as the artist or show node ID. Then we send a request to YouTube to get the list of videos for each channel. For each video, we create a system queue task containing the video's data and the local metadata. Finally, a queue processor works through the tasks and creates the video file entities one by one.
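To make the queue step concrete, here is a minimal sketch using the Drupal 7 Queue API. The module prefix, queue name, and data keys are illustrative assumptions, not the actual names used by Youtube Channel Videos Sync V3:

<?php

/**
 * Implements hook_cron_queue_info().
 *
 * Registers a cron queue so Drupal processes queued videos a little at a
 * time instead of importing thousands of videos in one request.
 */
function mymodule_cron_queue_info() {
  return array(
    'mymodule_youtube_videos' => array(
      'worker callback' => 'mymodule_process_video',
      // Limit how long each cron run may spend on this queue.
      'time' => 60,
    ),
  );
}

/**
 * Enqueues one task per video, carrying both the YouTube data and the
 * local metadata (e.g. the referencing artist or show node ID).
 */
function mymodule_enqueue_video(array $video_data, array $local_metadata) {
  $queue = DrupalQueue::get('mymodule_youtube_videos');
  $queue->createItem(array(
    'video' => $video_data,
    'local' => $local_metadata,
  ));
}

/**
 * Worker callback: receives the data passed to createItem() and creates
 * the file entity for one video (module-specific, omitted here).
 */
function mymodule_process_video($item) {
  // Map $item['video'] and $item['local'] onto the new file entity's fields.
}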

A little more technical detail follows.

How do we get the list of YouTube channel names and the local metadata for each channel? On the module configuration page, we set the fields used to store the channel name; on CMT, that field exists on both show and artist nodes. The module walks through those nodes and builds an array of channel names together with the node IDs and their content type, like this:

  array(
    'channel name' => array(
      'field_artist_referenced' => array('nid', ...),
    ),
  )
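For illustration, collecting such an array with EntityFieldQuery might look like the sketch below; the channel-name field and the bundle names are assumptions, not the module's actual configuration:

// A sketch, assuming the channel name is stored in field_youtube_channel
// on 'artist' and 'show' nodes.
$channels = array();
$query = new EntityFieldQuery();
$result = $query->entityCondition('entity_type', 'node')
  ->entityCondition('bundle', array('artist', 'show'), 'IN')
  ->fieldCondition('field_youtube_channel', 'value', '', '<>')
  ->execute();
if (!empty($result['node'])) {
  foreach (node_load_multiple(array_keys($result['node'])) as $node) {
    $items = field_get_items('node', $node, 'field_youtube_channel');
    if ($items) {
      // Group node IDs under each channel name, keyed by the field that
      // should reference them on the imported video entity.
      $channels[$items[0]['value']]['field_artist_referenced'][] = $node->nid;
    }
  }
}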

How do we map YouTube metadata fields onto entity fields? On the module configuration page, we can set up a mapping between each YouTube metadata field and a local entity field.
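As a rough illustration of applying such a mapping (the field names and the shape of $video_data are assumptions, since the real mapping is built on the configuration page):

// $video_data is the decoded YouTube JSON for one video; $file is the
// file entity being created for it.
$mapping = array(
  'title'       => 'field_video_title',
  'description' => 'field_video_description',
);
foreach ($mapping as $youtube_key => $entity_field) {
  if (isset($video_data[$youtube_key])) {
    $file->{$entity_field}[LANGUAGE_NONE][0]['value'] = $video_data[$youtube_key];
  }
}
file_save($file);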

How do we set the local artist and show information on imported videos? The module provides a hook that a custom module can implement to supply that information. In the array above, "field_artist_referenced" is the machine name of a field on the video entity, and array('nid', ...) is the value for that field. This gives the imported video an entity reference pointing back to the artist or show node, which is one of the simplest ways to set up the relation.
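A sketch of what such a hook implementation might look like; the hook name, its signature, the variable name, and the field name are all assumptions for illustration, since the real hook is defined by the module:

/**
 * Hypothetical hook implementation: maps a channel name to the artist
 * node that the imported video entity should reference.
 */
function mymodule_cyoutube_local_metadata($channel_name) {
  // Assumes this module maintains a channel-to-artist lookup table.
  $map = variable_get('mymodule_channel_artist_map', array());
  if (isset($map[$channel_name])) {
    return array(
      'field_artist_referenced' => array($map[$channel_name]),
    );
  }
  return array();
}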

That is the overall process the module follows to import thousands of videos from YouTube.


Guido Günther: Debian work in June 2015

Planet Debian - Sat, 04/07/2015 - 12:46

June was the second month I contributed to Debian LTS under the Freexian umbrella. In total I spent ten hours working on:

Besides that, I did CVE triage of 17 CVEs to check whether and how they affect oldoldstable security. The information the Security team provides on these issues in data/CVE/list is an awesome help here, so I tried to be as verbose as possible when triaging CVEs that had not yet been looked at for Wheezy or Jessie.

On non-LTS time, I patched our lts-cve-triage tool to allow skipping packages that are already in dla-needed.txt. This avoids wasting time on CVEs that have already been triaged.

