Neil Williams: Extending an existing ARMMP initramfs

Planet Debian - Tue, 14/04/2015 - 14:27

The actual use of this extension is still in development and the log files are not currently publicly visible, but it may still be useful for people to know the what and why …

The Debian ARMMP kernel can be used for multiple devices, just changing the DTB. I’ve already done tests with this for Cubietruck and BeagleBone Black; the i.MX53 was one of the original test devices too. Whilst these tests can deploy a full image (there are examples of building such images in the vmdebootstrap package), it is a lot quicker to do simple tests of a kernel using a ramdisk. The default Debian initramfs has a focused objective but still forms a useful base for extension. In particular, I want to be able to test one initramfs on multiple boards (so multiple DTBs) with the same kernel image. I then want to be able, on selected boards, to mount a SATA drive or write an image to eMMC, a USB stick or whatever. LAVA (via the ongoing refactoring, not necessarily in the current dispatcher code) can automate such tests, e.g. to allow me to boot a Cubietruck into a standard Debian ARMMP armhf install on the SATA drive but using a modified (or updated) ARMMP kernel served over TFTP, without needing to install it on the device itself. That same kernel image can then be tested on multiple boards to see if the changes have benefited one board at the expense of another. Automating all of that could be of a lot of benefit to the ARM kernel developers in Debian and outside Debian.

So, the start point. Install Debian onto a Cubietruck – in my case, with a SATA drive attached. All well and good so far, standard Debian Jessie ARMMP. (Cubietruck uses the LPAE kernel flavour but that won’t matter for the initramfs.)

Rather than building an initramfs manually, extending the one from an installed device provides a shortcut – at some point I may investigate how to do this in QEMU, but for now it’s just as quick to SSH onto the Cubietruck and update.

I’ve already written a little script to download the relevant linux-image package for ARMMP, unpack it and pull out the vmlinuz, the dtbs and a selected list of modules. The list is selective because TFTP has a 32MB download limit and there are more modules than that. So I borrowed a snippet from the Xen folks (already shown previously here). The script is in a support repository for LAVA but can be used anywhere. (You’ll need to edit the package name in the script to choose between ARMMP and ARMMP LPAE.)
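The script itself lives in that repository; as a rough illustration only (this is a hedged sketch, not the actual LAVA support script – the `extract_kernel_bits` name and the argument layout are mine), the extraction step amounts to something like:

```shell
#!/bin/sh
# Sketch of the extraction step: given a linux-image package tree already
# unpacked (e.g. with `dpkg-deb -x`), pull out vmlinuz, the dtbs and a
# selected list of modules, keeping the result within the TFTP limit.
extract_kernel_bits() {
    tree="$1"     # unpacked linux-image package tree
    out="$2"      # destination directory
    modlist="$3"  # file listing wanted module names, one per line
    mkdir -p "$out/dtbs" "$out/modules"
    cp "$tree"/boot/vmlinuz-* "$out/"
    # ARMMP ships its dtbs under /usr/lib/linux-image-<version>/
    cp "$tree"/usr/lib/linux-image-*/*.dtb "$out/dtbs/" 2>/dev/null || true
    # Copy only the selected modules: the full set would not fit in 32MB
    while read -r mod; do
        find "$tree/lib/modules" -name "$mod.ko" -exec cp {} "$out/modules/" \;
    done < "$modlist"
    du -sh "$out"   # sanity-check the size against the TFTP limit
}
```

Typical use would be `extract_kernel_bits ./unpacked ./tftp ./modules.list`, after which the contents of `./tftp` can be served over TFTP.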

  1. Get a working initramfs from an installed device running Debian ARMMP and copy some files for use later. Note: use the name of the symlink in the copy so that the file in /tmp/ is the actual file, using the name of the symlink as the filename. This is important later as it saves a step of having to make the (unnecessary) symlink inside the initramfs. Also, mkinitramfs, which built this initrd.img file in the first place, uses the same shared libraries as the main system, so copying these into the initramfs still works. (This is really useful when you get your ramdisk to support the attached secondary storage, allowing you to simply mount the original Debian install and fix up the initramfs by copying files off the main Debian install.) The relevant files are to support DNS lookup inside the initramfs, which then allows a test to download a new image to put onto the attached media before rebooting into it.

     cp /boot/initrd.img-3.16.0-4-armmp-lpae /tmp/
     cp /lib/arm-linux-gnueabihf/libresolv.so.2 /tmp/
     cp /lib/arm-linux-gnueabihf/libnss_dns.so.2 /tmp/

    Copy these off the device for local adjustment:

    scp IP_ADDR:/tmp/FILE .
  2. Decompress the initrd.img:

     cp initrd.img-3.16.0-4-armmp-lpae initrd.img-3.16.0-4-armmp-lpae.gzip
     gunzip initrd.img-3.16.0-4-armmp-lpae.gzip
  3. Make a new empty directory:

     mkdir initramfs
     cd initramfs
  4. Unpack (the decompressed archive is in the parent directory):

     sudo cpio -id < ../initrd.img-3.16.0-4-armmp-lpae
  5. Remove the old modules (LAVA can add these later, allowing tests to use an updated build with updated modules):

     sudo rm -rf ./lib/modules/*
  6. Start to customise – we need a script for udhcpc and two of the libraries from the installed system to allow the initramfs to do DNS lookups successfully:

     cp ../libresolv.so.2 ./lib/arm-linux-gnueabihf/
     cp ../libnss_dns.so.2 ./lib/arm-linux-gnueabihf/
  7. Copy the udhcpc default script into place:

     mkdir ./etc/udhcpc/
     sudo cp ../udhcpc.d ./etc/udhcpc/default.script
     sudo chmod 0755 ./etc/udhcpc/default.script
  8. Rebuild the cpio archive:

     find . | cpio -H newc -o > ../initrd.img-armmp.cpio
  9. Recompress:

     cd ..
     gzip initrd.img-armmp.cpio
  10. If using U-Boot, add the U-Boot header:

      mkimage -A arm -T ramdisk -C none -d initrd.img-armmp.cpio.gz initrd.img-armmp.cpio.gz.u-boot
  11. Checksum the final file so that you can check that against the LAVA logs:

      md5sum initrd.img-armmp.cpio.gz.u-boot

Each type of device will need a set of modules modprobed before tests can start. With the refactoring code, I can use an inline YAML and use dmesg -n 5 to reduce the kernel message noise. The actual module names here are just those for the Cubietruck but by having these only in the job submission, it makes it easier to test particular combinations and requirements.

- dmesg -n 5
- lava-test-case udevadm --shell udevadm hwdb --update
- lava-test-case depmod --shell depmod -a
- lava-test-case sata-mod --shell modprobe -a stmmac ahci_sunxi sd_mod sg ext4
- lava-test-case ifconfig --shell ifconfig eth0 up
- lava-test-case udhcpc --shell udhcpc
- dmesg -n 7

In due course, this will be added to the main LAVA documentation to allow others to keep the initramfs up to date and to support further test development.

Categories: Elsewhere

ERPAL: 3 things to consider when creating project specifications

Planet Drupal - Tue, 14/04/2015 - 12:45

In the last part of our series, we talked about "Agile work at a fixed price". We realized that detailed requirements in terms of a project specification are the key to agile management of fixed-price projects. Today, we’ll deal with those project specifications.

A “specification” describes the results or certain milestones of a project. Thus, it defines what we have to measure in order to find out whether the project is finished, that is: either the requirements are fulfilled – or they’re not. This point harbors the greatest potential for conflict! Neither restrictive contracts nor other feats of contractual artistry can help here. Only when both parties know exactly what has to have been implemented by the end of the project can you:

  • Show, prove, demonstrate and understand that everything that should have been done has actually been done
  • Check whether a new request is indeed new during the project
  • Find out whether changes have negatively affected the software (change management and risk management)

Using some negative examples of a specification, I’ll try to demonstrate what to avoid during a project.

1) Avoid ambiguous wording in your specifications

"We integrate social media functions." What does that really mean? The developer may understand this to include a Facebook Like button, a Google +1 button and a Tweet this button. In fact, what the customer would like is to have a portal for his Facebook app. It’s purely a matter of interpretation what social media functions really are and how they should be integrated. Always check that your requirements are clear and without ambiguous wordings.

2) Avoid comparisons

"We implement web pages with the same functions as those of awesome-competitor.com." No one knows exactly which functions the competitor’s websites possess in detail. Here again, two different expectations would collide at the end of the project. As the provider, you don’t know for sure which features are implemented in the backend. However, if you agree to the statement above, then you must provide these functions. Arguing after the fact with statements like "But I didn’t know that..." doesn’t suffice. The extra costs can be enormous! So, avoid comparisons with other systems in your specification. They might save time in the beginning, but by the end of the project one of the parties could face more than twice the expected costs, with no way left to control them.

3) Write clear definitions

"We import the current data of the previous software." The data format for new software is usually not the same as for the previous version. Here, it’s important to clarify how the import should take place. Which old fields should be mapped to which new fields? Which validations should be processed, and, most importantly, what does the data format of the previous version look like, exactly, and how can you get this data and map it to the new structure? Clarify these points up front in order to avoid explosive increases in the effort required. In this case, it’s hard to argue using experience from past projects, because it implies that imports in previous projects are similar to the case at hand, which may be true – but usually isn’t.

"We implement ... according to the usual ..." What’s usual here and who defines what’s normal? Make absolutely clear that both parties are talking about the same thing. Otherwise, two worlds will again clash over their differing expectations, which can be difficult to reconcile. Instead, refer to or quote the text that clearly defines "... the usual ..." and the requirements. Then everyone involved knows what the wording means.

There are countless other formulations that you should avoid; the above are simply the most common. Detailed requirements engineering is always a good investment for both project parties, providing a solid basis for project success. Additionally, relevant user stories with related acceptance criteria can help to clarify the project deliverables.

Incorrect specification happens!

Specifications are wrong if they don’t serve the overall project goal. A short example: the sales manager of a company orders an app to support the sales team. The software is developed according to his requirements. However, it never gets adopted, because no one involved the sales team and asked them for their requirements. Take the conditions of each case into account: nothing is more dissatisfying for both sides than fully-developed software that can’t be used because it doesn't deliver value to the users or the company as a whole. You should pay attention to these conditions right at the start of the project, both as a supplier and as a customer. During the analysis of requirements, involve all the stakeholders.

Finally, the specification also serves to keep the documentation effort low, because it already describes what the final product looks like. It also provides for good planning and systematic change management to ensure that the software stays stable. Imagine you’re building a house and want to combine the kitchen and the living room. For this, you only need to remove one wall. However, if this is a load-bearing wall, the floor above will collapse onto your head as the whole house caves in. This should be prevented at all costs, so be attentive and take all the challenges listed into account!

In the next part in our series, we examine responsibilities and communication in projects.

Other blog posts of this series:

These 3 questions help you to ensure satisfactory project results

Setting objectives in projects with these 3 rules

Agile projects for a fixed price? Yes you can!

Categories: Elsewhere

Steindom LLC: Sorting a view by a list field's allowed values

Planet Drupal - Tue, 14/04/2015 - 11:52

There's a neat feature in MySQL which lets you sort a result set by arbitrary field values. It's the ORDER BY FIELD() function. Here's how to leverage this in your Drupal views.

Let's say you have a field in your Article content type called Status, and it has the following allowed values:

Draft
Pending Approval
Published
Postponed
Canceled

It can be very helpful to sort the articles by status. You could key your allowed values with alphabetical prefixes, numbers, etc. But let's say you didn't. Or don't want to.

With bare MySQL, the query would look something like this (not an actual Drupal query, but used to illustrate how FIELD() works):

SELECT *
FROM articles
ORDER BY FIELD(status, 'Draft', 'Pending Approval', 'Published', 'Postponed', 'Canceled')

This is now possible in Drupal & Views with the Views List Sort module, which creates a sort handler that populates the FIELD() sort with the allowed values of a given "List (text)" field.

Using it is easy: just add the "List (text)" field to your sort criteria, and set "Sort by allowed values" to "yes".
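Outside MySQL, the effect of FIELD() is easy to see: rank each row by its value's position in the allowed-values list, then sort on that rank. Here is a small shell illustration of that idea (my own sketch; `order_by_field` is not part of Drupal, MySQL or the module):

```shell
#!/bin/sh
# Emulate ORDER BY FIELD(): sort stdin lines by their position in an
# allowed-values file (one value per line, in the desired order).
order_by_field() {
    # $1: file with the allowed values; rows arrive on stdin.
    # First pass records each value's rank, second pass tags each row
    # with its rank; then sort numerically and strip the rank again.
    awk 'NR==FNR { rank[$0] = FNR; next } { print rank[$0] "\t" $0 }' "$1" - |
        sort -n | cut -f2-
}
```

For example, `printf 'Published\nDraft\n' | order_by_field allowed.txt` prints Draft before Published when the allowed-values file lists Draft first.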

Submitted by Joel Stein on April 14, 2015. Tags: Drupal, Drupal 7, Drupal Planet
Categories: Elsewhere

J-P Stacey: Safe, performant generation of a unique Drupal username

Planet Drupal - Tue, 14/04/2015 - 10:53

There are examples out there for generating a unique Drupal username. The usual technique is to continue incrementing a numeric suffix until an unused name is found. There's also a project to automatically generate usernames for new users. All of this makes sense and works, but compared to the existing solutions, I wanted one that focussed on encapsulation and stability; by which I mean it should:

Read more of "Safe, performant generation of a unique Drupal username"

Categories: Elsewhere

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, March 2015

Planet Debian - Tue, 14/04/2015 - 10:37

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In February, 61 work hours have been equally split among 4 paid contributors. Their reports are available:

The remaining hours of Ben and Holger have been redispatched to other contributors for April (during which Mike Gabriel joins the set of paid contributors). BTW, if you want to join the team of paid contributors, read this and apply!

Evolution of the situation

April has seen no change in terms of sponsored hours but we have two new sponsors in the pipe and May should hopefully have a few more sponsored hours.

For the needs of an LTS presentation I gave during the Mini-DebConf in Lyon, I prepared a small graph showing the evolution of the hours sponsored through Freexian:

The growth is rather slow and it will take years to reach our goal of funding the equivalent of a full-time position (176 hours per month). Even the intermediary goal of funding the equivalent of a half-time position (88 hours/month) is more than 6 months away given the current growth rate. But the perspective of Wheezy LTS should help us to convince more organizations and hopefully we will reach that goal sooner. If you want to sponsor the project, check out this page.

In terms of security updates waiting to be handled, the situation looks similar to last month: the dla-needed.txt file lists 40 packages awaiting an update (exactly like last month), the list of open vulnerabilities in Squeeze shows about 56 affected packages in total (2 less than last month).

Thanks to our sponsors

The new sponsors of the month are in bold (none this month).


Categories: Elsewhere

Mario Lang: Bjarne Stroustrup talking about organisations that can raise expectations

Planet Debian - Tue, 14/04/2015 - 10:13

At time index 22:35 in this video, Bjarne Stroustrup explains what he thinks is very special about organisations like Cambridge or Bell Labs. When I heard him explain this, I couldn't help but think of Debian. This is exactly how I felt (and actually still do) when I joined Debian as a Developer in 2002. This is what makes Debian, amongst other things, very special to me.

If you don't want to watch the video, here is the excerpt I am talking about:

One of the things that Cambridge could do, and later Bell Labs could do, is somehow raise people's expectations of themselves. Raise the level that is considered acceptable. You walk in and you see what people are doing, you see how people are doing, you see how apparently easily they do it, and you see how nice they are while doing it, and you realize, I better sharpen up my game. This is something where you have to, you just have to get better. Because, what is acceptable has changed. And some organisations can do that, and well, most can't, to that extent. And I am very very lucky to be in a couple places that actually can increase your level of ambition, in some sense, level of what is a good standard.
Categories: Elsewhere

Michal Čihař: Hacking Gammu

Planet Debian - Tue, 14/04/2015 - 08:30

I've spent the first day of SUSE Hackweek on Gammu. There are quite a few tasks to be done and I wanted to complete at least some of them.

First I started with the website. I did not really like the old layout and aggressive colors, and while touching its code it's a good idea to make the website work well on mobile devices. I've started with conversion to Bootstrap and it turned out to be quite an easy task. The next step was making the pages simpler, as in many places there was too much information hidden in the sidebar. While doing content cleanup, I've removed some features which really don't make much sense these days (such as mirror selection). Anyway, read more in the news entry on the site itself.

The second big task was to add support for Python 3 in python-gammu. It seems the world is finally, slowly moving towards Python 3, and people have started to request that python-gammu be available there as well. The porting itself took quite some time, but I had mostly completed it before Hackweek. Yesterday, some time was spent on polishing and releasing standalone python-gammu and Gammu without the Python bindings. Now you can build python-gammu using distutils or install it using pip install python-gammu.

Filed under: English, Gammu, python-gammu, SUSE, Wammu

Categories: Elsewhere

Steve Kemp: Subject - Verb Agreement

Planet Debian - Tue, 14/04/2015 - 02:00

There's pretty much no way that I can describe the act of cutting a live, 240V mains-voltage, wire in half with a pair of scissors which doesn't make me look like an idiot.

Yet yesterday evening that is exactly what I did.

There were mitigating circumstances, but trying to explain them would make little sense unless you could see the scene.

In conclusion: I'm alive, although I almost wasn't.

My scissors? They have a hole in them.

Categories: Elsewhere

Drupal @ Penn State: Install ELMSLN on Digital Ocean in one line

Planet Drupal - Tue, 14/04/2015 - 01:06

This screencast shows how you can use a cloud provider like Digital Ocean to install a working copy of ELMSLN by copying and pasting the following line into the terminal:

yes | yum -y install git && git clone https://github.com/btopro/elmsln.git /var/www/elmsln && bash /var/www/elmsln/scripts/install/handsfree/centos/centos-install.sh elmsln ln elmsln.dev http email@elmsln.dev yes

Categories: Elsewhere

Mehdi Dogguy: DPL campaign 2015

Planet Debian - Tue, 14/04/2015 - 00:28
This year's DPL campaign is over and the voting period is almost over too. Many have not voted yet and they really should consider doing so; this is meant as a reminder for them. If you didn't have time to dig into debian-vote's archives and read the questions and answers, here is the list with links to the candidates' replies:
Compared to past years, we had a comparable number of questions. Not all of the questions started big threads, as was sometimes the case in the past :-) The good side of this is that we are trolling DPL candidates less than we used to :-P

Now, if you still didn't vote, it is really time to do so. The voting period ends on Tuesday, April 14th, 23:59:59 UTC, 2015. You have only a few hours left!
Categories: Elsewhere

Gizra.com: Shoov - CI tests on the live site

Planet Drupal - Mon, 13/04/2015 - 23:00

Shoov keeps evolving, and now has an example repo that demonstrates how we're trying to make UI regression testing simpler. With that in place, we took some time to implement the second feature we were missing - automatic testing on the live site.

We saw a very strange situation everywhere we looked: dev teams were writing amazing test coverage. They were going the extra mile to set up a Travis box with an environment as close as possible to the live site. They tested every single feature, and added a regression test for every bug. Heck, every commit triggered a test suite that ran for an hour before being carefully reviewed and merged.

And then the site goes live - and at best they might add Pingdom monitoring to check it's working. Pingdom at its simplest form sends an http request every minute to your site. If the answer is 200 - it means that all is good in the world. Which is of course wrong.
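To make the contrast concrete, here is roughly all a bare uptime check does (my own sketch, not Shoov's or Pingdom's API; `classify` and `check_url` are illustrative names): fetch the page and declare victory on any HTTP 200, regardless of whether the page actually works for real users.

```shell
#!/bin/sh
# A naive Pingdom-style check: HTTP 200 means "up", anything else "down".
classify() {
    case "$1" in
        200) echo "up" ;;
        *)   echo "down ($1)" ;;
    esac
}

check_url() {
    # Requires curl and network access; prints "up" or "down (<code>)".
    classify "$(curl -s -o /dev/null -w '%{http_code}' "$1")"
}
```

A broken login form, an empty product list or a stack trace rendered into the page all still return 200, which is exactly the blind spot functional tests on the live site are meant to cover.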

Our mission is to change this, and bring functional testing to the live site. One that is easy to setup and that integrates with your existing testing and GitHub flow.

The Drupal backend holds the CI build data, including the full log, and status

While Pingdom is wonderful and is alerting us on time whenever a site goes down, its "page is fine, move along" approach doesn't cut it for us. Here are some examples why testing on the production server is a good idea:

Continue reading…

Categories: Elsewhere

Drupal @ Penn State: We stole this site, you should too

Planet Drupal - Mon, 13/04/2015 - 21:56

Welcome to the new Drupal @ PSU!

We hope you enjoy the site so much that we want you to have it. No really, go ahead, take it. Steal this site. We did, and we’re proud of that fact. This site is actually a fork of the Office of Digital Learning’s new site that just launched recently.

Categories: Elsewhere

Mediacurrent: Drupalcon LA - Mediacurrent’s Gameplan

Planet Drupal - Mon, 13/04/2015 - 21:02

We’re gearing up for Drupalcon 2015 in sunny Los Angeles and we are looking forward to the exciting plans we have in store. We are Platinum sponsors once again and there are a ton of ways to connect with our team. In fact, here are the highlights:

Categories: Elsewhere

Santiago García Mantiñán: haproxy as a very very overloaded sslh

Planet Debian - Mon, 13/04/2015 - 20:38

After using haproxy at work for some time I realized that it can be configured for a lot of things. For example: it knows about SNI (in SSL, this is the method we use to know which host the client is trying to reach, so that we know which certificate to present and can thus multiplex several virtual hosts on the same SSL IP:port) and it also knows how to make transparent proxy connections (the connections go through haproxy, but the end server will think they are arriving directly from the client, as it will see the client's IP as the source IP of the packets).

With these two little features, which are available in haproxy 1.5 (Jessie's version has them all), I thought I could give it a try to substitute haproxy for sslh, giving me a lot of possibilities that sslh cannot offer.

Having this in mind, I thought I could multiplex several SSL services – not only HTTPS but also OpenVPN or similar – on port 443, and also allow these services to arrive transparently at the final server. Thus, what I wanted was not to mimic sslh (which can be done with haproxy) but to get the semantics I needed, which are similar to sslh's but with more power and slightly different behaviour, because I liked it that way.

There is, however, one caveat that I don't like about this setup: to achieve the transparency one has to run haproxy as root, which is not really something one likes :-( So, having transparency is great, but we'd be taking some risks here which I personally don't like; to me it isn't worth it.

Anyway, here is the setup. It basically consists of the haproxy configuration, but if we want transparency we'll also have to add a routing and iptables setup. I'll describe the whole setup here.

Here is what you need to define on /etc/haproxy/haproxy.cfg:

frontend ft_ssl
    bind
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    acl sslvpn req_ssl_sni -i vpn.example.net
    use_backend bk_sslvpn if sslvpn
    use_backend bk_web if { req_ssl_sni -m found }
    default_backend bk_ssh

backend bk_sslvpn
    mode tcp
    source usesrc clientip
    server srvvpn vpnserver:1194

backend bk_web
    mode tcp
    source usesrc clientip
    server srvhttps webserver:443

backend bk_ssh
    mode tcp
    source usesrc clientip
    server srvssh sshserver:22

An example of a transparent setup can be found here, but it lacks some details; for example, if you need to redirect the traffic to the local haproxy you'll want to use xt_TPROXY – there is a better doc for that at squid's wiki. Anyway, if you are playing just with your own machine, like we typically do with sslh, you won't need the TPROXY power, as packets will come straight to your 443, so haproxy will be able to get them without any problem.

The problem will come if you are using transparency (source usesrc clientip), because then packets coming out of haproxy will be carrying the IP of the real client, and thus the answers of the backend will go to that client (but with different ports and other TCP data), so it will not work. We'll have to get those packets back to haproxy; to do that, we'll mark the packets with iptables and then route them to the loopback interface using advanced routing. This is where all the examples will tell you to use iptables' mangle table with rules marking on PREROUTING, but that won't work out if you are running the whole setup (frontend and backends) on just one box; instead you'll have to write those rules to work on the OUTPUT chain of the mangle table, having something like this:

*mangle
:PREROUTING ACCEPT
:INPUT ACCEPT
:FORWARD ACCEPT
:OUTPUT ACCEPT
:POSTROUTING ACCEPT
:DIVERT -
-A OUTPUT -s public_ip -p tcp --sport 22 -o public_iface -j DIVERT
-A OUTPUT -s public_ip -p tcp --sport 443 -o public_iface -j DIVERT
-A OUTPUT -s public_ip -p tcp --sport 1194 -o public_iface -j DIVERT
-A DIVERT -j MARK --set-mark 1
-A DIVERT -j ACCEPT
COMMIT

Take that just as an example; better suggestions on how to select the traffic to send to DIVERT are welcome. The point here is that if you are sending the service to some other box you can do it on PREROUTING, but if you are sending the service to the very same box as haproxy you'll have to mark the packets on the OUTPUT chain.

Once we have the packets marked we just need to route them, something like this will work out perfectly:

ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100

And that's all for this crazy setup. Of course, if, like me, you don't like the root implication of the transparent setup, you can remove the "source usesrc clientip" lines from the backends and forget about transparency (connections to the backend will come from your local IP); then you'll be able to run haproxy with dropped privileges, and you'll just need the plain haproxy.cfg setup, not the weird iptables and advanced routing setup.
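For reference, a non-transparent backend from the configuration above would then reduce to this (same names as before, just without the usesrc line):

```
backend bk_web
    mode tcp
    server srvhttps webserver:443
```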

Hope you like the article. BTW, I'd like to point out the main difference between this setup and sslh: I'm only sending the packets to the SSL services if the client sends SNI info, and otherwise I'm sending them to the SSH server, while sslh will send SSL clients without SNI to the SSL service as well. If your setup mimics sslh and you want to comment on it, feel free to do so.

Categories: Elsewhere

LevelTen Interactive: DrupalCamp PHX Session: Drupal as an Inbound Marketing Platform

Planet Drupal - Mon, 13/04/2015 - 20:32

A few weeks ago, Brent Bice attended DrupalCamp PHX to host a session on Drupal as an Inbound Marketing Platform. A video of the session's slide presentation, along with the audio, has been made available for everyone to watch and listen to.... Read more

Categories: Elsewhere

Liran Tal's Enginx: The Drupal Rap song – Everyday I’m Drupalin’

Planet Drupal - Mon, 13/04/2015 - 19:26

This YouTube video doesn’t need any further explanation besides its title: The Drupal Rap song – Everyday I’m Drupalin’





Everyday I’m drupalin

Where them forms you gettin fapi with I’m the fapi boss/ hookin into edit form and webforms is my specialty sauce/ I’ll hook form alter by form id’s or entities/ put a list on Ajax/ just to keep it callin back/

I got them distrobutions, I’m like acqia/
Check my public repos, I didn’t copy nuttin/ I know dries n webchick, I kno Ryan szrama/ all the commerce guys we hipchat when they got some drama/
Might not be pretty code but it gets me paid/ I’m using rules like php loopin through arrays/ I put it all in features, so the code is stable/ it might take longer, but next time I just click enable/ These dudes clearin caches, on every hook init/ queries by thousands, page loads by the minutes

No matter the language we compress it hard/ drugs cc all, we just drugs cc all/
Where’s all of the changes, you never saw/ so drush cc all, we just drugs cc all/ I lean heavy on smacss, compass compilin my sass/ you just installed flexslider now you teachin a class/
I seen your content types, I don’t need to kno you/ to know that we ain’t even in the same nodequeue/
I’m on drupal answers, check my reputation/ I’m on my tablet earnin karma while I’m on vacation/ ya girl like a module, she stay hookin n/ you couldn’t code an info file, without lookin in/
Mo scrums, equals better sprints, break the huddle, n the work begins

Thanks to New Valley Media for helping with the video http://www.newvalleymedia.com/
Thanks to Broadstreet Consulting http://www.broadstreetconsulting.net


The post The Drupal Rap song – Everyday I’m Drupalin’ appeared first on Liran Tal's Enginx.

Categories: Elsewhere

Drupal Association News: Drupal Newsletter: Jobs, Events, News, and Conversation

Planet Drupal - Mon, 13/04/2015 - 18:58

A long, long time ago—7 years, if you remember—the Drupal Newsletter faded away. On March 26th, the Drupal Association rebooted it. The community does so much that we want to share.

We partnered with TheWeeklyDrop to bring blog posts, articles, podcasts, and more to your inbox. Now, once a week, we’re taking all the effort out of keeping up with the best in Drupal news and events.

The fourth issue hit more than 32,000 inboxes on April 9. Inside it, subscribers from all around the world found Drupal 7.36 and Webform 7.x-3.24 releases, an introduction to D8Upgrade.org (a service offering advice for when you should upgrade to Drupal 8), and more.

To get the newsletter, subscribe via your Drupal.org profile.

The (Renewed) Drupal Newsletter

The Drupal Newsletter will be an opt-in-only thing. Once you’ve subscribed, you’ll get the newsletter in your inbox once a week, every Thursday, at about 06:30 PT / 13:30 GMT.

What kind of content will you get?

  • Drupal 8 progress updates
  • Jobs, so you can find work (or people who get work done)
  • Tutorials, guides, and podcasts
  • Events throughout the community
  • Projects and releases
  • News and conversation

It’s all brought to you by TheWeeklyDrop and us, the Drupal Association. It’s content hand-picked by humans, not bots or aggregators. You’ll get an uncluttered, distraction-free snapshot of the latest from the Drupal community. (Though we could be swayed by community vote to add gratuitous pictures of cats.)

It’s Like the Amazon Dash Button

Ok, no, it’s not. That’s not true. Unless you want it to be, in which case it sort of is.

Subscribe and never run out of the latest news, announcements, and innovations from the Drupal community. We made an animated gif to show you how.

  1. Log in to your Drupal.org profile <www.drupal.org/user>.
  2. Choose Edit.
  3. Scroll to the bottom, to the Subscribe to section.
  4. Check the box next to Drupal Weekly Newsletter.
  5. Hit the Save button.
Keep Up with the Drupal Community

The Drupal Newsletter is the easiest way to keep up with the Drupal community. Don’t already have a Drupal.org account? Create your profile today.

Oh, and two more things:

  1. Please add newsletter@drupal.org to your address book as an approved sender, so the newsletter doesn’t get lost in a pesky spam folder.
  2. Tell us what you think. Comment on this post, or send feedback to newsletter@drupal.org. We’d love to hear from you.
Personal blog tags: Drupal Newsletter
Categories: Elsewhere

Drupal @ Penn State: The year, is 2020.

Planet Drupal - Mon, 13/04/2015 - 18:46

The year is 2020.

Categories: Elsewhere

Drupal core announcements: Drupal core security release window on Wednesday, April 15

Planet Drupal - Mon, 13/04/2015 - 16:51
Start: 2015-04-15 (All day) America/New_York
Online meeting (e.g. IRC meeting)
Organizers: David_Rothstein

The monthly security release window for Drupal 6 and Drupal 7 core will take place on Wednesday, April 15.

This does not mean that a Drupal core security release will necessarily take place on that date for either the Drupal 6 or Drupal 7 branches, only that you should prepare to look out for one (and be ready to update your Drupal sites in the event that the Drupal security team decides to make a release).

There will be no bug fix/feature release on this date; the next window for a Drupal core bug fix/feature release is Wednesday, May 6.

For more information on Drupal core release windows, see the documentation on release timing and security releases, and the discussion that led to this policy being implemented.

Categories: Elsewhere

