Elsewhere

Drupalize.Me: Careful With That Debug Syntax

Planet Drupal - Fri, 27/12/2013 - 14:53

A funny thing happened last week. On Wednesday, we performed our weekly code deployment and released a handful of new features/bug fixes to the site. And then, about an hour later, someone on the team found this:

Notice the extra "asdf fdsa" in there? It's okay if you didn't, because neither did we. How did this happen? Don't you guys have a review process? I would have never let this happen on my project.

Related Topics: debugging, Development
Categories: Elsewhere

Chris Hertzog: Introducing Pinger and PingCheck.in

Planet Drupal - Fri, 27/12/2013 - 12:59

As the owner of a small development shop, I deal one on one with all of my clients. Most of the time these are happy/pleasant exchanges. But on occasion (usually when I'm on vacation, in the middle of an 8-hour meeting, or somewhere my access to the internet is severely hindered), I get an email/text/call from a client whose site is down. Whatever the form of communication, these exchanges are always written in caps lock.

Any website developer can give you a laundry list of why a site may be down or is not responding. Traffic spikes, server malfunctions, power outages, hacking attempts, etc. But most website owners don't care why the site isn't responding. They just want it fixed. An hour ago.

So in an effort to minimize these unpleasant events, I looked for some website monitoring tools. There are many out there with lots of options. But none really fit my use case. Plus, none were built in Drupal :).

I have a client dashboard system where my clients manage their account. They can view site analytics, submit support tickets, pay invoices, etc. I wanted to be able to track website downtime and have it available to them. Not only would it be a plus for them, but it could help me identify if issues were isolated to their site, or affecting our entire infrastructure.

Step In Pinger

Pinger is a simple module that leverages the Drupal Queue API, cron, and drupal_http_request(). In a nutshell, you tell Pinger to monitor a handful of URLs, and it will check each URL when cron runs. Over and over again.

Elysia Cron is recommended to help manage cron settings, so you can have pinger_cron() run more frequently than other modules' cron hooks (plus it helps run the queue).

Pinger saves each response as an entity (so the Entity API can be leveraged). It records the HTTP status code, the duration of the request, and the timestamp. Additionally, the information is exposed to Views, so you can generate reports of outages, slow responses, etc.
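To give a sense of how those pieces fit together, here is a minimal Drupal 7 sketch of the pattern described above. The pinger_example_* names, the queue name, and the variable holding the URL list are illustrative assumptions, not Pinger's actual code (Pinger stores the result as an entity; this sketch just logs it):

<?php
/**
 * Implements hook_cron().
 *
 * Queues every monitored URL for a check on each cron run.
 */
function pinger_example_cron() {
  $queue = DrupalQueue::get('pinger_example_checks');
  // Hypothetical variable holding the list of URLs to monitor.
  foreach (variable_get('pinger_example_urls', array()) as $url) {
    $queue->createItem($url);
  }
}

/**
 * Implements hook_cron_queue_info().
 */
function pinger_example_cron_queue_info() {
  return array(
    'pinger_example_checks' => array(
      'worker callback' => 'pinger_example_check_url',
      'time' => 30,
    ),
  );
}

/**
 * Queue worker: requests the URL and records the outcome.
 */
function pinger_example_check_url($url) {
  $start = microtime(TRUE);
  $response = drupal_http_request($url);
  // Pinger records the status code, duration and timestamp as an
  // entity; here we simply log them.
  watchdog('pinger_example', '@url returned @code in @sec s', array(
    '@url' => $url,
    '@code' => $response->code,
    '@sec' => round(microtime(TRUE) - $start, 3),
  ));
}
?>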

PingCheck.in

I decided to launch PingCheck.in as a hosted version of Pinger. The service will check your website (perform an HTTP GET request) every 5 or 10 minutes (depending on subscription), and send a message to up to 5 different email addresses when it receives something other than a 200 OK status.

We have plans of different sizes for different needs. The personal website plan is free. Forever. Additionally, we are offering a 6-month free trial of any paid plan with coupon code "drupal6".

Please test out and review the module yourselves, or sign up for the service. Comments and feedback on both are greatly appreciated.

Category: Drupal Planet
Categories: Elsewhere

John Goerzen: Richly Blessed

Planet Debian - Fri, 27/12/2013 - 06:16

“It’s wedding week! Wedding week! Wedding week! Wedding week! Oh, also Christmas. Oh dad, it’s wedding week! I can’t believe it! It’s finally here! Wedding week!” – Jacob, age 7, Sunday

“Oh dad, this is the best Christmas EVER!” – Jacob, Wednesday

“Dad, is the wedding TODAY?” – Oliver, age 4, every morning this week

This has certainly been a Christmas like no other. I have never known something to upstage Christmas for Jacob, but apparently a wedding can!

Laura and I got to celebrate our first Christmas together this year — together, of course, with the boys. We enjoyed a wonderful day in the middle of a busy week, filled with play, family togetherness, warmth, and happiness. At one point, while I was helping the boys with their new model train components, Laura was enjoying playing Christmas tunes on the piano. Every time she’d reach the end, Jacob paused, and said, “That was awesome!”, beating me to it.

That’s a few days before Christmas — Jacob and Oliver demanding snow ice cream, and of course who am I to refuse?

Cousins opening presents

After his school Christmas program, Jacob has enjoyed singing. Here he is after the Christmas Eve program, where he excitedly ran up into the choir loft, picked up a hymnal, and pretended to sing.

And, of course, opening of presents at home.

Sometimes I think about how I didn’t know life could get this good. Soon Laura and I will be married, and it will be even better. Truly we have been richly blessed.

Categories: Elsewhere

Russell Coker: Sound Device Order with ALSA

Planet Debian - Fri, 27/12/2013 - 04:35

One problem I have had with my new Dell PowerEdge server/workstation [1] is that sound doesn't work correctly. When I initially installed it, things were OK, but after installing a new monitor, sound stopped working.

The command “aplay -l” showed the following:
**** List of PLAYBACK Hardware Devices ****
card 0: Generic [HD-Audio Generic], device 3: HDMI 0 [HDMI 0]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 1: Speaker [Logitech USB Speaker], device 0: USB Audio [USB Audio]
  Subdevices: 1/1
  Subdevice #0: subdevice #0

So the HDMI sound hardware (which had no speakers connected) became ALSA card 0 (the default for playback) and the USB speakers became card 1. It should be possible to configure KDE to use card 1 and then have other programs inherit this, but I wasn't able to get that working on Debian/Wheezy.

My first attempt at solving this was to blacklist the HDMI and motherboard drivers (as suggested by Lindsay on the LUV mailing list). I added the following to /etc/modprobe.d/hdmi-blacklist.conf:
blacklist snd_hda_codec_hdmi
blacklist snd_hda_intel

Blacklisting the drivers works well enough. But the problem is that I will eventually want to install HDMI speakers to get better quality than the old Logitech portable USB speakers and it would be convenient to have things just work.

Jason White suggested using the module options to specify the ALSA card order. The file /etc/modprobe.d/alsa-base.conf in Debian comes with an entry specifying that the USB driver is never to be card 0, which is exactly what I don't want. So I commented out the previous option for snd-usb-audio and put in the following ones to replace it:
# make USB 0 and HDMI/Intel anything else
options snd-usb-audio index=0
options snd_hda_codec_hdmi index=-2
options snd_hda_intel index=-2

Now I get the following from “aplay -l” and both KDE and mplayer will play to the desired card by default:
**** List of PLAYBACK Hardware Devices ****
card 0: Speaker [Logitech USB Speaker], device 0: USB Audio [USB Audio]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 1: Generic [HD-Audio Generic], device 3: HDMI 0 [HDMI 0]
  Subdevices: 1/1
  Subdevice #0: subdevice #0

Categories: Elsewhere

Asheesh Laroia: New job (what running Debian means to me)

Planet Debian - Fri, 27/12/2013 - 03:56

Five weeks ago, I started a new job (Security Engineer, Eventbrite). I accepted the offer on a Friday evening at about 5:30 PM. That evening, my new boss and I traded emails to help me figure out what kind of computer I'd like. Time was of the essence because my start date was the very next Tuesday.

I wrote about how I value pixel count, and then RAM, and then a speedy disk, and then a speedy CPU. I named a few ThinkPad models that could be good, and with advice from the inimitable danjared, I pointed out that some Dell laptops come pre-installed with Ubuntu (which I could easily swap out for Debian).

On Monday, my boss replied. Given the options that the IT department supports, he picked out the best one by my metrics: a MacBook Pro. The IT department would set up the company-mandated full-disk encryption and anti-virus scanning. If I wanted to run Linux, I could set up BootCamp or a virtualization solution.

As I read the email, my heart nearly stopped. I just couldn't see myself using a Mac.

I thought about it. Does it really matter to me enough to call up my boss and undo an IT request that is already in the works, backpedaling on what I claimed was important to me, opting for brand anti-loyalty to Apple over hardware speed?

Yes, I thought to myself. I am willing to just not work there if I have to use a Mac.

So I called $BOSS, and I asked, "What can we do to not get me a Mac?" It all worked out fine; I use a ThinkPad X1 Carbon running Debian for work now, and it absolutely does everything I need. It does have a slower CPU, fewer pixels, and less RAM, and I am the only person in the San Francisco engineering office not running Mac OS. But it all works.

In the process, I thought it made sense to write up some text to $BOSS. Here is how it goes.

Hi $BOSS,

Thanks for hearing my concerns about having a Mac. It would basically be a fairly serious blow to my self-image. It's possible I could rationalize it, but it would take a long time, and I'm not sure it would work.

I don't at all need to start work using the computer I'm going to be using for the weeks afterward. I'm OK with using something temporarily that is whatever is available, Mac or non-Mac; I could happily borrow something out of the equipment closet in the short term if there are plans in the works to replace it with something else that makes me productive in the long term.

For full-disk encryption, there are great solutions for this on Linux.

For anti-virus, it seems Symantec AV is available for Linux <http://www.symantec.com/business/support/index?page=content&id=HOWTO17995>.

It sounds like Apple and possibly Lenovo are the only brands that are available through the IT department, but it is worth mentioning that Dell sells perfectly great laptops with Linux pre-installed, such as the XPS 13. I would perfectly happily use that.

If getting me more RAM is the priority, and the T440s is a bad fit for $COMPANY, then the Lenovo X230 would be a great option; it is noticeably less expensive and fits 16GB of RAM.

BootCamp and the like are theoretical possibilities on Macs, but one worry I have is that if there were a configuration issue, it might not be worth spending work time to fix my environment; instead I would be encouraged, for efficiency, to use Mac OS, which is well-tested on Apple hardware. Then I would basically hate using my computer, which is a strong emotion, but basically how I would feel.

Another issue (less technical) is that if I took my work machine to the kinds of conferences that I go to, like Debconf, I would find myself in the extremely uncomfortable position of advertising for Apple. I am pretty strongly unexcited about doing that.

Relating to the self-image issue is that it means a lot to me to sort of carry the open source community with me as I do my technical work, even if that technical work is not making more open source software. Feeling part of this world that shares software, and Debian in particular where I have a strong feeling of attachment to the community, even while doing something different, is part of what makes using computers fun for me. So it clashes with that to use Mac OS on my main machine, or to feel like I'm externally indistinguishable from people who don't care about this sort of community.

I am unenthusiastic about making your life harder and looking like a prima donna with my possibly obscure requirements.

I am, however, excited to contribute to $COMPANY!

I hope that helps! Probably nothing you couldn't have guessed in here, but I thought it was worth spelling some of that out. Happy to talk more.

-- Asheesh.
Categories: Elsewhere

Paul Rowell: The darker side of Drupal; Field collections and revisioning/moderation

Planet Drupal - Fri, 27/12/2013 - 02:16

Field collections are awesome; we all know that. When it comes to revisioning, though, things start to turn sour. Go deeper and include workflow moderation, and it really turns dark.

Categories: Elsewhere

Clint Adams: A Very Ćevapi Christmas

Planet Debian - Fri, 27/12/2013 - 00:14

Categories: Elsewhere

Randy Fay: Remote Command-Line debugging with PHPStorm for PHP/Drupal (including drush)

Planet Drupal - Thu, 26/12/2013 - 23:05
Tags: debugging, Planet Drupal

Introduction

XDebug with PHPStorm can do step-debugging on remote sessions started from the command line on a remote machine. You just have to set up a couple of environment variables, map the remote code to the local code that PHPStorm has at its disposal, and tunnel the xdebug connection to your workstation.

Note: If you just want to debug a PHP script (or drush command) on the local machine, that's much easier. Just enter PHPStorm's Run/Debug configuration and create a new "PHP Script" configuration.

Overview of Setup
  • We'll create a PHPStorm project that contains all the code we want to debug on the remote machine. This can be done via source control with matching code, by mounting the remote directory to your local machine, or any way you want.
  • Create a mapping from server side directories to PHPStorm-side code (A "Server" configuration in PHPStorm)
  • Use environment variables on the remote machine to tell xdebug what to do with the debugging session
  • Tunnel the Xdebug TCP connection if necessary.
  • Make PHPStorm listen for a connection
  • Create a breakpoint somewhere early in the execution path
  • Run the command-line tool on the remote server.
Step-by-Step
  1. On the remote server, install xdebug and set xdebug.remote_enable=1 in your xdebug.ini (or php.ini). For complete details see Remote Drupal/PHP Debugging with Xdebug and PHPStorm.
  2. Open your project/directory in PHPStorm; it must have exactly the same code as is deployed on the remote server. (You can optionally mount the remote source locally and open it in PHPStorm using sshfs or any other technique you want, see notes below.)
  3. If you're debugging drush, you probably need to copy it into your project (you don't have to add it to source control). PHPStorm is quite insistent that executing code must be found in the project.
  4. Create a debug configuration and a "Server" configuration in your project. The Server configuration is used to map code locations from the server to your PHPStorm code. Run->Edit Configurations, create a PHP Web App, create a server, and give the server a name. Click "Use path mappings" and configure the mappings from your project to the remote server's code.
  5. If your remote server cannot directly create a TCP connection to your workstation, you'll have to tunnel port 9000 back to your local machine: ssh -R 9000:localhost:9000 your_account@remote_server.example.com. For more details and debugging, see Remote Drupal/PHP Debugging with Xdebug and PHPStorm.
  6. Click the "Listen for PHP Debug Connections" button in PHPStorm. I call this the "unconditional listen" button, because it makes PHPStorm listen on port 9000 and accept any incoming connection, no matter what the IDE key. See Remote Drupal/PHP Debugging with Xdebug and PHPStorm
  7. In PHPStorm, set a breakpoint somewhere that your PHP script is guaranteed to hit early in its execution. For example, if you're debugging most drush actions, you could put a breakpoint on the first line of drupal_bootstrap() in includes/bootstrap.inc.
  8. If your workstation is not reachable from the server, you will need to tunnel the connection from the server to your workstation (as in step 5): ssh -R 9000:localhost:9000 some_user_account@www.example.com. For more details and debugging, see Remote Drupal/PHP Debugging with Xdebug and PHPStorm.
  9. In your command-line shell session on the remote server set the environment variable XDEBUG_CONFIG. For example, export XDEBUG_CONFIG="idekey=PHPSTORM remote_host=172.16.1.1 remote_port=9000" (Note that port 9000 is the default both for xdebug and for PHPStorm.) If you're tunneling the connection then remote_host must be 127.0.0.1. If you're not tunneling, it must be the reachable IP address of the machine where PHPStorm is listening.
  10. export PHP_IDE_CONFIG="serverName=yourservername" - the serverName is the name of the "server" you configured in PHPStorm above, which does the mapping of remote to local paths.
  11. On the command line, run the command you want to debug. For example, drush cc all or php /root/drush/drush.php status.
  12. If all goes well you'll stop at the breakpoint. You can step-debug to your heart's content and see variables, etc.
Drush+Drupal-Specific Hints
  • I've had the best luck actually copying drush into my codebase so that its mappings and Drupal's mappings can be in the same directory.
  • Once you have the environment variables set you can either use the "drush" command (which must be in your path) or use "php /path/to/drush/drush.php" with your drush options. Make sure you're using the drush that's mapped as a part of your project.
Notes and resources
  • We set the xdebug.remote_host in the XDEBUG_CONFIG environment variable; it could also have been configured in the xdebug.ini as xdebug.remote_host=myworkstation.example.com. (Note that the default is "localhost", so if you're tunneling the connection you don't actually have to set it.)
  • Full details about remote debugging on xdebug.org
  • Debugging: Make sure that only PHPStorm is listening on port 9000. If something else (most notoriously php-fpm) is listening there, you'll have to sort it out. PHPStorm is happy to listen on another port (see the preferences), and you'll need to change your environment variable to something like export XDEBUG_CONFIG="idekey=PHPSTORM remote_host=172.16.1.1 remote_port=9999"
  • You may want to use sshfs or some other technique to mount the remote code to your local machine. sshfs your_account@server.example.com:~/drush /tmp/drush would mount the contents of drush in the remote home directory to /tmp/drush (which must already exist) on your local machine. It must be writeable for PHPStorm to work with it as a project, so you'll have to work that out.
  • The article that taught me everything I know about this is Command-line xdebug on a remote server. Thanks!
Categories: Elsewhere

Mediacurrent: Webinar: Code-per-Node

Planet Drupal - Thu, 26/12/2013 - 22:44

Drupal has many well documented best practices, many of which we've covered here previously. From using Features to export all site configuration, to applying updates on a copy of the site instead of production, to using a modern responsive base theme, there is usually a tried & trusted process for most scenarios that can help keep sites stable, and editorial teams & visitors happy.

Categories: Elsewhere

Greater Los Angeles Drupal (GLAD): Greater Los Angeles Drupal's 2013 Year in Review: Part 2

Planet Drupal - Thu, 26/12/2013 - 22:06

2013 has been a big year and there's so much to share that this is Part 2 in our ongoing series of highlights. (Did you miss Part 1? Read that first.)

This post is a long one, so grab your favorite beverage (or a shawarma) and join us when you're ready.

Highlights from 2013

Drupal Job Fair & Employers Summit
In January 2013, Droplabs held another Drupal job fair, but this time also hosted its first Drupal Employers Summit. All the companies in the area who had ever hosted a Drupal meetup or other Drupal-related event, had sponsored any of the previous job fairs, or simply asked to attend were invited.

Half a dozen companies were represented, including CivicActions, Exaltation of Larks, Filter Digital, Princess Cruises, Sensis Agency and Stauffer, and they talked into the night about hiring strategies and challenges, the skills candidates need for various roles, and ideas on how to grow Drupal talent.

Relaunched as Greater Los Angeles Drupal
At our open governance meeting in April, 2013, we voted to rename our group from "Downtown Los Angeles Drupal" to "Greater Los Angeles Drupal" (or just "GLAD", for short). This made sense since our organizers now endeavor to serve the 5 counties in the Greater Los Angeles Area and not only the Downtown Los Angeles region, but the name change led to some debate and vexation from members of a nearby group.

Governance policy enacted after a year of work
GLAD is founded on the values of teamwork, transparency, accountability, and the sharing of assets and resources. Work began on a formal governance policy just 6 days after the group was approved on Drupal Groups, and the policy grew to include definitions of roles, our own code of conduct, and a procedure for managing and resolving conflicts.

As far as I know, GLAD is the first Drupal user group to draft and implement a governance policy like this that's separate from the Drupal Code of Conduct. Hopefully, this governance policy can serve as an example and help other groups decide whether a similar policy would be a good fit for them.

GLADCamp 2013 postponed :(
As with the change of name to Greater Los Angeles Drupal, not all of our highlights from 2013 went smoothly. Originally planned as a 3-day megaconference for all things Drupal, our 2013 conference was supposed to be a successor to Drupal Design Camp LA, but it fell through when contracts with the venue didn't materialize. This was terribly disappointing, but provided bittersweet lessons for our organizing team.

Module Development Boot Camp at Droplabs
In the summer, Droplabs started its free, community-led Module Development Boot Camp. With 12 weeks of drills, exercises and exams, all aimed at teaching PHP and Drupal programmers the skills they need to write Drupal modules, this is the first event of its kind as far as I know.

Droplabs named world’s “Top Drupal location”
With 62 events dedicated to Drupal, Droplabs was recognized by Drupical as the world's "top Drupal location" between July 2012 and July 2013.

Drupal Watchdog magazines for everyone
Are you an organizer of an open source group or event in Southern California? Would you like a box of Drupal Watchdog magazines to give to your members and attendees? Droplabs announced that it has a very large surplus of Watchdog magazines and is just giving them away. They're already making their way as far as San Diego for the upcoming SANDcamp conference.

OneDayCareer.org wins at Stanford
What started out as a barn raising to help attendees of the Module Development Boot Camp get hands-on experience with custom module development, Git, Features and team-based collaboration ended with the OneDayCareer.org team winning at the Technology Entrepreneurship Demo Day at Stanford University, beating out 20,000 other students!

Again, there is simply too much to cover in one post. See you next time in the last part of our series!

Tags: Planet Drupal, Year in Review
Categories: Elsewhere

Niels Thykier: Jessie finally has less than 500 RC bugs

Planet Debian - Thu, 26/12/2013 - 19:42

I am very pleased to say that the RC bug counter for Jessie has finally dropped below 500 RC bugs. For comparison, we broke the 700 RC bug mark at the start of November, so that is about 200 RC bugs in roughly 7-8 weeks (or about 25-28 RC bugs per week).

Today, we have about 162 RC bugs on the auto-removal list. However, I suspect many of the affected packages have reverse dependencies, so their removal from testing may be up to 30 days away. Nevertheless, by this time in January, we could be looking at no more than 350 RC bugs left for Jessie.

Categories: Elsewhere

Worchestra: FaceShift - A reporting tool on top of Drupal to fetch any Facebook page's posts' insights

Planet Drupal - Thu, 26/12/2013 - 19:07

Any Facebook page admin will care to see how their posts are doing, what engagement they are driving, and how they scored in likes, comments, and shares.

If you have been using Facebook Insights heavily, you will know that there is a hard limit when you export Post Level Data (a maximum of 500 posts), and if you use your page heavily you may well have more than 500 posts a month.

FaceShift is a tool that extracts a post-level report (post title, link, number of likes, comments, and shares) for every post in a selected month, for any given Facebook page, whether yours or a competitor's. You simply select the date and the Facebook page, and the report is emailed to you (most likely to your spam folder) as a downloadable Excel link.

Give it a try and leave me your feedback:

http://faceshift.worchestra.com

Categories: Elsewhere

Worchestra: Drupal to Drupal - Content Migration

Planet Drupal - Thu, 26/12/2013 - 16:19

At some point you will feel the need for a new, cleaner website, so you build your new Drupal site with cleaner content types, and you want to migrate your old content across.
The Migrate module is the tool for that; however, by itself it will not do your migration. You have to build your own migration classes on top of it, so below are the migration steps.

We start by creating a new module that depends on the Migrate module.
Create worchestra_legacy.info:

name = "Worchestra Legacy" description = "" core = "7.x" package = "Worchestra"   files[] = migrate/base.inc files[] = migrate/articles.inc
Categories: Elsewhere

Drupalize.Me: Contributing Time to Drupal 8

Planet Drupal - Thu, 26/12/2013 - 14:39

Drupal 8 is coming in 2014. There is a lot of work to do and a lot to learn. We've decided to dedicate a minimum of five hours per week towards Drupal 8 for each person on the Drupalize.Me team. We are now a hefty group of eight people, and everyone will be diving in for a total of 40 hours per week dedicated to Drupal 8. (At least until Drupal 8 launches, and hopefully even beyond that.) Everyone is picking their own projects and ways to get involved. We just started dedicating this time in December, and folks have been spending time sorting out where things are and where to jump in.

Related Topics: community, Drupal 8
Categories: Elsewhere

Ian Campbell: Running ARM Grub on U-boot on Qemu

Planet Debian - Thu, 26/12/2013 - 14:20

At the Mini-debconf in Cambridge back in November there was an ARM Sprint (which Hector wrote up as a Bits from ARM porters mail). During this there was a brief discussion about using GRUB as a standard bootloader, particularly for ARM server devices. This has the advantage of providing a more "normal" (which in practice means "x86 server-like") and more flexible solution compared with the existing flash-kernel tool which is often used on ARM.

On ARMv7 devices this will more than likely involve chain loading from the U-Boot supplied by the manufacturer. For test and development it would be useful to be able to set up a similar configuration using Qemu.

Cross-compilers

Although this can be built and run on an ARM system I am using a cross compiler here. I'm using gcc-linaro-arm-linux-gnueabihf-4.8-2013.08_linux from Linaro, which can be downloaded from the linaro-toolchain-binaries page on Launchpad. (It looks like 2013.10 is the latest available right now, I can't see any reason why that wouldn't be fine).

Once the cross-compiler has been downloaded unpack it somewhere, I will refer to the resulting gcc-linaro-arm-linux-gnueabihf-4.8-2013.08_linux directory as $CROSSROOT.

Make sure $CROSSROOT/bin (which contains arm-linux-gnueabihf-gcc etc) is in your $PATH.

Qemu

I'm using the version packaged in Jessie, which is 1.7.0+dfsg-2. We need both qemu-system-arm for running the final system and qemu-user to run some of the tools. I'd previously tried an older version of qemu (1.6.x?) and had some troubles, although they may have been of my own making...

Das U-boot for Qemu

The first thing to do is to build a suitable u-boot for use in the qemu emulated environment. Since we need to make some configuration changes we need to build from scratch.

Start by cloning the upstream git tree:

$ git clone git://git.denx.de/u-boot.git
$ cd u-boot

I am working on top of e03c76c30342 "powerpc/mpc85xx: Update CONFIG_SYS_FSL_TBCLK_DIV for T1040" dated Wed Dec 11 12:49:13 2013 +0530.

We are going to use the Versatile Express Cortex-A9 u-boot but first we need to enable some additional configuration options:

  • CONFIG_API -- This enables the u-boot API which Grub uses to access the low-level services provided by u-boot. This means that grub doesn't need to contain dozens of platform-specific flash, mmc, nand, network and console drivers, and can be completely platform agnostic.
  • CONFIG_SYS_MMC_MAX_DEVICE -- Setting CONFIG_API needs this.
  • CONFIG_CMD_EXT2 -- Useful for accessing EXT2 formatted filesystems. In this example I use a VFAT /boot for convenience, but in a real system we would want to use EXT2 (or even something more modern).
  • CONFIG_CMD_ECHO -- Just useful.

You can add all these to include/configs/vexpress_common.h:

#define CONFIG_API
#define CONFIG_SYS_MMC_MAX_DEVICE 1
#define CONFIG_CMD_EXT2
#define CONFIG_CMD_ECHO

Or you can apply the patch which I sent upstream:

$ wget -O - http://patchwork.ozlabs.org/patch/304786/raw | git apply --index
$ git commit -m "Additional options for grub-on-uboot"

Finally we can build u-boot:

$ make CROSS_COMPILE=arm-linux-gnueabihf- vexpress_ca9x4_config
$ make CROSS_COMPILE=arm-linux-gnueabihf-

The result is a u-boot binary which we can load with qemu.

GRUB for ARM

Next we can build grub. Start by cloning the upstream git tree:

$ git clone git://git.sv.gnu.org/grub.git
$ cd grub

By default grub is built for systems which have RAM at address 0x00000000. However, the Versatile Express platform which we are targeting has RAM starting from 0x60000000, so we need to make a couple of modifications. First, in grub-core/Makefile.core.def we need to change arm_uboot_ldflags from:

-Wl,-Ttext=0x08000000

to

-Wl,-Ttext=0x68000000

and second we need to make a similar change to include/grub/offsets.h, changing GRUB_KERNEL_ARM_UBOOT_LINK_ADDR from 0x08000000 to 0x68000000.

Now we are ready to build grub:

$ ./autogen.sh
$ ./configure --host arm-linux-gnueabihf
$ make

Now we need to build the final grub "kernel" image. Normally this would be taken care of by grub-install, but because we are cross-building grub we cannot use it and have to use grub-mkimage directly. However, the version we have just built is for the ARM target and not for the host we are building things on. I've not yet figured out how to build grub for ARM while building the tools for the host system (I'm sure it is possible somehow...). Luckily we can use qemu to run the ARM binary:

$ cat load.cfg
set prefix=(hd0)
$ qemu-arm -r 3.11 -L $CROSSROOT/arm-linux-gnueabihf/libc \
    ./grub-mkimage -c load.cfg -O arm-uboot -o core.img -d grub-core/ \
    fat ext2 probe terminal scsi ls linux elf msdospart normal help echo

Here we create load.cfg, which is the setup script that will be built into the grub kernel; our version just sets the root device so that grub can find the rest of its configuration.

Then we use qemu-arm to invoke grub-mkimage. The "-r 3.11" option tells qemu to pretend to be a 3.11 kernel (which is required by the libc used by our cross compiler; without this you will get a fatal: kernel too old message) and "-L $CROSSROOT/..." tells it where to find the basic libraries, such as the dynamic linker (luckily grub-mkimage doesn't need much in the way of libraries, so we don't need a full cross library environment).

The grub-mkimage command passes in the load.cfg and requests an output kernel targeting arm-uboot, core.img is the output file and the modules are in grub-core (because we didn't actually install grub in the target system, normally these would be found in /boot/grub). Lastly we pass in a list of default modules to build into the kernel, including filesystem drivers (fat, ext2), disk drivers (scsi), partition handling (msdos), loaders (linux, elf), the menu system (normal) and various other bits and bobs.

So after all that we now have our grub kernel in core.img.

Putting it all together

Before we can launch qemu we need to create various disk images.

Firstly we need images for the two 64M flash devices:

$ dd if=/dev/zero of=pflash0.img bs=1M count=64
$ dd if=/dev/zero of=pflash1.img bs=1M count=64

We will initialise these later from the u-boot command line.

Secondly we need an image for the root filesystem on an MMC device. I'm using a FAT formatted image here simply for the convenience of using mtools to update the images during development.

$ dd if=/dev/zero of=mmc.img bs=1M count=16
$ /sbin/mkfs.vfat mmc.img

Thirdly we need a kernel, device tree and grub configuration on our root filesystem. For the first two I extracted them from the standard armmp kernel flavour package. I used the backports.org version 3.11-0.bpo.2-armmp version and extracted /boot/vmlinuz-3.11-0.bpo.2-armmp as vmlinuz and /usr/lib/linux-image-3.11-0.bpo.2-armmp/vexpress-v2p-ca9.dtb as dtb. Then I hand coded a simple grub.cfg:

menuentry 'Linux' {
    echo "Loading vmlinuz"
    set root='hd0'
    linux /vmlinuz console=ttyAMA0 ro debug
    devicetree /dtb
}

In a real system the kernel and dtb would be provided by the kernel packages and grub.cfg would be generated by update-grub.

Now that we have all the bits we need, copy them into the root of mmc.img. Since we are using a FAT formatted image we can use mcopy from the mtools package.

$ mcopy -v -o -n -i mmc.img core.img dtb vmlinuz grub.cfg ::

Finally after all that we can run qemu passing it our u-boot binary and the mmc and flash images and requesting a Cortex-A9 based Versatile Express system with 1GB of RAM:

$ qemu-system-arm -M vexpress-a9 -kernel u-boot -m 1024m -sd mmc.img \
    -nographic -pflash pflash0.img -pflash pflash1.img

Then at the VExpress# prompt we can configure the default bootcmd to load grub and save the environment to the flash images. The backslash escapes (\$ and \;) should be included as written here, so that the variables are only evaluated when bootcmd is run rather than immediately when setting it, and so that the bootm is run as part of bootcmd instead of being executed immediately:

VExpress# setenv bootcmd fatload mmc 0:0 \${loadaddr} core.img \; bootm \${loadaddr}
VExpress# saveenv

Now whenever we boot the system it will automatically load grub from the MMC and launch it. Grub in turn will load the Linux binary and DTB and launch those. I haven't actually configured Linux with a root filesystem here, so it will eventually panic after failing to find root.

Future work

The most pressing issue is the hard-coded load address built into the grub kernel image. This is something which needs to be discussed with the upstream grub maintainers as well as the Debian package maintainers.

Now that the ARM packages have hit Debian (in experimental in the 2.02~beta2-1 package) I also plan to start looking at debian-installer integration as well as updating flash-kernel to set up the chain loading of grub instead of loading a kernel directly.

Categories: Elsewhere

Yuriy Gerasimov: Custom user name from profile

Planet Drupal - Thu, 26/12/2013 - 09:54

The current project I am working on uses the Profile2 module for user profiles, so it is a pretty common task to replace all links/references on the site with the user's proper name from their profile instead of their Drupal username.

This is really easy to achieve using hook_username_alter():

<?php
/**
 * Implements hook_username_alter().
 */
function mymodule_username_alter(&$name, $account) {
  $contact_profile = profile2_load_by_user($account, MYMODULE_PROFILE_CONTACT_TYPE);
  if (isset($contact_profile->field_name[LANGUAGE_NONE][0]['value'])) {
    $name = $contact_profile->field_name[LANGUAGE_NONE][0]['value'];
  }
}
?>

And if you want to display the user's name somewhere in your code, do it using the function format_username().
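For example, a minimal usage sketch (loading user 123 is just an illustration):

<?php
// format_username() returns the display name, giving all
// hook_username_alter() implementations a chance to run.
$account = user_load(123);
print format_username($account);
?>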

Tags: drupal planet
Categories: Elsewhere
