Elsewhere

Russell Coker: Sound Device Order with ALSA

Planet Debian - Fri, 27/12/2013 - 04:35

One problem I have had with my new Dell PowerEdge server/workstation [1] is that sound doesn’t work correctly. When I initially installed it things were OK but after installing a new monitor sound stopped working.

The command “aplay -l” showed the following:
**** List of PLAYBACK Hardware Devices ****
card 0: Generic [HD-Audio Generic], device 3: HDMI 0 [HDMI 0]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 1: Speaker [Logitech USB Speaker], device 0: USB Audio [USB Audio]
  Subdevices: 1/1
  Subdevice #0: subdevice #0

So the HDMI sound hardware (which had no speakers connected) became ALSA card 0 (the default for playback) and the USB speakers became card 1. It should be possible to configure KDE to use card 1 and then have other programs inherit this, but I wasn't able to get that working with Debian/Wheezy.

My first attempt at solving this was to blacklist the HDMI and motherboard drivers (as suggested by Lindsay on the LUV mailing list). I added the following to /etc/modprobe.d/hdmi-blacklist.conf:
blacklist snd_hda_codec_hdmi
blacklist snd_hda_intel

Blacklisting the drivers works well enough. But the problem is that I will eventually want to install HDMI speakers to get better quality than the old Logitech portable USB speakers and it would be convenient to have things just work.

Jason White suggested using the module options to specify the ALSA card order. The file /etc/modprobe.d/alsa-base.conf in Debian comes with an entry specifying that the USB driver is never to be card 0, which is exactly what I don’t want. So I commented out the previous option for snd-usb-audio and put in the following ones to replace it:
# make USB 0 and HDMI/Intel anything else
options snd-usb-audio index=0
options snd-hda-codec-hdmi index=-2
options snd-hda-intel index=-2
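
To apply the new indexes the modules have to be reloaded; a reboot is simplest, but Debian's alsa-utils also ships a helper that should do it in place (a sketch; run as root with all audio applications closed first):
# reload all sound modules so the new index= options take effect
alsa force-reload
# then confirm the new card order
aplay -l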

Now I get the following from “aplay -l” and both KDE and mplayer will play to the desired card by default:
**** List of PLAYBACK Hardware Devices ****
card 0: Speaker [Logitech USB Speaker], device 0: USB Audio [USB Audio]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 1: Generic [HD-Audio Generic], device 3: HDMI 0 [HDMI 0]
  Subdevices: 1/1
  Subdevice #0: subdevice #0

Categories: Elsewhere

Asheesh Laroia: New job (what running Debian means to me)

Planet Debian - Fri, 27/12/2013 - 03:56

Five weeks ago, I started a new job (Security Engineer, Eventbrite). I accepted the offer on a Friday evening at about 5:30 PM. That evening, my new boss and I traded emails to help me figure out what kind of computer I'd like. Time was of the essence because my start date was the very next Tuesday.

I wrote about how I value pixel count, and then RAM, and then a speedy disk, and then a speedy CPU. I named a few ThinkPad models that could be good, and with advice from the inimitable danjared, I pointed out that some Dell laptops come pre-installed with Ubuntu (which I could easily swap out for Debian).

On Monday, my boss replied. Given the options that the IT department supports, he picked out the best one by my metrics: a MacBook Pro. The IT department would set up the company-mandated full-disk encryption and anti-virus scanning. If I wanted to run Linux, I could set up BootCamp or a virtualization solution.

As I read the email, my heart nearly stopped. I just couldn't see myself using a Mac.

I thought about it. Does it really matter to me enough to call up my boss and undo an IT request that is already in the works, backpedaling on what I claimed was important to me, opting for brand anti-loyalty to Apple over hardware speed?

Yes, I thought to myself. I am willing to just not work there if I have to use a Mac.

So I called $BOSS, and I asked, "What can we do to not get me a Mac?" It all worked out fine; I use a ThinkPad X1 Carbon running Debian for work now, and it absolutely does everything I need. It does have a slower CPU, fewer pixels, and less RAM, and I am the only person in the San Francisco engineering office not running Mac OS. But it all works.

In the process, I thought it made sense to write up some text to $BOSS. Here is how it goes.

Hi $BOSS,

Thanks for hearing my concerns about having a Mac. It would basically be a fairly serious blow to my self-image. It's possible I could rationalize it, but it would take a long time, and I'm not sure it would work.

I don't at all need to start work using the computer I'm going to be using for the weeks afterward. I'm OK with using something temporarily that is whatever is available, Mac or non-Mac; I could happily borrow something out of the equipment closet in the short term if there are plans in the works to replace it with something else that makes me productive in the long term.

For full-disk encryption, there are great solutions for this on Linux.

For anti-virus, it seems Symantec AV is available for Linux <http://www.symantec.com/business/support/index?page=content&id=HOWTO17995>.

It sounds like Apple and possibly Lenovo are the only brands that are available through the IT department, but it is worth mentioning that Dell sells perfectly great laptops with Linux pre-installed, such as the XPS 13. I would perfectly happily use that.

If getting me more RAM is the priority, and the T440s is a bad fit for $COMPANY, then the Lenovo X230 would be a great option, and is noticeably less expensive, and it fits 16GB of RAM.

BootCamp and the like are theoretical possibilities on Macs, but one worry I have is that if there were a configuration issue, it might not be worth my spending work time to fix my environment; instead I would be encouraged, for efficiency, to use Mac OS, which is well-tested on Apple hardware, and then I would basically hate using my computer, which is a strong emotion, but basically how I would feel.

Another issue (less technical) is that if I took my work machine to the kinds of conferences that I go to, like Debconf, I would find myself in the extremely uncomfortable position of advertising for Apple. I am pretty strongly unexcited about doing that.

Relating to the self-image issue is that it means a lot to me to sort of carry the open source community with me as I do my technical work, even if that technical work is not making more open source software. Feeling part of this world that shares software, and Debian in particular where I have a strong feeling of attachment to the community, even while doing something different, is part of what makes using computers fun for me. So it clashes with that to use Mac OS on my main machine, or to feel like I'm externally indistinguishable from people who don't care about this sort of community.

I am unenthusiastic about making your life harder and looking like a prima donna with my possibly obscure requirements.

I am, however, excited to contribute to $COMPANY!

I hope that helps! Probably nothing you couldn't have guessed in here, but I thought it was worth spelling some of that out. Happy to talk more.

-- Asheesh.
Categories: Elsewhere

Paul Rowell: The darker side of Drupal; Field collections and revisioning/moderation

Planet Drupal - Fri, 27/12/2013 - 02:16

Field collections are awesome, we all know that. When it comes to revisioning, though, things start to turn sour; go deeper and include workflow moderation, and it really turns dark.

Categories: Elsewhere

Clint Adams: A Very Ćevapi Christmas

Planet Debian - Fri, 27/12/2013 - 00:14

Categories: Elsewhere

Randy Fay: Remote Command-Line debugging with PHPStorm for PHP/Drupal (including drush)

Planet Drupal - Thu, 26/12/2013 - 23:05
Tags: debugging, Planet Drupal

Introduction

XDebug with PHPStorm can do step-debugging on remote sessions started from the command line on a remote machine. You just have to set up a couple of environment variables, map the remote code to the local code that PHPStorm has at its disposal, and tunnel the xdebug connection to your workstation.

Note: If you just want to debug a PHP script (or drush command) on the local machine, that's much easier. Just enter PHPStorm's Run/Debug configuration and create a new "PHP Script" configuration.

Overview of Setup
  • We'll create a PHPStorm project that contains all the code we want to debug on the remote machine. This can be done via source control with matching code, by mounting the remote directory to your local machine, or any way you want.
  • Create a mapping from server side directories to PHPStorm-side code (A "Server" configuration in PHPStorm)
  • Use environment variables on the remote machine to tell xdebug what to do with the debugging session
  • Tunnel the Xdebug TCP connection if necessary.
  • Make PHPStorm listen for a connection
  • Create a breakpoint somewhere early in the execution path
  • Run the command-line tool on the remote server.
Step-by-Step
  1. On the remote server install xdebug and set xdebug.remote_enable=1 in your xdebug.ini (or php.ini). For complete details see Remote Drupal/PHP Debugging with Xdebug and PHPStorm.
  2. Open your project/directory in PHPStorm; it must have exactly the same code as is deployed on the remote server. (You can optionally mount the remote source locally and open it in PHPStorm using sshfs or any other technique you want, see notes below.)
  3. If you're debugging drush, you probably need to copy it into your project (you don't have to add it to source control). PHPStorm is quite insistent that executing code must be found in the project.
  4. Create a debug configuration and a "Server" configuration in your project. The Server configuration is used to map code locations from the server to your PHPStorm code. Run->Edit Configurations, create a PHP Web App, create a server, and give the server a name. Click "Use path mappings" and configure the mappings from your project to the remote server's code.
  5. If your remote server cannot directly create a TCP connection to your workstation, you'll have to tunnel port 9000 to your local machine: ssh -R 9000:localhost:9000 your_account@remote_server.example.com. For more details and debugging, see Remote Drupal/PHP Debugging with Xdebug and PHPStorm.
  6. Click the "Listen for PHP Debug Connections" button in PHPStorm. I call this the "unconditional listen" button, because it makes PHPStorm listen on port 9000 and accept any incoming connection, no matter what the IDE key is. See Remote Drupal/PHP Debugging with Xdebug and PHPStorm.
  7. In PHPStorm, set a breakpoint somewhere that your PHP script is guaranteed to hit early in its execution. For example, if you're debugging most drush actions, you could put a breakpoint on the first line of drupal_bootstrap() in includes/bootstrap.inc.
  8. In your command-line shell session on the remote server, set the environment variable XDEBUG_CONFIG. For example, export XDEBUG_CONFIG="idekey=PHPSTORM remote_host=172.16.1.1 remote_port=9000" (Note that port 9000 is the default both for xdebug and for PHPStorm.) If you're tunneling the connection (step 5), remote_host must be 127.0.0.1. If you're not tunneling, it must be the reachable IP address of the machine where PHPStorm is listening.
  9. export PHP_IDE_CONFIG="serverName=yourservername" - the serverName is the name of the "server" you configured in PHPStorm above, which does the mapping of remote to local paths.
  10. On the command line, run the command you want to debug. For example, drush cc all or php /root/drush/drush.php status.
  11. If all goes well you'll stop at the breakpoint. You can step-debug to your heart's content, view variables, etc.
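Putting steps 8-10 together, a typical remote shell session looks something like this (a sketch; it assumes the tunnel from step 5 and the server name configured earlier):
export XDEBUG_CONFIG="idekey=PHPSTORM remote_host=127.0.0.1 remote_port=9000"
export PHP_IDE_CONFIG="serverName=yourservername"
drush cc all    # or: php /root/drush/drush.php status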
Drush+Drupal-Specific Hints
  • I've had the best luck actually copying drush into my codebase so that its mappings and Drupal's mappings can be in the same directory.
  • Once you have the environment variables set you can either use the "drush" command (which must be in your path) or use "php /path/to/drush/drush.php" with your drush options. Make sure you're using the drush that's mapped as a part of your project.
Notes and resources
  • We set the xdebug.remote_host in the XDEBUG_CONFIG environment variable; it could also have been configured in the xdebug.ini as xdebug.remote_host=myworkstation.example.com. (Note that the default is "localhost", so if you're tunneling the connection you don't actually have to set it.)
  • Full details about remote debugging on xdebug.org
  • Debugging: Make sure that only PHPStorm is listening on port 9000. If something else (most notoriously php-fpm) is listening there, you'll have to sort it out. PHPStorm is happy to listen on another port (see the preferences); you'll then need to change your environment variable to match, for example export XDEBUG_CONFIG="idekey=PHPSTORM remote_host=172.16.1.1 remote_port=9999".
  • You may want to use sshfs or some other technique to mount the remote code to your local machine. sshfs your_account@server.example.com:~/drush /tmp/drush would mount the contents of drush in the remote home directory to /tmp/drush (which must already exist) on your local machine. It must be writeable for PHPStorm to work with it as a project, so you'll have to work that out.
  • The article that taught me everything I know about this is Command-line xdebug on a remote server. Thanks!
Categories: Elsewhere

Mediacurrent: Webinar: Code-per-Node

Planet Drupal - Thu, 26/12/2013 - 22:44

Drupal has many well-documented best practices, many of which we've covered here previously. From using Features to export all site configuration, to applying updates on a copy of the site instead of production, to using a modern responsive base theme, there is usually a tried & trusted process for most scenarios that can help keep sites stable, and editorial teams & visitors happy.

Categories: Elsewhere

Greater Los Angeles Drupal (GLAD): Greater Los Angeles Drupal's 2013 Year in Review: Part 2

Planet Drupal - Thu, 26/12/2013 - 22:06

2013 has been a big year and there's so much to share that this is Part 2 in our ongoing series of highlights. (Did you miss Part 1? Read that first.)

This post is a long one, so grab your favorite beverage (or a shawarma) and join us when you're ready.

Highlights from 2013

Drupal Job Fair & Employers Summit
In January, 2013, Droplabs held another Drupal job fair but this time also had its first Drupal Employers Summit. All the companies in the area who had ever hosted a Drupal meetup or other Drupal-related event, had sponsored any of the previous job fairs, or simply asked to attend, were invited.

Half a dozen companies were represented, including CivicActions, Exaltation of Larks, Filter Digital, Princess Cruises, Sensis Agency and Stauffer, and they talked into the night about hiring strategies and challenges, the desired skills that candidates should have for various roles, and ideas on how to grow Drupal talent.

Relaunched as Greater Los Angeles Drupal
At our open governance meeting in April, 2013, we voted to rename our group from "Downtown Los Angeles Drupal" to "Greater Los Angeles Drupal" (or just "GLAD", for short). This made sense since our organizers now endeavor to serve the 5 counties in the Greater Los Angeles Area and not only the Downtown Los Angeles region, but the name change led to some debate and vexation from members of a nearby group.

Governance policy enacted after a year of work
GLAD is founded on the values of teamwork, transparency, accountability, and the sharing of assets and resources. Work began on a formal governance policy just 6 days after the group was approved on Drupal Groups, and the policy grew to include definitions of roles, our own code of conduct and a procedure for how to manage and resolve conflicts.

As far as I know, GLAD is the first Drupal user group to draft and implement a governance policy like this that's separate from the Drupal Code of Conduct. Hopefully, this governance policy can serve as an example and help other groups decide whether a similar policy would be a good fit for them.

GLADCamp 2013 postponed :(
As with the change of name to Greater Los Angeles Drupal, not all of our highlights from 2013 went smoothly. Originally planned as a 3-day megaconference for all things Drupal, our 2013 conference was supposed to be a successor to Drupal Design Camp LA but it fell through when contracts with the venue didn't materialize. This was terribly disappointing, but provided bittersweet lessons for our organizing team.

Module Development Boot Camp at Droplabs
In the summer, Droplabs started its free, community-led Module Development Boot Camp. With 12 weeks of drills, exercises and exams, all aimed at teaching PHP and Drupal programmers the skills they need to write Drupal modules, this is the first event of its kind as far as I know.

Droplabs named world’s “Top Drupal location”
With 62 events dedicated to Drupal, Droplabs was recognized by Drupical as the world's "top Drupal location" between the months of July, 2012, and July, 2013.

Drupal Watchdog magazines for everyone
Are you an organizer of an open source group or event in Southern California? Would you like a box of Drupal Watchdog magazines to give to your members and attendees? Droplabs announced that it had a very large surplus of Watchdog magazines and that it's giving them away. They're already making their way as far as San Diego for the upcoming SANDcamp conference.

OneDayCareer.org wins at Stanford
What started out as a barn raising to help attendees of the Module Development Boot Camp get hands-on experience with custom module development, Git, Features and team-based collaboration ended with the OneDayCareer.org team going on to win at the Technology Entrepreneurship Demo Day at Stanford University, beating out 20,000 other students!

Again, there is simply too much to cover in one post. See you next time in the last part of our series!

Tags: Planet Drupal, Year in Review
Categories: Elsewhere

Niels Thykier: Jessie finally has less than 500 RC bugs

Planet Debian - Thu, 26/12/2013 - 19:42

I am very pleased to say that the RC bug counter for Jessie has finally dropped below 500 RC bugs. For comparison, we broke the 700 RC bug curve at the start of November, so that is about 200 RC bugs in about 7-8 weeks (or about 25-28 RC bugs per week).

Today, we have about 162 RC bugs on the auto-removal list, though I suspect many of the affected packages have reverse dependencies, so their removal from testing may be up to 30 days away. Nevertheless, by this time in January, we could be looking at no more than 350 RC bugs left for Jessie.

Categories: Elsewhere

Worchestra: FaceShift - A reporting tool on top of Drupal to fetch any Facebook page's posts' insights

Planet Drupal - Thu, 26/12/2013 - 19:07

Any Facebook page admin will want to see how their posts are doing, what engagement they are driving and how much they scored from likes, comments, and shares.

If you have been using Facebook Insights heavily you will know that there is a hard limit when you export Post Level Data (a maximum of 500 posts), and if you use your page heavily you may well have more than 500 posts a month.

FaceShift is a tool that extracts a post-level report (post title, link, number of likes, comments, and shares) for each post in a selected month, for any given Facebook page, whether yours or a competitor's. You simply select the date and the Facebook page, and the report is emailed to you (most likely to your spam folder) as a link to a downloadable Excel file.

Give it a try and leave me your feedback.

http://faceshift.worchestra.com

Categories: Elsewhere

Worchestra: Drupal to Drupal - Content Migration

Planet Drupal - Thu, 26/12/2013 - 16:19

At some point you will feel the need for a new, cleaner website, so you build your new Drupal site with cleaner content types, and you want to migrate your old content to the new website.
The Migrate module is the module for that; however, by itself it will not do your migration. You have to build your migration classes on top of it, so the migration steps are below.

We start by creating a new module that depends on the Migrate module.
Create worchestra_legacy.info:

name = "Worchestra Legacy" description = "" core = "7.x" package = "Worchestra"   files[] = migrate/base.inc files[] = migrate/articles.inc
Categories: Elsewhere

Drupalize.Me: Contributing Time to Drupal 8

Planet Drupal - Thu, 26/12/2013 - 14:39

Drupal 8 is coming in 2014. There is a lot of work to do and a lot to learn. We've decided to dedicate a minimum of five hours per week towards Drupal 8 for each person on the Drupalize.Me team. We are now a hefty group of eight people, and everyone will be diving in for a total of 40 hours per week dedicated to Drupal 8. (At least until Drupal 8 launches, and hopefully even beyond that.) Everyone is picking their own projects and ways to get involved. We just started dedicating this time in December, and folks have been spending time sorting out where things are and where to jump in.

Related Topics: community, Drupal 8
Categories: Elsewhere

Ian Campbell: Running ARM Grub on U-boot on Qemu

Planet Debian - Thu, 26/12/2013 - 14:20

At the Mini-debconf in Cambridge back in November there was an ARM sprint (which Hector wrote up as a Bits from ARM porters mail). During this there was a brief discussion about using GRUB as a standard bootloader, particularly for ARM server devices. This has the advantage of providing a more "normal" (which in practice means "x86 server-like") as well as more flexible solution compared with the existing flash-kernel tool which is often used on ARM.

On ARMv7 devices this will more than likely involve chain loading from the U-Boot supplied by the manufacturer. For test and development it would be useful to be able to set up a similar configuration using Qemu.

Cross-compilers

Although this can be built and run on an ARM system, I am using a cross-compiler here. I'm using gcc-linaro-arm-linux-gnueabihf-4.8-2013.08_linux from Linaro, which can be downloaded from the linaro-toolchain-binaries page on Launchpad. (It looks like 2013.10 is the latest available right now; I can't see any reason why that wouldn't be fine.)

Once the cross-compiler has been downloaded unpack it somewhere, I will refer to the resulting gcc-linaro-arm-linux-gnueabihf-4.8-2013.08_linux directory as $CROSSROOT.

Make sure $CROSSROOT/bin (which contains arm-linux-gnueabihf-gcc etc) is in your $PATH.
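
For example (assuming a Bourne-style shell and that CROSSROOT is set to the unpacked toolchain directory):

$ export PATH="$CROSSROOT/bin:$PATH"
$ arm-linux-gnueabihf-gcc --version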

Qemu

I'm using the version packaged in Jessie, which is 1.7.0+dfsg-2. We need both qemu-system-arm for running the final system and qemu-user to run some of the tools. I'd previously tried an older version of qemu (1.6.x?) and had some troubles, although they may have been of my own making...

Das U-boot for Qemu

The first thing to do is to build a suitable u-boot for use in the qemu-emulated environment. Since we need to make some configuration changes, we need to build from scratch.

Start by cloning the upstream git tree:

$ git clone git://git.denx.de/u-boot.git
$ cd u-boot

I am working on top of e03c76c30342 "powerpc/mpc85xx: Update CONFIG_SYS_FSL_TBCLK_DIV for T1040" dated Wed Dec 11 12:49:13 2013 +0530.

We are going to use the Versatile Express Cortex-A9 u-boot but first we need to enable some additional configuration options:

  • CONFIG_API -- This enables the u-boot API which Grub uses to access the low-level services provided by u-boot. This means that grub doesn't need to contain dozens of platform-specific flash, mmc, nand, network and console drivers etc. and can be completely platform agnostic.
  • CONFIG_SYS_MMC_MAX_DEVICE -- Setting CONFIG_API needs this.
  • CONFIG_CMD_EXT2 -- Useful for accessing EXT2 formatted filesystems. In this example I use a VFAT /boot for convenience but in a real system we would want to use EXT2 (or even something more modern).
  • CONFIG_CMD_ECHO -- Just useful.

You can add all these to include/configs/vexpress_common.h:

#define CONFIG_API
#define CONFIG_SYS_MMC_MAX_DEVICE 1
#define CONFIG_CMD_EXT2
#define CONFIG_CMD_ECHO

Or you can apply the patch which I sent upstream:

$ wget -O - http://patchwork.ozlabs.org/patch/304786/raw | git apply --index
$ git commit -m "Additional options for grub-on-uboot"

Finally we can build u-boot:

$ make CROSS_COMPILE=arm-linux-gnueabihf- vexpress_ca9x4_config
$ make CROSS_COMPILE=arm-linux-gnueabihf-

The result is a u-boot binary which we can load with qemu.

GRUB for ARM

Next we can build grub. Start by cloning the upstream git tree:

$ git clone git://git.sv.gnu.org/grub.git
$ cd grub

By default grub is built for systems which have RAM at address 0x00000000. However the Versatile Express platform which we are targeting has RAM starting from 0x60000000, so we need to make a couple of modifications. First, in grub-core/Makefile.core.def we need to change arm_uboot_ldflags from:

-Wl,-Ttext=0x08000000

to

-Wl,-Ttext=0x68000000

and second we need to make a similar change to include/grub/offsets.h, changing GRUB_KERNEL_ARM_UBOOT_LINK_ADDR from 0x08000000 to 0x68000000.
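
The same two edits as one-liners, for convenience (an assumption that the values appear exactly as quoted above; do check the resulting diff):

$ sed -i 's/-Wl,-Ttext=0x08000000/-Wl,-Ttext=0x68000000/' grub-core/Makefile.core.def
$ sed -i '/GRUB_KERNEL_ARM_UBOOT_LINK_ADDR/s/0x08000000/0x68000000/' include/grub/offsets.h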

Now we are ready to build grub:

$ ./autogen.sh
$ ./configure --host arm-linux-gnueabihf
$ make

Now we need to build the final grub "kernel" image. Normally this would be taken care of by grub-install, but because we are cross-building grub we cannot use it and have to run grub-mkimage directly. However, the version we have just built is for the ARM target and not for the host we are building things on. I've not yet figured out how to build grub for ARM while building the tools for the host system (I'm sure it is possible somehow...). Luckily we can use qemu to run the ARM binary:

$ cat load.cfg
set prefix=(hd0)
$ qemu-arm -r 3.11 -L $CROSSROOT/arm-linux-gnueabihf/libc \
    ./grub-mkimage -c load.cfg -O arm-uboot -o core.img -d grub-core/ \
    fat ext2 probe terminal scsi ls linux elf msdospart normal help echo

Here we create load.cfg, which is the setup script that will be built into the grub kernel; our version just sets the root device so that grub can find the rest of its configuration.

Then we use qemu-arm to invoke grub-mkimage. The "-r 3.11" option tells qemu to pretend to be a 3.11 kernel (which is required by the libc used by our cross-compiler; without this you will get a "fatal: kernel too old" message) and "-L $CROSSROOT/..." tells it where to find the basic libraries, such as the dynamic linker (luckily grub-mkimage doesn't need much in the way of libraries, so we don't need a full cross library environment).

The grub-mkimage command passes in the load.cfg and requests an output kernel targeting arm-uboot; core.img is the output file and the modules are in grub-core (because we didn't actually install grub in the target system; normally these would be found in /boot/grub). Lastly we pass in a list of default modules to build into the kernel, including filesystem drivers (fat, ext2), disk drivers (scsi), partition handling (msdospart), loaders (linux, elf), the menu system (normal) and various other bits and bobs.

So after all that we now have our grub kernel in core.img.

Putting it all together

Before we can launch qemu we need to create various disk images.

Firstly we need some images for the two 64M flash devices:

$ dd if=/dev/zero of=pflash0.img bs=1M count=64
$ dd if=/dev/zero of=pflash1.img bs=1M count=64

We will initialise these later from the u-boot command line.

Secondly we need an image for the root filesystem on an MMC device. I'm using a FAT formatted image here simply for the convenience of using mtools to update the images during development.

$ dd if=/dev/zero of=mmc.img bs=1M count=16
$ /sbin/mkfs.vfat mmc.img

Thirdly we need a kernel, device tree and grub configuration on our root filesystem. For the first two I extracted them from the standard armmp kernel flavour package. I used the backports.org 3.11-0.bpo.2-armmp version and extracted /boot/vmlinuz-3.11-0.bpo.2-armmp as vmlinuz and /usr/lib/linux-image-3.11-0.bpo.2-armmp/vexpress-v2p-ca9.dtb as dtb. Then I hand-coded a simple grub.cfg:

menuentry 'Linux' {
        echo "Loading vmlinuz"
        set root='hd0'
        linux /vmlinuz console=ttyAMA0 ro debug
        devicetree /dtb
}

In a real system the kernel and dtb would be provided by the kernel packages and grub.cfg would be generated by update-grub.

Now that we have all the bits we need, copy them into the root of mmc.img. Since we are using a FAT formatted image we can use mcopy from the mtools package.

$ mcopy -v -o -n -i mmc.img core.img dtb vmlinuz grub.cfg ::

Finally after all that we can run qemu passing it our u-boot binary and the mmc and flash images and requesting a Cortex-A9 based Versatile Express system with 1GB of RAM:

$ qemu-system-arm -M vexpress-a9 -kernel u-boot -m 1024m -sd mmc.img \
    -nographic -pflash pflash0.img -pflash pflash1.img

Then at the VExpress# prompt we can configure the default bootcmd to load grub and save the environment to the flash images. The backslash escapes (\$ and \;) should be included as written here so that, for example, the variables are only evaluated when bootcmd is run rather than immediately when setting it, and the bootm runs as part of bootcmd instead of being executed immediately:

VExpress# setenv bootcmd fatload mmc 0:0 \${loadaddr} core.img \; bootm \${loadaddr}
VExpress# saveenv

Now whenever we boot the system it will automatically load grub from the mmc and launch it. Grub in turn will load the Linux binary and DTB and launch those. I haven't actually configured Linux with a root filesystem here, so it will eventually panic after failing to find root.

Future work

The most pressing issue is the hard-coded load address built into the grub kernel image. This is something which needs to be discussed with the upstream grub maintainers as well as the Debian package maintainers.

Now that the ARM packages have hit Debian (in experimental in the 2.02~beta2-1 package) I also plan to start looking at debian-installer integration, as well as updating flash-kernel to set up the chainload of grub instead of loading a kernel directly.

Categories: Elsewhere

Yuriy Gerasimov: Custom user name from profile

Planet Drupal - Thu, 26/12/2013 - 09:54

The current project I am working on has user profiles built with the profile2 module, so it is a pretty common task to replace all links/references on the site with the user's proper name from the profile instead of their drupal username.

This is really easy to achieve using hook_username_alter():

<?php
/**
 * Implements hook_username_alter().
 */
function mymodule_username_alter(&$name, $account) {
  $contact_profile = profile2_load_by_user($account, MYMODULE_PROFILE_CONTACT_TYPE);
  if (isset($contact_profile->field_name[LANGUAGE_NONE][0]['value'])) {
    $name = $contact_profile->field_name[LANGUAGE_NONE][0]['value'];
  }
}
?>

And if you want to display the user's name somewhere in your code, do it using the function format_username().
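
A minimal usage sketch (Drupal 7; $uid is an illustrative user id):

<?php
// format_username() applies hook_username_alter() implementations,
// so the profile name set above is picked up automatically.
$account = user_load($uid);
print check_plain(format_username($account));
?>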

Tags: drupal planet
Categories: Elsewhere

Russ Allbery: C TAP Harness 2.4

Planet Debian - Thu, 26/12/2013 - 05:24

I always enjoy this time of year: lots of peace and quiet and time to work on whatever I feel like focusing on. That's been watching a lot of speedrunning and League of Legends, but also experimenting with systemd and upstart. I've now ported lbcd to both "properly," meaning that I make full use of their features as far as I currently understand them.

I have some more packaging work to do, and need to make a release, but of course I started fixing various other things since I was in the code anyway, and now I don't want to release it without some testing. And that prompted another digression, since I didn't have a good test framework for spawning the server and pounding on it.

That finally brings this journal entry to its actual topic: a new release of C TAP Harness. This release adds a new pair of functions, diag_file_add() and diag_file_remove(), which tell the TAP library to take the contents of log files as an additional source of diag() messages. This produces much nicer, and more readable, output from test cases that involve forking a background server that produces output to standard output and standard error. That output can be directed to a file and then included in the test output stream, properly tagged and in sequence with the result messages.
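
A sketch of how a test might use the new pair (signatures assumed from the description above; see the distribution's documentation for the real API):

#include <stdio.h>
#include <tap/basic.h>

int main(void) {
    /* Stand-in for a forked server writing to server.log. */
    FILE *log = fopen("server.log", "w");
    if (log != NULL) {
        fputs("server: listening\n", log);
        fclose(log);
    }

    plan(1);
    diag_file_add("server.log");   /* log lines become diag() output */
    ok(1, "placeholder check while the server log is merged");
    diag_file_remove("server.log");
    return 0;
}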

You can get the latest release from the C TAP Harness distribution page.

Categories: Elsewhere

Paul Tagliamonte: life update

Planet Debian - Thu, 26/12/2013 - 01:15

sorry for not posting, life’s been getting in the way. Here’s an update of what I did the last few weeks:

I hacked more on Hy. A lot more. Tons of new stuff is coming through, and I got a slot to talk about Hy at PyCon US!

I worked a bit on Debian stuff. I got Debile a bit more in shape, and did more NEW queue work. Hopefully I can get some more in.

I just bought another keyboard to supplement my Das Keyboard - the Tex Beetle. I’m thinking of getting some pimped out keycaps. Get at me if you know anything about this.

Merry Christmas!

Categories: Elsewhere

Jon Dowland: 2012 In Review

Planet Debian - Thu, 26/12/2013 - 01:00

2013 is nearly all finished up and so I thought I'd spend a little time writing up what was notable in the last twelve months. When I did so I found an unfinished draft from the year before. It would be a shame for it to go to waste, so here it is.

2012 was an interesting year in many respects, with personal highs and lows. Every year I see a lot of "round-up"-style blog posts on the web, titled things like "2012 in music", which attempt to summarize the highlights of the year in that particular context. Here's JWZ's effort, for example. Often they are prefixed with statements like "2012 was a strong year for music" or whatever. For me, 2012 was not a particularly great year. I discovered quite a lot of stuff that I love that was new to me, but not new in any other sense.

In music, there were a bunch of come-back albums that made the headlines. I picked up both Orbital's Wonky and Brian Eno's Lux (debatably a comeback: his first ambient record since 1983, his first solo effort since 2005, but his fourth collaborative effort on Warp in the noughties). I've enjoyed them both, but I've already forgotten Wonky and I still haven't fully embraced Lux (and On Land has not been knocked from the top spot when I want to listen to ambience). There was also Throbbing Gristle's (or X-TG's) final effort, a semi/post-TG, partly posthumous double-album swan song which, even more than Lux, I still haven't fully digested. In all honesty I think it was eclipsed by the surprise one-off release of a live recording of a TG side project featuring Nik Void of Factory Floor: Carter Tutti Void's Transverse, which is excellent. Ostensibly a four-track release, there's a studio excerpt V4 studio (Slap 1) which is available from (at least) Amazon. There's also a much more obscure fifth "unreleased" track cruX which I managed to "buy" from one of the web shops for zero cost.

The other big musical surprise for me last year was Beth Jeans Houghton and the Hooves of Destiny: Yours Truly, Cellophane Nose. I knew nothing of BJH, although it turns out I've heard some of her singles repeatedly on Radio 6, but her band's guitarist Ed Blazey and his partner lived in the flat below me briefly. In that time I managed to get to the pub with him just once, but he kindly gave me a copy of their album on 12" afterwards. It reminds me a bit of Goldfrapp circa "Seventh Tree": I really like it and I'm looking forward to whatever they do next.

Reznor's How To Destroy Angels squeezed out An Omen EP, which failed to set my world on fire as a coherent collection, despite a few strong songs individually.

In movies, sadly once again I'd say most of the things I recall seeing would be "also-rans". Prometheus was a disappointment, although I will probably rewatch it in 2D at least once. The final Batman was fun, although not groundbreaking to me, and it didn't surpass Ledger's efforts in The Dark Knight. Inception remains my favourite Nolan by a long shot. Looper is perhaps the stand-out, not least because it came from nowhere and I managed to avoid any hype.

In games, I moaned about having too many games to play, most of which are much older than 2012. I started Borderlands 2 after enjoying Borderlands (disqualified on age grounds) but to this day haven't pursued it much further. I mostly played the two similar meta-games: the PlayStation Plus free games, downloadable within a fixed time period, and the more sporadic but bountiful Humble Bundle whack-a-mole. More on these another time.

In reading, as is typical I mostly read stuff that was not written in 2012. Of that which was, Charles Stross's The Apocalypse Codex was an improvement over The Fuller Memorandum, which I did not enjoy much, but in general I find I much prefer Stross's older work to his newer; David Byrne's How Music Works was my first (and currently last) Google Books ebook purchase, and I read it entirely on a Nexus 7. I thoroughly enjoyed the book but the experience has not made a convert of me away from paper. He leans heavily on his own experiences, which is inevitable, but fortunately they are wide and numerous. Iain Banks' Stonemouth was an enjoyable romp around a fictional Scottish town (one which, I am reliably informed, is incredibly realistically rendered). One of his "mainstream" novels, it avoided a particular plot pattern that I've grown to dread with Banks, much to my surprise (and pleasure). Finally, the stand-out pleasant surprise novel of the year was Pratchett and Baxter's The Long Earth. With a plot device not unlike Banks' Transition or Stross's Family Trade series, the pair managed to write a journey-book capturing the sense-of-wonder that these multiverse plots are good for (or perhaps I have a weakness for them). It's hard to find the lines between Baxter's and Pratchett's writing, but the debatably-reincarnated Tibetan monk-cum-artificial-intelligence 'Lobsang' must surely be Pratchett's. Pratchett managed to squeeze out another non-Discworld novel (Dodger) as well as a long-overdue short story collection, although I haven't read either of them yet.

On to 2013's write-up...

Categories: Elsewhere

Carl Chenet: X-mas present: Brebis 0.9, the fully automated backup checker, released

Planet Debian - Wed, 25/12/2013 - 23:01

Follow me also on Twitter 

Just in time for this 2013 Christmas, the Brebis Project released Brebis "Bouddhinette" 0.9. Hope you’ll enjoy our X-mas present! Reminder: Brebis is the fully automated backup checker, a CLI tool developed in Python, allowing users to verify the integrity of archives (tar, gz, bz2, lzma, zip) and the state of the files inside the archives.

What’s new?

The major features for this release are:

  • support for the apk archive format
  • the CLI offers new options to store the configuration file (-C), the list of files (-L), or both (-O) in custom locations

Anisette, the proud mascot of the Brebis Project

The extensive list of the supported features is available on the Brebis Project homepage.

Feedback about Brebis

What do you think about the Brebis project? We at the Brebis Project welcome any feedback about Brebis. Feel free to comment on this blog, subscribe to the Brebis-users mailing list, reach me on Twitter, or email me directly at carl.chenet@brebisproject.org.

Official website: http://www.brebisproject.org
Mailing-list: http://lists.sourceforge.net/lists/listinfo/brebis-users
Categories: Elsewhere
