Feed aggregator

Simon Josefsson: Wifi on S3 with Replicant

Planet Debian - Sun, 10/08/2014 - 20:02

I’m using Replicant on my main phone. As I’ve written before, I didn’t get Wifi to work. The other day leth in #replicant pointed me towards a CyanogenMod discussion about a similar issue. The fix does indeed work, and allowed me to connect to wifi networks and to set up my phone for Internet sharing. Digging deeper, I found a CM Jira issue about it, and ultimately a code commit. It seems the issue is that more recent S3s come with a Murata Wifi chipset that uses MAC addresses not known back in the Android 4.2 (CM-10.1.3 and Replicant-4.2) days. Pulling in the latest fixes for macloader.cpp solves this problem for me, although I still need to load the non-free firmware images that I get from CM-10.1.3. I’ve created a pull request fixing macloader.cpp for Replicant 4.2 if someone else is curious about the details. You have to rebuild your OS with the patch for things to work (if you don’t want to, the workaround using /data/.cid.info works fine), and install some firmware blobs as below.

adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_apsta.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_apsta.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_mfg.bin_b0 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_mfg.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_mfg.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_p2p.bin_b0 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_p2p.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_p2p.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_sta.bin_b0 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_sta.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_sta.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt_murata /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt_murata_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt_semcosh /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt_murata /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt_murata_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt_semcosh /system/vendor/firmware/
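If you don’t want to rebuild, the /data/.cid.info workaround mentioned above amounts to telling macloader which chipset the phone has. A minimal sketch, assuming a Murata chipset and root access over adb (verify the exact CID string for your own device before writing it):

adb root
adb shell "echo murata > /data/.cid.info"   # CID string is device-specific; murata is an assumption here
adb reboot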

Categories: Elsewhere

Cyril Brulebois: Why is my package blocked?

Planet Debian - Sun, 10/08/2014 - 19:45

A bit of history: A while ago, udeb-producing packages were getting frozen on a regular basis when a d-i release was about to be cut. While I wasn’t looking at the time, I can easily understand the reasons behind that: d-i is built upon many components, it takes some time to make sure it’s basically in shape for a release, and it’s very annoying when a regression sneaks in right before the installation images get built.

I took over d-i release maintenance in May 2012 and only a few uploads happened before the wheezy freeze. I was only discovering the job at the time, and I basically released whatever was in testing then. The freeze began right after that (end of June), so I started double checking things affecting d-i (in addition to or instead of the review performed by other release team members), and unblocking packages when changes seemed safe, or once they were tested.

A few uploads happened after the wheezy release, and there has already been a Jessie Alpha 1 release. I was about to release Jessie Beta 1 after a fair bit of testing and a debian-installer upload; the only remaining bits were building installation images (hello Steve) and of course communication (mail announcement and website update).

Unfortunately a new upstream release reached testing in the meantime, breaking the installer in several ways. I’ll give details below, of course not because I want to point fingers at the maintainer, but to illustrate the ramifications that a single package’s migration to testing can induce.

  • parted 3.2-1 was uploaded on 2014-07-30 and migrated on 2014-08-05.

  • parted 3.2-2 fixed a regression reported in Ubuntu only (LP#1352252) which I also hit with images built locally after that migration.

  • I then built some images locally using fixed parted packages, but discovered that auto-lvm was still broken, which I reported in #757417.

  • After some investigation Colin confirmed some behavioral changes in this new parted release, which implied the need to update several other partman-* components: #757661, #757662, #757663, #757664, #757665, #757666.

  • Thankfully fixes have been added for all of those (except partman-zfs at the time of writing), but more testing is needed before possibly urgenting those packages so that they get into testing as soon as possible.

Since I’d like to avoid such an experience in the future, I’ll probably reintroduce the old method and freeze all udeb-producing packages during the next d-i releases.

So now you know why it might happen. Your next question might be: “What should I do when my package gets caught in that net?” In that case, please get in touch with both debian-release@ and debian-boot@, asking for an unblock. I’ll then review your particular package, and either let it migrate to testing, or delay it until after the release.

Categories: Elsewhere

VM(doh): OPCache Module for Drupal

Planet Drupal - Sun, 10/08/2014 - 13:08

Last Friday, we published the start of our OPCache module for Drupal. While it's still lacking a few features to be ready for a tagged release, we thought it'd be a good idea to get some working code out there.

The goal for this module is to allow Drupal site administrators to clear their opcode cache if they are using the PHP OPcache extension (also known as Zend OPcache or Zend Optimizer+) as well as provide an interface similar to the Memcache Admin module.

As of right now, the only implemented feature is cache flushing. We understand that you might be running on multiple webservers (we build a lot of sites that run on multiple webservers), so we included the ability to flush caches on all of your webservers at once.

Go try the module and feel free to submit patches!

Categories: Elsewhere

Russ Allbery: Orphaning some software packages

Planet Debian - Sun, 10/08/2014 - 06:23

As mentioned in my previous message, I'm orphaning various software packages that I was maintaining as part of my job at Stanford, or that for some other reason I'm no longer using directly. My goal is to free up some time and space for new work projects at my new employer, for other hobbies, and to take better care of other projects that I'm not orphaning.

The following software packages are now orphaned, and marked as such on their web pages:

I'm also stepping down from Debian package maintenance for the OpenAFS and Shibboleth packages, and have already notified the relevant communities. For the Debian packages, and for the above software packages, I will continue to provide security support until someone else can take them over.

WebAuth is going to be in a state of transition as noted on its page:

My successor at Stanford will be continuing maintenance and development, but that person hasn't been hired yet, and it will take some time for them to ramp up and start making new releases (although there may be at least one interim release with work that I'm finishing now). It's therefore not strictly orphaned, but it's noted that way on my software pages until someone else at Stanford picks it up.

Development of the other packages that I maintain should continue as normal, with a small handful of exceptions. The following packages are currently in limbo, since I'm not sure if I'll have continued use for them:

I'm not very happy with the current design of either kadmin-remctl or wallet, so if I do continue to maintain them (and have time to work on them), I am likely to redesign them considerably.

For all of my packages, I've been adding clones of the repository to GitHub as an additional option besides my personal Git repository server. I'm of two minds about using (and locking myself into) more of the GitHub infrastructure, but repository copies on GitHub felt like they might be useful for anyone who wants to fork or take over maintenance. I will be adding links to the GitHub repositories to the software pages for things that are in Git.

If you want to take over any of the orphaned software packages, feel free. When you're ready for the current software pages to redirect to their new homes, let me know.

Categories: Elsewhere

Ian Donnelly: How-To: kdb merge

Planet Debian - Sun, 10/08/2014 - 04:30

Hi Everybody,

As you may know, part of my Google Summer of Code project involved the creation of merge tools for Elektra. The one I am going to focus on today is kdb merge. The kdb tool allows users to access and perform functions on the Elektra Key Database from the command line. We added a new command to this very useful tool, the merge command. This command allows a user to perform a three-way merge of KeySets from the kdb tool.
The command to use this tool is:
kdb merge [options] ourpath theirpath basepath resultpath

The standard naming scheme for a three-way merge consists of ours, theirs, and base. Ours refers to the local copy of a file, theirs refers to a remote copy, and base refers to their common ancestor. This works very similarly for KeySets, especially ones that consist of mounted conffiles. For mounted conffiles, ours should be the user’s copy, theirs would be the maintainer’s copy, and base would be the conffile as it was during the last package upgrade or during the package install. If you are just trying to merge any two KeySets that derive from the same base, ours and theirs can be interchanged. In kdb merge, ourpath, theirpath, and basepath work just like ours, theirs, and base, except each one represents the root of a KeySet. Resultpath is pretty self-explanatory: it is simply where you want the result of the merge to be saved.

As for the options, there are a few basic ones and one option, strategy, that is very important. The basic options are:
-H --help which prints the help text
-i --interactive which attempts the merge in an interactive way
-t --test which tests the proposed merge and informs you about possible conflicts
-v --verbose which runs the merge in verbose mode
-V --version which prints info about the version

The other option, strategy, is:
-s --strategy which is used to specify a strategy to use in case of a conflict

The current list of strategies is:
preserve the merge will fail if a conflict is detected
ours the merge will use our version during a conflict
theirs the merge will use their version during a conflict
base the merge will use the base version during a conflict

If no strategy is specified, the merge will default to the preserve strategy so as not to risk making the wrong decision. If any of the other strategies are specified, then when a conflict is detected, merge will use the Key specified by the strategy (ours, theirs, or base) for the resulting Key.

An example of using the kdb merge command:
kdb merge -s ours system/hosts/ours system/hosts/theirs system/hosts/base system/hosts/result
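As a slightly fuller, hypothetical round trip (the key names and values below are made up for illustration; kdb set and kdb ls are the standard kdb subcommands for writing and listing keys):

# Three KeySets derived from the same base; both sides changed the value
kdb set system/hosts/base/localhost 127.0.0.1
kdb set system/hosts/ours/localhost 127.0.0.2
kdb set system/hosts/theirs/localhost 127.0.0.3
# Both sides differ from base, so this is a conflict; -s ours keeps 127.0.0.2
kdb merge -s ours system/hosts/ours system/hosts/theirs system/hosts/base system/hosts/result
# Inspect the merged KeySet
kdb ls system/hosts/result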

-Ian S. Donnelly

Categories: Elsewhere

Russell Coker: Being Obviously Wrong About Autism

Planet Debian - Sat, 09/08/2014 - 18:01

I’m watching a Louis Theroux documentary about Autism (here’s the link to the BBC web site [1]). The main thing that strikes me so far (after watching 7.5 minutes of it) is the bad design of the DLC-Warren school for Autistic kids in New Jersey [2].

A significant portion of people on the Autism Spectrum have problems with noisy environments; whether most Autistic people have problems with noise depends on what degree of discomfort is considered a problem. But I think it’s safest to assume that the majority of kids on the Autism Spectrum will behave better in a quiet environment. So any environment that is noisy will cause more difficult behavior in most Autistic kids, and the kids who don’t have problems with the noise will have problems with the way the other kids act. Any environment that is more prone to noise pollution than is strictly necessary is hostile to most people on the Autism Spectrum and to all groups of Autistic people.

The school that is featured at the start of the documentary is obviously wrong in this regard. For starters I haven’t seen any carpet anywhere. Carpeted floors are slightly more expensive than lino but the cost isn’t significant in terms of the cost of running a special school (such schools are expensive by private-school standards). But carpet makes a significant difference to ambient noise.

Most of the footage from that school included obvious echoes even though they had an opportunity to film when there was the least disruption – presumably noise pollution would be a lot worse when a class finished.

It’s not difficult to install carpet in all indoor areas in a school. It’s also not difficult to install rubber floors in all outdoor areas in a school (it seems that most schools are doing this already in play areas for safety reasons). For a small amount of money spent on installing and maintaining noise absorbing floor surfaces the school could achieve better educational results. The next step would be to install noise absorbing ceiling tiles and wallpaper, that might be a little more expensive to install but it would be cheap to maintain.

I think that the hallways in a school for Autistic kids should be as quiet as the lobby of a 5 star hotel. I don’t believe that there is any technical difficulty in achieving that goal, making a school look as good as an expensive hotel would be expensive but giving it the same acoustic properties wouldn’t be difficult or expensive.

How do people even manage to be so wrong about such things? Do they never seek any advice from any adult on the Autism Spectrum about how to run their school? Do they avoid doing any of the most basic Google searches for how to create a good environment for Autistic people? Do they just not care at all and create an environment that looks good to NTs? If they are just trying to impress NTs then why don’t they have enough pride to care that people like me will know how bad they are? These aren’t just rhetorical questions, I’d like to know what’s wrong with those people that makes them do their jobs in such an amazingly bad way.

Categories: Elsewhere

Joachim's blog: Using Human Queue Worker to process comments

Planet Drupal - Sat, 09/08/2014 - 11:44

Some time ago, I released Human Queue Worker, a module that takes the concept of the Drupal Queue system, but where the processing of the items is done by human users rather than an automated process. I say 'takes the concept'; it in fact uses the Drupal Queue to create and claim queue items, but instead of declaring your queue with hook_cron_queue_info(), you declare it to Human Queue Worker as a queue that humans will be working on.

This was written for my current project and for a fairly specific need, and I didn't imagine many sites would be using it. However, it has an obvious and popular application: approving comments. I always figured it would be nice if someone wrote a little module to define a comment processing human queue.

Well, that someone is me, and the time is now. You see, I'm an idiot: when I set up this new blog site of mine, I totally forgot to set up a CAPTCHA, and then when I added Mollom, I didn't set it up properly. So this site has a few hundred spammy comments that I need to delete.

The problem is that comment management takes time. Unless there is some magical area of the core UI I've completely missed, I can either visit each node and delete them one by one, or use the comment admin form. There, I can mass-delete the ones with obvious spammy titles, but all the others will still need individual inspection.

The Human Queue UI simplifies this hugely. There's just one page for the queue. When you go to that page, you're presented with an item to process. In the case of comment approval, that's the comment itself, plus the parent node and parent comment to give you some context. To process the comment, click one of two buttons: 'Publish' or 'Delete'. The comment is dealt with, and the form reloads, with a brand new comment for you to process. Which means that the only clicking you do is the action buttons: Publish; Delete; Publish; Delete. (Though with the amount of spam on my site, it's probably Delete; Delete; Delete, like the Cybermen.)

I've not timed it, but I reckon I can probably go at quite a rate. And that's with just one of me: the core Queue system guarantees that only one worker can claim an item at any one time, and that applies to human workers too. So if another user were to work the queue too, by going to the same page, they would be getting shown different comments to work on, and we'd work through the comments at twice the rate.

Now I just need to find a compliant friend and make them into my worker drone. If you're interested, please don't post a comment!

Categories: Elsewhere

Steve Kemp: Rebooting the CMS

Planet Debian - Sat, 09/08/2014 - 10:59

I run a cluster for the Debian Administration website, and the code is starting to show its age: it is not so modern, and has accumulated a lot of baggage.

Given the relatively clean separation between the logical components, I'm interested in trying something new. In brief, the current codebase allows:

  • Posting of articles, blog-entries, and polls.
  • The manipulation of the same.
  • User-account management.

It crossed my mind the other night that it might make sense to break this code down into a number of mini-servers - a server to handle all article-related things, a server to handle all poll-related things, etc.

If we have a JSON endpoint that will allow:

  • GET /article/32
  • POST /article/ [create]
  • GET /articles/offset/number [get the most recent]

Then we could have a very thin shim/server on top of that which would present the public API. Of course the internal HTTP overhead might make this unworkable, but it is an interesting approach to the problem, and would allow the backend storage to be migrated in the future without too much difficulty.
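As an illustration, here is a sketch of how a client (or the thin shim) might talk to such a mini-server; the host and port are hypothetical, and the JSON payload is made up for the example:

# Fetch article 32 (hypothetical host/port)
curl http://localhost:8080/article/32
# Create an article
curl -X POST -H 'Content-Type: application/json' \
     -d '{"title": "Hello", "body": "..."}' \
     http://localhost:8080/article/
# Fetch the ten most recent articles, starting at offset 0
curl http://localhost:8080/articles/0/10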

At the moment I've coded up two trivial servers, one for getting user-data (to allow login requests to succeed), and one for getting article data.

There is a tiny presentation server written to use those back-end servers, and it seems like an approach that might work. Of course deployment might be a pain...

It is still an experiment rather than a plan, but it could work out: http://github.com/skx/snooze/.

Categories: Elsewhere

Russ Allbery: A new challenge

Planet Debian - Sat, 09/08/2014 - 06:01

Those reading this journal may have noticed that my rate of posting has dropped a bit in the past few years, and quite a lot in the past year. One of the major reasons for this was work, which had been getting more bureaucratic, more stressful, less trusting, and more fearful. After this got drastically worse in the past six months, I finally decided enough was enough and took advantage of a good opportunity to do something different.

I will be joining Dropbox's site reliability engineering team in a week and a half (which means that I'll be working on their servers, not on the product itself). It will take a few months to settle in, but hopefully this will mean a significant improvement to my stress levels and a lot of interesting projects to work on.

I'm taking advantage of this change to inventory the various things I'm currently committed to and let go of some projects to make more space in my life. There are also a variety of software projects that I was maintaining as part of my job at Stanford, and I will be orphaning many of those packages. I'll make another journal post about that a bit later.

For Debian folks, I am going to be at Debconf, and hope to meet many of you there. (It's going to sort of be my break between jobs.) In the long run, I'm hoping this move will let me increase my Debian involvement.

Over the longer term, I expect most of my free software work, my reviews, and the various services I run to continue as before, or even improve as my stress drops. But I've been at Stanford for a very long time, so this is quite the leap into the unknown, and it's going to take a while before I'm sure what new pattern my life will fall into.

Categories: Elsewhere

X-Team: ContributeX: Drupal 8 needs you

Planet Drupal - Sat, 09/08/2014 - 02:38
“Contributing to Drupal is life-changing.” Dries, you couldn’t be more right. At DrupalCamp Singapore 2014, developers from all over Asia had the opportunity to spend a day focused on learning, collaborating and preparing for Drupal 8. One takeaway was being reminded of the fact that Drupal 8 still needs significant help from its community to be...
Categories: Elsewhere

Clint Adams: The politically-correct term is a juvenile cricket

Planet Debian - Fri, 08/08/2014 - 22:16

Normally I'm disgusted by fangirling of jwz, but it seems that he finally wrote something I like.

Categories: Elsewhere

Daniel Pocock: Help needed reviewing Ganglia GSoC changes

Planet Debian - Fri, 08/08/2014 - 22:14

The Ganglia project has been delighted to have Google's support for 5 students in Google Summer of Code 2014. The program officially finishes in ten more days, on 18 August.

If you are a user of Ganglia, Nagios, RRDtool or R, or just an enthusiastic C or Python developer, you may be able to try out the cool new features the students have been working on and provide them with feedback.

  • Chandrika Parimoo (Python, Nagios, and some syslog): Chandrika generalized some of my ganglia-nagios-bridge code into the PyNag library. I then used it as the basis for syslog-nagios-bridge. Chandrika has also done some work on improving the ganglia-nagios-bridge configuration file format.
  • Oliver Hamm (C): Oliver has been working on metrics about Ganglia infrastructure. If you have a large and dynamic Ganglia cloud, this is for you.
  • Plamen Dimitrov (R, RRDtool): Plamen has been building an R plugin for inspecting RRD files from Ganglia or any other type of RRD.
  • Rana (NVIDIA, C): Rana has been working on improvements to Ganglia monitoring of NVIDIA GPUs, especially in HPC clusters.
  • Zhi An (Java, JMX): Zhi An has been extending the JMXetric and gmetric4j projects to provide more convenient monitoring of Java server processes.

If you have any feedback or questions, please feel free to discuss on the Ganglia-general mailing list and CC the student and their mentor.

Categories: Elsewhere

Jan Wagner: Monitoring Plugins Debian packages

Planet Debian - Fri, 08/08/2014 - 22:03

You may wonder why the good old nagios-plugins are not up to date in Debian unstable and testing.

Since the people behind and maintaining the plugins <= 1.5 were forced to rename the software project to Monitoring Plugins, some work behind the scenes and much QA work was necessary to release the software in a proper state. This happened four weeks ago with the release of version 2.0 of the Monitoring Plugins.

With one day of delay the package was uploaded into unstable, but did hit the Debian NEW queue due to the changed package name(s). Now we (and maybe you) are waiting for them to be reviewed by ftp-master. This will hopefully happen before the jessie freeze.

Until this happens, you may grab packages for wheezy from the 'wheezy-backports' suite at ftp.cyconet.org/debian/ or the 'debmon-wheezy' suite at debmon.org. Feedback is much appreciated.
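For example, a minimal sketch of pulling them in from the cyconet repository could look like the following; the suite name comes from the post above, but the exact repository layout and component are assumptions, so check the instructions on those sites first:

# Hypothetical sources.list entry; verify layout and component on the site
echo 'deb http://ftp.cyconet.org/debian/ wheezy-backports main' \
  > /etc/apt/sources.list.d/monitoring-plugins.list
apt-get update
apt-get install -t wheezy-backports monitoring-plugins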

Categories: Elsewhere

tsvenson: How Open Source worx

Planet Drupal - Fri, 08/08/2014 - 22:00

It's funny how quickly things happen - really it is!

Just a week ago I posted I am a Follower & Thinker, describing some of my experiences with Open Source. Then, just days later, someone left me a message in an open Drupal chatroom. What happened next was the result of a chain of interesting - but more or less isolated - events and situations.

Quick background

I'm currently spending some of my time working on an open initiative called Baksteg. I have quite a bit of experience with Drupal, and for me it is more than good enough to build the site I have in mind. Just recently I began doing some real prototyping of ideas too. Most of the testing has gone very well, but some has not, and for those parts I have started to turn to the online community more - poking around for help to find solutions or alternative approaches.

Online communities are everywhere

www.drupal.org is a fantastic resource to start with. Many problems can be solved with a quick search there, or on its related sites such as groups.drupal.org and api.drupal.org. Then, when you struggle to find useful help by just searching, you have already started to find other channels to communicate on. In fact you find people using, and working with, Drupal everywhere these days. That even includes the same social networks everyone else uses, such as Twitter, Facebook and LinkedIn. Personally I prefer Twitter, as it fits my needs and interests well enough, for Drupal and other topics alike.

Over the last few months I have worked on tuning my own use of Twitter. I wanted it less as a *megaphone* and more as a communication tool for meaningful, and at the same time a bit entertaining, conversations. It has worked out really well, and - as a welcome bonus - it has helped me much better appreciate what others actually share out there. My two main feeds are now @tsvenson (personal, mainly in English) and @Baksteg (mainly in Swedish). They both play important roles in my daily needs, which - totally coincidentally - fits really well with how I now see open source collaboration working ;)

Kinda one of the original social networks, from not that long ago

However, this story happened in the Swedish Drupal channel on IRC - a social network for geeks and nerds since long ago. It was kristofferw_ (Kristoffer Wiklund) asking me about this entity ID *problem* I was having trouble with. Some days earlier I had posted a description of it in the Swedish Drupal Group. While that post resulted in some nice advice and ideas, they all turned out to be dead ends. Still, it gave me the opportunity to play around with some other modules that will be used later.

- Practice is always good I'm told ;)

Kristoffer was one of those who had helped there. We also go further back, having met at several Drupal events. Thus we already knew each other, at least when it comes to Drupal stuff. I explained that testing had been going well, but that the work had to, for good reasons, be *pushed* forward. That's when Kristoffer offered to help and write a simple module, if nothing else just to get some coding experience from poking around.

As things often end up then, when passionate nerds and geeks find an interesting problem or challenge, brainstorming starts and ideas flows back and forth.

IRC is an important channel in the Drupal community, but not because of any fancy features. It's its simplicity that makes it such a useful communication hub. A hub that connects us to worldwide chatrooms that can run in the background, only to be brought forward when needed or when we have a moment to spare. Or just as inspiration...

Just following the discussions is itself educational and often spurs new ideas. There are many different specialized chatrooms too, one simply called #Drupal-support, most often filled with several hundred users helping each other. It is in these chatrooms that many of the toughest challenges in improving the project are ironed out.

A new project is born

It is also here that many new projects are born, small as well as large ones. As Kristoffer and I began to talk, we quickly found a much more interesting approach. This one had much better potential and many more use cases, as well as better flexibility and UX benefits - so we created a sandbox project on drupal.org. It is now used so that we can experiment further in a more collaborative, open and efficient way.

If this module turns out the way we think it will, then we can take the next step and apply for it to become a full project. This is a form of quality assurance process that includes us behind the project too, but to get there we need to pass some gateways. These are not put in place to stop us though - quite the opposite. Many members of the community voluntarily spend their own time helping others pass. The whole guided process is filled with tools, tips and personal advice on how to make the project work as well as possible, not just for oneself but for others too.

Once it is a full project, access to new features for organizing and administrating it is granted, including a proper namespace and better ways to manage versions and releases. These are features rarely needed when just poking around and testing ideas in a sandbox.

I have also learned what an amazing way this is to improve my own skills - not just with coding, but also in how to collaborate and how great knowledge transfer can work, all while getting to know new, interesting people and being exposed to new cultures and ideas.

For me the Drupal community encapsulates all this, and more. Then, taking its size and success into account, it is a pretty remarkable achievement that shows how things can be done quite differently.

Add this to the mix:

  • Drupal is used on millions of sites
  • Drupal probably already generates a multi-billion-dollar ecosystem around it

 

Still there is room for practically anyone, like myself, to feel welcomed and included.

Even if it just starts out as a learning experience!

What this module does

At first glance it looks to do little more than add an extra step when creating new content, while hiding parts of the form from display. That's right - that is basically what it does, and it is one of its main purposes.

Some might now say - Hey, you're just going to end up with tons of garbage content! - and they would be right too. Sure, this is going to make it quicker to create a lot of content - yes, it can certainly be used for that!

But it also makes some other quite interesting new things possible - This is why:

  • Entities in Drupal have unique ID-numbers, which can be used for all sorts of interesting things.
  • A new entity doesn't get its own ID until after it has been saved the first time.

 

Therein we find my initial problem! There was no smooth way to get around the fact that users filling in those forms must remember to manually save once before adding content to certain fields. Worse, it would be tricky to notice when forgotten, as in most cases the data would still evaluate, just to something much different from the entity ID that was needed.

As I played around with several other modules to see if it was possible to circumvent this somehow, I always hit the same brick wall. The problem was that every new idea that showed promise resulted in a solution with tons of complexity, not just in one place but in several. Gladly, that complexity was what Kristoffer and I could avoid with just a little bit of collaborative brainstorming!

What this module now does is simple:

  1. Hijacks entity creation (just for content types so far)
  2. Allows limiting the content create form down to displaying only the Title field
  3. Creates the new entity
  4. Immediately reopens it in normal edit mode

 

Content type settings:

 

Adding a new node:

 

Thanks to this, I can now avoid displaying any fields that need the entity ID, minimizing the risk of mistakes. At the same time, this new - pre-create content - step can be used to show only the bare essential fields, opening up new interesting possibilities. Any rarely used, or otherwise optional, fields can be dealt with better as they come into play.

This new, less complex solution will also be much easier to improve upon. It has actually already allowed me to visualize and identify a whole bunch of interesting uses and UX benefits. It will improve things for many roles, not least site builders and content editors.

So, for me, this is no garbage creator. Instead I see how a small improvement can open up many new creative ways to work and collaborate on content. However, that remains to be seen; after all, it is still just a sandbox project on drupal.org, a kind of - Open Source playground - for nerds and geeks like myself.

Not everything is as limited as what it usually says on the tin. For me, these are a few good examples of how things in Open Source can work out just nicely.

Note: What *specifics* Kristoffer gets out of this I know little about. Those specifics are his to share, and actually not that important to me - as long as I sense we both get something of value out of the collaboration.

Categories: Elsewhere
