Some years ago I worked on a project that involved a database cluster of two fairly heavily loaded Sun E6500 servers. I believe that the overall price was several million pounds. It's the type of expensive system where it makes sense to spend the money to do things properly in all respects.
The first interesting thing was the data center where it was running. The front door had a uniformed security guard and a sign threatening immediate dismissal for anyone who left the security door open. The back door was wide open for the benefit of the electricians who were working there. Presumably anyone who had wanted to steal some servers could have gone to the back door and asked the electricians for assistance in removing them.
The system was poorly tested. My colleagues thought that with big important servers you shouldn't risk damage by rebooting them. My opinion has always been that rebooting a cluster should be part of standard testing, and that it's especially important with clusters, which have more interesting boot sequences. But I lost the vote and there was no reboot testing.
Along the way there were a number of WTFs in that project. One was when the web developers decided to force all users to install the latest beta release of Internet Explorer, a decision that was only revoked when the IE install process broke MS-Office on the PC of a senior manager. Another was putting systems with a default Solaris installation live on the Internet with all default services running; there's never a reason for a database server to be directly accessible over the Internet.

No Backups At All
But I think that the most significant failing was the decision not to make any backups. This wasn't merely forgetting to make backups; when I raised the issue I received a negative reaction from almost everyone. As an aside, I find it particularly annoying when someone implies that I want backups because I am likely to stuff things up.
There are many ways of proving that there’s a general lack of competence in the computer industry. But I think that one of the best is the number of projects where the person who wants backups has their competence questioned instead of all the people who don’t want backups.
A decision to make no backups relies on one of two conditions: either the service has to be entirely unimportant, or you need to have no OS bugs or hardware defects that can corrupt data, no application bugs, and a team of sysadmins who never make mistakes. The former condition raises the question of why the service is being run at all, and the latter is impossible.
As I’m more persistent than most people I kept raising the issue via email and adding more people to the CC list until I got a positive reaction. Eventually I CC’d someone who responded with “What the fuck” which I consider to be a reasonable response to a huge and expensive project with no backups. However the managers on the CC list regarded the use of profanity in email to be a much more serious problem. To the best of my knowledge there were never any backups of that system but the policy on email was strongly enforced.
This is only a partial list of the WTF incidents that assisted in my decision to leave the UK and migrate to the Netherlands.

Not Doing Much
About a year after leaving I returned to London for a holiday and had dinner with a former colleague. When I asked what he was working on he said “Not much”. It turned out that proximity to the nearest manager determined the amount of work that was assigned. As his desk was a long way from the nearest manager, he had spent about 6 months getting paid to read Usenet. That wasn't really a surprise given my observations of the company in question.
In this article I am going to show you how to embed a View in a template file (.tpl). Using a cool Views API function, you can render the display of any View and even pass it arguments.
A couple of months ago, after a particularly furious week of trying to contribute something useful to Drupal core, I woke up one morning to see a lot of activity on my Twitter account (pretty much unheard of for me). I had received this tweet from webchick (Angie Byron):
@alasdaircf Hey, thanks for all the CMI conversion patches! Keep 'em comin'! :D
This was an amazing feeling, as Angie is one of the core maintainers for Drupal and a really big name in the Drupal world. But in a deeper way I think that it symbolises some of the things that I really appreciate about the Drupal community.
Drupal, probably like many other open source technologies, is very meritocratic. There is definitely some level of hierarchy; not everyone is a co-maintainer of core. Talent and ability are important and are a huge part of what drives the technology forward. But the fact that I got thanks from one of the major core maintainers demonstrates something else. That isn't to say that I don't have any talent or ability, but I am relatively new to this whole world and at the moment I don't have the ability and comprehension of others.
What I have is an urge to take what I do know and put in a little bit of time to help and contribute back. And a big part of why I have that urge is that the people involved in Drupal seem to have a real appreciation for any time that I put in. I have to say that this is an unusual and special part of this community.
Having been involved in music (mostly classical) for the majority of my life, I can say that this is not the case everywhere. Behind a lot of good amateur ensembles there are people who put in a lot of effort organising. But when it comes to performing, people aren't really there to pat you on the back for trying hard if they don't think you are performing to a standard they would like to listen to. I'm not an idiot (well, not all the time); I know that there are lots of very rational reasons why the two are so far apart, and if I were to make a comparison with sporting activities, not having any ability is a real problem.
However, that doesn't change the fact that alongside all the other things that are great about the Drupal community, it really is a community that appreciates effort.
A common topic of discussion on computer users' group mailing lists is advice on buying a PC. I think that most of the offered advice isn't particularly useful, with an excessive focus on building or upgrading PCs and on getting the latest and greatest. So I'll blog about it instead of getting involved in more mailing-list debates.

A Historical Perspective – the PC as an Investment
In the late '80s a reasonably high-end white-box PC cost a bit over $5,000 in Australia (or about $4,000 without a monitor). That was cheaper than name-brand PCs, which cost upwards of $7,000, but was still a lot of money. $5,000 in 1988 would be comparable to $10,000 in today's money. That made a PC a rather expensive item which needed to be preserved. There weren't a lot of people who could just discard such an investment, so a lot of thought was given to upgrading a PC.
Now a quite powerful desktop PC can be purchased for a bit under $400 (maybe $550 if you include a good monitor) and a nice laptop is about the same price as a desktop PC and monitor. Laptops are almost impossible to upgrade apart from adding more RAM or storage but hardly anyone cares because they are so cheap. Desktop PCs can be upgraded in some ways but most people don’t bother apart from RAM, storage, and sometimes a new video card.
If you have the skill required to successfully replace a CPU or motherboard then your time is probably worth enough that getting more value out of a PC that was worth $400 when new and is worth maybe $100 when it’s a couple of years old probably isn’t a good investment.
Times have changed and PCs just aren't worth enough to be bothered upgrading. A PC is a disposable item, not an investment.

Buying Something Expensive?
There are a range of things that you can buy. You can spend $200 on a second-hand PC that’s a couple of years old, $400 on a new PC that’s OK but not really fast, or you can spend $1000 or more on a very high end PC. The $1000 PC will probably perform poorly when compared to a PC that sells for $400 next year. The $400 PC will probably perform poorly when compared to the second-hand systems that are available next year.
If you spend more money to get a faster PC then you are only getting a faster PC for a year until newer cheaper systems enter the market.
As newer and better hardware is continually being released at low enough prices that make upgrades a bad deal I recommend just not buying expensive systems. For my own use I find that e-waste is a good source of hardware. If I couldn’t do that then I’d buy from an auction site that specialises in corporate sales, they have some nice name-brand systems in good condition at low prices.
One thing to note is that this is more difficult for Windows users due to “anti-piracy” features. With recent versions of Windows you can’t just put an old hard drive in a new PC and have it work. So the case for buying faster hardware is stronger for Windows than for Linux.
That said, $1,000 isn't a lot of money, so spending more for a high-end system isn't necessarily a big deal. But we should keep in mind that it's just a matter of getting a certain level of performance a year before it is available in cheaper systems. Getting a $1,000 high-end system instead of a $400 cheap system means getting that level of performance maybe a year earlier, and therefore at a price premium of maybe $2 per day. I'm sure that most people spend more than $2 per day on more frivolous things than a faster PC.

Understanding How a Computer Works
As so many things are run by computers, I believe that everyone should have some basic knowledge of how computers work. But a basic knowledge of computer architecture isn't required when selecting parts to assemble into a system; one can know all about selecting a CPU and motherboard to match without understanding what a CPU does (apart from a vague idea that it's something to do with calculations). Equally, one can have a good knowledge of how computers work without knowing anything about the part numbers that could be assembled to make a working system.
If someone wants to learn about the various parts on sale then sites such as Tom's Hardware provide a lot of good information that allows people to learn without the risk of damaging expensive parts. In fact the people who work for Tom's Hardware frequently test parts to destruction for the education and entertainment of readers.
But anyone who wants to understand computers would be better off spending their time using any old PC to read Wikipedia pages on the topic instead of spending their time and money assembling one. To learn the basics of computer operation, the Wikipedia page for “CPU” is a good place to start. The Wikipedia page for “hard drive” is a good start for learning about storage, and the page for “Graphics Processing Unit” covers graphics processing. Anyone who reads those three pages, as well as a selection of the pages they link to, will learn a lot more than they could ever learn by assembling a PC. Of course there are lots of other things to learn about computers, but Wikipedia has pages for every topic you can imagine.
I think that the argument that people should assemble PCs to understand how they work was not well supported in 1990, and it ceased to be accurate once Wikipedia became popular and well populated.

Getting a Quality System
There are a lot of arguments about quality and reliability, most without any supporting data. I believe that a system designed and manufactured by a company such as HP, Lenovo, NEC, Dell, etc is likely to be more reliable than a collection of parts uniquely assembled by a home user – but I admit to a lack of data to support this belief.
One thing that is clear however is the fact that ECC RAM can make a significant difference to system reliability as many types of error (including power problems) show up as corrupted memory. The cheapest Dell PowerEdge server (which has ECC RAM) is advertised at $699 so it’s not a feature that’s out of reach of regular users.
I think that anyone who makes claims about PC reliability and fails to mention the benefits of ECC RAM (as used in Dell PowerEdge tower systems, Dell Precision workstations, and HP XW workstations among others) hasn’t properly considered their advice.
Also, when discussing overall reliability, the use of RAID storage and a good backup scheme should be considered. Good backups can do more to save your data than anything else.

Conclusion
I think it’s best to use a system with ECC RAM as a file server. Make good backups. Use ZFS (in future BTRFS) for file storage so that data doesn’t get corrupted on disk. Use reasonably cheap systems as workstations and replace them when they become too old.
Update: I find it rather ironic when a discussion about advice on buying a PC gets significant input from people who are well paid for computer work. It doesn't take long for such a discussion to take enough time that the people involved could have spent that time working instead, put enough money in a hat to buy a new PC for the user in question, and still have had money left over.
Midwestern Mac, LLC: Moving Server Check.in functionality to Node.js increased per-server capacity by 100x
Just posted a new blog post to the Server Check.in blog: Moving functionality to Node.js increased per-server capacity by 100x. Here's a snippet from the post:
One feature that we just finished deploying is a small Node.js application that runs in tandem with Drupal to allow for an incredibly large number of servers and websites to be checked in a fraction of the time that we were checking them using only PHP, cron, and Drupal's Queue API.
If you need to do some potentially slow tasks very often, and they're either network or IO-bound, consider moving those tasks away from Drupal/PHP to a Node.js app. Your server and your overloaded queue will thank you!
Now you can track it via the tracker; thanks to the release team for filing it.
The UCSF Drupal Web Starter Kit project has been our most successful university project to date. It has empowered UCSF to roll out sites for small departments, offices, and researchers in a matter of minutes.
Just 3 months after launch, 70 sites have gone live.
Here are a few examples of sites leveraging the Drupal Web Starter Kit:
UCSF has hundreds of small web properties for offices, researchers and small departments who don’t have the budgets and resources to create custom websites. Historically these groups have been left to their own devices to cobble together sites by whatever means necessary. These sites grow quickly out of date, are hard to maintain and rarely adhere to UCSF brand guidelines.
UCSF created an initiative to build a Drupal install profile that they could offer to these groups at minimal cost and effort. UCSF turned to Chapter Three to design and build this solution.

The solution
A flexible information architecture
Because this web solution had to work for small departments, offices, and researchers, we needed to find some common ground in how the sites were structured, while still providing enough flexibility for end users to modify the site’s structure to fit their needs.
We began by creating a menu structure consisting of “Home, About, News, Events, Publications, Services and People”. We arrived at this list after careful research into the commonalities across sites for the three key audiences. This meant that when a new website was created, the new client would have a primary navigation menu which was already in place. They could then add items to the menu as needed, customizing it to fit their specific needs.
We also created specific content types for News & Events. Events were structured so that they could show both upcoming and past events. Over time, our goal is to extend the project to create structure around more content, including Publications and People.
Three different palettes
We collaborated with UCSF’s brand specialist to ensure that our designs were approved at the highest level to properly represent the look and feel of the University. We delivered three different color palettes of the template so that end users could pick the color scheme they liked most for their site.
Robust content display options
To empower the admins to have more control over the key content regions, we designed a WYSIWYG editor with the power to do far more than add text, links and images. All project administrators can add:
- vertical tabs
- tool tips
Additionally, special care was taken to ensure that the back-end system could be easily controlled by individuals who self-identified as “non-technical” people.
Responsive design framework
The future is device agnostic. As screen sizes multiply by the day, we knew that delivering a fully responsive site was paramount for the long-term success of this project. We accounted for this with a fully responsive solution which provides legible content on any device. Since this solution was meant for hundreds of groups at UCSF, accounting for the long-term viability of the website was fundamental to its success.
We appreciate the opportunity to work with an amazing client like UCSF. The project has been a resounding success for all involved. We look forward to building on this framework long into the future to better equip UCSF's groups with the tools they need to do their jobs.
Here we go! Portland's Drupalcon is here. Here is a quick update about some of the exciting things that Metal Toad is bringing to the event. Stop by our booth (#207) and come party with us Tuesday and Wednesday. Come watch us record the podcast live and even step up to the mic if you dare. T-shirts, wine, stickers, foosball, Drupal!?!?! Whoa.
We'll be streaming each of our three keynotes live beginning on Tuesday, with Dries' infamous #DriesNote at 11:30am PDT (Pacific Daylight Time, UTC-7).
Have a burning question you want to ask our keynote speakers? Michael Anello from the Drupal Easy podcast will be fielding and moderating your Twitter questions in real time for Dries, Karen, and Michael following each presentation.
DrupalCon Portland is off with a bang! Over 1,270 people have already arrived to pick up their badges and DrupalCon t-shirts, and we're expecting as many attendees to arrive tomorrow.
Today alone, over 498 training attendees rolled in, as well as nearly 90 attendees for the CXO event. We're expecting over 3,300 people to attend the conference this week, so don't get stuck in line, get here early and grab your badge before sessions start at 9:00am.
If you haven't registered yet and still want to attend - this is your chance!
What if, just like Bill Murray in Groundhog Day, you could wake up to a fresh and identical development environment completely free of yesterday's experiments and mistakes? Vagrant lets you do exactly that. more>>
The Los Angeles County Museum of Art has recently launched its Collections Online site, an online image library where art lovers can explore and download high quality images. This is a triumph for the accessibility of fine art in an increasingly digital world. Making this vast collection public benefits not only local art lovers but also the international art community, particularly students.
This is a bit of a follow-up to Mike Bell's introductory article on using Codeception to create Drupal test suites. He concludes by stating he "need[s] to figure out a way of creating a Codeception module which allows you to plug in a Drupal testing user (ideally multiple so you can test each role) and then all the you have to do is call a function which executes the above steps to confirm your logged in before testing authenticated behaviour."
"Something along the lines of:
So, after skimming through the Codeception and Mink documentation, I've tinkered with two potential ways of achieving this... for acceptance testing at least.

A crude toolbox
The first method is to use two custom classes to provide details of (a) a general Drupal site and (b) the specific site to be tested. This idea stemmed from this article which suggests that including literals - such as account credentials, paths and even form labels - in tests is bad practice. What if the login button label changes?
The big week is finally here with DrupalCon Portland kicking off in our own backyard. For those of you not familiar with Portland, we're really big into birds (yes, I'm aware that's very 2010), and chickens in particular. I'm working real hard here to make a clever connection to RedHen, the leading native Drupal CRM, and the only one named after a bird!
Just in time for the conference, RedHen has a new release with plenty of performance improvements and bug fixes. We have a production site about to launch with over 100k contacts, and our test/development environments are running with over 100k contacts with thousands of engagements each. We still have lots of work to do, but we're confident in RedHen's ability to scale to "enterprise" levels.
Understandably, one of the most requested features since we launched RedHen has been the ability to import contacts. Our initial pass at meeting that critical need also launched last week in the form of RedHen Feeds, a Feeds processor for RedHen contacts. So get those contacts out of that spreadsheet and into RedHen! Support for organizational affiliations isn't there yet, but it is in the works.
ThinkShout will be helping lead a RedHen sprint on Friday, May 24th, DrupalCon Portland's official sprint day. So if you're at all native CRM curious, come join our team as we hack away on RedHen and related tools. Learn about large datasets, Salesforce integration, managing memberships, email integration, event registrations, and common use cases. Site builders, documentarians, UX specialists, and developers are all welcome.
PS - ThinkShout is co-hosting the Drupal DoGooders Happy Hour, a fundraiser for Aaron Winborn, today, Monday May 20th. So please join us and start your week off right by giving back to someone who has given so much to the Drupal community!
In the past, we have had multiple heated discussions involving systemd. We (the pkg-systemd-maintainers team) would like to better understand why some people dislike systemd.
Therefore, we have created a survey, which you can find at http://survey.zekjur.net/index.php/391182
Please only submit your feedback to the survey and not this thread, we are not particularly interested in yet another systemd discussion at this point.
The deadline for participating in that survey is 7 days from now, that is 2013-05-26 23:59:00 UTC.
Please participate only if you consider yourself an active member of the Debian community (for example participating in the debian-devel mailing list, maintaining packages, etc.).
Of course, we will publish the results after the survey ends.
the Debian systemd maintainers
About a month ago, I had the opportunity to present at Internet World London, why I believe that Drupal is the best Open Source solution to build professional level websites, e-shops or online applications and why you should dig in it and do your own research about it.
The speech is in English. You can enable the English or Greek subtitles by clicking the captions button, or read the transcript below.

Presentation Transcript
Hello everybody, my name is Yannis Karampelas. I'm the owner and founder of Netstudio.
Netstudio is a Web Design and Web development company in Athens, Greece. I am Greek and this is the first time I give a presentation in English, so if what I say, sounds Greek to you, feel free to interrupt me and ask questions.
Apparently (hi Zhenech, found on Plänet Debian), a Man does not only need to fork a child, plant a tree, etc. in their life but also write a DynDNS service. Perfect for opening a new tag in the wlog called archæology (pagetable.com – Some Assembly Required is also a nice example for these).
Once upon a time, I used SixXS' heartbeat protocol client for updating the Legacy IP (known as “IPv4” earlier) endpoint address of my tunnel at home (my ISP offers static v4 for some payment now, luckily). Their client sucked, so I wrote one in ksh, naturally.
And because mksh(1) is such a nice language to program in (although I only really began becoming proficient in Korn Shell in 2005-2006 or so, thus please take those scripts with a grain of salt; I'd do them much differently nowadays) I also wrote a heartbeat server implementation. In Shell.
The heartbeat server supports different backends (per client), and to date I’ve run backends providing DynDNS (automatically disabling the RR if the client goes offline), an IP (IPv6) tunnel of my own (basically the same setup SixXS has, without knowing theirs), rdate(8) based time offset monitoring for ntpd(8), and an eMail forwarding service (as one must not run an MTA on dynamic IP) with it; some of these even in parallel.
Not all of it is documented, but I’ve written up most things in CVS. There also were some issues (mostly to do with killing sleep(1)ing subprocesses not working right), so it occasionally hung, but very rarely. Running it under the supervise of DJB dæmontools was nice, as I was already using djbdns, since I do not understand the BIND zone file format and do not consider MySQL a database (and did not even like databases at all, back then). For DynDNS, the heartbeat server’s backend simply updated the zone file (by either adding or updating or deleting the line for the client) then running tinydns-data, then rsync’ing it to the djbdns server primary and secondaries, then running zonenotify so the BIND secondaries get a NOTIFY to update their zones (so I never had to bother much with the SOA values, only allow AXFR). That’s a really KISS setup ☺
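The zone-update step of that flow can be sketched in shell. To be clear, this is my reconstruction under assumptions, not the original script: update_rr is a hypothetical helper name, and the TTL of 300 and the host names are illustrative. Only the "+fqdn:ip:ttl" A-record line is real tinydns-data syntax; the propagation commands in the trailing comments mirror the description above.

```shell
#!/bin/sh
# update_rr: replace (or append) the tinydns-data A-record line
# ("+fqdn:ip:ttl") for one dynamic host in a zone data file.
update_rr() {
    zone="$1"; fqdn="$2"; ip="$3"
    # Drop any existing RR for this host, keep everything else.
    grep -v "^+$fqdn:" "$zone" > "$zone.tmp" || true
    # Append the new record with a short TTL, suitable for a dynamic host.
    echo "+$fqdn:$ip:300" >> "$zone.tmp"
    mv "$zone.tmp" "$zone"
}

# Demonstration on a throwaway zone file (no DNS server required):
zone=$(mktemp)
echo "+host.example.org:192.0.2.1:300" > "$zone"
update_rr "$zone" host.example.org 192.0.2.2
cat "$zone"   # -> +host.example.org:192.0.2.2:300

# In the real setup the backend would then rebuild and propagate
# (paths and host names here are placeholders):
#   tinydns-data                                  # compile data -> data.cdb
#   rsync data.cdb primary: secondary:            # push to djbdns servers
#   zonenotify example.org bind-secondary         # NOTIFY the BIND secondaries
rm -f "$zone"
```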
Anyway. This is archæology. The scripts are there, feel free to use them, hack on them, take them as examples… even submit back patches if you want. I’ll even answer questions, to some degree, in IRC. But that’s it. I urge people to go use a decent ISP, even if the bandwidth is smaller. To paraphrase a coworker after he cancelled his cable based internet access (I think at Un*tym*dia) before the 2-week trial period was even over: rather have slow but reliable internet at Netc*logne than “that”. People, vote with your purse!
I usually try to avoid administering printers whenever possible. As a result, I end up flailing around the CUPS web interface before I figure out how to re-enable a printer. And when I get a call to help debug a printer, I can't easily tell people what to do.
When I try to do what I need via the command line, I end up spending at least 10 or 15 minutes re-reading man pages before I piece together the steps.
Here's my attempt to document the steps so I don't have to re-read man pages.

Setup
In these examples, the printer name in question is: stability and it is a network printer, with local DNS that properly resolves the hostname stability to an IP address.
The cups commands in these examples can be run as a non-root user if that user is in the lpadmin group.
Run:
groups
to see if lpadmin is listed. If not:
sudo adduser <your-user-name> lpadmin
Then, to gain access to the new group without logging out and logging in again:
newgrp lpadmin

Network access
First, try to ping the printer:
ping stability

If this fails, restart the printer and/or check network cables. No point in doing anything else until it responds to pings.

Can't submit new jobs to the printer
Next, if the problem is that the printer is greyed out when you try to print a document, or your application tells you that the printer is rejecting jobs, confirm this status with:
lpstat -a stability

It will either output:
stability accepting requests since Mon 20 May 2013 10:28:57 AM EDT
or:
stability not accepting requests since Mon 20 May 2013 10:28:57 AM EDT - Rejecting Jobs

If it is rejecting jobs, try:
/usr/sbin/cupsaccept stability

Accepts new jobs, but just doesn't print
On the other hand, if the printer is accepting jobs but the jobs are not printing, find out if the printer is enabled with:
lpstat -p stability

You should get either:
printer stability is idle. enabled since Mon 20 May 2013 10:28:57 AM EDT
or:
printer stability disabled since Mon 20 May 2013 10:35:10 AM EDT - Paused
If it is disabled, you should first see what queued jobs there are:
lpq

If you have a list of duplicate pending jobs, be sure to delete the duplicates to avoid having your print job come out multiple times. To delete a queued job, type the following (n should be the number in the Job column of the lpq output):
cancel <n>

After you have deleted duplicate jobs, try "enabling" it:
/usr/sbin/cupsenable stability

Then, re-run the lpq command and see if it's now "ready." At this point, the jobs should start printing.

Review of concepts
For review... a few important concepts:
- cupsaccept/cupsreject: controls whether a printer will accept or reject new jobs. It doesn't matter whether the printer is enabled or disabled.
- cupsenable/cupsdisable: controls whether a printer will print existing jobs. It doesn't matter whether the printer is accepting or rejecting new jobs.
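The decision logic above can be captured in a small shell helper. This is a sketch, not part of the original post: classify_queue is a hypothetical function name, and in real use you would feed it a line from `lpstat -a <printer>` or `lpstat -p <printer>`; here it is demonstrated with sample lpstat output so it can run without a printer attached.

```shell
#!/bin/sh
# classify_queue: suggest the next command based on one line of lpstat output.
classify_queue() {
    status="$1"    # one line of lpstat -a / lpstat -p output
    printer="$2"   # queue name, e.g. "stability"
    case "$status" in
        *"not accepting"*) echo "run: /usr/sbin/cupsaccept $printer" ;;
        *"disabled"*)      echo "run: /usr/sbin/cupsenable $printer" ;;
        *)                 echo "$printer looks healthy" ;;
    esac
}

# Demonstration with sample lpstat lines (no printer required):
classify_queue "stability not accepting requests since Mon 20 May 2013 - Rejecting Jobs" stability
# -> run: /usr/sbin/cupsaccept stability
classify_queue "printer stability disabled since Mon 20 May 2013 - Paused" stability
# -> run: /usr/sbin/cupsenable stability
classify_queue "stability accepting requests since Mon 20 May 2013" stability
# -> stability looks healthy
```

In real use you would replace the sample strings with e.g. `classify_queue "$(lpstat -a stability)" stability`.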
Drupal websites don't always need to allow users to register themselves with an account. This site doesn't, for instance. Anonymous commenting is turned on. The contact form is enabled for anonymous users. And those are the only things that any member of the public would need to do - other than read. So nobody needs to set themselves up with a login.
I want to thank the good folks at ThinkShout and ZivTech for organizing the Drupal DoGooders Happy Hour to benefit my family and me, as well as giving people attending DrupalCon an opportunity to hang out and have some drinks. Even though I will not be in Portland this week, I plan to be present in spirit, beginning with a virtual appearance there. Join the crew this evening (May 20) at about 4:00 PDT to raise a glass in toast of doing Drupal Good and for a quick Q & A with me beginning about 4:30.
What a long strange trip it's been.
From Sunnyvale in 2007 when I conceived the Embedded Media Field module, to Boston DrupalCon in 2008, where I presented my first State of the Media session, to DC in 2009 where we launched the Media sprint supporting the Media suite of modules, to Chicago 2011 and Denver 2012.
These are the fun times that I recall fondly, doing good with my fellow cohorts. And by doing good, I mean really doing good things. Because where else in the business world can you spontaneously form a group of competitors, build something awesome, and give it freely to the rest of the world?
I'm really going to miss that this year. I mean that even though I continue to contribute to Drupal whatever and whenever I can, I am going to miss seeing you guys this year. There is a magic that happens when you get three or more Drupalers together in the same room. But circumstance has had its way with me these past two years and until we have a DrupalCon "Three Mile Island", I will have to be content with a virtual appearance.
So, join me on Monday evening to see my Stephen Hawking impersonation.