Drupal 8 is skipping through the betas and it won’t be long until we’re staring at a release candidate. With that in mind, I’m now taking the time to learn some of the key concepts that you’ll need to know as a day-to-day site builder using Drupal 8.

Custom Config Entity Types
A custom configuration entity type (referred to as a config entity for most of this article) is a custom entity definition that allows you to provide a config class, a validation schema and custom storage.
They have many practical uses during custom development. To give some examples, core uses them for user roles, blocks, image styles and plenty more. Use your IDE to see what’s extending ConfigEntityBase if you’re interested.

Creating Our Config Entity Type Schema
First up, let’s define our schema. The schema allows us to say what fields our config entity should have and what type those fields should be.
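As a sketch of what that looks like on disk (the module name example_notice and its fields are hypothetical, not taken from this article), a config entity type’s schema lives in a MODULE.schema.yml file inside the module’s config/schema directory:

```yaml
# config/schema/example_notice.schema.yml
# Hypothetical config entity type "notice" with an id, label and message.
example_notice.notice.*:
  type: config_entity
  label: 'Notice'
  mapping:
    id:
      type: string
      label: 'ID'
    label:
      type: label
      label: 'Label'
    message:
      type: text
      label: 'Message body'
```

Each key under mapping declares one field of the config entity and its data type, which is what Drupal uses to validate and translate the stored configuration.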
At first, I dismissed the need for yet another module. I have always added the Google Analytics code straight into a template file in various other CMS and static sites I have been involved in over the years. Why do I need a module to do that? Well, you don't, but the Drupal Google Analytics Module does offer a lot more functionality that will make it worth your while.
The first thing you need to do, if you haven't already, is sign up for a Google Analytics account. It's a pretty straightforward process.
Once you are signed up, the most confusing thing to me was the Account vs. Property vs. View:
I break it down like this: the Account is the company or the owner, the Property is the website, and the View is segmented data from the website. At first you'll have a default "All Website Data" view, but moving forward you may have an app and a website, or a view for different languages on the site like /en and /de.
At least 20 people helped push one or more issues forward in Montpellier, at the Drupal Dev Days Performance Sprint!
Here’s an overview of what we set out to do, what we did, and what the next steps are.

The plan for DDD Montpellier
- Priority one: uncover the “unknown unknowns”, i.e. finding more performance issues.
- Priority two: Drupal 8’s internal page cache was enabled by default shortly before DDD Montpellier, so we should try to find edge cases where it breaks (where stale content is served from the internal page cache).
- Priority three: fix known performance problems, as well as those uncovered by the work done for priorities one & two.
We already know that certain things are slow, and we know how to fix them. But for significant portions of the slowness we do not yet have explanations, let alone precise or even rough plans on how to reduce it.
The parts of Drupal 8 that are slow and that we do not yet have a strong grasp on are the bootstrap phase in general, but also routing, container services and route access checking.
Berdir, amateescu, dawehner, yched, znerol, pwolanin and I did a lot of profiling, testing hypotheses about why certain things took a given amount of time, comparing to Drupal 7 where possible, figuring out where the differences lay, and so on.
Bits and pieces of that profiling work[1] are in https://www.drupal.org/node/2470679, including patches that help profile the routing system.
In the weeks since DDD Montpellier, effulgentsia and catch have continued the discussions there, posted further analyses and filed many more issues to fix individual problems.

2. Try to break Drupal 8’s internal page cache & fix it
So, Drupal 8 had the internal page cache enabled by default shortly before DDD Montpellier, with only a few known problems. Having many people try to break it, using scenarios they use daily in their day jobs, is the ideal way to find any remaining cache invalidation problems.
Many tried, and most did not succeed in breaking it (yay! :)), but about half a dozen problems were discovered. See https://www.drupal.org/node/2467071.

What we got done
We fixed an enormous number of issues, with more than twenty people helping out! Notably, fgm was all over class loading-related issues, borisson_ and swentel got many of the page cache issues fixed, pwolanin pushed routing/REST/menu links issues forward significantly, and overall we simply made progress on many fronts simultaneously.
We made Drupal 8’s authenticated user page loads several percent faster in the course of that week!
Most of the page cache problems that were discovered (see above) were fixed right at the sprint! There are 4 known issues left, of which one is critical on its own, one is blocked, and the two others are very hard.
(If you want more details, we have day-by-day updates on what got done.)
We currently have 11 remaining criticals with the “Performance” tag. Getting that to zero is our top priority. But many in that list are difficult.
If you specifically care about performance for authenticated users: less difficult issues can be found among the child issues of the Cache contexts meta issue. And for some of the least difficult issues, see the child issues of the SmartCache issue.
Generally speaking, all major issues tagged with either “Performance” or “D8 cacheability” can use a hand.
Hopefully see you in the queues! :)
[1] It was impossible to capture all the things we considered in an issue; that would have slowed us down at least tenfold.
DrupalCon LA is right around the corner! Woo hoo! I'll be there. Will you?
If this will be your first DrupalCon, I'd like to provide you with some ideas of how you can approach things. You have different options available to put together your Con:
The DrupalCon Los Angeles extended sprints start this Saturday, May 9, and the main sprint day on Friday, May 15 is just a little over a week away. I'll be leading a Drupal 8 Critical Burndown sprint to help get D8 done, and kgoel and cilefen will be leading a sprint to triage Drupal 8 majors. And we need your help! Sign up for the sprints now, or read on for more information on what we'll be doing and why.
Help fix the 29 remaining issues blocking Drupal 8's release
DrupalCon Munich contribution sprints. Photo credit: Pedro Lozano.
A release candidate for Drupal 8.0.x will be created once there are zero critical issues remaining. In the past six months, we've reduced the critical issue count from 130 issues to just 29 as of today. This ongoing progress is thanks to the hard work of dozens of contributors.
Over these six months, we've held three focused sprints on critical issues: in Ghent in December, Princeton in January, and Dev Days Montpellier in April. The impact of each of these sprints on the critical issue count was clear: a focused sprint is the best way to get a lot of work done efficiently. Help us with the next burst of momentum! Come to webchick's "Plain Drupal English" guide to the remaining Drupal 8 criticals, and sign up for the Los Angeles critical issue sprint.
Help triage major Drupal 8 issues
900 reasons to get involved
Issues fixed during the Dev Days critical and performance issue sprint
The criteria for critical issues are very specific: an issue needs to be serious enough that we're not willing to release and support Drupal 8, for the hundreds of thousands of sites that run Drupal, without fixing it. However, criticals are not the only issues we need to solve. Like most beta software, Drupal 8 still has plenty of bugs that need to be fixed, but that aren't severe enough to block release. Among these are the major issues, which are not a critical problem for every site, but still nasty enough to ruin someone's day.
Drupal 8 core committers assess each issue that is marked critical
The Drupal 8 core committer team systematically evaluates every single critical issue to make sure it's relevant and that it really should block release, but we don't have the resources to do this for major issues. Over the four years of Drupal 8's development, we've racked up nearly 900 major issues that are still outstanding. Many of these issues are still relevant and important to make Drupal 8.0.x more robust. However, many others are outdated; they might already have been resolved, they might no longer be relevant, or they might be candidates to postpone until 8.1.x or 9.x instead because they're no longer acceptable during the Drupal 8 beta.
This is where you come in! If you don't really see yourself helping with those 29 criticals, but you do have some experience with Drupal 8 and with core contribution in general, help us sort through these majors during DrupalCon LA. Take a look at the draft instructions for the sprint, and if you're up for it, put your name down for the major triage sprint in our sprint sign-up sheet.

Better faster stronger
Like I mentioned above, a lot of the major issues in the queue probably aren't relevant anymore, and fixing the rest is going to make Drupal 8 better. There's another reason to sort through them, though: there might actually be some critical issues hiding in that queue. The average critical issue takes between 1 and 4 weeks to fix once it's marked critical. However, a third of critical issues start at some other priority before their significance is recognized. This is actually a big challenge for Drupal 8 release management: we can organize resources for the criticals we know about, but not for the ones we don't. So not only does helping move majors along make Drupal 8 better, it might also get it ready faster.

Maintainers wanted
Recently, Dries posted a new proposal on Drupal core's structure, responsibilities, and decision-making, which definitively established the role and responsibilities for core subsystem maintainers. We have lots of "vacancies" to fill. :) And it just so happens that triaging major issues for a particular component is a chance to learn a lot about that specific topic area and maybe to find your niche in Drupal core. We'll be organizing the major triage sprint into teams by subsystem so that each team has the chance to focus and build some expertise over the course of the sprint. Contributing to one patch will build your technical skills; contributing to the "big picture" through triage will build your understanding at a higher level. Participating in the major triage sprint is one way to explore what it would be like to be a subsystem maintainer -- and, as the Docker community has put it, maintainers are what distinguish a good project from a great one.
New to Drupal core contribution? Come to the Mentored Core Sprint
Views subsystem maintainers dawehner and damiankloip. Photo credit: Amazee Labs.
If you're just getting started with Drupal 8 or with core contribution, there's still a spot for you on the Friday sprint day! Join the Mentored Core Sprint.
Another sales call today, with a prospective startup founder who thought Drupal might lower his costs to get a web startup launched.
And I didn't really answer the question directly -- because in the long run, if you're building something successful, you're going to spend as much on your Drupal site as you would building from scratch.
The key difference? How quickly you can get something in front of users that might help you get some traction and build your business.
One of the options in Nittany Vagrant is to build a local, development version of an existing Drupal site - copying the files and database, then downloading it to the Vagrant VM. It is pretty straightforward, but there is the occasional trouble spot.
Here is a short video of how to do it.
I'm speechless. And in awe. And maybe a little bit scared.
Through a bit of an odd synchronicity I have just come to learn about a new technology that is getting ready to make its debut. And, while the term "disruptive" gets totally overused these days, I do think that this particular technology may have the power to disrupt my livelihood.
I don't know whether to celebrate this technology, appreciate this technology, or update my resume. At any rate, I just signed up for TheGrid. It's kind of mind blowing.
As information architects, we love tools that help clients think about the structure of their content. One which we’ve found particularly helpful is what we call our Technical Architecture document. It’s a spreadsheet that defines the structure of the site. This approach is not uncommon, especially within the Drupal community; however, we have promoted this spreadsheet from information architecture tool to site generator. By automating a once manual process, we’ve introduced some really exciting opportunities around rapid prototyping and efficient product iteration. I’ll get into the details of that in a follow-up post. For now, we’ll look at how we move from sitemaps and wireframes to the technical architecture document in a way that sets us up for rapid prototyping.

From Wireframes to Technical Architecture Document
After the initial discovery phase of identifying goals, personas, user needs, core content and sitemaps, we build out wireframes for the project. The structure behind this content really comes together when we then translate those wireframes into the technical architecture document.
As mentioned above, this document breaks the site down into the specific content types, fields, vocabularies, terms, users and other units which together comprise the entirety of the project. You can look through and make a copy of the document for yourself: Technical Architecture document template.

Sitemap
First we translate the sitemap as defined in our wireframes into separate rows in the technical architecture document.
Now we can specify the nature of each page and work with the client to define path aliases.

Content Types
Next we group all of the content into distinct types. We’ve been keeping these potential groupings in mind as wireframes are built out, but it is here in the technical architecture document where we explicitly define them.
Here is an example of the Course detail page (as shown above) translated into a content type and its associated fields.
This is where the structure of the content begins to take shape. We have identified the discrete fields which together form a single course. We can also now define field groups and help text where that is relevant and helpful.

Fields
The fields we saw in the content type sheet correspond to the fields referenced here. This sheet is where we hash out the finer details of each field. We bring the client into conversations around this sheet when necessary (such as for help text and default values), but allow them to concern themselves primarily with the sitemap and content types.
Next we define in the same level of detail the remaining components of the site, including field groups, vocabularies, image styles and user roles.
Here is a look at user roles and their associated permissions.
This document is a great way to model content and inform developers as to what would be built. It also reveals to clients the underlying structure necessary to implement the functionality and form proposed in the wireframes.
Of course, there are still limitations in representing a course page in a series of rows and columns. The concept of structured content is now clearer, but it is not until a content author can create that Methodology of Science course that the picture truly comes into focus.
This is the exact situation prototypes were made for. In the next post I’ll dive into how we’re using the technical architecture document to automate much of the build process, enabling us to quickly create these prototypes.
Creating the code that makes a website accessible to all visitors doesn’t have to be as time-consuming or resource-intensive as you might think. All you need to do is follow some simple steps that require a little extra time and effort. But these efforts will ensure that your Web content is at the fingertips of everyone — including those with blindness and low vision, deafness, and other disabilities.
It’s up to both the developer and the client to achieve site accessibility. Although they usually work together in the planning and later stages of website creation, a developer and client also have separate responsibilities in making a site accessible. This blog post, the first in a four-part series that offers website accessibility tips for developers, will make this important part of development easy to follow. And it comes just in time for Global Accessibility Awareness Day on May 21.

Reading More Shouldn’t Take More Effort
The visually impaired rely on screen readers to learn what’s on a page and to navigate a site. But without the proper code in place, a screen reader working through a short list of blog posts on a landing page, each linking to a longer item, will only say “read more” over and over again.
Not knowing what the “more” is, a user will probably get frustrated and go somewhere else. Fortunately, all that’s needed to get a screen reader to articulate the details that accompany the “more” is just a few snippets of code — commands that accommodate disabled people while at the same time hiding text and not cluttering up the screen for non-disabled people.
A developer simply needs to include a small snippet of code similar to:
<a href="/">Read more <span class="readmore"> [site] Blogs</span></a>.
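The hiding itself is done in the stylesheet. A common “visually hidden” pattern for such a class is sketched below (these exact rules are an assumption, not taken from this post):

```css
/* Hide the span visually but keep it readable by screen readers.
   Avoid display:none or visibility:hidden, which hide it from
   screen readers as well. */
.readmore {
  position: absolute;
  width: 1px;
  height: 1px;
  margin: -1px;
  padding: 0;
  overflow: hidden;
  clip: rect(0, 0, 0, 0);
  white-space: nowrap;
  border: 0;
}
```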
The “readmore” CSS class should include code that makes the span invisible to sighted users but still audible to screen reader users. Sighted users would still see “Read More” but non-sighted users would hear “Read More about [site] Blogs.”

Alt Text Should Paint a Clear Picture
Developers should stress to clients the necessity and importance of using alt text on a webpage. It helps visually impaired visitors know all there is to know about an image.
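In markup, that amounts to a descriptive alt attribute on the image; the file path and wording below are illustrative, not from the post:

```html
<!-- alt describes the image for screen reader users;
     title optionally supplies the image's title. -->
<img src="/images/holder-press-conference.jpg"
     alt="Attorney General Eric Holder speaking at a press conference"
     title="Attorney General Eric Holder">
```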
It’s easy to overlook when constructing a webpage, but it’s super easy to include alt text in an image’s markup code. A simple description of what the image is, and its title, are all that’s needed — as seen in this example of the markup box for a photo of Attorney General Eric Holder.

Spell Out What’s Required in a Required Field
Many websites use an asterisk as a cue for people to know what’s a required field of input on a form — but that’s not the best method to reach everyone. That sort of general warning doesn’t always work with screen readers; the user will be left guessing where the required field is. Not to mention, a colorblind visitor won’t see the red or green that’s typically used to highlight the field asterisk warning.
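Both problems can be addressed in the form markup itself. The snippet below is a sketch of a common pattern (the field names, IDs and classes are hypothetical):

```html
<label for="edit-email">
  Email address
  <!-- Keep the asterisk for sighted users, with a text explanation
       so its meaning doesn't rely on color alone. -->
  <span class="form-required" title="This field is required.">*</span>
</label>
<!-- required / aria-required tell screen readers the field is mandatory. -->
<input type="email" id="edit-email" name="email" required aria-required="true">
```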
The solution is easy. First, the code behind the field should spell out the name of the required field and the fact that inputting information there is required, so that people using screen readers will have no doubt about what they need to do. Also, the code should state that an asterisk denotes a required field; that way colorblind visitors who can’t see the red or green text that’s often used as the only warning won’t have to second-guess, and those using a screen reader will also know what to do.

Don’t Bury Mistakes; Put the Error Message at Top
Another thing about website forms: A visitor who errs when completing an online form should be immediately informed on the refreshed page about where they’ve made a mistake. Otherwise, a screen reader will speak the entire page again and mention the errors only when it reaches the incorrect fields. The refreshed page should instead offer — at the very top — a basic box that lists what went wrong and what is required.
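A sketch of such a summary box follows (IDs and wording are hypothetical); role="alert" prompts screen readers to announce it as soon as the page loads, and each entry links directly to the field that failed:

```html
<div role="alert" class="error-summary">
  <h2>2 errors were found in your submission</h2>
  <ul>
    <li><a href="#edit-email">Email address is required.</a></li>
    <li><a href="#edit-name">Name is required.</a></li>
  </ul>
</div>
```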
That’s it for now. Stay tuned for the second part of this series, as we take you, the developer, down a true and easy path to website accessibility.
Design Systems are all the rage in the world of Web design these days, and for good reason. Good design has always been about design system thinking, and when applied to the Web, it results in robust sites that are able to evolve over time, grow with your business, adapt to new technologies like phones and watches, and provide a better user experience. Drupal is great for design systems thinking, and that's why Palantir is making it a theme of DrupalCon Los Angeles.
Start off with my article "Strategies for a Designer-friendly CMS" in the latest issue of Drupal Watchdog, which attendees will find in their conference swag bags. It's based on my presentation "Design Systems and Drupal", which you may have caught at a conference in the last few months. (If not, tell your local DrupalCamp to have me come and present it!)
Tuesday afternoon, fellow Palantiri Steve Persch will be presenting on Rendering HTML with Drupal; Past, Present, and Future, discussing how the Drupal approach to theming has changed over the years and how the mental models we use have evolved. Did you know there are two completely different theming philosophies intertwined in one system? Steve will explain how that came to be, and what the future holds. You can get a jump on the topic with Steve's two-part series on the Panels module, and why we use it extensively at Palantir. Catch him at 2:15 pm in room 502B.
Wednesday morning, Drupal Watchdog is hosting a meet-the-authors session at their booth in the Exhibit Hall with both Steve and me. Come by at 10:15 am to chat Design Systems, Drupal, writing, and vests.
Immediately after, Steve and I will also be hosting a BoF (Birds of a Feather - group discussion) on Design Systems and their future within Drupal. Join us in room 506 at 10:45 am to discuss how to better leverage design systems in our favorite CMS.
And finally, Thursday kicks off with Steve presenting in the Core Conversations track about What Panels can teach us about Web Components. Web Components are coming soon to a browser near you, and you need to be ready. "Ready" in this case means "know your Panels module", as it uses the mental model that you will need in a componentized, design-systems-based world. Come by Core Conversations (room 518) at 10:45 am to discuss a design-system-web-component-panels-all-the-things future for Drupal.
Even if you can't catch one of those sessions, you can still catch Palantir at our booth (Booth 103 on your map) any time if you want to talk about design systems, a project you have, or the fine art of Nerf wars. Or find us at one of our other sessions: Drupal 8: The Crash Course and Building sustainable recruiting strategies on Tuesday, Silex: Mini-Symfony! on Wednesday, No on Wednesday (yours truly, talking about the importance of focus and prioritization), and advice on Relaunch strategies for Drupal 8 and the modern web on Thursday to finish off sessions. Then join everyone at Palantir Trivia Night on Thursday to show your Drupal-fu.
See you there!
Drupal is always changing. The community constantly reinvents Drupal with new code and reimagines Drupal with new words. This article seeks to examine the current narratives about Drupal. By examining the stories we tell about Drupal — the so called cultural constructions — we can better understand what is going well and what should be making us uncomfortable.
At the Drupal Association, we love the Drupal project and our fantastic community that we serve. If you want to show your same love for the Association, the project, and the community, consider stopping by the Drupal Store and becoming a Drupal Association member.
Next week, Drupal enthusiasts from around the world will gather in Los Angeles for what is sure to be another fantastic DrupalCon, which we're proud to be supporting as a Diamond sponsor. This year’s conference is especially important to us for a few different reasons. Last month, we announced the opening of our new office in the LA area. We’re very excited to be in beautiful southern California, especially as LA hosts the Drupal community. This is also the first major Drupal event since we announced that we are joining forces with Blink Reaction to form the largest Drupal agency in the world.
At DrupalCon, we will reveal to the community our new name and new identity. We look forward to sharing our vision with you, so please stop by booth 300 to say hello. We will have information on our expanded service offerings and chances for you to win some great swag and prizes. You can even sign up early for an entry in our Drupalcon Barcelona sweepstakes. We look forward to seeing all our familiar friends in the Drupal community and making new ones.
In addition to our name reveal, we are excited for our staff members who will be leading DrupalCon trainings and summits, as well as those who will be presenting sessions throughout the week:

Data-Driven Marketing
On Tuesday, May 12, at 10:45am our Chief Strategy & Insights Officer, Gus Murray, will present Enough With The Pretty Brochures, Let’s Start Building Better Business Tools: The Art of Data-Driven Marketing. Gus’s talk will cover how digital insights and personalization are transforming the way organizations can create value from their websites. This session is a must for anyone working with digital marketing or business development.

Symfony2 and Drupal 8 Training
On Monday, May 11 we will have a team leading the training Introduction to Symfony2: Getting Ready for D8. This is an introduction to Symfony2 and will help experienced and new PHP developers understand the power and flexibility of Drupal 8’s new development framework. Space for the training is limited, so make sure to sign up for a spot! This training will also serve as a preview of the upcoming free trainings we will offer throughout the rest of the year.

Drupal Business Summit
Also on May 11, Blink Reaction’s Director of Talent Development, John DeSalvo, will be leading the Drupal Business Summit. This one-day event is aimed at business leaders who provide Drupal services. Attendees can participate in peer discussion and networking with other Drupal business executives.

Drupal Sessions
We are proud that five of our sessions were accepted for DrupalCon LA, ranging in topics and skill level. On Wednesday, May 13th, Yuriy Gerasimov and Andrii Podanenko will hold a session on Multidimensional Testing Workflow Before Merge to Master that will demonstrate building an actual site based on the changes in a pull request (installation profile vs. pulling live database workflow), running automated phpunit, simpletest and behat tests, and much more!
Also on May 13th, Matt Korostoff, will present I Survived Drupalgeddon: How Hackers Took Over My Site, What I Did About It, And How You Can Stay Safe. This session will be a post-mortem of the Drupalgeddon SQL injection bug Matt experienced on his personal site. He’ll show you how hackers invaded and how you can defeat a similar attack on your website.
On Thursday, May 14th, we will also present a session on Speeding Up Drupal 8 Development Using the Drupal Console. The Drupal Console is a suite of tools that run on a command line interface (CLI) to generate boilerplate code and interact with a Drupal 8 installation. Jesus Manuel Olivas has been the lead on this project and will be sharing his insights and the latest developments during this session. You don’t want to miss it.
Tess Flynn will present two sessions: Fighting Core FUD: We Need a Contrib Champion on Tuesday, May 12, where she will discuss the need for a Contrib Champion and how Example Module’s rewrite exposed the need for a new category and initiative to support contrib developers through the dramatic changes coming in Drupal 8. Her other session, Capture the (D8) Flag, on Thursday, May 14, is where she’ll discuss tasks, challenges and choices that you’ll encounter when porting your module to Drupal 8.

We Want to Meet You!
If you’re attending DrupalCon and want to schedule some time to talk to us one-on-one, make sure to drop us a line. We’ll be showcasing the services of our new agency as well as looking to add to our already incredible pool of Drupal talent. See you soon!
The European Acquia Client Advisory team had an onsite week in Reading the week after Drupal Camp London 2015. I got the chance to see a number of them there and sit down with two of my friends from "Supporta!"–Daniel Blomqvist and Henk Beld. We talked about remote teams and helping others succeed with Drupal, while also paying it back/forward by sharing and teaching what they learn and what they know.
As developers, oftentimes we want or need our working environment to be an exact match of the production environment. Using MAMP or your Mac’s built-in server simply won’t cut it because there would be too much variation in software versions and/or too many configuration differences between projects. This is especially true when your project is running a complex or specific infrastructure. Lucky for us, there’s Vagrant!
This post serves as a basic overview of Vagrant, a guide to some handy resources and some tips we’ve learned since we started using it. It kinda touches on everything!
Vagrant is written in Ruby and used for creating and configuring virtual development environments. Although it can use other virtual machine software, the most commonly used is VirtualBox. VirtualBox allows you to create a virtual server right on your machine. This allows you to run a Linux server (or whatever else you would like) on your personal Mac or PC. Vagrant is essentially the glue between provisioning software such as Puppet or Chef and VirtualBox.
Vagrant helps you configure your network configurations for VirtualBox as well as run the necessary scripts to provision your virtual machine. The beauty of Vagrant is that once you have your provisioning scripts ready and your Vagrantfile set up, you can spin up a VM with a simple command (vagrant up) and easily dispose of it with (vagrant destroy).
The benefit of this approach, as opposed to manually configuring VirtualBox, is that spinning up and tearing down environments is trivial. When your project is complete you can simply destroy the VM. Six months later, when the project picks back up, just run vagrant up and you pick up right where you left off.
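A minimal Vagrantfile tying this together might look like the sketch below (the box name and script path are illustrative assumptions, not from this post):

```ruby
# Vagrantfile -- minimal sketch; "hashicorp/precise64" is an example box name.
Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/precise64"

  # Reach the guest's web server at http://localhost:8080 on the host.
  config.vm.network "forwarded_port", guest: 80, host: 8080

  # Provision with a shell script kept in the project repository.
  config.vm.provision :shell, path: "bootstrap/bootstrap.sh"
end
```

With this file committed to the repository, `vagrant up` and `vagrant destroy` are all anyone needs to create or dispose of the environment.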
Vagrant is also great for onboarding new developers. How many times have you wished a new developer could just hit the ground running and not have to spend hours setting up their own development environment? This can be handled by including your Vagrant provisioning scripts in your project’s code repository.
When a new developer needs to get rolling, they simply check out the project’s repository (as they normally would), navigate to the proper directory and run "vagrant up". A few minutes later they have the same development environment as everyone else on the team! This also eliminates the old adage, “It worked on my machine”. Now everyone’s development environment should be identical. What works on one developer’s machine will work on another developer’s machine, no matter their primary platform.

Provisioning your Vagrant Machine:
Another nice feature of Vagrant is that you have several options for provisioning your VM.
The most common options are:
- Chef https://www.chef.io/chef/
- Puppet https://puppetlabs.com/
- Ansible http://docs.ansible.com/guide_vagrant.html
- Shell Script (Probably the best starting point if you aren’t familiar with any of the above configuration tools.)
Better yet, you can also use multiple provisioning options on the same VM. For example, you can provision your box using Chef and then finish your provisioning with a shell script afterwards! I have found this particularly handy when I am using Chef recipes that I’ve found in the wild that take me 90% of the way but I still need something a little more specific.
Here is an example of how you might use Puppet and a shell script together:

config.vm.provision :shell, :path => "bootstrap/bootstrap.sh"
config.vm.provision "puppet" do |puppet|
  puppet.module_path = "modules"
end
Why is this useful? Let’s say you want to provision a CentOS virtual machine using puppet. Your VM needs puppet installed on it before it can do the provisioning. So your provision requires two steps:
- Run a shell script to install puppet (bootstrap/bootstrap.sh).
- Run your Puppet provisioning.
First the shell script would run and install Puppet:

sudo yum install -y http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm
sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-puppetlabs
sudo yum install -y puppet-2.7*
Then Vagrant would run the Puppet provisioning:

config.vm.provision "puppet" do |puppet|
  puppet.module_path = "modules"
end

Networking:
Networking is one of the trickiest parts of using a Vagrant setup. Some key concepts to consider are:

Forwarded Ports
Forwarded ports allow you to map ports on your host machine to ports on your guest machine (VM). This is useful when you need direct access to guest ports or services from your host machine. For example, if you wanted to visit a site served from Apache on your VM from your host machine’s browser, you could map host port 8080 to guest port 80 on your VM. This would allow you to hit http://localhost:8080 on your host and have the request automatically forwarded to port 80 on the guest. Here’s how you do it:

config.vm.network "forwarded_port", guest: 80, host: 8080
As your VM gets more complex, you may need to explicitly forward more ports for other services, such as Varnish or MySQL.

Private Network
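A hedged sketch of forwarding a few service ports alongside Apache (the guest port numbers are the conventional defaults for these services, not values taken from this article):

```ruby
# Illustrative Vagrantfile fragment: forwarding multiple services.
Vagrant.configure("2") do |config|
  # Apache on the guest, reachable at http://localhost:8080 on the host
  config.vm.network "forwarded_port", guest: 80, host: 8080
  # Varnish conventionally listens on 6081
  config.vm.network "forwarded_port", guest: 6081, host: 6081
  # MySQL, so a database client on the host can connect to localhost:3306
  config.vm.network "forwarded_port", guest: 3306, host: 3306
end
```

Note that each host port must be unique across all running VMs; Vagrant will report a collision error if two machines try to claim the same host port.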
Private Network configures your VM with an address from the private address space (something like 192.168.92.68). Private networks are used when you do not want other machines on the network to be able to access your virtual machine. The easiest way to set this up is to let the IP address be assigned via DHCP, which is as simple as adding the following to your Vagrantfile:

config.vm.network "private_network", type: "dhcp"
In some instances, however, you may want to define a static IP address for your VM. This can be done with the following snippet:

config.vm.network "private_network", ip: "192.168.92.68"
In this example, it’s important to make sure the IP address does not conflict with any other addresses on the network. If you are unable to access your virtual server, you may need to check your /etc/hosts file or ping the virtual machine’s intended IP address from the command line to make sure that requests for that IP are routed to your virtual machine.

Public Network
Sometimes it’s handy to serve your VM over your public network. This might be useful when you need to access your site or application from other devices, or when other stakeholders need to access and test it. In this case, you would use the public network option. Vagrant will prompt you for which network interface to use, or you can specify it like this:

config.vm.network "public_network", bridge: 'en1: Wi-Fi (AirPort)'

Synced Folders:
A synced folder is a folder that is accessible to both your host machine and your virtual machine. This is particularly useful for syncing your host machine’s codebase changes to your VM. Without synced folders, you’d have to check out your project’s codebase directly onto your VM, or work directly on your VM. Neither of these is ideal.
With Synced Folders, your workflow doesn’t have to change. Editing files, making commits, etc. all happens on your host machine and your VM will reflect those changes in real time. The files on your host machine are essentially mapped to a directory on your guest machine.
Here’s how to do it:

config.vm.synced_folder "path/to/host/codebase", "path/to/desired/guest/directory", create: true, type: "nfs"
The type option here is very important when it comes to large codebases such as Drupal. If you don’t specify your folder type as "nfs", you may find your site is extremely slow and drush commands take forever. In fact, you will also need to specify this option for Vagrant’s default mount folder, like so:

config.vm.synced_folder ".", "/vagrant", create: true, type: "nfs"
Fortunately, there is a plugin for Windows users at https://github.com/GM-Alex/vagrant-winnfsd.

LAMP Stacks:
There are a lot of Chef and Puppet LAMP environments already available out there on the interwebs. Some examples:
- Puppet LAMP stack: https://github.com/jrodriguezjr/puppet-lamp-stack
- Chef: https://github.com/r8/vagrant-lamp
- Drupal Specific: https://www.drupal.org/project/vagrant
For those new to Vagrant looking to get a machine up and running quickly for a Drupal project, I recommend checking out the Drupal Vagrant project. It comes packed with all the bits for running Drupal: Drush, phpMyAdmin, and XDebug are all available out of the gate. Even if you end up using your own solution, this will give you a strong place to start.

Further Reading/Helpful Links:
Vagrant is a powerful tool, and I’m only beginning to scratch the surface here. We’re now using it on a couple of projects and have found it incredibly useful. Drop us a line in the comments if you have other tips for folks new to Vagrant, or other resources worth reading.
With Views, we build structured components of our content that are placed on a page. HTML or formatters are applied at will, CSS classes are added as needed, and the component is most likely placed on a page somewhere and styled. And it’s gorgeous.
Well, I have good news and better news. There are two ways to expose your Views as JSON (or XML, or other formats, but I’m going to focus on JSON in this article). One method allows you to retrieve your existing views through a REST server, and the other method lets you add a Services display in a View, which allows for finer-grained control of the JSON output. You may actually end up liking the second method better, even if you do have to add some new displays.
To start, let’s get some modules installed and a REST server up and running.

Huh? What Is A REST Server Again?
All you need to know here is that we’re going to allow someone (who knows, maybe you?) to access Views results as JSON data by issuing a GET request. GET is the HTTP method that you can use with query strings in the URL. You know query strings, right? The query string starts with a question mark, and you string multiple parameters together with ampersands, kinda like this:
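For illustration (the endpoint and parameter names here are invented, not from any real Drupal site), a query string can be assembled like this:

```shell
# Purely illustrative: the endpoint and parameter names are made up.
BASE="https://example.com/my_view"
QUERY="format=json&limit=5"   # "&" separates parameters

# The "?" introduces the query string.
# Prints: https://example.com/my_view?format=json&limit=5
echo "${BASE}?${QUERY}"
```

Issuing a GET request to that URL (in a browser, or with curl) hands format and limit to the server as parameters, which is exactly how we’ll ask a REST server for a view’s results as JSON.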
It was a great relief to learn that MICHAEL MEYERS (V.P. Large Scale Drupal, Acquia) is not the homicidal psycho-killer from the Halloween movie franchise, particularly as we go one-on-one with him in a noisy corner of the RAI.
This is the first of a series of security-related postings, which Acquia will compile into a free ebook. In this entry, we’ll look at the perennial question: Is Open Source software inherently more secure than commercial closed-source software?
Securing applications is an ongoing process. It’s a continuum that requires vigilance.
Application security begins during the requirements analysis stage of the Software Development Lifecycle, and must be nurtured throughout the life of the application to be successful.
With Drupal properly configured and managed, it is as secure and reliable as any enterprise content management tool available. However, a Drupal application must be maintained and enhanced over the course of its existence. Acquia Cloud eases this management and maintenance burden on customers, and substantially reduces the risk that software vulnerabilities, external actors, or poor human choices will compromise the integrity of the application or the organization.
As requirements are gathered for a new project, and both open and closed systems are considered, evaluators often ask: “Which solution is more secure?”
This is frequently cast as a contest between the ideologies of open and closed source software.
It’s the wrong question. All software is susceptible to errors at every step of the lifecycle: from first release, through patches, and on through end-of-life support, when the provider no longer supports the code.
Repeated professional and academic assessments have demonstrated that coding errors are simply part of software development. The professionalism of closed-source commercial software development, which has continually improved its security reviews and practices, is matched by the professional commitment of open source engineers. The main difference between the two is the visibility of source code to all users.
Because most malicious users take the same approach to probing for and exploiting known vulnerabilities, by trying to enter systems on the Web, source code availability seldom plays an important role in discovering flaws in mistakenly unprotected servers, services, or protocols. Open source code, however, enjoys a greater flexibility and speed-to-solution when a vulnerability is discovered, which we will look at later.
Writing, testing, and shipping perfect code is the impossible dream that falsely creates the impression that some software is inherently more secure than others. All non-trivial software is imperfect, and the hardware it runs on can also carry vulnerabilities. In fact, the likelihood of security vulnerabilities is inherent in any application, because people often make mistakes in development or configuration of an application.
So, when we talk about “computer security,” we must recognize that we’re really talking about human security practices that can fail at any number of user-controlled points. Open source software makes those potential flaws a discussion among a group of coders, reviewers, and security professionals. In closed source software, a potential flaw is often regarded as a secret -- one that may impede the resolution time or increase the risk of a discovered vulnerability.

A race with intruders
At the time a vulnerability is discovered, the clock starts counting down to an increase in attacks against that vulnerability. The closed-source software world depends on “security through obscurity,” the assumption that hiding source code makes it harder to discover vulnerabilities. This means that a newly discovered vulnerability sets off a race between the developer and malicious users: who’s going to patch or exploit that vulnerability first?
The same race happens in the open-source software field, but there are many more people familiar with the open source code, so the dynamics are very different. Sometimes projects even collaborate with each other to increase the number of developers working on the same fix, as was the case in August 2014 for the XML-RPC Denial of Service affecting both WordPress and Drupal.
By comparison, in proprietary software only the commercial software company’s employees can work to fix an error in the closed code.
Indeed, many commercial software security vulnerabilities are discovered by outside consultants and security professionals, who inform the company that built the application. These outside discoverers may bring a solution to the problem along with their vulnerability report, but ultimately the vulnerability will only be patched when the company decides to respond, and when it is able.
In some situations, this vulnerability becomes an open secret held closely by an ever-expanding circle of people in the know, all hoping the Bad Guys don’t find out before they deliver a patch. By contrast, commercial companies with a vested interest in the security and capabilities of certain open source systems are now frequently joining together to fund development or security remediations out in the open. Thus, open source actually adds another dimension of security through this community approach to development: it provides a constructive outlet for coders whose passion is searching for vulnerabilities.
Open source software allows many hands to work towards the mission of identifying and fixing vulnerabilities. The same race to patch a vulnerability exists, but the open source community has a more distributed approach to responding to a known issue. This is generally understood to be an advantage. In a 2009 University of Washington paper, Is Open Source Software More Secure?, researchers, including a Microsoft contributor, concluded:
“...Open source does not pose any significant barriers to security, but rather reinforces sound security practices by involving many people that expose bugs quickly, and offers side-effects that provide customers and the community with concrete examples of reusable, secure, and working code.”
It’s worth mentioning, by the way, that in late 2014 Microsoft itself, once the paragon of the closed software model, announced that it will make its server-side .NET stack and core runtime frameworks available as open source code. Acquia’s Christopher Stone said it was the software equivalent of the falling of the Berlin Wall.
So the real operational security challenges come after a vulnerability is discovered, in the time between a patch becoming available and the time that customers patch their software. Most successful attacks occur in that window. Any system left unpatched is likely to be targeted at some point.
This is why Acquia Cloud's managed platform includes patching and maintaining all server components, preparing security updates for customers through Remote Administration, and recommending security best practices and configuration hardening for Drupal applications.
With Acquia, customers can count on rapid responses to vulnerabilities and a quick delivery of patches when available.
The intractable problems in computer security remain: open or closed, people write imperfect code; many are lazy about patching or upgrading to the latest version to close newly discovered vulnerabilities.
The challenge is bigger than open source versus closed software.
That’s why we’re confident that the Acquia approach is the best hybrid response to the threat of imperfect software. We leverage professional practice, the open source community, and a tightly managed continuous-deployment workflow to quickly patch vulnerabilities on our platform, while providing the tools to customers to stay up to date with regards to patching their Drupal applications.
We’ll get into the details of Acquia’s approach to patch management in one of our next posts.