2nd May 2016: amazee.io just launched their Drupal hosting platform built for developers, which fully integrates with Drop Guard. And that’s when our common story started.
The Amazee team dedicates itself to the Drupal world: “We’re a secure, high-performance, cloud-based hosting solution built for folks who love their Drupal sites as much as we do.”
One of our OSTraining members wanted to restrict access to certain content on his Drupal 8 site.
To do this in Drupal 8, we are going to use the Content Access module.
To follow along with this tutorial, download and install Content Access. I found no errors while using this module, but please note that currently it is a dev release.
Today's third-party applications increasingly depend on web services to retrieve and manipulate data, and Drupal offers a range of web services options for API-first content delivery. For example, a robust first-class web services layer is now available out-of-the-box with Drupal 8. But there are also new approaches to expose Drupal data, including Services and newer entrants like RELAXed Web Services and GraphQL.
The goal of this blog post is to enable Drupal developers in need of web services to make an educated decision about the right web services solution for their project. This blog post also sets the stage for a future blog post, where I plan to share my thoughts about how I believe we should move Drupal core's web services API forward. Getting aligned on our strengths and weaknesses is an essential first step before we can brainstorm about the future.
The Drupal community now has a range of web services modules available in core and as contributed modules sharing overlapping missions but leveraging disparate mechanisms and architectural styles to achieve them. Here is a comparison table of the most notable web services modules in Drupal 8:

| Feature | Core REST | RELAXed | Services |
| --- | --- | --- | --- |
| Content entity CRUD | Yes | Yes | Yes |
| Configuration entity CRUD | Create resource plugin (issue) | Create resource plugin | Yes |
| Custom resources | Create resource plugin | Create resource plugin | Create Services plugin |
| Custom routes | Create resource plugin or Views REST export (GET) | Create resource plugin | Configurable route prefixes |
| Renderable objects | Not applicable | Not applicable | Yes (no contextual blocks or views) |
| Translations | Not yet (issue) | Yes | Create Services plugin |
| Revisions | Create resource plugin | Yes | Create Services plugin |
| File attachments | Create resource plugin | Yes | Create Services plugin |
| Shareable UUIDs (GET) | Yes | Yes | Yes |
| Authenticated user resources (log in/out, password reset) | Not yet (issue) | No | User login and logout |

Core RESTful Web Services
Thanks to the Web Services and Context Core Initiative (WSCCI), Drupal 8 is now an out-of-the-box REST server with operations to create, read, update, and delete (CRUD) content entities such as nodes, users, taxonomy terms, and comments. The four primary REST modules in core are:
- Serialization is able to perform serialization by providing normalizers and encoders. First, it normalizes Drupal data (entities and their fields) into arrays with a particular structure. Any normalization can then be sent to an encoder, which transforms those arrays into data formats such as JSON or XML.
- RESTful Web Services allows for HTTP methods to be performed on existing resources, including but not limited to content entities and views (the latter facilitated through the “REST export” display in Views), and custom resources added through REST plugins.
- HAL builds on top of the Serialization module and adds the Hypertext Application Language normalization, a format that enables you to design an API geared toward clients moving between distinct resources through hyperlinks.
- Basic Auth allows you to include a username and password with request headers for operations requiring permissions beyond that of an anonymous user. It should only be used with HTTPS.
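The normalize-then-encode pipeline that the Serialization module implements can be sketched roughly as follows. This is an illustrative Python analogy, not Drupal's actual PHP API; the entity and field names are invented:

```python
import json

# A toy stand-in for a Drupal content entity (field names are invented).
entity = {"nid": 1, "title": "Hello", "body": "First post"}

def normalize(entity):
    # Step 1: a normalizer turns the entity into a plain array structure.
    # Drupal represents each field as a list of value items.
    return {field: [{"value": value}] for field, value in entity.items()}

def encode(data, fmt="json"):
    # Step 2: an encoder turns the normalized structure into a wire format.
    if fmt == "json":
        return json.dumps(data)
    raise ValueError(f"unsupported format: {fmt}")

payload = encode(normalize(entity))
```

The same normalization could be handed to an XML encoder instead; the two steps are deliberately decoupled.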
Core REST adheres strictly to REST principles in that resources directly match their URIs (accessible via a query parameter, e.g. ?_format=json for JSON) and in the ability to serialize non-content into JSON or XML representations. By default, core REST also includes two authentication mechanisms: basic authentication and cookie-based authentication.
While core REST provides a range of features with only a few steps of configuration, there are several reasons why other options, available as contributed modules, may be a better choice. Limitations of core REST include the lack of support for configuration entities as well as the inability to include file attachments and revisions in response payloads. With your help, we can continue to improve and expand core's REST support.

RELAXed Web Services
As I highlighted in my recent blog post about improving Drupal's content workflow, RELAXed Web Services is part of a larger suite of modules handling content staging and deployment across environments. It is explicitly tied to the CouchDB API specification, and when enabled, will yield a REST API that operates like the CouchDB REST API. This means that CouchDB integration with client-side libraries such as PouchDB and Hood.ie makes offline-enabled Drupal possible, synchronizing content once the client regains connectivity. Moreover, people new to Drupal with exposure to CouchDB will immediately understand the API, since there is robust documentation for the endpoints.
RELAXed Web Services depends on core's REST modules and extends their functionality by adding support for translations, parent revisions (through the Multiversion module), file attachments, and especially cross-environment UUID references, which make it possible to replicate content to Drupal sites or other CouchDB-compatible services. UUID references and revisions are essential to resolving merge conflicts during the content staging process. I believe it would be great to support translations, parent revisions, file attachments, and UUID references in core's RESTful web services — we simply didn't get around to them in time for Drupal 8.0.0.

Services
Since RESTful Web Services are now incorporated into Drupal 8 core, relevant contributed modules have either been superseded or have gained new missions in the interest of extending existing core REST functionality. In the case of Services, a popular Drupal 7 module for providing Drupal data to external applications, the module has evolved considerably for its upcoming Drupal 8 release.
With Services in Drupal 8 you can assign a custom name to your endpoint to distinguish your resources from those provisioned by core and also provision custom resources similar to core's RESTful Web Services. In addition to content entities, Services supports configuration entities such as blocks and menus — this can be important when you want to build a decoupled application that leverages Drupal's menu and blocks system. Moreover, Services is capable of returning renderable objects encoded in JSON, which allows you to use Drupal's server-side rendering of blocks and menus in an entirely distinct application.
At the time of this writing, the Drupal 8 version of the Services module is not yet feature-complete: there is no test coverage, no content entity validation (when creating or modifying), no field access checking, and no CSRF protection. Caution is therefore important when using Services in its current state, and contributions are greatly appreciated.

GraphQL
GraphQL, originally created by Facebook to power its data fetching, is a query language that enables fewer queries and limits response bloat. Rather than tightly coupling responses with a predefined schema, GraphQL overturns this common practice by allowing for the client's request to explicitly tailor a response so that the client only receives what it needs: no more and no less. To accomplish this, client requests and server responses have a shared shape. It doesn't fall into the same category as the web services modules that expose a REST API and as such is absent from the table above.
GraphQL shifts responsibility from the server to the client: the server publishes its possibilities, and the client publishes its requirements instead of receiving a response dictated solely by the server. In addition, information from related entities (e.g. both a node's body and its author's e-mail address) can be retrieved in a single request rather than successive ones.
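The shared shape of request and response can be seen in a small example. The query and field names below are hypothetical, purely for illustration:

```python
# A hypothetical GraphQL query: the client asks for a node's body and
# its author's e-mail address in a single request.
query = """
{
  node(id: 1) {
    body
    author {
      email
    }
  }
}
"""

# The server's response mirrors the shape of the query: the client
# receives exactly the fields it asked for, no more and no less.
response = {
    "data": {
        "node": {
            "body": "First post",
            "author": {"email": "author@example.com"},
        }
    }
}
```
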
Typical REST APIs tend to be static (or versioned, in many cases, e.g. /api/v1) in order to facilitate backwards compatibility for applications. However, in Drupal's case, when the underlying content model is inevitably augmented or otherwise changed, schema compatibility is no longer guaranteed. For instance, when you remove a field from a content type or modify it, Drupal's core REST API is no longer compatible with those applications expecting that field to be present. With GraphQL's native schema introspection and client-specified queries, the API is much less opaque from the client's perspective in that the client is aware of what response will result according to its own requirements.
I'm very bullish on the potential for GraphQL, which I believe makes a lot of sense in core in the long term. I featured the project in my Barcelona keynote (demo video), and Acquia also sponsored development of the GraphQL module (Drupal 8 only) following DrupalCon Barcelona. The GraphQL module, created by Sebastian Siemssen, now supports read queries, implements the GraphiQL query testing interface, and can be integrated with Relay (with some limitations).

Conclusion
For most simple REST API use cases, core REST is adequate, but core REST can be insufficient for more complex use cases. Depending on your use case, you may need more off-the-shelf functionality without the need to write a resource plugin or custom code, such as support for configuration entity CRUD (Services); for revisions, file attachments, translations, and cross-environment UUIDs (RELAXed); or for client-driven queries (GraphQL).
Special thanks to Preston So for contributions to this blog post and to Moshe Weitzman, Kyle Browning, Kris Vanderwater, Wim Leers, Sebastian Siemssen, Tim Millwood and Ted Bowman for their feedback during its writing.
The machine is a complete ARM-based PC with micro HDMI, SATA, USB and many other connectors, and includes a full keyboard and a 5" LCD touch screen. The 6000mAh battery is claimed to provide a whole day of battery life, but I have not seen any independent tests confirming this. The vendor is still collecting preorders, and the last I heard last night was that 22 more orders were needed before production started.
As far as I know, this is the first handheld preinstalled with Debian. Please let me know if you know of any others. Is it the first computer being sold with Debian preinstalled?
Jessie was released one year ago now and the Java Team has been busy preparing the next release. Here is a quick summary of the current state of the Java packages:
- A total of 136 packages have been added, 63 removed, 213 upgraded to a new upstream release, and 145 updated. We are now maintaining 892 packages (+12.34%).
- OpenJDK 8 is now the default Java runtime in testing/unstable. OpenJDK 7 has been removed, as well as several packages that couldn't be upgraded to work with OpenJDK 8 (avian, eclipse).
- OpenJDK 9 is available in experimental. As a reminder, it won't be part of the next release; OpenJDK 8 will be the only Java runtime supported for Stretch.
- Netbeans didn't make it into Jessie, but it is now back and up to date.
- The main build tools are close to their latest upstream releases, especially Maven and Gradle, which had been seriously lagging behind.
- Scala has been upgraded to version 2.11. We are looking for Scala experts to maintain the package and its dependencies.
- Freemind has been removed due to lack of maintenance; Freeplane is recommended instead.
- The reproducibility rate has greatly improved, climbing from 50% to 75% in the past year.
- Backports are continuously provided for the key packages and applications: OpenJDK 8, OpenJFX, Ant, Maven, Gradle, Tomcat 7 & 8, Jetty 8 & 9, jEdit.
- The transition to Maven 3 has been completed, and packages are no longer built with Maven 2.
- We replaced several obsolete libraries and transitioned them to their latest versions - for example, asm2, commons-net1 and commons-net2. Groovy 1.x was replaced with Groovy 2, and we upgraded BND, an important tool to develop with OSGi, and more than thirty of its reverse-dependencies from the 1.x series to version 2.4.1.
- New packaging tools have been created to work with Gradle (gradle-debian-helper) and Ivy (ivy-debian-helper).
- We have several difficult transitions ahead: BND 3, Tomcat 7 to 8, Jetty 8 to 9, ASM 5, and of course Java 9. Any help would be welcome.
- Eclipse is severely outdated and currently not part of testing. We would like to update this important piece of software and its corresponding modules to the latest upstream release, but we need more active people who want to maintain them. If you care about the Eclipse ecosystem, please get in touch with us.
- We are still in the midst of removing old libraries such as asm3, commons-httpclient and the servlet 2.5 API, which is part of the Tomcat 6 source package.
- Want to see Azureus/Vuze in Stretch again? Packaging is almost complete but we are looking for someone who can clarify remaining licensing issues with upstream and wants to maintain the software for the foreseeable future.
- Do you have more ideas and want to get involved with the Java Team? Just send your suggestions to firstname.lastname@example.org or chat with us on IRC at irc.debian.org, #debian-java.
- The Java Team is not the only team that maintains Java software in Debian. DebianMed, DebianScience and the Android Tools Maintainers rely heavily on Java. By helping the Java Team and working together, you can improve the Java ecosystem and further the efforts of multiple other fields of endeavor all at once.
The packages listed below detail the changes in jessie-backports and testing. Libraries and Debian specific tools have been excluded.
Packages added to jessie-backports:
- ant (1.9.7)
- elasticsearch (1.6.2)
- gradle (2.10)
- groovy2 (2.4.5)
- japi-compliance-checker (1.5)
- jedit (5.3.0)
- jetty8 (8.1.19)
- jetty9 (9.2.14)
- maven (3.3.9)
- openjdk-7-jre-dcevm (7u79)
- openjdk-8 (8u72-b15)
- openjfx (8u60-b27)
- tomcat7 (7.0.69)
- tomcat8 (8.0.32)
Packages removed from testing:
Packages added to testing:
- apache-directory-server (2.0.0~M15)
- dokujclient (3.8.1)
- elasticsearch (1.7.3)
- ivyplusplus (1.14)
- jetty9 (9.2.16)
- netbeans (8.1)
- openjdk-8 (8u91-b14)
- openjdk-8-jre-dcevm (8u74)
- openjfx (8u60-b27)
Packages upgraded in testing:
- activemq (5.13.2)
- ant (1.9.7)
- aspectj (1.8.9)
- bnd (2.4.1)
- checkstyle (6.15)
- eclipse-gef (3.9.100)
- electric (9.06)
- felix-main (5.0.0)
- findbugs (3.0.1)
- fop (2.1)
- freeplane (1.3.15)
- gant (1.9.11)
- gradle (2.10)
- groovy2 (2.4.5)
- hsqldb (2.3.3)
- icedtea-web (1.6.2)
- ivy (2.4.0)
- jajuk (1.10.9)
- jakarta-jmeter (2.13)
- japi-compliance-checker (1.7)
- jasmin-sable (2.5.0)
- java-common (0.57)
- java-package (0.61)
- jedit (5.3.0)
- jetty8 (8.1.19)
- jftp (1.60)
- jgit (3.7.1)
- jruby (1.7.22)
- jtreg (4.2-b01)
- libapache-mod-jk (1.2.41)
- maven (3.3.9)
- nailgun (0.9.1)
- pleiades (1.6.0)
- proguard (5.2.1)
- robocode (126.96.36.199)
- sablecc (3.7)
- scala (2.11.6)
- service-wrapper-java (3.5.26)
- simplyhtml (0.16.13)
- svnkit (1.8.12)
- sweethome3d-textures-editor (1.4)
- tomcat-native (1.1.33)
- tomcat7 (7.0.69)
- tomcat8 (8.0.32)
- triplea (188.8.131.52)
- uimaj (2.8.1)
- weka (3.6.13)
- zookeeper (3.4.8)
Drupal Watchdog was founded in 2011 by Tag1 Consulting as a resource for the Drupal community to share news and information. Now in its sixth year, Drupal Watchdog is ready to expand to meet the needs of this growing community.
Drupal Watchdog will now be published by Linux New Media, aptly described as the Pulse of Open Source.
“It’s very clear that the folks at Linux New Media know what they’re doing, and that they truly value the open source culture,” said Jeremy Andrews, CEO/Founding Partner, Tag1 Consulting. “I’m ecstatic that the magazine will not just live on, but it will thrive as a quarterly publication … this is a wonderful step forward that benefits everyone who reads and contributes to Drupal Watchdog.”
The magazine will continue to be offered in print and digital formats, and Linux New Media’s international structure provides better service to subscribers worldwide, with local offices in North America and Europe and ordering options in various local currencies.
“We don’t want to change what has brought Drupal Watchdog this far, but we do want to see it grow and expand to the next level, which mainly means – extending the reach of the magazine,” said Brian Osborn, CEO and Publisher, Linux New Media. “As our first step, Drupal Watchdog will now be published quarterly, helping us stay even more current in our coverage and in more frequent contact with our readership.”
Drupal Watchdog is written for the Drupal community and will only thrive through community participation.
Here is what you can do to help:
- Join the Drupal Watchdog community on Facebook
- Follow Drupal Watchdog on Twitter
- Visit the Drupal Watchdog team at DrupalCon
- Subscribe to the magazine so you won’t miss an issue
- Provide your feedback through our reader survey at drupalwatchdog.com/reader-survey
The first issue of Drupal Watchdog published by Linux New Media will be available May 9th! All DrupalCon attendees will receive a copy at the event. Come meet the new team, and learn more about the future of Drupal Watchdog!
Many people might not be aware of it, but for a couple of years now we have had an excellent tool for tracking and recognising contributors to the Debian Project: Debian Contributors
Debian is a big project, and many of the people working on it do not have great visibility, especially if they are not DDs or DMs. We are all volunteers, so it is very important that everybody gets credited for their work. No matter how small or unimportant they might think their work is, we need to recognise it!
One great feature of the system is that anybody can sign up to provide a new data source. If you have a way to create a list of the people who are helping in your project, you can give them credit!
If you open the Contributors main page, you will get a list of all the groups with recent activity, and the people credited for their work. The data sources page gives information about each data source and who administers it.
For example, my Contributors page shows the many ways in which the system recognises me, all the way back to 2004! That includes commits to different projects, bug reports, and package uploads.
I have been maintaining a few of the data sources that track commits to Git and Subversion repositories:
- The Go packaging group (added just a couple of weeks ago).
- The Perl packaging group.
The last two are a bit problematic, as they group together all commits to the respective VCS repositories without distinguishing to which sub-projects the contributions were made.
The Go and Perl groups' contributions are already extracted from that big pile of data, but it would be much nicer if each substantial packaging team had their own data source. Sadly, my time is limited, so this is where you come into the picture!
If you are a member of a team and want to help with this effort, adopt a new data source. You could provide commit logs, but it is not limited to that; think of translators, event volunteers, BSP attendees, etc.
In Drupal Commerce 1.x, we used the Commerce Fancy Attributes and Field Extractor modules to render attributes more dynamically than just using simple select lists. This let you do things like show a color swatch instead of just a color name for a customer to select.
Fancy Attributes on a product display in Commerce Kickstart 2.x.
In Commerce 2.0-alpha4, we introduced specific product attribute related entity types. Building on top of them and other contributed modules, we can now provide fancy attributes out of the box! When presenting the attribute dropdown, we show the labels of attribute values. But since attribute values are fieldable, we can just as easily use a different field to represent it, such as an image or a color field. To accomplish this, we provide a new element type that renders the attribute value entities as a radio button option.
Read more to see an example configuration.
Prompted by Tollef moving to Hugo, I investigated a replacement blog engine. The former site used Wordpress, which is just overhead: my blog doesn't need to be generated on every view, and it doesn't need the security implications of yet another website login and admin interface either.
So, I've chosen Pelican, with the code living in a private git repo, naturally. I wanted a generator that was supported in Jessie. I first tried nikola, but it turns out that the version of nikola in jessie has incompatible syntax changes. I looked at creating backports, but then there is a new upstream release which adds a python module not yet in Debian, so that would be an extra amount of work.
Hopefully, this won't flood planet - I've gone through the RSS content to update timestamps but the URLs have changed.
With the release of Drupal 8.1 on April 20th, the BigPipe module was added to core to increase the speed of Drupal for anonymous and logged-in visitors.

What does BigPipe do in general?
BigPipe is a technique for rendering a webpage in phases. It uses components to build the complete page, ordered by the speed of the components themselves. This technique gives visitors the feeling that the website is faster than it may actually be, thus boosting the user experience.
This technique, originally developed by Facebook, applies the idea of multi-threading, just like processors do. It dispatches multiple calls to a single backend to make full use of the web server, and thus renders a webpage faster than conventional rendering does.

What does BigPipe do in Drupal?
For “normal” websites with anonymous visitors, BigPipe doesn’t do much. If you use a caching engine like Varnish, or even Drupal cache itself, pages are generally rendered fast enough. When using dynamic content like lists of related, personalized or localized content, BigPipe can kick in and really make a difference. When opening the website, BigPipe first returns the page skeleton that can be cached: elements like the menus, footer, header and often even content. Then rendering of the dynamic content starts. This means that the visitor of your website is already reading the most important content, and sees the dynamic related list later on, after it has loaded asynchronously.
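The skeleton-plus-placeholders flow can be simulated in a few lines. This is only a conceptual sketch of the streaming idea, not Drupal's actual implementation:

```python
# Conceptual simulation of BigPipe-style phased rendering (not Drupal's
# real code): send a cacheable skeleton first, then stream each slow,
# personalized block as soon as its backend call finishes.

def render_page(slow_blocks):
    # Phase 1: the cacheable skeleton (header, menus, footer, content)
    # goes out immediately, with placeholders for the dynamic parts.
    yield ('<html><body><div id="content">Article text</div>'
           '<div id="related">[placeholder]</div>')
    # Phase 2: each dynamic block is streamed once it is ready; a small
    # script swaps it into its placeholder on the client.
    for block_id, render in slow_blocks.items():
        yield f'<script>replace("{block_id}", "{render()}")</script>'
    yield '</body></html>'

chunks = list(render_page({"related": lambda: "Related articles"}))
```

The visitor starts reading the skeleton (the first chunk) while the related-content block is still being computed.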
For websites with logged in users BigPipe can be a real boost in performance. Standard Drupal cache doesn’t work out of the box for logged in users. For Drupal 7 you had the Authenticated User Page Caching (Authcache) module (which had some disadvantages), but for Drupal 8 there was nothing. Until Drupal 8.1!
With BigPipe, Drupal is now able to cache certain parts of the page (the skeleton I mentioned above) and to multithread the other parts, which are cacheable by themselves.
Video by Dries Buytaert

BigPipe in Drupal
As I said, starting from Drupal 8.1, BigPipe is included as a core module, and everybody can use it. Whether you are using a budget hosting platform or hosting your own website with state-of-the-art servers, it is basically just one (1) click away: enable the module and get all the benefits BigPipe has to offer!
Each day, more Drupal 7 modules are being migrated over to Drupal 8 and new ones are being created for the Drupal community’s latest major release. In this series, the Acquia Developer Center is profiling some of the most prominent, useful modules, projects, and tools available for Drupal 8. This week: Display Suite.
If you are in a hurry and only need a recipe, please head to the technical part of the article, but I would like to start by sharing a bit of my experience, because you might still be deciding whether Platform is for you.
I decided to try Platform because a friend of mine needed a site. Due to several reasons, I didn't want to host it on my personal server. But I didn't want to run a server for him either. I wanted to forget about maintaining the server, keeping it secure, or doing upgrades to it.
So I started thinking about options for small sites:
So you’ve got a great idea. Spent months thinking about it. Sold the idea internally to key stakeholders. Grabbed the attention of the right people, organized a team and managed to get funding. You’ve selected your agency, gone through a discovery process, and are ready to design out the idea. Now what?
Well, you start by sketching of course. Yup, I said it: we start by drawing pretty pictures (well, not so pretty really).

The power of sketching. There’s no need for commitment here.
You may think you don’t need to sketch because you already know how you want the interface to look. But oftentimes when you actually start sketching, you’ll realize that the path you were so set on might not work best.
Sketching sets the tone for the rest of the design process. It ensures you’re creating a user experience that meets both user and stakeholder goals and objectives. Removing this step from the process puts you at a disadvantage, as you’re more likely to get locked into a design because it’s more difficult to make quick iterations using software built for wireframing and design comps. Sketching allows you to visualize what an interface could become without committing to anything.

Sketching clutter is a means to an end
Initial sketches will likely uncover that you’re trying to cram too much onto the user’s screen, but that’s OK. We’re trying to uncover all possibilities so we can iterate quickly.
Having a UX team take an outside-in approach can really help define what you’re trying to achieve without overwhelming the user.
We’ve found that sketching the pages/concepts can be beneficial in a number of ways:
- Speeds up the discovery phase by allowing all members of the team to get their thoughts on paper and get buy-in from key stakeholders
- Allows the team to iterate quickly on the structure of the site/application without focusing on design elements such as colors, fonts, imagery, etc.
- Offers a quick frame of reference for early implementation discussions with developers on the project
- Offers the ability to highlight key areas for measurement to ensure we’re meeting business and project objectives
- Offers the ability to test with real users using paper prototypes without writing a single line of code
Start by drawing the high-level elements on the page, such as the main navigation, secondary navigation, footer elements and high-level links. But also try to think about the positioning of elements on the page. Most users read left to right and top to bottom. Keeping that in mind, we can guide the user’s eyes to elements on the page by highlighting them with design characteristics such as color, graphics, etc.
Moving some navigational aids into the secondary nav or the footer doesn’t mean they’re less important, but it does allow us to simplify the interface and add clarity for users to achieve their online goal.

Sketching helps you brainstorm ideas
One of the biggest advantages of sketching is that everyone can do it, from designers to the director of human resources at your company (you don’t have to be an artist). So don’t be afraid to sketch out your ideas.
Sketching is an efficient way to get the ideas out of your head and out in the open for discussion. It keeps you from getting caught up in the technology, and instead focuses you on the best possible solution, freeing you to take risks that you might not otherwise take.
Getting everyone involved in this stage can be incredibly valuable for a couple of reasons. You can quickly get a good grasp of what you’re envisioning while gaining an understanding of the development process and interaction requirements, as you’re guided through the process.
What gets designed on the front end has a back end component that most clients don’t understand. Working with a UX team gives you the opportunity to gain that understanding while contributing feedback that moves the project forward.

Sketching a UI develops multi-dimensional thinking
Designing a user interface is a process. Translating an idea to meet user requirements requires multi-dimensional thinking. Sketching a user interface is primarily a two-dimensional process, but as UX professionals we need to consider a number of factors:
- What is the user trying to accomplish?
- How is the user interacting with the site/application (desktop, mobile, kiosk, device specific, etc.)?
- How does the UI react as the user interacts with it?
- What appears on each of the pages as content and navigational aids?
- What if a user encounters an error? Are there tools to help them recover?
Sketching allows you to visualize the screen-to-screen interaction so that your idea is something that’s visible and clear in user interface form, ultimately helping you move the project to the next level.

Take your sketches up a notch with interactivity
Lastly, using an online prototyping tool offers the ability to upload the sketches and add hotspots over the navigation and linking aids. This allows you to click through rough sketches as if they were a real functioning website (a really ugly website). I can’t tell you the number of times I’ve worked on a series of sketches and didn’t realize that I was missing a major element or interaction until I added hotspots and tried to use it.
The design phase, beginning with the initial sketches, is a way to envision an interface that meets measurable goals. The ultimate goal is to align key business objectives with user goals. When those two things align, you’ve got a website or product that’s bound to succeed.
Each card is a self-contained lesson on a single topic related to Drupal 8, with a set objective, steps to complete the learning exercise, and links to blogs, documentation, and videos to get more information.
- Feed2tweet on Github (stars appreciated)
- The official documentation of Feed2tweet on Readthedocs.
- Feed2tweet on PyPi
Using Feed2tweet? Send us bug reports, feature requests, pull requests and comments about it!
Jeff Geerling's Blog: Set up a faceted Apache Solr search page on Drupal 8 with Search API Solr and Facets
In Drupal 8, Search API Solr is the consolidated successor to both the Apache Solr Search and Search API Solr modules in Drupal 7. I thought I'd document the process of setting up the module on a Drupal 8 site, connecting to an Apache Solr search server, and configuring a search index and search results page with search facets, since the process has changed slightly from Drupal 7.

Install the Drupal modules
In Drupal 8, since Composer is now a de-facto standard for including external PHP libraries, the Search API Solr module doesn't actually include the Solarium code in the module's repository. So you can't just download the module off Drupal.org, drag it into your codebase, and enable it. You have to first ensure all the module's dependencies are installed via Composer. There are two ways that I recommend for doing this (both are documented in the module's issue: Keep Solarium managed via composer and improve documentation):
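For instance, your project-level composer.json would declare the module so that Composer resolves the Solarium library alongside it (the version constraint here is illustrative; use whatever matches your setup):

```json
{
  "require": {
    "drupal/search_api_solr": "^1.0"
  }
}
```

With that in place, running `composer update` fetches the module together with its Solarium dependency.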
CEO and Founder George DeMet shares a continuation of ideas presented at DrupalCon Barcelona with his new talk on the benefits of running a company according to a set of clearly defined principles, which he's presenting next week at DrupalCon New Orleans. It's called Finding Your Purpose as a Drupal Agency.

iTunes | RSS Feed | Download | Transcript
We'll be back next Tuesday with another episode of the Secret Sauce, and with a new installment of our long-form interview podcast, On the Air With Palantir, next month. For now, subscribe to all of our episodes over on iTunes.

Want to learn more? We have built Palantir over the last 20 years with you in mind, and are confident that our approach can enhance everything you have planned for the web.
Allison Manley [AM]: Hi, and welcome to the Secret Sauce by Palantir.net. This is our short podcast that gives quick tips on small things you can do to make your business run better. I’m Allison Manley, an account manager here at Palantir, and today’s advice comes from George DeMet, our Founder and CEO, who as a small business owner knows a thing or two about how to run a company based on clearly defined principles.
George DeMet [GD]: My name is George DeMet, and I’m here today to talk about the benefits of running a company according to a set of clearly defined principles. What follows is taken from a session that I presented last fall at DrupalCon Barcelona on Architecting Companies that are Built to Last.
At the upcoming DrupalCon New Orleans in mid-May, I’ll be continuing this conversation in an all-new session called Finding Your Purpose as a Drupal Agency. If you’re able to attend DrupalCon New Orleans, I hope you’ll check it out.
Some time back I came across an article from the early 1970s about my grandfather, who was also named George DeMet. He was a Greek immigrant who spent more than 60 years running several candy stores, soda fountains, and restaurants in Chicago. While the DeMet’s candy and restaurant business were sold decades ago, the brand survives to this day and you can still buy DeMet’s Turtles in many grocery stores.
I never really got to know my grandfather, who died when I was 7 years old, but I have heard many of the stories that were passed down by my grandmother, my father, and other members of the family.
And from those stories, I’ve gotten a glimpse into some of the principles and values that helped make that business so successful for so long. Simple things, like honesty, being open to new ideas, listening to good ideas from other people, and so forth.
And as I was thinking about those things, I started doing some research into the values that so-called family businesses have in general, and that some of the oldest companies in history have in particular.
The longest-lasting company in history was Kongo Gumi, a Japanese Buddhist temple builder that was founded in the year 578 and lasted until 2006. At the time Kongo Gumi was founded, Europe was in the middle of the Dark Ages following the fall of the Roman Empire, the Prophet Muhammad was just a child, the Mayan Empire was at its peak in Central America, and the Chinese had just invented matches.
At some point in the 18th century the company’s leadership documented a series of principles that were used by succeeding generations to help guide the company.
This included advice that’s still relevant to many companies today, like:
- Always use common sense
- Concentrate on your core business
- Ensure long-term stability for employees
- Maintain balance between work and family
- Listen to your customers and treat them with respect
- Submit the cheapest and most honest estimate
- Drink only in moderation
Even though the Buddhist temple construction and repair business is a pretty stable one, Kongo Gumi still had to contend with a lot of change over their 1,400-year history. Part of what helped was their unusually flexible succession planning: even though the company was technically in the same family for 40 generations, control of the company didn’t automatically go to the eldest son; it went to the family member deemed most competent, and sometimes that was someone related by marriage.
Kongo Gumi not only built temples that were designed to last centuries, but also built relationships with their customers that lasted for centuries.
In the 20th century, Kongo Gumi branched out into private and commercial construction, which helped compensate for the decline in the temple business. They also didn’t shy away from changes in technology; they were the first in Japan to combine traditional wooden construction with concrete, and the first to use CAD software to design temples.
And while Kongo Gumi’s business had declined as they entered the 21st century, what ultimately did them in were the speculative investments they had made in the Japanese real estate bubble of the ’80s and early ’90s.
Even though they were still earning more than $65 million a year in revenue in the mid-2000s, Kongo Gumi was massively over-leveraged and unable to service the more than $343 million in debt they had accumulated since the collapse of the bubble, and they ended up being absorbed by a larger construction firm.
Principles are designed to help answer the question of *how* a company does things, and what criteria they should use to make decisions. In the end, Kongo Gumi was no longer able to survive as an independent entity after 1,400 years in business not because of economic upheaval or changes in technology, but because they strayed from their core principles, stopped taking the long view, and went for the quick cash.
Companies that want to be successful in the long run need to identify their core principles and stick to them, even when doing so means passing up potentially lucrative opportunities in the short term.
Regardless of whether the business involves building Buddhist temples, making chocolate-covered pecans, or building websites, a focus on sustainability over growth encourages companies to put customers and employees first, instead of shareholders and investors. These kinds of companies are uniquely positioned to learn from their failures, build on their successes, and thrive in an ever-changing business landscape.
AM: Thank you George! George will be presenting his session, Finding Your Purpose as a Drupal Agency at DrupalCon New Orleans on Wednesday, May 11. You can find out more on our website at palantir.net and in the notes for this particular podcast episode.
If you want to see George’s presentation from DrupalCon Barcelona last year on Architecting Drupal Businesses that are Built to Last, you can also find that link in the notes for this episode as well.
As Drupal 7 developers, we know how risky it is to edit production code and configuration live. However, we often let clients do it because using Features is hard. Drupal 8 has solved a lot of this headache with file-based configuration management, which allows file-based workflows that gracefully avoid editing production directly. This article will show you how to use Drupal 8 configuration management and Pantheon’s amazing workflow tools to easily give your clients the ability to make configuration changes. We’ll show you how to seamlessly integrate those changes into your normal development workflow, so that you - and your clients - will win at Drupal!

Benefits of File-based Config
Storing active configuration directly in files has many benefits. The main one is that clients no longer have any reason to edit configuration live on production. File-based configuration also removes the extra steps required to edit configuration stored in the database: steps that are confusing and can fail with fatal errors.

How to Enable File-based Config
Enabling this isn’t too difficult, but Pantheon recommends not storing the services.yml file in version control. So we’ll create a new services YAML file and include it, along with the active configuration settings, in settings.php. Before you start, export your current configuration to the sites/default/config folder and deploy that to Pantheon. Next, enable file storage by adding the following config.services.yml to your sites folder and using the following settings.php.
Once deployed to Pantheon, the site will be running on file-based configuration storage. To test this, make a setting change in your local environment. You should see Drupal immediately write the change to sites/default/config. Deploying this edit to Pantheon should make the Pantheon site immediately reflect the new configuration change. You just won at Drupal!

Configuration Workflow on Pantheon
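That local test can be sketched with standard Drush and Git commands (a sketch under this article's setup, assuming Drush 8 and the sites/default/config directory described above; the site name used here is just an example):

```shell
# Change a setting; with file-based active storage, Drupal writes the
# result straight into the config directory as YAML.
drush config-set system.site name "My Renamed Site" -y

# Git immediately sees the modified YAML file:
git status sites/default/config

# Commit and push; Pantheon picks up the new configuration on deploy.
git add sites/default/config
git commit -m "Update site name via file-based config"
git push origin master
```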
Now create a Multidev environment for the client to use. Once it’s created, put the environment into SFTP mode, because SFTP mode allows Drupal to write to the configuration files directly. The client can then edit configuration in Drupal and save their work with the Pantheon dashboard.
- Changes ready to commit
- Merge to development
- Configuration deployed to development
When the client has completed their work, they can deploy it using the Pantheon deployment tools. Because the client is now using version control, you can easily merge their work into your own. Once the configuration is merged to Dev, the standard Pantheon workflow makes it easy to deploy these changes to production.

Don’t Edit Production Directly
If production is in SFTP mode, clients can still edit production live. To prevent this, either keep production in Git mode, or use the Config Readonly module to lock production configuration.
Drupal gives users the power to build and edit a website, and users can make dramatic changes to websites with just a few clicks in forms. With Pantheon’s tools and Drupal 8, we now have the ability to use those wonderful tools in a safe environment. The tools combined allow us to bring clients into the workflow and manage deployments as a part of the team because Drupal 8 allows us to build robust, collaborative workflows like never before.
May First/People Link has several members that are targets of politically motivated denial-of-service attacks (mostly groups that support reproductive justice for women and Palestinian rights). To fight off the attacks, we work closely with Deflect - a non-governmental organization based in Canada that fights against this kind of censorship.
When a site is down, it's not always easy to understand why. Deflect runs as many as 5 edge servers, any of which could be down. And, of course, the origin server could also be down.
I tried using a commercial/free-as-in-beer service for monitoring uptime, but when it reported the site being down, I had no idea which part was down.
So... httping to the rescue. Unfortunately, it depends on --divert-connect, which is only available in the httping shipped with Debian Stretch. I run the script via a cron job and output the results to a log file.

#!/bin/bash
# Test all given edges
domain="$1"
origin="$2"
proto=http
if [ -n "$3" ]; then
  proto="$3"
fi
if [ -z "$domain" ]; then
  printf "Please pass the domain as first argument.\n"
  exit 1
fi
if ! ping -c 1 184.108.40.206 >/dev/null; then
  # printf "We are offline. Not running.\n"
  exit 1
fi
ips=$(dig +short "$domain")
if [ "$?" -ne "0" ]; then
  # printf "DNS lookup failure. Not running.\n"
  exit 1
fi
if [ -n "$origin" ]; then
  ips="$ips $origin"
fi
l=
if [ "$proto" = "https" ]; then
  l=-l
fi
for ip in $ips; do
  date=$(date +%Y.%m.%d-%H:%M)
  for i in 1 2 3; do
    out=$(httping $l -m -t 5 -c 1 --divert-connect "$ip" "$proto://$domain")
    [ -z "$out" ] && out=1
    printf "%s %s %s\n" "$date" "$ip" "$out"
  done
done
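The cron job that drives the script might look like the following (the script path, domain, origin IP, and log file location here are hypothetical placeholders, not taken from the article):

```shell
# m h dom mon dow  command
# Check every edge plus the origin every 10 minutes, appending results
# (timestamp, IP, latency or 1 on failure) to a log file.
*/10 * * * * /usr/local/bin/check-edges example.org 192.0.2.10 https >> /var/log/edge-uptime.log 2>&1
```

Grepping that log for lines ending in " 1" then shows exactly which edge or origin IP was unreachable, and when.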