Webinar


Moderator: Today’s webinar is Integrating a CDN with Acquia Cloud. Our Acquia presenters are Kieran Lal, Wim Leers and Alex Jarvis. We also have Christopher Meyfarth from Akamai; we’re very excited to have him on.

Kieran Lal: So a few months ago, actually starting back in June of 2012, I started really exploring how we could go beyond what Acquia Cloud had in terms of performance. The most obvious answer, once we had tuned the stack, got the reverse proxy right, tuned the PHP stack, tuned the database and got everything together, was to start looking at some of the really great web performance optimization tools that were available, and that got me thinking about using a CDN. So I started talking to a lot of people at Acquia and collecting more and more information, looking at all the different CDNs that had been integrated with Acquia Cloud, and started asking people about the different scenarios and different use cases that came together. I had this on the shelf for a while, and then we were lucky enough to see that Wim Leers, who I thought was going to work somewhere else, actually ended up joining the Office of the CTO, which was really exciting, and I think within a few days of him joining, I immediately reached out and told him that I wanted to do a webinar on CDNs somewhere in the future. Wim I’ve known for many years; we hung out together when I was over for a conference in Hungary, so I knew about his interest in CDNs and, more recently, some of the work that he’d been doing through his master’s and an internship that he had done at Facebook. So it was really great to have what I consider to be one of the leading experts on CDN integration with Drupal here at Acquia, and so we worked together and put together some outlines. I showed him my notes and we planned to do this, and then as we started getting close to the webinar, I was able to reach out to some new partners at Akamai. So I’m really excited to have Christopher Meyfarth from Akamai join us, who’s one of the sales engineers but a real expert in Akamai, and so we’re able to really focus on what is the premier CDN in the space.

Then more recently, I was at BADCamp and bumped into Alex Jarvis, who’s one of our senior technical account managers. He was talking to me about some work that he was doing, and as we went back and forth I brought up the idea of Akamai, and he started telling me about all the expertise and some of the notes and training that he’d been doing internally. So I was really excited to bring together what I think is one of the strongest teams of technical contributors that we have in our webinar series to show off the advantages of using a CDN with Acquia Cloud, and in particular using Akamai. So we’ll see three different perspectives, but they’ll keep it pretty thorough and we’ll have a lot of great content. I’m really looking forward to some excellent Q&A, and I know a lot of the staff at Acquia are really excited to learn about this because it’s a tier of knowledge that they really want; almost 50 people on our support team are keen to learn about it so that they can use Akamai and use Chris’s knowledge of CDNs to improve the performance of their sites. So without further delay, I’m going to hand it over to Christopher, who will go first, then we’ll have Wim talk about CDNs and some specific expertise that he’s got, and then Alex will finish off and talk about specific configurations of Drupal with Akamai. If we see questions that are relevant or particularly topical while our speakers are talking, we’ll take the time to answer the questions in context, but most of them, if they’re generic enough questions, we’ll hold until the end where possible, and then try to leave enough time to do a Q&A. So Christopher, without further delay, I’m going to hand over the slides to you.

Christopher Meyfarth: Great. Thank you, Kieran. I’ll just make sure we’ve got the presenter view here. So as Kieran mentioned, I’m Chris Meyfarth. I’m a lead senior sales engineer working with our channel program at Akamai. I’m going to talk a little bit today about the inherent problems of the internet. We’ll talk a little bit about the Akamai technology that we’ve created to help combat those inherent problems, and we’re going to look at some test results from last week on Acquia.com. So without further ado, I’ll jump right in and talk a little bit about the inherent problems of delivering traffic over the public internet. I think it’s important to understand why you need a CDN. Acquia is spending a lot of time honing Drupal and putting a lot of time into the data centers, and regardless of whether it’s Drupal or an application that you have in-house, you don’t want to spend all that time on the application and the data center and then just cross your fingers as that information goes out across the wilds of the public internet. The public internet is a network of networks, and there are three major protocols that control it: BGP, TCP and HTTP. The issue here is first and foremost BGP, which controls the route: it’s based on the number of steps that you take, not on performance. TCP, of course, is an antiquated protocol with a slow start, and then you have HTTP and HTTPS, where the issue is just the sheer volume of information that we’re sending across the line. A great example is commerce websites: at Akamai, the average commerce customer now has about a two-meg footprint for their commerce website. I can remember 10 years ago when we were willing to wait 10 or 15 minutes to download a two-meg MP3, and now, according to Forrester and Gartner, we have the two-second rule. We’re looking to download a site in two seconds or else we start seeing exponential abandonment. We’re doing in two seconds today what we were willing to wait 10 or 15 minutes for 10 years ago. So when you step back and look at the public internet, you really have a compounding inefficiency: a ton of data going over a pipe that has a slow start, over a route that isn’t based on performance but simply on the number of steps it needs to take.

So I’m going to talk about some of the newer Akamai technologies. I’m going to leave a lot of the caching and some of the traditional CDN stuff to Wim and Alex, but to combat this compounding inefficiency, Akamai has developed some technology called SureRoute. SureRoute leverages our 125,000 servers in 2,000 different locations. They basically all talk to each other and build a virtual weather map of internet traffic conditions, so anytime an end user connects to an Akamai server, we query that weather map in real time and pull down the three fastest routes given their exact location and destination. We’ll actually take it a step further: we’ll send a little piece of information across each one of those routes and run a race, if you will, and when one comes in first, we’ll start passing traffic across that route. Now, we’ll continue to run those races, so if internet traffic conditions change, maybe people are coming back from lunch or there’s congestion on different lines, we detect that it’s no longer the fastest route, fail over to the second fastest route, and then go back to that weather map to get three new routes.

Now, it’s an extreme example, but to see SureRoute in action, this is a cable cut that happened. It’s, as I said, an extreme example of internet traffic patterns shifting very quickly. What you see in red is the public internet: it took the public internet about two weeks to recover, and it took Akamai about 20 minutes. So SureRoute is just one technology that we have; of course Akamai is known for caching. We continue to push a lot of that caching logic out to the edge of the internet so that we can make caching decisions based on languages or cookies, as far away from the application origin as possible. The last technology we’ll talk about is a feature called prefetch. Prefetch is a really interesting technology: as the end user requests an HTML file and it is being returned, our Akamai edge server will hold on to that HTML file and start to parse it. If anything in our cache has expired, we will refresh those cached objects or request them from the origin before the end user has even received that HTML file, so by the time the end user has parsed the HTML and starts requesting those objects, they’re located on an edge server 10 milliseconds away.

So, for a quick overview of some of the newer Akamai technologies in action, we did a test last week of Acquia.com. We chose six steps, just generic web pages: the homepage, Support and Cloud Services, Solutions for Marketers, the Network Trial page, the Careers page and a PDF. The way we conduct these trials is we’ll actually build a fully functional Akamai configuration and send it out over the public internet, and then we spin up either Gomez or Keynote, one of the major leading performance testing suites, and have them hit both the Akamai configuration and Acquia.com as accessed on the public internet. So what we have here is the comparison results, over the course of about three days from many cities all over the world: the public internet results and then the Akamai results. What’s interesting about this, and I’ll let everyone consume the data points on their own, is that step number five really stands out. If you look at the average improvement, step five is nowhere close to the others, so I wanted to highlight step five and dig in a little bit deeper. Step five is the Careers page, a very heavy page that was pulling information from a few different queries, and one of the major advantages of using Akamai as a CDN is that you can build rules into Akamai to handle the caching. So what we did with step five was we said, “Well, let’s rerun these tests, and rather than just cache the images and do the SureRoute acceleration, let’s take a snapshot of that page. Let’s cache the full HTML of that page, not for very long, maybe just for a few hours. We want the page to stay dynamic, but in this case we would much rather have a highly performing page that we could clear out from cache on request. Akamai has the ability to purge things from cache through an API, so let’s cache that full page.” These are the results. You can see that the public internet was about five and a half seconds. We turned on Akamai and it’s about four and a half, but then when we started doing that full page caching, it dropped down to two. Those results only get better the more people you have requesting that page: it’ll stay in cache longer and stay current.

So before I hand things over to Wim, just to summarize quickly for Akamai: the trial we did for Acquia.com is very easy to do. We do it for many customers, so if you want to get set up with a CDN, we strongly advise you to talk to an Acquia rep. We can do one of those trials: set up the steps, build that configuration, send it out over the public internet, and in just a couple of weeks get some real-world results for you. So at this point, if there are any questions, we can open it up quickly, or we can pass it over to Wim.

Moderator: There’re no questions at this point, so I’m going to pass it over to Wim.

Christopher Meyfarth: Great. Thanks guys!

Wim Leers: Hi everyone, my name is Wim. As Kieran explained, I’m a senior software engineer with Acquia and I’ll be talking a bit about Drupal plus CDN, but before I really begin, I just want to make sure that everybody actually knows what a CDN is. If you’re not quite sure yet what that is, please raise your hand right now. [Pause] Okay, I do see that at least one or a few people are raising their hand, so a CDN is essentially a short name for content delivery network. In a nutshell, what it does is move content closer to the end user, meaning that to get the data for a website, for example, there will be less physical distance to cover, meaning that the data will be faster at the end user’s computer. So essentially, move the data closer to the end user so that everything on the web becomes faster. I hope that is clear; if not, I recommend the explanation on Wikipedia. So let’s dive into the real stuff.

First of all, you should know about page loading performance. I’ll keep this very short; I think most of you have already noticed, but in any case, when you load a webpage, only about 10% of the time is actually spent loading the HTML, the webpage itself, the textual content of the webpage, and about 90% is spent loading the resources referenced by the HTML: the JavaScript files of course, but also images, fonts, videos and so on. So when we’re trying to make websites faster, it’s really important to keep in mind that the biggest gains are to be had in the resources department, not on the HTML side. Improving websites in the sense of making the rendering work faster is nice, but it will never have the same kind of impact that you can achieve by making sure that the resources are loaded and served fast to the end user. This is what we’ll keep in mind, and it actually brings us to the next aspect of using a CDN.

You can use a CDN in two big, different ways. The first one is to use a CDN for serving resources, static files, and the second is to use a CDN for everything, including the HTML. Serving resources should always be the first step, since this is the thing that affects 90% of the page load time. It’s easy to do, especially with origin-pull CDNs (more about those later), but in any case, the most important aspect is that this will give you the most benefit with the least amount of effort. You can also go for serving everything from a CDN; this is an optional second step. It makes sense that this is a second step, because the first one you can do without many changes to your application logic and it’s very easy, so it makes sense to always do the resources first. But if you go for the full approach, then you will affect 100% of your page load time, since everything is being served from a CDN, minus of course third-party scripts. The downside is that you will have to carefully plan the integration; it takes quite a bit of effort, especially if you have a website in which you are serving authenticated users a lot of content. If you have, for example, a news website where the typical user is not logged in, then you can serve the exact same content to millions of users, but imagine a news website where each user can set his individual preferences for topics of interest. Then things become a lot more complex, because the same content cannot be served to every single user, meaning that you have to do far more complex page building to achieve the same performance. To serve this kind of authenticated-user, and thus personalized, content through a CDN, what is typically used is Edge Side Includes (ESI), and Akamai is, I think, the company that pioneered this.
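To make that concrete, here is a minimal, hypothetical sketch of an ESI-enabled page: the surrounding HTML is cached once and served to everyone, while the edge server resolves the include tag per user before sending the page (the include path is a made-up example).

```html
<!-- Cached shell, identical for all users. -->
<html>
  <body>
    <h1>Latest news</h1>
    <!-- The edge server replaces this tag with a per-user fragment
         fetched from the origin. -->
    <esi:include src="/esi/user-topics" />
  </body>
</html>
```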

More about that in just a second, but the one thing that you should also know is that in both scenarios there will be fewer requests to the app server, exactly because static files are no longer served by your app server, your web server. That actually means there is less load on your web servers. If you have many of those web servers, it’s possible that you can even retire some of them, meaning that overall your app server costs should go down. So CDNs may cost some money, but they may also save some, to some degree at least. So I just mentioned Edge Side Includes, which is really important if you want to serve the HTML of your website from a CDN while still having the ability to serve personalized content to every user. In Drupal 5, 6 and 7, it has been pretty difficult, or rather complex and not well supported out of the box, to do Edge Side Includes, but I’m happy to report that it will become much easier in Drupal 8, because Drupal 8 has the Blocks and Layouts Everywhere Initiative, codenamed SCOTCH. The single keyword that is really important here is Blocks Everywhere. Blocks you can regard as the atoms in the Drupal universe, as it were, the atoms of the Drupal webpage. In Drupal 5, 6 and 7, many things are blocks, but not everything is, so if we make everything a block, every single compartmentalized piece of content, then we can do much more interesting things.

In Drupal 8, we are adopting Symfony, which is another open source project, so we’re sharing code and sharing the effort there. We’re adopting the HttpKernel component from Symfony. What we’ll be doing is making sure that each part of the page, each block, each atom, can be requested individually, and when you, for example, ask a Drupal 8 website to render the front page, it will use in-process subrequests to build the page. So it will actually fire subrequests within the PHP process to render each of those blocks individually. As you can tell, this very closely resembles ESI and is actually pretty much the exact same thing. Drupal 8 will make it much easier, if not trivial, to support Edge Side Includes. So ESI plus CDN will become trivial in Drupal 8, but please remember that not every CDN supports Edge Side Includes.
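As a rough sketch of what such an in-process subrequest looks like with Symfony’s HttpKernel component (the block path is hypothetical, and $kernel is assumed to be the application’s HttpKernelInterface instance):

```php
<?php
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpKernel\HttpKernelInterface;

// Build a request for a single block of the page; no HTTP round trip
// happens, it is handled inside the same PHP process.
$subRequest = Request::create('/block/recent-comments');
$response = $kernel->handle($subRequest, HttpKernelInterface::SUB_REQUEST);

// The block's rendered HTML can be embedded in the full page, or, with
// ESI, the same path could instead be resolved at the CDN edge.
$blockHtml = $response->getContent();
```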

So now we have an idea of the two big, different ways of using a CDN, and I’ve clarified that one of them will become significantly easier in Drupal 8, so let’s move on to the key properties of a CDN. The single most important property, I believe, is the geographical spread, the points of presence. PoP stands for “point of presence.” A point of presence is essentially a location where the CDN has edge servers, as Chris pointed out already in his presentation, and at a PoP, a CDN connects with local ISPs. So the more connections with local ISPs around the world or in different regions, the closer that CDN is to the end user. What you need to do is make sure that the latency between your users and your content is as low as possible, globally speaking: for as many users as possible, you want the lowest latency possible, because then web pages will be rendered faster. So if you put those things together, what really matters is that you match your CDN to your audience. If you’re a company that’s mostly oriented towards European users, for example, then you would probably choose a CDN that has a strong European presence, that has many PoPs in Europe. It could also be that you go with a global CDN provider, provided that they do have a lot of presence in Europe.

So as you can see, it really depends on your needs and your target audience which CDN is the best choice from a geographical, lowest-latency point of view. The second most important property is the way you get files onto the CDN. There are two types: origin-pull CDNs and push CDNs. For origin-pull CDNs, there is no special transfer protocol involved. Your web server obviously speaks HTTP, because otherwise it would not be a web server, so what origin-pull CDNs do is, when they receive a request for a certain file, they just go back to the origin server, which is your web server, and get the file over there. The upside is of course that there’s virtually no setup. You do very, very little and everything will work just fine. The downside, exactly because it works almost automatically, is that there is very little flexibility in terms of what kind of preprocessing or optimization you can do before getting the file onto the CDN. Another downside can be redundant traffic. For most websites, this will not be a problem, since the requested files are so small that it’s not really an issue, but imagine a case where you have multi-gigabyte video streams, for example. As Chris already explained, edge servers at the PoPs of a CDN tend to clear out files that have been requested least often in a certain period of time, so it is possible that your video file has only been requested a few times in a given timeframe, meaning that it will be removed from the CDN temporarily until it’s requested again. But that means that your origin server, meaning your web server, will have to serve the file again to your CDN. So multiply that by a number of edge servers and you might be looking at significant traffic, but it really depends of course on your specific site and your specific goals.

That’s exactly where a push CDN can be advantageous. Exactly because it’s a push CDN, you have to push the files onto the CDN, so you control when a file is transferred from your origin server onto the CDN, and there is no redundant traffic, since it’s you who controls that. You also have more flexibility in terms of potential preprocessing, exactly because you push onto the CDN, but the big downside here is that there is a lot of setup involved. You have to write a script or code or some sync layer, whatever it is going to be; you have to somehow get the files onto the CDN, and that might be a lot of work. I failed to add this on the slide, but this is exactly what I tried to solve with my bachelor’s thesis. It’s called File Conveyor (FileConveyor.org) and it’s a Python daemon that is intended to simplify the setup of a push CDN. Essentially, its job is to smooth over the differences between different CDNs, so it makes it easy to switch from one to the other or to do advanced preprocessing and whatnot, but in any case, this is another component that you have to deal with, which you don’t have to do with an origin-pull CDN.

That actually brings us to the last thing, which is lock-in. Depending on the kind of CDN that you choose and its features, you may be looking at some, possibly significant, lock-in. For example, if you’re using Amazon S3 as a CDN, then you’re actually using a custom transfer protocol, which means that if you want to switch to another CDN, you have to rewrite your sync layer, your script, whatever it is, so that is some degree of lock-in. Some CDNs offer unique features, for example some kind of statistics, that you may not have in another CDN, and so on, so you have to be careful there. You have to weigh the differences to figure out which things are most important to you.

All of this is about the differences between the different CDNs. So how do you select a CDN? Which CDN should you choose? The first and foremost reason to go with a CDN is of course performance; that’s the whole reason we’re having this webinar. The most important thing is low latency. This is again about matching your CDN well to your target audience and making sure that the PoPs the CDN has correspond well with the geographical locations of your visitors. However, this is not always the single most important thing. If you’re serving a lot of small files, as on a typical website, it definitely is, but in the case of serving video streams, for example, or large software downloads, what might matter more is high throughput (some call this bandwidth), meaning that it doesn’t really matter if you have to wait 0.1 or 0.2 seconds for your video to start streaming; what you really want is for your video stream not to get interrupted. That’s the difference you have to weigh there.

The other aspect based upon which you might want to choose a CDN is the type, origin-pull versus push, because of the inherent differences in how you get files onto the CDN and how you integrate with them. There are also advanced CDN-specific features such as, for example, automatic lossless image optimization, real-time statistics, or authentication in the sense that, for example, you have an ecommerce website and you sell digital goods, meaning that you don’t want everybody to be able to access the files, so you want some kind of signed URL or [Audio Gap]. So it really depends on your use case. You have to make sure that the CDN offers the features that you really, really need.

Finally, of course, there are also the support aspects, different CDNs with different kinds of support, and the costs. Now, how can you maximally exploit a CDN for resources, meaning for static files? These are simple tricks that can have a significant performance impact, so it’s really recommended that you always use them. The first one is actually the one with probably the least performance impact, but it’s also the single most simple one: DNS prefetching. It’s just adding a small tag to your HTML head on every webpage where you use a CDN, or even where you don’t use a CDN, and in that tag you reference the DNS name, the domain name, of your CDN so that the web browser can already look up the IP address for your CDN. This tag sits at the top of the page, and only after it are the scripts, images and so on referenced, so by the time the browser gets to parsing the actual resources, the DNS lookup will already have happened and you don’t incur the DNS lookup wait time.
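In Drupal 7, a minimal sketch of adding that tag from a module or theme (the CDN hostname is a placeholder) could look like this:

```php
<?php
// Emits <link rel="dns-prefetch" href="//cdn.example.com"> in the <head>,
// letting the browser resolve the CDN hostname before any resource is
// requested from it.
drupal_add_html_head_link(array(
  'rel' => 'dns-prefetch',
  'href' => '//cdn.example.com',
));
```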

The second thing is auto-balancing over multiple CDN domain names, also known as domain sharding. This is very useful in cases where you have a lot of resources on a single webpage. For example, say you have an ecommerce web shop with 100 product images on a single webpage. The typical web browser will do between 6 and 10 simultaneous HTTP requests to a single domain name, meaning that only about 10 downloads are happening at the same time; only when one of those 10 is finished can the next one start, and so on. So as you can see, there is a lot of waiting going on for no good reason, and the way you solve that is by having multiple CDN domain names, multiple hostnames, and balancing over them automatically. For example, say you have four CDN domain names. You could automatically balance those 100 images over the four different domain names so that each has approximately the same amount, in this case 25, and then you get not 10 images being downloaded at the same time, but 40. So in this theoretical case, this would accelerate the download of those 100 images fourfold, which is a very significant increase, but again, it’s really only useful when you have a lot of images, or in any case a lot of resources, on a single page.
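A minimal sketch of that balancing, assuming four hypothetical CDN hostnames; hashing the file path keeps the mapping stable, so the same image always comes from the same hostname and browser caches stay warm:

```php
<?php
/**
 * Map a file path onto one of several CDN hostnames deterministically.
 */
function example_sharded_url($path) {
  $shards = array(
    'cdn1.example.com',
    'cdn2.example.com',
    'cdn3.example.com',
    'cdn4.example.com',
  );
  // Hash the path so a given file always lands on the same shard.
  $index = abs(crc32($path)) % count($shards);
  return '//' . $shards[$index] . '/' . ltrim($path, '/');
}
```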

The last one is actually the one with the most impact in my experience, also for very small websites, and it’s called Far Future expiration. It’s really simple, really logical. Essentially, browser caches are always faster than a CDN, unless you have an extremely, extremely slow hard disk drive. If you need to get a file from a CDN, you’re always going to incur some level of latency: you have to download the file, save the file to disk and so on. Browser caches avoid that. If the file is in the browser cache, you don’t need to go to the CDN. How do you make sure that files remain in the browser cache for as long as possible? Well, you mark them to expire many years from now, and that’s why it’s called Far Future expiration. The one downside is that if you change, for example, your web shop logo, and the file is cached in the browser cache, then it will not be downloaded again, exactly because it is still in the browser cache, so some subset of users will still see the old logo. The way you solve that is by using unique file URLs. You need to make sure that each file, whenever its contents change, is served from a different, unique file URL, and that will of course cause the browser to retrieve the new file. One interesting side aspect is that if you implement this, it will actually cause your CDN costs to go down, because there will be fewer requests to the CDN since the browser cache retains files longer. So yes, these are free tricks that are really useful.
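The far-future header itself is typically set at the web server level (Drupal’s default .htaccess already sets a two-week Expires default that you would raise). The unique-URL half can be sketched in Drupal 7 roughly like this, using a content hash as a hypothetical cache-busting query string:

```php
<?php
/**
 * Build a file URL that changes whenever the file's contents change, so
 * a far-future Expires header can never pin a stale copy in browsers.
 */
function example_versioned_file_url($relative_path) {
  // md5_file() yields a new hash, and thus a new URL, on every change.
  $hash = substr(md5_file(DRUPAL_ROOT . '/' . $relative_path), 0, 8);
  return file_create_url($relative_path) . '?v=' . $hash;
}
```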

Now that I’ve explained the different types of usage and overall CDN information, let’s move on to the Drupal CDN module. I maintain this module, and if you have any questions, I look forward to seeing them in the question queue. As you can see, this is the default admin screen for the CDN module, and there is a message at the top that says, “If you install the Advanced Help module, the CDN module will provide more and better help.” So if you’re installing it for the first time and still finding your way around, please install that and it will give you examples and technical background information, but in any case, there is very little to configure, as you will soon see. This is the general tab, and here you can either disable or enable the integration with the CDN, or you can enable testing mode. In testing mode, you can play around with it without harming your actual users. You can give specific users access to the files on the CDN by giving them a specific permission, so this is great for giving it a try.

The second tab is called “Details” and this is where the bulk of the work happens, the bulk of the configuration. Essentially, there are two modes: origin-pull CDNs, and File Conveyor for integrating with the project that I mentioned earlier for doing more advanced CDN integration. We are going to go with origin-pull; this is actually exactly how it’s configured on my personal website. All you really have to do is copy the CDN domain name that you get from your CDN provider, paste it in the CDN mapping field that you can see at the bottom of the current slide, and that’s it. Just hit save and you will have CDN integration. I really tried to make it as simple as possible; however, you can also enable Far Future expiration. The CDN module automatically applies all the aforementioned CDN-exploiting tricks. Far Future expiration obviously involves the unique file URL aspect, and the CDN module ships with several methods for generating unique file identifiers. You can even add your own.

The Drupal CDN module is great, I hope and believe, for simple use cases, or for many use cases, even more advanced ones; it also does domain sharding. But there are also times when you should not use it: when every millisecond matters. It’s really designed for ease of use and frontend performance, i.e., serving from a CDN in the easiest manner possible. It’s not designed to have the absolute lowest overhead possible. In that case, you should just write an implementation of “hook_file_url_alter”, which I added to Drupal 7, and that will allow you to easily do what you need to do without the overhead of an entire Drupal module. It’s also feasible or reasonable to use your own code when you have a very complex CDN mapping, but even there the CDN module has support, in the sense that it has a callback in which you can implement custom logic to determine which CDN should be used for which file.
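A minimal sketch of such a hook implementation for Drupal 7, rewriting only public files to a hypothetical CDN hostname:

```php
<?php
/**
 * Implements hook_file_url_alter().
 */
function mymodule_file_url_alter(&$uri) {
  // Only rewrite files in the public file system; leave private files
  // and already-absolute URLs untouched.
  if (file_uri_scheme($uri) == 'public') {
    $wrapper = file_stream_wrapper_get_instance_by_scheme('public');
    $path = $wrapper->getDirectoryPath() . '/' . file_uri_target($uri);
    // Protocol-relative URL served by the (hypothetical) CDN domain.
    $uri = '//cdn.example.com/' . $path;
  }
}
```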

So now I’ve explained the different ways you can use a CDN, how you can maximally exploit it, and how you can use it with Drupal, but what you really want to do is prove that the CDN integration you’ve done is actually having a positive performance impact on your website. That’s what these last two slides will be about. Ideally you are already doing continuous integration for your website or your application, which essentially makes sure that no new bugs enter the website and that everything continues to work correctly. You should also, in the best case, have continuous performance monitoring that makes sure that whenever you add new features or make improvements, you’re not introducing performance regressions. You want to make sure that your website stays as fast or gets faster, or at least, if it got slower, that you actually know it got slower.

So there are two ways to do that. The first is synthetic user monitoring, which is essentially a test script, but it only works in a few browsers and in a very controlled environment in terms of networking, OS and so on, so it’s not really realistic. It doesn’t give a realistic picture of what your end users are experiencing, but it is really great as a reference point for tracking the performance of your site as it evolves. Exactly because it is a controlled environment, the only variable is really the changing code, so for internal use this is excellent. But if you want to make sure that your website, as it is experienced by your actual visitors, improves, then what you need is real user monitoring. This actually measures your actual visitors, hence in all browsers and in real-world environments, and does give very realistic results. It shows you how fast your website is for real users. It also gives you the ability to see in which specific locations or browsers site performance is good or bad, so that you have more useful information for reproducing the problem and improving it for those specific cases.

So let’s look at synthetic user monitoring; I’m going to show you how you can do synthetic user monitoring for free, as well as real user monitoring for free. The first step here is to configure a dev or staging app or web server. Maybe you want to do this on production traffic, which is also possible, but I’m assuming that you want to do this internally so that you can track performance before changes are pushed live. What you need to do is ensure that your pages’ resources are served from the CDN domain. Then you can perform a test with the free and open source WebPageTest.org, with a node, i.e., a test server and browser, that is far away from your origin app server. Why one that is far away? Well, we want to make sure that the CDN is having a positive performance impact, and if the test node is far away from your app server, then by definition the latency to the origin is higher, and in theory, if the CDN is working well, it should have a lower latency. Hence, this should show a significant difference with the case where you’re not using a CDN.

So point three is testing with the CDN, and in point four, what we’re going to do is use WebPageTest’s scripting engine to point at your origin server instead of the CDN: you remap the CDN domain name to your origin server’s fixed IP address, run the test again, and then all you have to do is compare the results. Of course this is a single measurement; you can repeat it, and you have to make sure that what you’re looking at is actually representative of the real world, so I recommend repeating it a few times to at least make sure that the largest variance is mitigated.
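A sketch of such a WebPageTest script (hostnames and the IP address are placeholders; fields are tab-separated): setDns pins the CDN hostname to the origin’s address so the same page loads without the CDN for comparison.

```
setDns	cdn.example.com	203.0.113.10
navigate	http://www.example.com/
```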

Finally, real user monitoring. What you need to do then is ensure your production site’s pages’ resources are served from the CDN domain. Why production? Well, exactly because otherwise you’re not really testing with real users in production, unless of course you have some kind of mechanism for serving the newest version of your website to a subset of your users, in which case you can use that instead. In either case, what you need to do is make sure that you’re using the CDN for only a subset of your users, for example 50% of the users that you’re testing against, because we want to compare non-CDN traffic versus CDN traffic. Then install some kind of real user monitoring performance measurement tool (New Relic RUM and Torbit Insight both have free packages, so you can try either of those), and again compare the results.

So that’s all I wanted to share with you. I hope it was clear. If there’re any questions, then I’m sure I’ll hear about it later. I’ll now pass it on to Alex who will talk about Drupal plus Acquia.

Alex Jarvis: Hi everyone. Thanks, Wim. My name’s Alex Jarvis. I’m a senior technical account manager at Acquia, and it’s been my good fortune over the last two years to help some of Acquia’s biggest customers adopt Akamai and integrate Akamai tools to improve their sites. I’m going to go fairly quickly, at a high level, over the steps you’ll want to take on a Drupal site to take advantage of the Akamai network.

So the first rule of a CDN or Akamai integration is really that every site is unique. As Wim was pointing out, there are a lot of factors, and you need to customize the experience to what your site needs. I’m going to go over some best practices, but work with Akamai or your CDN provider to really understand the feature sets that they’re offering and to figure out how those work for your site, because that’s going to be the most important thing. That said, for Akamai in particular, there are certainly some best practices and some initial steps that you need to take to really be able to leverage the integration. At the most basic level, those changes are in settings.php. Drupal, as I hope you know, already has some CDN reverse proxy flags in settings.php, and you’ll want to uncomment those (they’re commented out by default). In particular, you’ll want to uncomment the “reverse_proxy”, “reverse_proxy_header” and “omit_vary_cookie” config lines. What that’s going to do is tell Drupal that it is now behind a reverse proxy, the CDN, so that it will handle its traffic differently; in particular, the reverse proxy header setting tells it to look at a different HTTP header for the real IP addresses of the users that are accessing it.

Akamai, for example, sends the real user’s address in a True-Client-IP HTTP header. The reason you want to use that is, for example, that in Drupal 7 there’s automatic throttling in place for things like failed login attempts. If you’re behind the CDN, all of your users coming through the same edge servers will look like they have the same IP, so if you have a university and a city that are all hitting the same edge network and one person in the city fails to log in repeatedly, suddenly the users in the university nearby are getting blocked, even though they have different IPs, because it all looks the same. So that’s the note there. Similarly, for Akamai and I believe others as well, the CDNs do not handle the Vary headers that Drupal sends out by default. Those are seen as cache busters and they will break the cacheability of your site, so setting omit_vary_cookie will solve that, because the Vary: Cookie header will not be included in responses.
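Put together, a minimal settings.php sketch for Drupal 7 behind Akamai, with the header name following the True-Client-IP convention mentioned above:

```php
<?php
// Drupal sits behind a reverse proxy / CDN.
$conf['reverse_proxy'] = TRUE;
// Read the real client IP from Akamai's True-Client-IP header.
$conf['reverse_proxy_header'] = 'HTTP_TRUE_CLIENT_IP';
// Don't send "Vary: Cookie", which many CDNs treat as uncacheable.
$conf['omit_vary_cookie'] = TRUE;
```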

Finally, another issue that comes up fairly frequently in basic configuration is the base URL for your content. Drupal will use the URL that the request came in on for all of the static assets, so this is your aggregated CSS and JS, and often your embedded content like images as well, and it will preserve that URL. But when you’re behind Akamai, when Akamai makes requests back to origin, it makes them against an Akamai origin name; the Akamai name may be something like “origin-akamaidomain.com”, and you don’t want users making requests past Akamai. You want them going through Akamai for everything, and to ensure that happens, you need to check what host the incoming request was for (that’s the line about forwarded hosts there), grab the name of the server being requested, and if that matches the origin domain, it means the request is coming to your origin from Akamai, and you want to rewrite your base URL to be the public Akamai-fronted domain so that all those static assets are served from that domain and come through Akamai, not to your origin directly.
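A rough settings.php sketch of that check (both hostnames are made up): when the request arrives on the Akamai origin name, rewrite $base_url to the public domain so generated asset URLs point back through Akamai.

```php
<?php
// If Akamai fetched this page via the origin hostname, make all
// generated asset URLs use the public, Akamai-fronted domain instead.
if (isset($_SERVER['HTTP_HOST']) &&
    $_SERVER['HTTP_HOST'] == 'origin-www.example.com') {
  $base_url = 'http://www.example.com';
}
```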

Alright, so we’ve got a quick question here; I think we’ll keep that for a later time. Two other best practices for Drupal and Akamai: Akamai provides a staging network that they set up, and by default they will point it at your production site along with their production network, so when you’re doing testing, you’re testing against production. A much more helpful configuration is to have Akamai point their staging network at your staging tier so that you can test things, change configurations and work with Akamai to make changes without impacting production at all, or having any risk of impacting your users. The second piece is that you should really have separate domains: a public, CDN-facing domain, which is where users come in, and an edit domain for your administrative users, where the people who are really creating the content and administering the site log in. This is a best practice with a CDN in general, in my experience, because you don’t want your administrators coming to the same site that the public is using, for a variety of reasons. Chief among those: the CDN is specifically designed to cache as much as it can. If you’re optimizing the site, you want it to be as performant as it can be and give the best experience to the users, and that’s not really the same objective that you have for your administrators, who by definition need to see an uncached version of the page, to really see the latest and greatest that’s happening on the site before anyone else. For that reason, they shouldn’t be going through the front door, as it were, because the objectives of those two experiences are quite different. With Akamai especially, you will otherwise have unpredictable experiences where administrators will not see the content that they expect, or they’ll be seeing cached things, or even worse, Akamai or whatever network you’re using can cache administrative content on the public-facing site, so a user comes to a page that an administrator was just on and sees your admin bar or other undesirable content.

Similarly, by splitting an edit domain out from the rest of the site, you can take advantage of a whole bunch of additional security measures to make sure that the only thing talking to your public domain is your CDN environment and the only things talking to your edit domain are your privileged users. I’ll talk about that a little bit more in a minute. Finally, as I already mentioned, you really want to make sure that you’re able to maximize cacheability and that you don’t need to put in special rules or needlessly complicate your caching strategy by trying to use the same domain for both purposes. As I mentioned a moment ago, since you’re securing access, the way I typically recommend approaching that, if at all possible and if your company is running an internal DNS server, is to have the edit domains entirely on your internal DNS so that no one outside of your network has access to them, and the few people who do need access can set up their own hosts file entries for that domain. Similarly, you can implement an IP whitelist that says, “I don’t allow any access to my edit domain except from this IP range inside my network, my administrators’ home IP addresses” (hopefully they’re using a VPN, and that may not even be necessary) “and my CDN or Akamai edge servers.”

So those are the best practices, and there’s a whole lot more that you can do; I’ll talk briefly about some of the other things that you can take advantage of once you have those split domains. One that I really enjoy, and that adds a lot of security to the site, is a blanked users_roles table. Drupal by default stores all of the users’ role mappings, so user ID 19 has these roles in the users_roles table. You create a copy of that table, and if you’re using MySQL, you give it the BLACKHOLE storage engine, which basically writes everything to a black hole: nothing is actually stored. With a separate edit domain, you can then add a database prefix of “blank_” to your users_roles table on the public domain, and Drupal will use this newly BLACKHOLE’d table when it does its lookups. As a result, everyone on the public site will have no roles assigned, so even someone who’s an administrator on the site, if they accidentally log in on the public-facing domain, will not have any roles assigned to them, and there is no risk that they can access administrative content and have it cached for the public. It’s fairly easy to set up and it gives you a really nice benefit. You only have to account for content that’s restricted to authenticated users: if you have some content, like comments, that is only available to authenticated users, those users will still be authenticated, but they will not have any other privileged roles assigned to them.
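A sketch of that setup, with illustrative names: first create the decoy table in MySQL, then, in settings.php, prefix only the users_roles table for requests on the public domain.

```php
<?php
// One-time MySQL setup (run manually):
//   CREATE TABLE blank_users_roles LIKE users_roles;
//   ALTER TABLE blank_users_roles ENGINE = BLACKHOLE;
//
// settings.php: on the public-facing domain, role lookups hit the empty
// BLACKHOLE copy, so every visitor appears to have no roles.
if ($_SERVER['HTTP_HOST'] == 'www.example.com') {
  $databases['default']['default']['prefix'] = array(
    'default' => '',
    'users_roles' => 'blank_',
  );
}
```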

Similarly, you can mask that you’re even a Drupal site, or that your site allows login or has any administrative functions at all. By modifying your site’s .htaccess file, you can again check the incoming domain, and if it matches the public-facing site, you can return 403 or even 404 Not Found for administrative paths, the user path, and the directories that are in the site by default but not typically needed for serving site content. And if you really want to mask that it’s Drupal, for example, you can disallow all access to node paths and only allow your aliases to be accessed. This gives you a lot of control over customizing the user experience and securing the site in a way that isn’t generally convenient otherwise.
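For instance, a hedged .htaccess sketch (the public hostname is hypothetical) that returns 404 for user and admin paths on the public-facing domain, added above Drupal’s existing mod_rewrite rules:

```apacheconf
# On the public domain, pretend login/admin paths don't exist.
RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
RewriteRule ^(user|admin)(/.*)?$ - [R=404,L]
```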

So, some other things; again, this is fairly high level and site-specific, but there are some other cool options you have with something like Akamai, or a similarly good CDN, in front of your site. You can work with them and set up exclusion paths for things that may not be safe to cache in your CDN: things to look out for are login and logout paths, AJAX callbacks, and obviously any content creation paths; any of those need to be excluded. Another issue that comes up frequently, if you’re caching content on your site for a long time with a long TTL (time to live) because certain pieces of content don’t change frequently, is that Drupal’s aggregated CSS and JS files by default go away after a period of time. That can be problematic if the CDN still has the content of the page in place but no longer has all the static assets in cache, and then comes back to your origin and requests aggregated files that haven’t existed for a week or however long. There are a couple of options for that. In Drupal 7, there’s a new variable, “drupal_stale_file_threshold”, that you can increase to be as long as your maximum TTL on your CDN, and it will ensure that those files are preserved for at least that length of time. In Drupal 6, that variable is not present; however, there is the advanced aggregation module (“advagg”) available that can similarly be set to preserve these files for an extended period.
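For example, a settings.php sketch assuming a maximum CDN TTL of 90 days:

```php
<?php
// Keep stale aggregated CSS/JS files on disk at least as long as the
// longest CDN TTL, so long-cached pages can still fetch them.
$conf['drupal_stale_file_threshold'] = 7776000; // 90 days in seconds.
```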

Some other things that often come into play are issues around cookie domains, making sure that sessions are handled as you’d expect; you may need to change the cookie domain in your settings. And if you’re leveraging Akamai in particular, Akamai has a lot of other really nice options for using their services, including NetStorage, where you can switch Akamai from being a pull-based system, where it comes to origin for its requests, to a push-based system (Wim was giving the benefits of those earlier) where you push the assets that you want served to the edge.

Moderator: Alex?

Alex Jarvis: Yes?

Moderator: I’m sorry to interrupt, but I just wanted to say thank you to anyone that has to sign off now. If anyone wants to stay on and finish the presentation and ask questions, we’ll continue.

Alex Jarvis: I’m sorry; we’ve run a little bit over. I’m almost done here, so I’ll wrap up really quickly. So NetStorage has nice advantages for going from a pull- to a push-based system. Similarly, for securing the site, Akamai SiteShield allows you to restrict access back to your origin web servers, and if you really hone the advantages of CDNs and Akamai, you can do some crazy things, like allow users to sign in, get details about what roles they have on a site, and then send them over to your Akamai domain and give them customized content as an anonymous user that’s still customized to the roles they have. It’s cool things like that, which Akamai and Acquia have done together, that we’d be happy to help you with, and Acquia Cloud is set up to do this. Aside from the initial configuration that I already mentioned, we work out of the box with Akamai on these things, and the sky is the limit on some of the ways you can optimize and enhance the experience on your website.

So that’s all I had. Thank you so much and I believe I may give it back to Kieran here and we’ll have some Q&A.

Kieran Lal: Great. Thanks, Alex. That was really great content from everybody. One of the things we found during the presentation was that people were either asking questions directly of the speakers or maybe there’s something going on with WebEx; as the organizers, we only saw some of the questions. So I’d like to ask the panelists, Chris and Wim and Alex, if you’ve seen any additional questions show up in your windows, please assign them to me. We’re going to go ahead and take a couple of questions that were directed at Wim earlier, which Wim has had a chance to read; they were around talking a little bit more about targeting assets that are pushed to a CDN versus ones that are not. So Wim, do you want to take that away?

Wim Leers: Sure. So essentially the question was “Can you talk a bit more about targeting what assets are pushed to a CDN versus which ones are not?” Looking at the exact timestamp, I think this question relates to the Drupal CDN module. In general, you can implement any kind of logic you want to determine which files are served by the CDN or not. In the case of a push CDN, you control which files are pushed. In the case of an origin-pull CDN, you control the URL, and if you control the URL, then you determine whether it’s served from the CDN or not, because, for example, if your CDN domain is mysite.cdn.com and you use that domain name, then the origin-pull CDN will come back to your origin and get the file, so it’s served from the CDN. If you decide to serve it from mysite.com, then it won’t be. So that’s the general answer. In the specific case of the Drupal CDN module, what you do there is essentially say, “Hey, these specific file types should be served from the CDN.” So you list the domain name, then you list file extensions, for example CSS, JS, JPEG, PNG, GIF, whatnot, and those files are then served from the CDN. So it’s based on file extension in the case of the CDN module, because that’s the easiest, most understandable and most performant solution there.
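For reference, a sketch of what such an extension-based mapping can look like when set from settings.php (the domain is a placeholder, and the variable name is as I recall it from the Drupal 7 CDN module; the same value is normally entered in the module’s “CDN mapping” field):

```php
<?php
// Serve only these file types from the CDN.
$conf['cdn_basic_mapping'] = '//cdn.example.com|.css .js .jpg .jpeg .png .gif';
```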

The other question was “Can dynamic responses, not just static assets, be targeted for the CDN?” The answer is yes, but it really depends on what you mean by dynamic responses. Essentially anything is possible. For example, a thumbnail that shows the latest promoted product and changes every day can definitely be a use case. The tricky thing is always to ensure that it’s not cached for too long, or to make sure that it’s cleared when it needs to be cleared. In the case of Akamai, for example, they support explicit purging, so there you can perform a purge command on that specific file and the CDN will then come and get the new version. That’s one way of doing it, but the most HTTP-standard way is by using proper Cache-Control headers. Essentially what you do there is say this file can be cached for X hours or X days or X seconds, and if your CDN takes that specific header into account, then you can rely on it. But you should make sure that your CDN actually honors those headers correctly, because that’s not the case for every single CDN, or maybe they ignore any caching duration below, for example, five minutes or one minute, because otherwise the caching doesn’t make sense from a CDN point of view.
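As a minimal sketch of that standards-based approach in Drupal 7 (the durations are arbitrary examples):

```php
<?php
// Let shared caches such as a CDN keep this response for an hour while
// browsers revalidate after five minutes; the CDN must honor these.
drupal_add_http_header('Cache-Control', 'public, max-age=300, s-maxage=3600');
```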

So the answer depends there, but if you’re talking about dynamic responses in the sense of page caching, serving the entire website from the CDN, then I think that maybe Alex or Chris from Akamai can answer the question better because I have little experience in that area.

Alex Jarvis: Sure, I’ll pick up on that. I’m going to jump back real quickly to the first question about pushing content to the CDN and just say, from an Akamai perspective, if you’re using NetStorage, your Akamai configuration will control what kinds of assets should be looked up in that storage first, and you explicitly control which files you push up to that environment. Speaking more to the second question about dynamic content: in most cases, as Wim indicated, you have some options in being creative with your cache control, but if you really need something to be completely dynamic and it cannot be cached at all, then what you’re usually looking at is some form of ESI, where you expose that dynamic piece in some individually-requestable way so that when the page renders, three-fourths of the page, or as much of it as possible, comes from the cache, and the dynamic pieces make a callback to some service or some portion of the site that can deliver that dynamic content very efficiently and very quickly, because you’re bypassing the CDN cache for that. In my experience, it’s not particularly feasible to have the CDN directly hold dynamic content, because the CDN goes back to the origin for anything that it doesn’t have on itself. It’s not processing things. In order to be fast, it needs to be able to just serve something itself and not be concerned with computing it, so if you need dynamic content, you really need to be looking at how origin can provide it quickly.

Kieran Lal: Okay, great. Thanks, Alex. We’ve got one more question; let me just pull it up here so that I can read it. It was from Kai, and he said, “We’ve seen examples in which media files are hosted independently from the site itself. When or why would this be used instead of a traditional CDN?”

Alex Jarvis: I guess I’m not quite following the question. It sounds to me like that basically is a CDN, or at least an approach to sharding the assets so that they can be requested separately from the site content. I’d be curious if perhaps I’m misunderstanding the question slightly.

Kieran Lal: Kai, feel free to jump in and clarify your question, but I guess my thought would be when do I put my videos on YouTube and have them embedded and delivered as part of my site from YouTube? Then when do I host them, but have the videos delivered directly from the CDN or something along those lines?

Alex Jarvis: So in a case like that, basically anything that you can offload from your origin is a win. The advantage of running your own CDN versus something like YouTube is that you have a lot more control over where that content is coming from and how it’s being handled. If you’re relying on a third party, then you’re also relying on their content delivery system, and you don’t know how YouTube distributes access to their videos. In this case, with YouTube specifically, we’re talking about Google and we can probably assume it’s going to be pretty good, but you don’t know precisely what’s happening, whereas with a CDN you by definition have a lot more control, even if it’s in an abstract flavor. You don’t know where every single Akamai edge server is, for instance, but you do know that you can look at the statistics, get information about what’s being served, and get a sense of where in the world those things are going and how they’re being used. So it really comes down to having a better sense of where your content is, how it’s being accessed, by whom, and how it’s being served. That’s where having a CDN that you’ve set up and understand empowers you in ways that a third-party service, which may not provide the same level of information, doesn’t.

Kieran Lal: Great. Thanks, Alex. Kai, if you needed to – okay, so the example he was thinking about was, say, audio hosted independently, but I think he understands based on your response, so great.

Okay, we’re a good chunk over, but as promised, we had some really outstanding content and some real depth and expertise from all of our speakers, from Chris and from Wim and from Alex, so thanks everybody for attending. Hannah, was there anything else you wanted to say to wrap up?

Moderator: No. Thanks everyone for attending. We’ll send you slides and the recording within the next 48 hours.

Alex Jarvis: Alright. Thank you all very much and happy holidays to everyone.

Click to see video transcript

Hannah: Today’s webinar is Accessible Theming in Drupal with Dan Mouyard, who’s a Senior Interface Engineer at Forum One Communications. I’m Hannah Corey, a marketing specialist here at Acquia.

Dan: Hello, I’m hoping everybody can see the slides, Accessible Theming in Drupal. As Hannah said, my name is Dan Mouyard, Senior Interface Engineer at Forum One Communications. I’ve been working in Drupal for about four years now and I’ve been involved in Drupal 8 core development: I’ve been involved with the HTML5 Initiative and the mobile initiative, and I’ve also been a member of the accessibility team for the past couple of years, where we get together about once a month and talk over some of the accessibility issues going on in Drupal core and contrib. Interface engineer is basically a fancy way of saying themer, so I have a lot of experience with HTML, CSS and JavaScript as well as PHP.

As a personal note, I’m also legally blind and I wear hearing aids, so a lot of the accessibility issues that I work with every day affect me on a personal level, and I hope I’ll be able to share some of that insight. The focus for today’s talk is primarily on theming, especially in Drupal 7, although for some of the topics I’ll also bring in some of the techniques that we’ve been working on for Drupal 8.

Hopefully you’ll have a good grasp of HTML, CSS and maybe a little bit of JavaScript, as well as basic theming topics such as template files. Also, since the title is Accessible Theming, most of you are probably already familiar with accessibility, so I won’t go into too much detail about what it is or why it’s important. But I do want to point out that my view on accessibility is a little bit different, and there’s a great quote from Tim Berners-Lee about the world wide web: “The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect.” Note that “everyone”, because to me that is the focus of what we do when we build websites: we want to build them in such a way that everyone can access them regardless of whatever limitations they may have. Those could be disability limitations, but they could also be device limitations or internet connection limitations. There’s also what’s called situational impairment, such as being in a bar where it’s very noisy and you’re trying to watch TV; that situation can hamper things, so something like closed captioning can really help, and the same thing occurs on the web. There are things we can do that are vitally important to those with disabilities, but they also help everyone.

Before I get into some of the nitty-gritty details of the techniques for advanced theming, I just want to highlight really quickly my overall philosophy, and that’s the idea called progressive enhancement. The whole idea of progressive enhancement is to layer on functionality from the simplest and most basic to the more advanced.

First you start off with HTML that most devices and browsers can understand, then you layer on CSS, and then you layer on behavior with JavaScript. Even within those different areas you can layer on more complexity: with HTML you can layer on some of the newer HTML5 elements, and with CSS you can layer on some of the more advanced CSS3 effects, like rounded corners and stuff like that, and the same again with JavaScript.

The reason for this ground-up approach is that regardless of how you’re accessing the page, you still get a usable interface and a usable way to consume that content. The main things that I’ll be covering today: first I’ll go over some of the accessibility features in Drupal core that you should really be familiar with when theming, then I’ll cover some basics of creating accessible HTML and how you can implement that in your theme. I’ll also go over quite a few different CSS techniques to make things more accessible, and then finally I’ll go over ARIA, which is a way to add what is basically metadata and semantics to HTML that helps assistive technology understand it.

First, Drupal core. What have we done with Drupal core, particularly Drupal 7, that makes things more accessible and that you really should be familiar with when you’re theming? The first thing is that we’ve added three helper classes: element-hidden, element-invisible and element-focusable.

The first one, element-hidden: here’s the CSS. Basically it just applies the CSS property display: none. What this does is tell browsers and any other devices, “Hey, this part of the HTML, just ignore it,” so browsers, devices and assistive technologies such as screen readers won’t present that information at all. There are times, however, when you want stuff to be hidden visually yet still have that information available to devices such as screen readers, and that’s where we use element-invisible. You can add this class to anything in Drupal and it’ll essentially hide it from view, yet it’s still there, so screen readers will read aloud whatever the content is inside of it. And finally, building on element-invisible, there’s the element-focusable class.

Here’s the CSS. Basically what it does is that for any element hidden using element-invisible, if you also add the element-focusable class to it, then whenever keyboard users or other devices tab through the markup and that element receives focus, it’ll be brought back into view. A good example of this is skip links.
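For reference, the Drupal 7 helper classes look roughly like this (a sketch of the core CSS, not a verbatim copy):

    .element-hidden {
      display: none;  /* hidden from everyone, including screen readers */
    }
    .element-invisible {
      position: absolute !important;
      clip: rect(1px, 1px, 1px, 1px);  /* visually hidden, still read aloud */
      overflow: hidden;
      height: 1px;
    }
    .element-invisible.element-focusable:active,
    .element-invisible.element-focusable:focus {
      position: static !important;  /* pops back into view on keyboard focus */
      clip: auto;
      overflow: visible;
      height: auto;
    }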

Skip links are the first links on a page, and they allow users to skip directly to other sections of the same page, most often the main content. Drupal itself uses only one skip link, which links to the main content, but you can have others, such as a link to the main navigation or a link to the search box. These are the first links that occur on the page, so you really want to limit how many there are; you should never have more than three.

Here’s an example in the Bartik theme, and you can see at the very top where it says “Skip to main content”: that’s the skip link. Normally you don’t see it because it has the element-invisible class, but when a keyboard user tabs to it, because it also uses element-focusable, it pops into view, and that allows people to jump down to the main content.

That’s also used in the Seven theme; here’s an example. Seven is the default admin theme for Drupal 7. Again, if you tab, the first thing that pops up is the skip to main content link. This is done in the HTML template, which is provided by the core system module, by adding just a little bit of HTML at the very top, right after the body tag. The important thing to focus on here is the target of that link, where it says main-content. That’s important because it needs to be a valid link target, and sometimes when new themes are created they ignore the skip link, or in the page template they don’t provide the target for that link.
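A minimal sketch of the pattern, close to what Drupal 7’s html.tpl.php and page.tpl.php do:

    <!-- In the HTML template, right after the body tag: -->
    <a href="#main-content" class="element-invisible element-focusable">
      Skip to main content
    </a>

    <!-- In the page template, the target the skip link needs: -->
    <a id="main-content"></a>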

So in the page template you should have one of two things: either an anchor tag with the ID main-content, or a div or some other HTML element with the ID main-content. That enables the skip link to function correctly. And then finally, in Drupal core we’ve done a lot of work on forms. One thing that’s really important about forms and accessibility is that all form elements need to have either a label or a title attribute, and Drupal’s form API uses the title property to set these.

Now, Drupal’s form API is very powerful. It uses arrays to help developers build forms, and here’s a little snippet of PHP. We’re creating a new form element, and just for the sake of this demo we’re creating a text field, such as you’d use for names and stuff like that. Then there’s the title property, where we’re setting it to “Description”, and for those of you who are not familiar, the t() function just allows that string to be translated.
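The snippet he’s describing is the standard Drupal 7 Form API pattern; roughly this (the array key here is invented for the example):

    <?php
    // A text field whose label is correctly associated with the element.
    $form['description'] = array(
      '#type' => 'textfield',
      '#title' => t('Description'),  // t() makes the label translatable
    );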

What this will do is create a text field form element whose label is “Description”, and the markup will be output in such a way that the label is correctly linked to that element. But there are times when theming when you might not want to show that label, or you might want it as an attribute instead, or you might want to move it around. In Drupal 7 the form API has the #title_display property, and there are four possible options.

The default is ‘before’, where the label shows up before the element. The next is ‘after’, where the label comes after the element in the markup order. Then there’s ‘invisible’, which just adds the element-invisible class to the label: it’s still there, and screen reader users can still hear the association between that label and the element. And finally there’s ‘attribute’, which, instead of outputting a label for the form element, sets that title as the title attribute of the element.

Here’s an example, the same one we had before, where we’ve added the #title_display property and set it to ‘invisible’. Again, this will output a text field with “Description” as the label for that element, and the label will have the class element-invisible, so it won’t be visible; however, it will still be read by screen readers.
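The same sketch with the label hidden visually but kept for screen readers:

    <?php
    $form['description'] = array(
      '#type' => 'textfield',
      '#title' => t('Description'),
      '#title_display' => 'invisible',  // label gets the element-invisible class
    );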

Next, HTML theming, and what you need to really focus on for accessibility. The first thing, which is just a general best practice, is that you want to have semantic markup. Here’s some example code of an article, perhaps a node or something like that. You can see where it uses some of the HTML5 elements: we have an article, we have the heading, and the footer element, which is new in HTML5 and doesn’t necessarily have to be at the bottom; it’s just a semantic way of saying, “Hey, this is information about this part of the markup.”

There’s also the figure element for adding images with a caption, and the paragraph where you can emphasize things, and even an abbreviation. You could build all of this with different, unsemantic code, and with theming, just using divs and classes and stuff, it would look exactly the same; however, you’d lose some of that semantic meaning. Some of that meaning doesn’t make much difference accessibility-wise today, but in the future, especially with some of the HTML5 elements, newer assistive technology will be able to take advantage of this information. You can change this markup, usually in template files in Drupal.
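A sketch of the kind of semantic markup he’s describing, with the content invented for illustration:

    <article>
      <header>
        <h2>Article title</h2>
      </header>
      <footer>Posted by the author on <time datetime="2012-12-12">December 12</time>.</footer>
      <p>Body text with <em>emphasis</em> and a <abbr title="Content Management System">CMS</abbr> abbreviation.</p>
      <figure>
        <img src="photo.jpg" alt="Short description of the photo">
        <figcaption>A caption for the image.</figcaption>
      </figure>
    </article>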

It’s pretty easy: you can just open up a template and rename a div to article, something like that. Where it gets really tricky, however, is when you have custom content types with, say, a bunch of fields on them, because by default the markup Drupal outputs for those fields is pretty cluttered; it has to be usable, of course, across a wide variety of use cases.

You can use individual field templates to override that markup however you want, but it’s very difficult to create field templates that work across all projects. What I recommend is using the Fences module, at drupal.org/project/fences, which lets you define the markup when you create those content types and fields.

Here is a content type where you can manage fields. If you go under operations and click edit next to one of those fields, that’s where you normally set things like: Is this required? What’s the maximum length? What’s the default value? What the Fences module adds is the ability to define the wrapper markup for that particular field.

Usually, when you’re creating these content types is when you have the best idea of the semantic meaning of those individual fields. The next thing to really focus on for HTML is that the document order is the tab order. If you go to any HTML page and view source, you see all the markup, and machines, assistive technology and screen readers will read through it in the order of that code. That’s very important because keyboard users, who might have visibility issues and can’t use a touch device or a mouse, have to use the Tab key on the keyboard or some other input device. They tab through the different areas on the page, and it goes in the same order as the code.

Here’s an example, looking at the page template and the default order of things. It has the header, the navigation, the breadcrumb, the main area and then the footer, and within the header there’s the logo, the site name and slogan, and then any blocks assigned to header regions come after that. You can see the order of things in the code. When somebody is tabbing through with a keyboard, you have to be mindful of this order, because this is the order in which they will hit things.

You’ll also notice that the header and the navigation come at the top, so you have to go through quite a bit before you actually reach the main content. That’s the big benefit of the skip link: because the skip link is in the HTML template, which is a wrapper for the page template, it comes first, so if you hit it you can quickly jump down to the content. You don’t have to keep tabbing through all of the logos and the navigation and that stuff.

Another thing with HTML, in addition to the source order, is the markup that you use. In Drupal 8, with the HTML5 Initiative, we went through all the templates and tried to use the best HTML5 elements that we could. In this example, the page template, we can use the new HTML5 elements: the header element, the nav element for the navigation, the section element for different sections, and the footer element.
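A rough before-and-after of that change, simplified:

    <!-- Drupal 7 page template: generic divs -->
    <div id="header">...</div>
    <div id="navigation">...</div>
    <div id="content">...</div>
    <div id="footer">...</div>

    <!-- Drupal 8: the HTML5 elements carry the meaning -->
    <header>...</header>
    <nav>...</nav>
    <section>...</section>
    <footer>...</footer>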

The next thing you really want to be mindful of accessibility-wise when theming is images, and pretty much the one thing you have to focus on is text alternatives for that content. It’s just like with videos, where you need some text so that people who can’t see or hear the video, or perhaps have a Flash blocker and can’t see the video at all, still get the same information. The same applies to images, and the primary way to give alternative text for images is with the alt attribute.

Now, the key things to keep in mind with the alt attribute: one, every image element needs to have an alt attribute. For decorative images, perhaps a flower that’s part of the design but not really part of the content, you can just leave the alt attribute empty. If it’s a content image, then you want the alt text to be a short, concise description of the image. And if that image happens to be inside a link, pointing somewhere, then the alt text should describe the destination of that link. The reason you want an empty alt attribute even on decorative images is that most screen readers, if they see the empty alt, will ignore the image, which is what you want; otherwise the screen reader will say “blank image”, “this is an image” or “unknown image”, repeating that in a way that really interrupts the flow of content.
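Those rules in markup form (filenames invented):

    <!-- Decorative image: empty alt so screen readers skip it entirely. -->
    <img src="flower.png" alt="">

    <!-- Content image: a short, concise description. -->
    <img src="sales-chart.png" alt="Bar chart of monthly sales for 2012">

    <!-- Image inside a link: the alt text describes the destination. -->
    <a href="/reports"><img src="report-icon.png" alt="View all reports"></a>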

Also with HTML, as we move beyond the desktop to tablets and mobile devices, you want to keep in mind the viewport. The viewport is essentially what the screen is for that particular device: how does it render the page? What I’ve seen all too often are these two viewport meta tags.

The first one sets an initial scale, a minimum scale and a maximum scale, all to one, and that prevents zooming, which you really don’t want to do, because on mobile devices and tablets the text is sometimes too small for people with vision problems. They want to pinch to zoom in, and they can’t, so this prevents them from being able to read that text.

The second one prevents scrolling: somebody might have zoomed in, but they can’t scroll to the part of the page that’s now outside the viewport. What I recommend is to just have width=device-width. This works very well on all devices and just says, “Hey, whatever the width of the device is, set that as the width of this page.” It allows zooming, scrolling and all that kind of stuff.
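In markup form, roughly what he’s describing (the exact anti-pattern tags vary; these are common variants):

    <!-- Avoid: locking the scale prevents zooming. -->
    <meta name="viewport"
          content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1">

    <!-- Avoid: user-scalable=no likewise locks users out. -->
    <meta name="viewport" content="width=device-width, user-scalable=no">

    <!-- Recommended: set only the width and leave zooming alone. -->
    <meta name="viewport" content="width=device-width">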

Okay, so the next big section is CSS, all the styles, and there’s a bunch of different stuff you can do with CSS to make sure your site is accessible. The first thing is that you should be familiar with image replacement techniques. Just like we had images in HTML with alt text, sometimes you might want to add decorative images as CSS backgrounds, but you still want screen readers and search engines to be able to get the text and the information.

Here’s an example where you have a header whose text is “Visit the Capitol”. You still get that semantic information, but maybe you want to show an image instead, so you can use CSS to define a background image for that header. It’s important that you set the height and the width to the dimensions of the image. There are a couple of different image replacement techniques you can use, and the one I’m showing here is one of the newer ones: it hides the text and shows only the background image.
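The newer technique he’s showing looks roughly like this (selector and dimensions invented):

    .site-title {
      width: 320px;   /* match the image dimensions */
      height: 80px;
      background: url(visit-the-capitol.png) no-repeat;
      /* Push the text out of the box; screen readers and
         search engines still get it. */
      text-indent: 100%;
      white-space: nowrap;
      overflow: hidden;
    }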

The next important area for CSS is styling links, and there are three main ideas to keep in mind when you’re styling links. One: you want to make them obvious. If you have links on the page, you want people to know right away, “Hey, that’s a link I can interact with.” Two: you want to design all the states of links. There’s the default state when the page first comes up; you also want something different for hover and focus, for when somebody mouses over a link or tabs through and focuses on it; there’s the active state for when somebody actually clicks the link; and there’s visited, so people know where they’ve been. And three: you want to make them easy to click.

This is just general usability best practice, and it’s also very helpful for people with mobility problems who have a hard time being very accurate with their mouse movements. It’s also very helpful on tablets and touch devices, where people have to use their fingers, and fingers are really big compared to the links they’re touching.

One cool trick that I’ve used with CSS and links: if you don’t have enough time to design the focus state, which would be ideal, a really good default is that whenever you set CSS properties on hover, do the same thing for focus. This way, if you add a background, a different color or stuff like that whenever a mouse hovers over a link, the same styles get applied when somebody is tabbing through on a keyboard, and they can easily see where they are and what’s highlighted.
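In practice that’s as simple as pairing the selectors (colors invented):

    a:hover,
    a:focus {
      color: #fff;
      background-color: #036;
    }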

Another problem I often see is with the link outline. Quite often in reset style sheets you’ll see outline: 0 applied generally to all anchors, because in certain browsers, if you click on a link, you get this outline that sometimes doesn’t look good. However, that outline is very important for keyboard users who are tabbing through, so a better default in your reset is to set the outline to zero only for the hover and active states, and make sure it’s still there for the focus state. That way keyboard users still get the benefit of the outline.
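So instead of the blanket reset, something like this keeps the outline for keyboard users:

    /* Avoid: kills the keyboard focus indicator everywhere. */
    a { outline: 0; }

    /* Better: drop it only where the mouse is already giving feedback. */
    a:hover,
    a:active { outline: 0; }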

Finally, you want a big target area for people to click on, and what I like to use as the default for anchor tags is to set the margin to negative two pixels and the padding to two pixels. This works on all the inline links and basically adds two pixels of clickable area around each link. Also, if you’re tabbing through, the outline won’t be right up against the text anymore; it’ll be two pixels out, which makes the text a lot more readable.
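His default for inline links, as described:

    a {
      margin: -2px;   /* the negative margin cancels the padding ... */
      padding: 2px;   /* ... so layout is unchanged but the hit area grows */
    }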

The one caveat, if you’re putting this in your reset, is that whenever you’re styling other links, such as menu links and stuff like that, you want to make sure you’re overriding the margins and the paddings so it doesn’t screw things up.

A big part of making themes accessible with CSS is to really pay attention to typography. It’s been said that web design is 90% typography, because what people go to websites for is to get content and read that information, and you want to make that as easy as possible for people to do.

Here’s a great quote from Emil Ruder, who was the head of the Swiss design school in Basel and a big influence on a lot of typography work: “A printed work which cannot be read becomes a product without purpose.” The essential purpose of text is to be read. So some general things to keep in mind with typography: first of all, you want to choose the right fonts. You want to make sure they’re legible, and you want to do as much as you can to make sure they’re readable and easy for people to make out.

Some display fonts may look cool, but they’re a lot harder to read, so you have to balance the design against readability. Another thing is that you want to pay attention to the measure.

The measure is a typographic term that essentially means how wide a line of text is. If you have a block of paragraph text, the measure is how wide it is. You don’t want the measure to be too wide, because as you read along a line, get to the end and come back to the beginning, if that distance is too great you miss your landing spot; it’s hard to see where the next line is. But at the same time, you don’t want the measure to be too narrow, because then your eyes become like a pinball machine, jumping back and forth very quickly from one line to the next.

Another thing is to use appropriate leading. Leading is another typographic term, and it basically means how much space there is between lines of text; in CSS this is the line-height property. Again, you just want to get it right: you don’t want too much space between lines, and you don’t want so little that they crowd together.
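A sketch of both ideas in CSS; the values are reasonable defaults, not rules:

    p {
      max-width: 33em;   /* measure: keeps lines a comfortable width */
      line-height: 1.5;  /* leading: breathing room between lines */
    }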

Another good general rule of typography is to create a nice hierarchy, in the sense of heading structure: you want the heading structure to be such that you know this chunk of text belongs in this area. You also want to create hierarchy with links and stuff to make things easier to scan, and you want to use font sizes to make this hierarchy stand out. Then there’s some more detailed stuff to pay attention to.

The first big one is font size. For any of the body copy, so when you go to an article page, you want that text to be at the default font size; in your CSS you want it to be 100%, which is basically 16 pixels in all browsers. Browsers set it at that size for a reason: it’s very readable, and back when the spec was first defined, 16 pixels on a monitor was about the same size as the point size you would see in a book. One of the good things about using the default font size for body copy, going back to what I mentioned before about hierarchy, is that you have a much broader spectrum of font sizes to use for some of the smaller detail stuff: the byline, the “more information” link, that kind of stuff.

I remember several years ago people would set the body copy at 13 pixels or 12 pixels, and because it was very, very small it was hard to read, and everything looked the same because you couldn’t really get much smaller than that: the byline and the “more information” link were all the same small size. Small text creates eye strain, especially on monitors, because they’re not as crisp as a printed magazine or newspaper. And one final good hint for when you’re designing with font sizes is to use actual text rather than Latin lorem ipsum, because with actual text there, as you’re designing, you can tell right away whether something is readable or not.
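A sketch of the hierarchy that the 100% default makes room for (sizes invented):

    body { font-size: 100%; }            /* 16px body copy in most browsers */
    h1 { font-size: 2em; }               /* headings scale up from it */
    .byline,
    .more-link { font-size: 0.8125em; }  /* roughly 13px, reserved for detail text */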

The other thing to pay attention to typography-wise is text alignment. In general you want to left-align the main chunk of body text, or right-align it if you have a right-to-left language, and you want to avoid fully justified text, because it creates rivers. Rivers are the larger spaces between words that flow down through the document, and they can really throw off the rhythm of the text; they’re especially hard for people who are dyslexic. Also, left-aligned text makes it easier to scan: easier to jump from paragraph to paragraph and scan through links and stuff like that.

Finally, you want to make sure there’s enough color contrast between the text and what’s behind it, which is very good at increasing visibility and making things easier for people to read. There’s a great website, contrast dot com, which is a great example of how high-contrast design can still be beautiful, and it also gives a lot of good facts about why this is important. With color contrast, smaller text needs higher contrast precisely because it’s smaller; you need to make it stand out a little more. One caveat, however, is that you can also have too much contrast. I don’t see this mentioned often when people talk about accessibility and contrast, but for people who are dyslexic, too much contrast can actually make things harder to read. Some good tools I recommend: one really good tool that was put out recently is Contrast Ratio. It uses the W3C WCAG 2.0 AA and AAA contrast ratios, and you can easily input color values and see how readable they are.

There are also online color filters: GrayBit (graybit.com) and colorfilter.wickline.org. They let you put in the address of whatever website you’re looking at, and you can see what it looks like in black and white or under different types of color blindness. And then my favorite is the color blindness simulator Color Oracle (colororacle.org). This is a program you install on your computer, with versions for Windows, Mac and Linux, and while you’re working, whether you’re on a website or designing in Photoshop, it will switch the entire monitor to simulate a certain type of color blindness, so you can check that the color contrast works for whatever you’re designing.

Finally, for CSS techniques, there’s the responsive design paradigm, where you build and theme stuff in such a way that it looks good across all devices. The big idea behind it is the layout: we use media queries to adjust the layout, and media queries are basically just CSS that says, “Hey, if this condition is true, apply this CSS.” And then you want to set the break points based on the content rather than on devices.

The break points are the places where things change. For example, if you had a mobile layout where everything’s a single column, at some point you have enough room for two columns, and where that changes is the break point. You want to set the break points based on how the content looks, making sure it’s still readable, rather than switching at particular device sizes, and that’s for two main reasons.

One is that device sizes are going to change. In five or ten years there will be a monitor on your refrigerator where you can see the website of the grocery store, for example, and we don’t know what size that screen is going to be; if you set your break points based on content, they’ll still hold up.

Another reason you don’t want to set them based on device size is that media queries can be triggered when people zoom in their web browser. People who have vision problems can zoom in and trigger these media queries. One caveat: that doesn’t currently work in all web browsers, but there are bugs filed for it and they’re going to be working on it.

Here’s an example of a media query. Here’s some CSS: for the body we’re setting the margins, the max-width and the padding, and that’s what every browser and device will use. Below it, where it has @media screen and (min-width: 35em), that says: if the width is 35em or larger, then apply the CSS inside, and that’s where we can take the body, change the max-width and move things around.
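The example he’s walking through looks roughly like this, with the values approximated from his description:

    body {
      margin: 0 auto;
      max-width: 30em;
      padding: 0 1em;
    }

    /* Applies only once the viewport is 35em or wider; because the
       break point is in ems, zooming can trigger it too. */
    @media screen and (min-width: 35em) {
      body {
        max-width: 45em;
      }
    }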

Finally, one more area we can work on is WAI-ARIA. WAI is the Web Accessibility Initiative, which was created by the W3C to tackle accessibility, and ARIA stands for Accessible Rich Internet Applications. It’s essentially metadata that you can apply to HTML to help assistive technology understand the purpose of that HTML: just another layer of semantics that gives more information.

The use of it: basically, it defines a way to make web content and web applications more accessible to people with disabilities. It especially helps with dynamic content and advanced user interface controls developed with Ajax, HTML, JavaScript and related technologies. There are different ARIA roles, landmarks and attributes that you can apply to make things more usable for assistive technology.

The only thing we really need to worry about in theming, as far as getting started, is the landmark roles. If we look at the page template, we’ll see the big areas: a div with an ID of header, the navigation, the main content and the footer. What was done for Drupal 8 is that we’ve used the new HTML5 tags, the header tag, the nav and the footer, and in the future assistive technology will be able to take advantage of those tags because it will know what they mean.

Some assistive technologies, however, can’t understand those yet, so as a stop gap we can use ARIA. For example, for the header we give it the role of banner, and for the nav we give it the role of navigation. For the main content we can give it the role of main, and for the footer we give it the role of contentinfo. There’s even a new element called the main element, which is being discussed at the W3C; it looks like it’s going to gain traction and will be used, but it’s not quite ready yet. Eventually we’ll be able to use the main element instead of a section for the main content, and assistive technology that understands it will be able to jump to that main content rather than having to rely on a skip link.
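The landmark roles he lists, applied as a stop gap on the HTML5 elements (a sketch):

    <header role="banner">...</header>
    <nav role="navigation">...</nav>
    <section role="main" id="main-content">...</section>
    <footer role="contentinfo">...</footer>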

Okay, before we stop for questions I just want to quickly demo a responsive layout. For the past seven months or so at Forum One we’ve been working on EPA.gov, which has been moving things over to Drupal, so we’ve been building the Drupal platform for them, and I was the front-end architect for that project and the theming. One of their big wishes was responsive design, and also accessibility; they really wanted to focus in on that.

Here’s an example of one of their learning pages. One of the benefits of these responsive layouts is that, in addition to the screen readjusting itself at different device widths, it also helps people with vision impairments who might need to zoom in, because the media queries handle that too. So for example we can zoom in, but first let’s look at the default desktop view, and suppose somebody needs to zoom in to be able to read better.

As they zoom in, it triggers the media queries, and you’ll see the layout and things change. You’ll see, for example, the search field may not be as wide, but the content is still readable and usable, even for people with vision problems who need to zoom in; things rearrange and fit.

And if people really need to zoom in even more and keep going, they can even get to what people might see in a mobile layout. You can see how things change: links get a little more padding so it’s easier for people to touch them with their fingers. Again, all of this makes things usable not just for people on different devices but also for people with accessibility needs who might need to zoom in. Okay, we have some time for questions.

Hannah: Hi everyone, if you have any questions please ask them in the Q&A tab in WebEx. Okay, we have one question coming in: what web accessibility checking tools would you recommend now that Bobby is no longer available?

Dan: Yes, as far as automated tools, one of my favorites currently is the WAVE toolbar, and the new beta, version five of WAVE, is actually very good. There’s also a new bookmarklet: you basically add it to your bookmarks and it’ll go through and check all the markup for you.

Hannah: Okay, the next question is do you recommend using Zen theme?

Dan: Yes. Basically, when it comes to themes you want to use whatever you’re most comfortable with. A really good thing when you’re choosing themes and you’re really concerned about accessibility is that a theme might carry the tag D7AX, which says the maintainers really paid attention to accessibility issues. I know John has put work into the Zen theme accessibility-wise, and we also have others, such as Adaptivetheme, and I know Omega has done some as well.

Hannah: One more: is there a jQuery book that focuses on accessible design?

Dan: As far as accessible design for jQuery, I haven’t seen one, but there’s actually a very good book, I think it’s called Progressive Enhancement with JavaScript, put out, I think, within the last two years, and it talks all about progressive enhancement and how you add on jQuery to create widgets that are accessible.

Hannah: Okay, the next one is: can you go over how zooming is handled in media queries?

Dan: Sure. Again, with media queries, here we have @media screen, and then inside the parentheses you can test for different things such as aspect ratio, device width, device height, that kind of stuff. In general we tend to use the width as the thing we test for when doing layout. Especially if you use ems rather than pixels, then whenever you zoom in, those media queries get triggered. So on this site, zooming in and out triggers the same media queries just as if you were to resize the screen. Any other questions?

Hannah: None are coming through, so I want to say thank you so much, Dan, for the great presentation, and thank you everyone for attending. Again, the slides and recorded webinar will be posted to the Acquia.com website in the next 48 hours. Dan, do you want to close with anything?

Dan: Sure, you can reach me at dmouyard@forumone.com if you want to email me any questions, and you can also follow me on Twitter at DCmouyard; that’s also my Drupal.org username, my IRC nick and my Skype handle.

Hannah: Great, thanks. Everyone have a great day.

Click to see video transcript

Hannah: Today's webinar is: Constructing a Fault-Tolerant, Highly Available Cloud Infrastructure for your Drupal Site.
First speaking we have Jess Iandiorio, who is the Senior Director of Cloud Products Marketing, and then we have Andrew Kenney who is the VP of Platform Engineering.

Jess, you take it away now.

Jess: Great, thank you very much, Hannah. Thanks everybody for taking the time to attend today; we have some great content, and we have a new speaker for our webinar series. For those of you who attend these regularly, you know we do three to five per week.

Andrew Kenney has been with the organization since mid-summer, and we are really excited to have him. He comes to us from ONEsite, and he is heading our platform engineering at this point; he's the point person on all things Acquia Cloud specifically, and he'll speak in just a few minutes.
Thank you, Andrew.

Just to key up what we're going to talk about today: we want our customers to be able to focus on web innovation. Creating killer websites is hard, so we want you to be able to spend all the time you possibly can figuring out how to optimize your experience and create a really, really cool experience on your website. Hosting that website shouldn't be as much of a challenge.

The topic today is designing a fault-tolerant, highly available system and the point of the matter is, if your site is mission-critical how do you avoid a crisis, and why do you need this type of infrastructure?

Andrew has some great background around designing highly-available infrastructure and systems, and he's going to go through best practices and then I'll come back towards the end just to give a little bit of information about Acquia Cloud as it relates to all the content he's going to cover, but he's just going to talk generally about best practices and how you could go about doing this yourself.

Again, please ask questions in the Q&A tab as we go, and we'll get to them as we can. For the content today: first Andrew is going to discuss the challenges that Drupal sites can have when it comes to hosting, what makes them complex, and why you'd want a tuned infrastructure in order to have high availability. He'll cover the types of scenarios that cause failure and how you can go about creating high availability and resiliency, talk about the resource challenges some organizations may run into, and then go through practical steps and best practices around designing for failure and how you can actually architect and automate failover. He'll close with some information on how you can test failure as well.

With that, I'm going to hand it over to Andrew, and I'm here in the background if you have any questions for me, otherwise I'll moderate Q&A and I'll be back towards the end.
Andrew: Thanks, Jess. It's nice to meet you, everyone. Feel free to ask questions as we go, or we can just take those at the wrap-up; I'm more than willing to be interrupted, though.

Many of you may be familiar with Drupal and its status as a great PHP content management system, but even though it's well engineered and has a decade-plus of enhancements, there are a number of issues with hosting Drupal. These issues were always present if you were hosting in your own datacenter or on a dedicated server at, let's say, Rackspace or SoftLayer, but they're even more challenging when you're dealing with Cloud hosting.

The Cloud is great at a lot of things, but some of these more legacy applications are very, very complex, and extensive applications may have issues which you can solve with modules, solve with great platform engineering, or just work around in other ways.

One of these issues is that Drupal expects a POSIX file system. This essentially means that Drupal and all of its file input/output calls were designed with the assumption that there's a hard drive underneath the web server, and if not a hard drive then an NFS server or a Samba server; some sort of underlying file system. This is as opposed to some newer applications that may be built by default to store files inside Amazon S3, inside Akamai NetStorage, or inside a document-oriented database like CouchDB.

Drupal has come a long way, especially in Drupal 7, in making it so that you can enable modules that use PHP file streams instead of direct fopen()-style legacy Unix file operations, but there are a number of different versions of Drupal out there and they don't all support this, and there aren't a lot of great file system options inside the Cloud. At the end of the day, Drupal still expects to have that file system there.

A number of other issues: with Drupal you may make five queries on a given page, or you may make 50 queries on a given page, and when you're running everything on a single server this is not necessarily a big deal; you may have latency in the hundredths of milliseconds. When you run something in the Cloud on a single server it may be the same latency, but now let's say you're running across machines: even within the same availability zone in Amazon, you may have your web server on one rack and your database on a rack that is a few miles away.

This latency, even if it's only one millisecond or ten milliseconds per query, can dramatically add up. One of the key challenges in dealing with Drupal, both when scaling horizontally and in the Cloud in general, is how you deal with high-latency MySQL operations. Do you improve the efficiency of the overall page and use fewer dynamic modules, fewer 12-way left joins in Views and different modules? Do you implement more caching? There are a lot of options here, but in general Drupal still needs to do a lot of work on improving its performance at the database layer.

One other related note is that Drupal is not built with partition tolerance in mind, so Drupal expects to have a master database that it can commit transactions to. It doesn't have any automatic sharding built in; with sharding, if you lost, let's say, the articles on your website, your article section might go down but you'd still have your photo galleries and your other node-driven elements.

Some other new-generation applications may be able to deal with the loss of a single backend database node, maybe because they're using a database like Riak or Cassandra that has great partition tolerance built into it, but unfortunately MySQL doesn't do that unless you're familiar with sharding manually. We can scale Drupal out and up at the MySQL layer, and we can have highly available MySQL, but at the end of the day, if you lose your MySQL layer you're essentially going to lose your entire application.

One of the other issues with Drupal hosting is that there's a shortage of talent; there's a shortage of people who have really driven Drupal at massive scale. There are companies running top-50 Internet sites powered by Drupal, and there's talent at the White House giving back to Drupal, but there's still a lack of good dev-ops expertise for scaling that out to, say, an organization that runs hundreds of Drupal sites: how to deploy this either on your internal infrastructure in, let's say, a university IT department, or on Rackspace or a traditional datacenter company.

Drupal has its own challenges, and one of those challenges is: how do you find great engineering, operations and dev-ops people to help you with your Drupal projects?
Now, there are a number of ways, as you all may be aware, that an application can die in a traditional datacenter. It may be someone tripping over a power cord, it may be that you lose your Internet access or one of your upstream ISPs, or you have a DDoS attack.

Many of these apply to the Cloud as well, but the Cloud also introduces other, more complex scenarios. You can still have machine loss; Amazon exacerbates this in that machine loss may be even more random and unpredictable. Amazon may announce that a machine is going to be retired on a given day, which is great, and probably more notice than your traditional IT department or infrastructure provider gave you unless they were very good at their jobs.

There's still a chance that at any given moment an Amazon machine may just go down and become unavailable, and you really have no introspection into why it happened. The hypervisor layer, all the hardware abstraction, is not available to you; Amazon shields you from it, Rackspace Cloud shields you from it. All these different clouds shield you from knowing what's going on at the machine layer. Or there may just be a service outage: Amazon may lose a datacenter; Rackspace, just this weekend, had an issue in the Dallas region with its Cloud customers.

You never know when your infrastructure and service provider is going to have a hiccup. There may just be network disruption: this could be packet loss, this could be routes being malformed going to different regions or different countries. There are a lot of different ways the network can impact your application, and it's not just traffic coming to your website; it's also your website talking with its main cache layer, talking with its database layer, all these different things.

One of the key pain points of Amazon's Cloud specifically is the file system. If you're using Elastic Block Store, there have been a lot of horror stories out there about EBS outages taking down Amazon EC2 systems, or anything that's backed by EBS. In general, as I said before, it's hard to have an underlying POSIX file system at scale, and EBS is an impressive technology, but it's still in its infancy. Amazon, although it's focused on reliability and performance for EBS, has a lot of work to do to improve it, and even people like Rackspace are just now deploying their own EBS-like subsystems with OpenStack.

Your website may fail just from a traffic spike. The traffic may be legitimate, maybe someone talked about your website on a radio program or a TV broadcast, or maybe you get linked from the homepage of TechCrunch or Slashdot, but a traffic spike could also be someone intentionally trying to take down your website. The Cloud doesn't necessarily make this any worse, other than the fact that you may have little to no control over your network infrastructure: you don't have access to a network engineer who can pinpoint exactly where upstream all this traffic is coming from and implement routing changes or firewall changes, so the Cloud may make it harder for you to control this.

Your control plane, your ability to manage your servers in the Cloud, may go down entirely; this is one of the issues that crops up on Amazon when they have an outage. The API may become unavailable so you can't do anything, and you have to be able to engineer around this and ensure that your application will survive even if you can't spin up new servers or resize things.

Another path to system failure is that your backups may fail. It's easy enough to do backups of servers and volumes and all these different things in the Cloud, but you have no guarantee, even when the API says a backup has completed, that it's actually done. This may be better than traditional hosting, but there's still a lot of progress to be made in engineering around this.

In general, everyone wants a highly available and resilient website, but there are obviously different levels of SLAs. Some people may be happy if the website only sustains an hour of downtime; other organizations may feel their website is mission-critical, and even a blip of a few minutes is too much, because it's handling financial transactions or because of the sheer publicity if the website is down.

In general, your Drupal hosting should be engineered with high availability and resiliency in mind. To do this you should plan for failure, because failure is a given in the Cloud: know that at any given time a server may die, and have either a hot standby or a process in place to spin up a new server. This means you want to make sure that your deployment and your configuration are as automated as possible.

This may be a Puppet configuration, it may be a CFEngine configuration, it may be Chef, or it may just be a bash script that says, "This is how I spin up a new machine and install the necessary packages; this is how I check out my Drupal code from GitHub." At the end of the day, when you're woken up by a pager at 2:00 in the morning, you don't want to have to think about how you built the server: you want a script to spin it up, or ideally you want to use tools so it fails over automatically and you have no blips at all.
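Even the simplest version of that automation pays off. A hypothetical bash sketch; the package names, repository URL and paths are all invented:

    #!/bin/bash
    # Rebuild a web node from scratch, no thinking required at 2:00 a.m.
    set -e
    apt-get update
    apt-get install -y apache2 php5 php5-mysql
    git clone https://github.com/example/mysite.git /var/www/mysite
    cp /etc/backup/settings.php /var/www/mysite/sites/default/settings.php
    service apache2 restart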

Obviously, to have no blips means you need to have this configured automatically. You should have no single points of failure; that's the ideal in any engineering organization, for any consumer-facing or internal-facing website or application. In a traditional datacenter that would mean having dual UPSs, dual upstream power supplies and network connectivity, two sets of hard drives in a machine or RAID, multiple servers, and having your application load-distributed across geographic regions.

There are lots of single points of failure out there. The Cloud abstracts a lot of this, so in general it's a great idea to run in the Cloud, because you don't have to worry about the underlying network infrastructure; you can just spin a server up in one of the five Amazon East Coast availability zones, and you don't have to worry about any of the hardware requirements or the power or any of those things. Having no single points of failure means you have to have two of everything, or, if you can tolerate some downtime, you can use Amazon's CloudFormation along with CloudWatch to quickly spin up a replacement server from one of your images. But definitely, it's good to have two of everything, at least.

You'll want to monitor everything. I said before you could use CloudWatch to monitor your servers; you can also use Nagios installations, and you can use Pingdom to make sure your website is up. But you want everything monitored: is your website itself, Drupal, returning the homepage? Do you want to actually submit a transaction that creates a new node and validate that the node is there, using tools like Selenium?

Do you want to just make sure that MySQL is running? Do you want to see what the CPU health is, or how much network activity there is? And one of the other things is that you want to monitor your monitoring systems. Maybe you trust Amazon's CloudWatch not to go down, maybe you trust Pingdom not to go down, but if you're running Nagios yourself and your Nagios server goes down, you can't sustain an outage like that. You don't want that to happen at 2:00 in the morning and then have someone tell you on Monday morning that your website has been down all weekend, so it's a good idea to monitor the monitoring servers.
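A toy example of that kind of check; the URL and alert address are hypothetical:

    #!/bin/bash
    # Fetch the homepage and alert if it fails or renders incompletely.
    URL="https://www.example.com/"
    if ! curl -fsS --max-time 10 "$URL" | grep -q "</html>"; then
      echo "Homepage check failed for $URL" \
        | mail -s "ALERT: site check failed" ops@example.com
    fi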

Backing up all your data is key for resiliency and business continuity: ensuring that your Drupal Cloud system is backed up, your MySQL database is backed up, and your configurations are all there. This includes not just backing up but validating that your backups work, because many of us have been in an organization where, yes, the DBA did back up a server, but when the primary server failed and someone tried to restore from the backup, they found out that it was missing one of the databases or one of the sets of tables. Or maybe the configuration or the password wasn't actually backed up, so there's no way to even log in to the new database server.

It's a very good idea to always test all of your backups, and this also includes testing emergency procedures. Most organizations have business continuity plans, but no plan is flawless, and plans have to be continually iterated on, just like software. The only way to ensure that the plan works is to actually engage the plan and test it, so it's always my recommendation that if you have a failover datacenter, or a way to fail over your website, you should test those failover plans.
Maybe you only do it once a year, or maybe you do it every Saturday morning at a certain time, if you can engineer it so there's no hiccup for your end users, or maybe your website has no traffic at a given point in the week, but it's a great idea to actually test those emergency procedures.

In general, there are challenges with Drupal management, and with resources in particular. The Cloud pitch says your developers no longer have to worry about all the pesky details that are necessary to launch and maintain a website, and that you don't need any operations staff, but that's more hype than reality. I think a lot of engineers have always felt that the operations team is just a bottleneck in their process: once they have validated that their code is good, either in their own opinion or by running their own system tests or unit tests, they want to just push it live, and that's one of the principles of continuous integration.

The reality is that developers aren't necessarily great at managing server configurations, or at engineering a way to deploy software without any hiccup for the end-user client, who may load a page and then have an AJAX call that refreshes another piece of it. You want to make sure there's no delay in the process, that the code doesn't impact the server, and that the server configurations are maintained.

Operations staff are still very, very necessary, and in reality you have to plan for failure and plan your failover process. It's very hard to find people who are great at operations as well as understanding an engineer's mindset, and so dev ops is a resource challenge.

Here's an example of how we design for failure. Here at Acquia, we plan for failure; we engineer different solutions for different clients' budgets to make sure we give them something that will make their stakeholders, internal and external, happy. We have multi-availability-zone hosting, so for all of our Managed Cloud customers, when we launch one server we'll have another backup server in another zone.

We replicate data from one zone to the other; if there's any service interruption in one zone, we serve data from the other zone. This includes the web node layer, the Apache servers that serve the raw Drupal files, and it includes the file system: here we use GlusterFS to replicate the Drupal file system from server to server and from availability zone to availability zone.

It's also the MySQL layer: we'll have a master database server in each region, and we may have slaves against those master database servers, but it's about ensuring that all the data is always in two places, so that any hiccup in one Amazon availability zone won't impact your Drupal website.

Sometimes that's not enough. There have been a number of outages in Amazon's history where maybe one availability zone goes down, but due to a control system failure or other issues with the infrastructure, multiple zones are impacted. We have the ability to do multi-region hosting, so this may be out of the East Coast and the West Coast, U.S. East and U.S. West, and maybe other facilities.

It really depends on what the organization wants, but multi-region hosting gives businesses the peace of mind and the confidence that if there is a natural disaster that wipes out all of U.S. East, or a colossal cascading failure in U.S. East or one of these other regions, your data is still there, your website is still in another region, and you're not going to experience catastrophic delays in bringing your website back up.

During Hurricane Sandy there were a number of organizations that learned this lesson when they had their datacenters in, let's say, Con Edison-served facilities in Manhattan. Maybe they were in multiple datacenters there, but it's possible for an entire city to lose power for, potentially, a week, or to have catastrophic water damage to the equipment. It's always important to have your website available from multiple regions, and we offer that to our clients.

One of the other key things to prevent failure is making sure that you understand the responsibilities and the security model for all the stakeholders in your website. You have the public consumer, who is responsible for their browser, for how they engage with the site, and for ensuring they don't distribute their passwords to unauthorized people.

You have Amazon, who is responsible for the network layer: for ensuring that two different machine images on the hypervisor don't have the ability to disrupt each other, for making sure the physical medium of the servers and the facilities is locked down, and that customers using the security groups can't go from one machine to another, or have a database call cross from one rack to another between different clients.

Then you have Acquia, who is responsible for the software layers, from the servers up to the platform-as-a-service layer with Drupal hosting. We are in charge of the operating system patches, we are in charge of configuring all of the security modules for Apache, and we are in charge of recommending to you, through the Acquia Network Insight tools, which Drupal modules you need to update to ensure high security. But that brings it back to you: at the end of the day you're responsible for your application. Your developers are the ones who make changes, implement new architecture that may need to be security tested, or choose not to update a module for one reason or another.

There's a shared security model here which covers security, availability, and compliance; there may be a Federal customer who has to have things enabled a certain way just to comply with a FISMA or FedRAMP accreditation. Obviously security can impact the overall availability of your website: if you don't engineer for security up front, an attack can take down your machine or compromise your data, and you don't want your website back online until you've validated exactly what has changed.

That's very important to understand in the shared security model as you're planning for failure. Another thing I briefly touched on before was monitoring. This includes monitoring your infrastructure and application as well as monitoring for the security threats I just mentioned. At Acquia we use a number of different monitoring systems, which I'll go into in detail, including Nagios and our own 24/7/365 operations staff, but we also use third-party software to scan our machines to ensure that they are up-to-date and have no open ports that may be an issue, no daemons running that are going to be an issue, and no other vulnerabilities.

This includes Rapid7 and OSSEC, monitoring the logs, and flagging any issues found during security scans. It's important to monitor your infrastructure both to make sure the service is available and to make sure there are no security holes.
Back to monitoring: we have a very robust monitoring system. It's one of the systems we have to have, since we have 4,000-plus servers in Amazon's cloud, so all the web servers and database servers and the Git and SVN servers, all these different types of servers, are monitored by something we call a mon server, and these mon servers check to make sure the websites are up, check to make sure that MySQL and Memcache are running, all these different things.

The mon servers also monitor each other; you see that from the line from mon server to mon server at the top, so they monitor each other in the same zone. They may also monitor a mon server in another region, just to ensure that if we lose an entire region we get a notice about it.

The mon servers may also sit outside of Amazon's cloud; we may use hosting from someone like Rackspace, just to have our own business continuity and best-of-breed monitoring to ensure that if there is a hiccup or service interruption in one of the Amazon regions, we catch it. It's important to have external validation of the experience, and we may also use something like Pingdom in order to ensure your website is always there.

Ensure that it is operating within the bounds of its SLA. There are all sorts of ways to do monitoring, but it's important to have the assurance that your monitoring servers are working, and that each monitor that goes down has something else to alert you that it's down, so you don't impact your support or operations team in trying to recover from an issue.
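
As a rough illustration only (not Acquia's actual tooling), a stripped-down "mon"-style check might look like this, where each monitor probes the sites it is responsible for plus a sibling monitor in another zone. All URLs are placeholders.

```php
<?php
// Hypothetical, minimal health check: probe each URL and alert on anything
// that is down. Real systems (Nagios, etc.) add retries, escalation, paging.
$checks = array(
  'website'     => 'http://www.example.com/',
  'sibling-mon' => 'http://mon2.example.com/status', // a monitor watching a monitor
);
foreach ($checks as $name => $url) {
  $ch = curl_init($url);
  curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
  curl_setopt($ch, CURLOPT_TIMEOUT, 5);
  curl_exec($ch);
  $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
  curl_close($ch);
  if ($code != 200) {
    // In production this would page the 24/7 operations team.
    error_log("ALERT: $name returned HTTP $code");
  }
}
```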

Building high availability and resiliency into your monitoring infrastructure is very important. One of the other things is being able to recover from failure. This includes having database backups; this includes having file system snapshots so you can recover all the Drupal files; making sure that all your EBS volumes are backed up; pushing those snapshots over to S3; and making sure the process is replicated using a distributed file system technology like Gluster. With all of this, you can recover from catastrophic data failure, because having backups is important.

You can choose whether you want these backups live, with live replication of MySQL or the file system, or just hourly snapshots, or weekly snapshots; that depends on your level of risk and how much you want to spend on these things.
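
For example (a sketch only; the host, user, and paths are placeholders, and a managed backup pipeline is far more involved), an hourly dump could be as simple as a cron-driven script like this:

```php
<?php
// Hypothetical hourly MySQL backup: dump with a consistent snapshot, compress,
// and leave the file where a separate job ships it off-server (e.g. to S3).
// The password is assumed to come from ~/.my.cnf so it never hits the CLI.
$file = '/backups/drupal-' . date('Y-m-d-His') . '.sql.gz';
$cmd  = sprintf(
  'mysqldump --single-transaction -h %s -u %s drupal | gzip > %s',
  escapeshellarg('db-master.example.com'),
  escapeshellarg('backup_user'),
  escapeshellarg($file)
);
exec($cmd, $output, $status);
// A real script would also detect mysqldump failures inside the pipe.
if ($status !== 0) {
  error_log('ALERT: database backup failed with status ' . $status);
}
```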

In terms of preventing failure, we utilize a number of these different possibilities. You can use Amazon Elastic Load Balancers, with multiple servers behind an ELB, and these servers can be distributed across multiple zones. For example, we use ELBs for a client like the MTA of New York, where they wanted to ensure that if Hurricane Sandy wiped out one of the Amazon availability zones, we could still serve their Drupal website from the other availability zones.

We also use our own load balancers in our backend to distribute traffic between all the different web nodes, so one availability zone may forward requests to the other availability zone. You can do round robin, and there's additional logic in there to distribute requests to all the healthy web nodes and to make sure no traffic is sent to an unhealthy web node while our operations team and automated systems recover it from whatever made it unhealthy.

We also have the ability to use a DNS switch to take a database that has catastrophically failed, or that has replication lag or some other problem, out of service. At Acquia, we always choose to ensure that all your data transactions are committed: we'd rather incur a minimal service disruption than have any data loss. What you'd potentially lose is a file upload, or a user account being created, or some other transaction; we have people building software-as-a-service businesses on top of us, so that loss protection is very important to us, and we utilize a DNS switch mechanism to make sure that database traffic all flows to the other database server.

For the larger, multi-region sites, we actually use a manual DNS switch to move from one region to the other. This prevents flapping on an issue and having a failure turn into something even worse, where you may have data written to both regions. The DNS switch allows us, and allows our clients, to fail their website over when they choose to, and then when everything is status quo again, they can fail back.

As I said before, it's very important to test all of your procedures, and this includes your failover process. It should be scripted so you can fail over to your secondary database server, or shut down one of your web nodes and have it auto-heal itself. People like Netflix are brilliant about this with their Simian Army, as they call it: they shut down random servers, even entire zones, and ensure that everything recovers.

There are a lot of best practices out there around actually testing the failover, and these failover systems and the extra redundancy you've added to eliminate single points of failure are key in other, non-disaster scenarios too. Maybe you're upgrading your version of Drupal, or you're rolling out a new module and you need to add a new database table or alter a table, and go through that process within Drupal.

You can fail over to one of your database nodes and then apply the module's schema changes to the other node without impacting your end users. There are ways to use these systems in your normal course of business to make sure that you use the available nodes to their full capacity and minimize the impact to your stakeholders.
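
In Drupal terms, that process is a hook_update_N() in a module's .install file. Here is a hypothetical example of the kind of schema change you would roll out this way (the table and field names are made up for illustration):

```php
<?php
// Hypothetical mymodule.install: Drupal 7 runs this through update.php, so the
// ALTER TABLE can be applied to one database node during a planned failover.
function mymodule_update_7001() {
  db_add_field('mymodule_data', 'region', array(
    'type'        => 'varchar',
    'length'      => 32,
    'not null'    => TRUE,
    'default'     => 'us-east-1',
    'description' => 'Region that produced this row.',
  ));
}
```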

Jess, do you want to talk about why you wouldn't want to do everything yourself?

Jess: Sure, yeah. Thank you so much, Andrew. I think that was a really good overview, and hopefully people on the phone were able to take some notes and think about, if you want to try this yourself, what are the best practices that you should be following.

Of course, Acquia Cloud exists, and as I'm in marketing I would be remiss not to talk about why you'd want to look at Acquia. The reasons why our customers have chosen to leave DIY mainly fall into three groups. One is that they don't have a core competency around hosting, let alone high availability, and if that core competency doesn't exist, it's much easier and much more cost-effective to work with a provider who has that as their core competency and can provide the infrastructure resources as well as the support for it.

Another main reason people come to Acquia is that they don't have the resources, or have no desire to have the resources, to support their site, meaning 24x7 staff available to make sure that the site is always up and running optimally. Acquia is in a unique position to respond to both Drupal application issues and infrastructure issues. We don't make code changes for our customers, but we are always aware of what's going on with your site and can help you very quickly identify the root cause of an outage and resolve it quickly with you.

Then one of the other reasons: it can be a struggle when you're trying to do this yourself, whether you're hosting on premise and have purchased servers from someone, or you've gone straight to Amazon or Rackspace. Oftentimes people have found themselves in a blame game with a lot of finger-pointing. If the site goes down, their instinct is to call the provider, and if that provider says, "Hey, it's not us, the lights are on, you have service," then you have to turn around and ask your application team what's wrong. There can be a lot of back and forth and a lot of time wasted, when what you really want is your site up and running.

Those are reasons not to try to do this yourself. Of course you're welcome to, but if you've tried and haven't had success, the reasons to go forward with Acquia are our white-glove service, again, fully managed on a 24x7 basis for the Drupal application support as well as the infrastructure support, and our Drupal expertise: we have about 90 professionals employed here at Acquia across operations who are able to scale your application up and down.

We have engineers, we have Drupal support professionals, and they can help you either on a break-fix basis or in an advisory capacity to understand what you should be doing with your Drupal site, between the code and the configuration, to make it run more optimally in the cloud, so that's a great piece of what we offer. And of course there's everything Andrew covered today in terms of our high-availability offerings and our ability to create full backups and redundancy across availability zones as well as Amazon regions.

We are getting close to the end here, so if you have some questions, I'd encourage you to start thinking about them and put them into the Q&A.

The last two slides here just showcase the two options that we have if you would like to look at hosting with Acquia. Dev Cloud is a single-server, self-service instance: you have a fully dedicated single server, you manage it yourself, and you get access to all of our great tools that allow you to implement continuous integration best practices.

The screenshot you're seeing here is just a quick overview of what our user interface looks like for developers. We have separate dev, staging, and prod environments pre-established for our customers, and very easy-to-use drag-and-drop tools that allow you to push code, files, and databases across the different environments while implementing the necessary testing in the background, to make sure that you never make a change to your code that could potentially harm your production site.

The other alternative is Managed Cloud, and this is the white-glove service offering where we promise your best day will never become your worst day, with some unplanned traffic spike that ends up taking your site down. We'll manage your application and your infrastructure for you. Our infrastructure is Drupal-tuned with all the different aspects that Andrew has talked about; we use exclusively open source technologies as part of what we add to Amazon's resources, and we've made all the decisions that need to be made to ensure high availability across all the layers of your stack.

With that, we'll get to questions, and we have one that came in. "Can you run your own flavor of Drupal on Acquia's HA architecture?"

Andrew: The answer is yes. You can use any version of Drupal; I think we are running Drupal 6, 7, and 8 websites right now. You can install any Drupal modules you want, and we have a list of which modules we support on the HA architecture; we support the most popular modules out there. There's always the odd day where maybe there is some random security module or some media module that needs something, and we may need to solve that for you or recommend alternatives. People have taken Drupal and, for lack of a better word, bastardized it, built these kinds of crazy applications on top of it or rewritten chunks of it, and that also works with our HA architecture.

Our expertise is in core Drupal, but our professional services team and our technical account managers are great at analyzing applications and understanding how to improve their performance, so by now the platform can host pretty much any PHP application, or a static application. It's optimized for Drupal, but the underlying MySQL, file system, Memcache, and all the other requirements for a Drupal website: the HA capabilities of those work across the board.

Jess: And we do have multiple instances where customers have come to us and gotten their application running fine in our cloud environment, but they came to us from hosting directly with Rackspace or Amazon and found it to be either unreliable or just not cost-effective for them because of the amount of resources that had to be thrown at the custom code.
Another good thing about Acquia is that by becoming a customer you get access to all these tools that help test the quality of your code and configuration, so when you bring extensive amounts of custom code into our environment, we can help you quickly figure out how to tune it, and if there are big issues that are the culprit for why you would need to constantly increase the amount of resources you're consuming, we can let you know what those issues are, and we can do a site audit through PS, like Andrew mentioned.

Our hope and our promise to our customers is that you're lowering your total cost of ownership if you're giving us the hosting piece of it along with the maintenance, and if there's a situation for any of our customers where we are continually having to assign more resources because of an issue with the quality of your application, that's when we'll intervene and suggest, as a cost-savings measure, working with our PS team to do a site audit so we can help you figure out how to make the site better and therefore use fewer resources.

Andrew: In a lot of cases we can just throw more hardware at a problem and have that be a Band-Aid, but it's in both our best interest and the best interest of the customer, in terms of both their bills and having an application that will last for many, many more years, to have our team recommend what you should do differently: this is how you can best use this module, or here is another recommendation, to have a more highly optimized website for the future.

Jess: The question is, "Why did Acquia choose Amazon as the cloud to standardize the software on?"

Andrew: Acquia has been around for the past four or five years, and Amazon was the original cloud company. I was at the Amazon re:Invent conference a couple weeks ago, and one of the keynote speakers said, "Amazon is number one in the Cloud and there is no number two." We chose Amazon because it was the best horse at the time, and we continue to choose Amazon because it's still the best choice.

Amazon's release cycle for new product features and new price changes and all these things is accelerating. They continue to invest in new regions, and Amazon is still a great choice to reduce your total cost of ownership by increasing your agility and your velocity to build new websites, deploy new things, and move things off your traditional IT vendor to the cloud, so we are very, very strong believers in Amazon.

Jess: "Does Acquia have experts in all theirs … as a Drupal architecture across the data base, the OS, caching?" Then the marketing person is going to take a stab at this, where it's a [crosstalk 00:40:50]…

Andrew: We definitely have experts at all the different levels. On the OS side, we have some Red Hat experts and a bunch of other experts, so they can recommend different options for people who don't host with us; internally we've standardized our own hosting, so that's where our operations staff's expertise is. For the database, we have operations staff dedicated just to MySQL, and we have support contracts with key MySQL consulting or software companies for any questions that we can't handle.

It's one of the ways that we scale: you don't have to go pay somebody a ten-grand fee for something that we can just go ask them about. Caching: we have people who have helped design some of the larger Drupal sites out there and lived through them being under heavy traffic storms, people who have contributed alternatives to Drupal core's caching, be it Memcache or Redis caching and all these different capabilities. As for Aegir, we don't use Aegir internally, but we do interact with it and support it; a lot of our big university or government clients may be using Aegir in their internal IT department, and they may choose to use us for some of the flagship sites or for some other purpose. Yes, we do have expertise across the board.

Jess: Ken, unless there are any questions that came in straight to you, that looks like the rest of the questions we have for the day. Hopefully you found this valuable, and you've got 15 minutes back in your day; hopefully you can find good use for that.
Thank you so much, Andrew, for joining us. I really appreciate it, and the content was great.

Andrew: Thank you.

Jess: Again, thanks everybody for your time. The recording will be available within 48 hours if you'd like to take another listen, and you can always reach out directly to Andrew or myself with any further questions. Thank you.

Andrew: Thanks everybody.

Click to see video transcript

Speaker 1: Hi everyone. Thanks for joining the webinar today. Today’s webinar is Creating Solid Search Experiences with Drupal, with Chris Pliakas, who is the product owner of Acquia Search.

Chris Pliakas: Today's webinar is on creating a search experience with Drupal. I think we've done a lot of webinars in the past where we focused on Acquia Search and some of the basics. Today, I really wanted to focus on Drupal in general, without an Acquia Search focus. Of course, all these techniques can be used with Acquia Search, but I just wanted to highlight some of the things that are in the community.

Also, based on our experiences hosting over 1,600 indexes, with 1,600 subscriptions and people experimenting with search pages and various UX, I'll talk about some of the trends that we're seeing, and in order to get the best experience possible, we'll touch on some content strategy items that you can employ to make sure that your search is set up for success.

Then we'll focus on the search page user interface. We'll do a live demo exploring some of the tools that are available in Drupal right now that can be used to create modern-day search user interfaces, so that your users get the best experience possible out of the application and can find the content they're looking for.

Also, we're going to demo some things that are coming down the pike. I think it's important to recognize that right now, enterprise search is at a crossroads, and I just want to take a minute to distinguish what enterprise search means. When we talk about enterprise search, we're talking about internal site search, and enterprise doesn't necessarily mean large corporations. Enterprise simply means that search is important to your business and important to you, so this isn't just a big-business thing. This is for searches of any size.

But we see some trends that are emerging with external search engines, like Google, Bing, and Yahoo!, that are now going to be expected by users of your internal site search. With the trends emerging in the search community at large, there is going to be an expectation that your search experience matches what's out there currently, and that's pretty advanced stuff, so we'll talk about those trends and we'll talk about what's changing in this space specifically.

We talked about how search is really evolving. Over the past 10 years or so, which is quite a long period of time, your internal site search really hasn't been much more than the user entering keywords and then displaying results that are pretty basic: you have a title, you have a snippet, that sort of thing. But right now, search is starting to move into a different space where we have to identify what the user is actually looking for and then display relevant results. Relevant results don't just mean keyword matching; they mean knowing things about your content and about your user in order to make some assumptions and present them with relevant results.

As we create more and more content on the Web today, it's getting harder and harder to sift through that data and display meaningful results. We'll start out with just a simple example.

What I want to start out with is talking about Apple. How many people know Apple? All right, so I see some hands in the webinar. I guess I want to ask “How well do you know Apple?,” so a first question that I want to ask you is, “Is Apple growing?” I’ll let you answer in your heads. It’s not really a good forum for answering in public.

The second question that you should think about is: does Apple have money? Then the third question: is Apple multilingual? Does Apple have knowledge of multiple languages? Does Apple speak more than just English? Those are the three questions that I want you to answer in your head.

I'm just going to assume that you guys did a good job and were able to answer those. Based on those three questions, I think there is no doubt that we're talking about Apple Martin, who is the daughter of Gwyneth Paltrow and Coldplay lead singer Chris Martin. Apple Martin, like all kids, is growing. Does she have money? Absolutely. I think her parents are doing pretty well; one is a rock star, one is an actress. One useful tidbit is that she cannot watch TV in English, so she is being raised as a multilingual speaker.

Is that the wrong Apple that you were thinking of? I'm assuming that it is. Tech audiences, when I say Apple, usually think of the company Apple, and the problem really is about context. The first trend that I want to talk about is contextual computing. Right now, we start to see how Apple could mean different things. It could mean the fruit. It could mean the company. It could mean Apple Martin. It could mean Fiona Apple. It could mean a lot of different things.

When I talk about context, I mean the things surrounding it that expose the content for what it is. For example, if we are talking about Apple being a Fortune 5 company or a Fortune 1 company, whatever it is right now, then that context would expose Apple as being a company. If we were on a pop culture website, then it would be more likely that Apple is the daughter of Gwyneth Paltrow, like we mentioned.

Context and how it relates to your content is getting to be really important as we get more and more data. Sites aren’t just displaying one thing now. Sites are starting to display lots of different pieces of content, and we need to start recognizing that simple keyword searches aren’t going to serve our users. We really want our results to be relevant towards what people are actually looking for.

One way that we can do this is by search statistics. Search is a really unique tool in that it is a window to what your users are expecting on your site. By entering keywords and by clicking on various pieces of content, your users are actually telling you what they want from your website, and they are telling you what content they think is relevant.

There are things out there like voting or reputation metrics, but search is really the best tool to be able to extrapolate what people are trying to do with your website.

That also leads into structured data, which is another trend that we're going to talk about. Structured data is a way to actually denote what type of content you have on your site; again, going back to the Apple example, is Apple the organization, or is Apple something else? These are the three trends that search is really rallying around.

I want to talk about what Drupal is doing right now to address this and some of the things that are going to be coming down the pike within the next six months or so, because it’s important that as you start to build your search experience that you’re starting to recognize some of these trends so that when the Drupal tools emerge, you can make use of them effectively and provide the site search that your users are coming to expect.

Now I'm going to go to the live demo portion, just to set the stage here. I have a really basic Drupal install. It's the standard Drupal blue that you see out of the box, and it has some prepopulated content: a couple of events, a couple of blogs. We'll actually build out some of the search experiences and identify some of the trends that we talked about.

Now that we have the site up: right now, I'm connected to an Apache Solr backend. Again, whether you're connected to Acquia Search or to your own Apache Solr, I'm going to assume you can install Drupal, download and install the modules, and configure some of the basic modules. We're going to start with the assumption that that's the level we're at.

If you do need some help, or if you are unsure how to install or configure modules, I do recommend that after this webinar you look at the great resources on drupal.org and the great articles that Acquia provides as part of its forums and its library, which can help ease that transition. But you can still get some value out of this webinar by following along, taking notes on which modules are being used, and seeing how you can configure them once they're installed.

First, what I want to do is I want to just execute a search. It’s the same whether you’re using core search or any other backend. But I’m going to search for DrupalCon, and we’ll start to analyze some of the results to see what the default behavior is that you get out of the box.

The default behavior, we'll see, is somewhat useful, but not really. If I enter DrupalCon, it will give me the pieces of content that match that keyword. It will give me a highlighted results snippet, and it will show me a little bit of information in terms of which user posted that content and what date it was posted. Sometimes that's useful; sometimes it's not. But again, this is the basic search interface that you get out of the box.

To be perfectly honest, this isn’t very useful. This isn’t what users expect. If you compare it to Google or Yahoo! or Bing or all the other major players out there, this is weak, and it doesn’t really give users the information that they need to effectively search the content of your site.

The first thing that I want to do is explore something called facets. Facets are filters that users can apply to help refine the search results, and they also give some aggregate information, such as the number of results matching each filter based on the keywords you entered.

The first module that I want to explore is something called the Facet API module. I’m going to go to the project page here. This is a module that works with core search. It works with Apache Solr search integration. It works with Search API if you’re using that module. It’s a way to configure your search interface regardless of what search backend that you’re using.

If I expand the screenshot here, you’ll see that here are some examples of the types of facets that you can have. You can have facets by content type, by date. There are even some interesting contributed modules out there that allow you to display facets as graphs. You can really control the interface and display things in pretty interesting ways.

I'm just going to scroll down and show some of the add-ons that are available that you can take advantage of. Again, we have the graphs that we talked about. We have a slider, so if you have numeric facets, numeric content, you can say, "I want to show data between this range." There are tag clouds, and also date facets, which we'll actually explore and configure.

I’m not going to spend too much time. That’s just an overview to whet your appetite for what’s out there and what’s available in the Drupal community. But I do want to just go and start configuring this so you can see what this looks like and how this works.

The first thing that I want to do is I want to be able to filter this by the content type. I do have two content types here, blog and event, so I want people to say, “Okay, if I’m searching for DrupalCon, I want to filter by the blogs or I want to filter by the events that I want to see,” so that you can get the relevant information for you.

First thing I’m going to go do is configure the Apache Solr Search Integration Module. That’s the one that I’m using. I’m going to go to Apache Solr, going to go to Settings, and I am going to go to Facets. These are the lists of the facets that I have available to me. First thing I’m going to do is configure and enable the content type. I’m going to save this configuration.

Now that facet is saved, I actually have to position it on the page. The default facets are blocks. Blocks in Drupal are small pieces of content that you can position in various regions or various areas on a page. Once you enable a facet, there is a link up top that allows you to go directly to the block configuration page so that you can configure this immediately.

If I click on Blocks and scroll down … it’s actually enabled for me. I’m just going to reset this so that it is where you guys will see it when you start from scratch. But it will start down here in the disabled category. These are all the blocks that are disabled. We look for Facet API, the backend that we’re using, and then content type. This is the facet that we just enabled.

I’m going to position this in the first sidebar. It is recommended that you do position it in the first sidebar, so that will be on the left-hand side. The reason is because that’s where most of the major search engines position their facets, so in order to help people navigate your search page, we use expected patterns. That’s the best place to put it so that they don’t have to hunt around for it.

I’m going to save my block. Now I’m going to go back to my search page. I’m going to search for DrupalCon. Now I have a facet up in the upper left-hand corner that allows me to filter by events or by blogs. If I filter by blogs, it’s reporting that I have two results. If I click that, you’ll see that I do get my results filtered to the blog that I want. That’s pretty basic stuff, but it allows your users to actually target what they’re looking for.

That was very basic facet configuration. The next thing that I want to discuss is a pattern called progressive disclosure. This is something that you'll see on Amazon: if you go to Amazon's search, you'll be prompted to search for something that you're interested in, whether it's one of the products that they have, and when you search on that product, you'll be shown different filters based on the different types of things that are returned. What it prompts a user to do is start out small, like selecting the department that they want to search in, and then based on that department, it will expose different filters, or facets, that are relevant just to that.

I do want to take a step back and talk about the events. The events that I have on this site have dates that are associated with them, so the date that the event actually starts, whereas the blogs have a different type of date. They have the date that the article was posted.

When you’re searching for events, you don’t really want to know the date that the event was posted. You want to know the time that the event is actually happening, so you’re going to have two different types of date facets, depending on the content that you’re targeting.

Instead of displaying all of that information, all the possible combinations of facets on the left-hand side, we want to only display the facets as we start to navigate down the content types that we’re interested in.

To highlight this, I’m going to go back to the configuration page, and I’m going to go to Apache Solr, and I’m going to go to Settings, configure my facets, and I’m going to scroll down. We’re going to see two types of date facets that I was referring to. One was the post date and one was the event date. I’m going to enable both of these. I’m going to go to my blocks, position them. I’m going to scroll down, and now I see that the new blocks are here and disabled, so I’m going to position them in the sidebar first, like the other. I’m going to make sure that they’re in the correct order that I expect.

I’m going to save these blocks. I’m going to go back to my search page, search for DrupalCon. You see that by default, now I have filter by post date, filter by event date. In order to configure this progressive disclosure pattern, what we’re going to do is leverage something in Facet API called Dependencies. Instead of just explaining, I’m just going to go for it and highlight by example.

When I mouse over the facet, I get a little gear in the upper right-hand corner. If I expand that, I have an option to configure facet dependencies. This is the date that the actual content was posted, so again, it makes more sense for the blog than it does for the event. The first option that I have here is bundles, which are synonymous with Drupal content types. I’m going to say at least one of the selected bundles must be active. I’m going to say I only want to show this for blogs. I’m going to save this and go back to the search page.

Now you see that that date facet is gone. If I click on blog, now it appears: filter by post date. Again, I'm only displayed the facets that are relevant to the content type that I'm looking for.

Again, I could do this filter by event date. Again, mouse over the gear. Click Configure facet dependencies, Bundles. At least one of the bundles is active, and I’m going to say Events.

Now I go back, and when I search for DrupalCon, I’m going to start off very small, limited options, kind of guiding your users to select something and refine their results. As I click on blog, we know that we’re in the blog context, so again, context meaning information that is used to determine what type of content you’re viewing. Now that I know that I’m viewing blogs, I see the post date, which is a little more interesting.

Whereas if I click on the events, now I get the filter by event date. I can say, "Show me events that start in August of 2012 or May of 2013." It's now going to really target the type of events that are relevant to me.

One more thing; I'm actually going to go back to the blog facets. You see here that for the blogs, we have this drill-down thing: we have a couple of blogs that span a couple of years, and with the default facet that comes out of the box, you have to actually drill down to 2011, then to March, then to March 21st. It allows you to drill down by the specific date, all the way down to the time. But that's actually not what users expect when you're dealing with types of displays like blogs.

I'm actually going to go to Google and search for Drupal blogs. If I click on Search Tools, we'll see "Any time": they don't have that type of drill-down. They actually have the ability to refine by a certain range. That's usually what users expect, and that's a use case that people commonly ask for, which we've seen in our support requests.

The next module that I want to explore is called the Date Facets module. Again, this is available on drupal.org: date_facets. It's also linked from the Facet API project page. If we look at the screenshot, we'll see that it provides a nice little display widget that presents your facet as a range selection. We're going to assume that that module has been downloaded.

Click on Modules. Once you download that module, you're going to install it. I'm using the Module Filter module to provide this nice interface where I can make sense of my modules, because anybody who builds Drupal sites knows that you can get up to hundreds of modules, so you need to be able to filter them more easily on this module administration page. All I have to do (I already enabled this) is select the checkbox and click Save configuration; that's all I need to do to install the module.

Once the module is installed (actually, I'll do this from the search page; again, filter by blog), you have an option with facets to configure the display. If I mouse over the gear and click it, I get the same list of options that allow me to configure the facet dependencies and configure the facet display.

After I've installed that module, I'm going to have a new display widget. If I expand here, you can see up at the top there is a new date range widget. The type of display in Facet API is called the widget.

If I click on date range, click Save, and go back to the search page, I'm actually going to get an error here, which I wanted to highlight purposely. It says the widget does not support the date query type. When you're doing the date range, this is a common error that people report: you have to actually scroll down and select the different query type. This just tells Drupal that we're not doing the plain date filter; we're doing the actual ranges.

I don’t want to get into the technical aspects of it, but behind the scenes, it actually changes the type of filter that the backend uses, so it’s important that we actually make this distinction.

Now if I save and go back to the search page, you see that I get filters that are very similar to Google's. I can refine things by the past week, for which I have nothing, or the past month, or the past year. It looks like I only have content within the past year, but it was able to refine that based on the time range of the content that you have, so it really allows people to narrow down to the things that are more recent.
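
For the curious, behind the scenes this amounts to Solr range-facet parameters rather than a plain field filter. Here is a rough sketch of the equivalent, expressed through the Apache Solr Search Integration query alter hook; the module prefix, the field name ds_created, and the bucket sizes are assumptions for illustration, not literally what the widget sends.

```php
<?php
// Hypothetical illustration of Google-style "past week / month / year" buckets
// expressed as Solr range-facet parameters from a Drupal 7 query alter hook.
function datedemo_apachesolr_query_alter(DrupalSolrQueryInterface $query) {
  $query->addParam('facet', 'true');
  $query->addParam('facet.range', 'ds_created');          // an indexed date field
  $query->addParam('facet.range.start', 'NOW/DAY-1YEAR'); // oldest bucket edge
  $query->addParam('facet.range.end', 'NOW');
  $query->addParam('facet.range.gap', '+1MONTH');
}
```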

Those are a couple of the tips that I wanted to share regarding facet configuration, but I want to stop and see if there are any questions before proceeding. Do you have any questions? All right. We'll move on from facets.

The next thing that I think is pretty interesting is that instead of having a unified search page which displays all the content across your entire site, sometimes it’s useful to actually have targeted search pages. These are things like, okay, I have a blog section on my site, which we have here. I only want to search across the blogs or I don’t want to make the user actually click on blog to refine the results. This can actually be done in the Apache Solr Search Integration Module, which we’re going to focus on.

I’m going to click Configuration then go to Apache Solr Search. One thing that I’m going to do to simplify this demonstration and something that I think is useful in Drupal in general is Drupal 7 provides this nice little shortcut functionality. You see here I have Apache Solr Search with a little plus sign. I can click this and it will now add this configuration page, a link to this configuration page in the toolbar so that I can navigate to it more quickly as opposed to having to go through the normal path. I’m going to do that for an easier demonstration. If you’re configuring your search pages, you might want to do that as well.

Some of the tabs here, we have one that’s geared towards pages and blocks. I’m just going to select pages and blocks. This is where we can actually manage search pages. I’m just going to go ahead and add a search page and we can see what this will do.

The goal here again is to create a search page that just narrows down your blogs. I’m going to say this is a blog search. I’m going to scroll down. I’m going to make sure that my correct environment is selected. In this case, I’m running Solr locally, but if you’re connected to Acquia Search, you’ll have an environment for Acquia Search. Environment is really named for the backend that you’re connecting to.

Again, in title, search blogs. That’s going to be the title of the page. The path, I’m going to put in search/blogs. The part that’s going to allow me to filter just by blog content is this part at the bottom, custom filter. It’s a little complex in terms of how you do it, but first, I’m going to select that custom filter check box to make sure that I’m using a filter. We’re going to read the description down here. It says, “A comma-separated list of lucene filter queries to apply by default.”

In English, what that means is that Lucene is the very low-level search engine library that Solr is built on, and it has a syntax that allows you to filter by specific things and do some pretty interesting stuff. The very basic part of Lucene syntax is that if you want to filter by a field, it will be the field name, then a colon, then the value. We have this use case actually down here in the comments. We see here bundle:blog. Bundle is the actual name of the Solr field, and blog is the value that's actually stored in the index.

If you want to see all the fields that are stored in Solr, you can actually click Reports and click on Apache Solr Search Index. These are all the different field names that you have at your disposal. It doesn't show you the values, but in our case, we know that the bundle field indexes the machine-readable name that we specified when we created the content type. If I go to Structure, Content types, we see here all the different machine names; blog is just the machine name.

Again, I’m going to match the Solr field to this machine name. I’m going to say bundle is the name of my Solr field, and then blog is the value that we want to filter by. I’m going to save this page. Now, I have a search page that’s dedicated just to blogs. I’m going to click on this. If I say DrupalCon, now we see that it only gives me two results because it’s only filtering by the blogs, not filtering by any of the events.
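
If you prefer code over the UI, the same restriction can be applied programmatically. Here is a minimal sketch using the Apache Solr Search Integration module's query alter hook; the module prefix blogsearch is hypothetical.

```php
<?php
// Hypothetical sketch: apply the same lucene-style filter, bundle:blog, in
// code. addFilter() adds an fq (filter query) to the outgoing Solr request.
function blogsearch_apachesolr_query_alter(DrupalSolrQueryInterface $query) {
  // Equivalent to the "bundle:blog" custom filter entered in the page UI.
  $query->addFilter('bundle', 'blog');
}
```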

Sometimes, it is nice to have these targeted searches. For example, if you do have a blog section of your site, it is very nice so that you don’t have to actually set up a separate site for your blog. You can have your blog be a micro-site that is under the same Drupal installation but just has different configurations isolating that content so users can find what they are looking for.

I want to stop there and see if there are any questions on the search pages. No? We’re good? Okay. I’m actually, just to reduce the noise here, I’m going to disable … is there a question?

Speaker 1: Yes. Is there an autocomplete module?

Chris Pliakas: Yes, there is. Let's see if I can find it. Yes. The module is aptly named Apache Solr Autocomplete; the project name is apachesolr_autocomplete. This will provide the type of autocomplete functionality that people are used to.

Now, it is important to note, and this is one of the trends that we're going to talk about, that this actually pulls off your index and does keyword matching. But as you have larger sites and more data, keyword matching isn't necessarily the best option to guide people towards relevant results. There is a trend towards matching on search statistics as well, so that you can actually autocomplete based on what people are searching for, as opposed to just the keywords, which theoretically will guide them towards more relevant results. As I talk about the Apache Solr statistics work that we're doing, we'll relate that back to autocomplete.

Speaker 1: We have a few more questions.

Chris Pliakas: Okay.

Speaker 1: Can that custom search be put in a block?

Chris Pliakas: Can this search be put in a block? Yes, I believe it can. Let me just search for a module. I believe there is a module that does this. I want to see if this is what it does. I might have to get back to you on that one. I believe there is actually a module that does allow you to expose your searches in a block, but I’m not 100 percent sure on that, and so I’ll take that as an action item and post that answer after the webinar is over.

Speaker 1: Okay. Also about the statistics stuff, is that available now?

Chris Pliakas: Yes. There is an Apache Solr Statistics module that does some very basic stuff, but it’s more geared towards administrators. It does things like the keywords, but it does so more or less how many times a search page is viewed, which isn’t really that useful to site builders. But there is a new extension to that module, a new branch, I guess I should say, that is available on the community. I’ll show you where it exists and I’ll give you a bit of timeline about when that is going to get merged back in, but that’s more geared towards site builders and talks about how people are actually using your search.

Speaker 1: We have a few more questions.

Chris Pliakas: All right. I’ll take it …

Speaker 1: Does all of this work with non-Drupal content, if some other system populates parts of the Solr index?

Chris Pliakas: The answer to that is yes. The trick is getting that data into the index. There is some example code, which we'll point to with links after the webinar, that allows for more easily getting non-Drupal content in. But once you get the content in, you can display facets and that sort of thing.

The display of the search result doesn’t really bias towards what type of content it is. Again, it’s more or less just getting that content into Apache Solr in a way that Drupal can recognize.

Speaker 1: We have one more. Where is the extension to have autocomplete?

Chris Pliakas: Again, we'll do it via Google. If you search "Drupal Apache Solr Autocomplete," I'm going to venture that it is one of the first results. It's on drupal.org. The URL is drupal.org/project/apachesolr_autocomplete, all one word with an underscore. It's pretty easy to find on drupal.org, and it's available on that project page.

Okay. I’m just going to clear cache just to make sure that our stuff is gone. I’m actually going to go back to Google here.

If we look at Google, we see that the search results are displayed in a format that's pretty familiar to us. Now let's go to Bing and search for Drupal.

Pretty interesting: you'll actually see that the search results are very similar. You have the title. You have the URL. You have the snippet, and you have some additional information about it. Third, go to Yahoo! and search for Drupal, and we'll see that, again, different results are returned, because they have different algorithms that determine relevancy, but the display is very, very similar. The reason is that there was actually a lot of standardization done in 2011 by Google, Bing, and Yahoo!: something called schema.org.

Let’s go back to Google, and we’ll look at the search results. Let’s go to our blog. We see some interesting things here. We see that when we search for our schema.org blog, you scroll down, we see one of these results has an image. This is actually a great way to talk about schema.org in that it provides some structure around your data.

When we build content types and manage fields inside of Drupal, we're actually just configuring the data model, the underlying buckets that we put data into, and it doesn't really have any meaning beyond what we name it. Google doesn't understand, when you create a blog content type, that it's actually blog content; it's blog in name only. Or when you create an event content type, it's an event in name only. Drupal gives you a leg up in that you don't have to build your database by hand, you can do it through the UI, but that's as far as it goes. I'll actually go back to Drupal here, click Structure, click Content types.

You see here that I have events. If I manage my fields and I added some extra data here, the date, the event date, an address, an image, if I wanted to add another field, what you do is you create your label and then you select the type of field that you want. We see we have date, file, we have text. This is all real basic stuff that again is just really low level and doesn’t actually expose what type of content that is.

Schema.org is the layer that sits on top of that and says: okay, this text field is actually an address, or this image is the primary image of this piece of content, or this event date is the actual start date of an event. It goes up a level as well: you can say that this content type, event, is actually an Event, so that it can be recognized by a standard that's out there and agreed upon by the major search engines.
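
Under the hood, this kind of mapping rides on Drupal 7's RDF system. A hypothetical hook_rdf_mapping() gives a feel for what gets recorded when you tag an event content type this way; the module prefix and field name are assumptions, not the schemaorg module's literal internals.

```php
<?php
// Hypothetical sketch of a Drupal 7 RDF mapping: the content type is declared
// a schema.org Event, and its date field becomes the event's startDate.
function mymodule_rdf_mapping() {
  return array(
    array(
      'type'    => 'node',
      'bundle'  => 'event',
      'mapping' => array(
        'rdftype' => array('schema:Event'),
        'field_event_date' => array(
          'predicates' => array('schema:startDate'),
        ),
      ),
    ),
  );
}
```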

This helps your Drupal site in two ways: when Google and Bing and the others index your site, they will actually read this metadata, and there is also some work being done so that it can modify the display of your internal search, so that users are presented with a familiar experience.

That’s probably the thing that people will recognize the most, but the module that I want to share with you is called the Rich Snippets module. We’ll actually just install it and see what it gives us out of the box. Again, Rich Snippets, rich_snippets. There is another module that’s similarly named, but it’s important to understand that this one is geared towards your internal site search.

This takes that schema.org metadata and actually will format your results accordingly. I’m just going to install this module and see what it gives us, and then we can break it down a little bit.

Again, I’m going to go to Modules. I’m going to go to Rich Snippets, enable this. I’m going to bring up a page here so that we can see what it looks like before. Again, very blah. Now, when I enable the Rich Snippets module, we go back to my search page. I’m going to refresh the page. Now you see that it displays the results very, very differently.

The goal of this module is to work fairly well out of the box. With Solr, you might have to re-index your content. But as you can see, now the results are displayed in a way that's much more friendly and much more in line with what users expect.

As a nice UI tip, this module is going to emerge as something that’s going to be a staple on sites with search. As you can see, for DrupalCon Portland, DrupalCon Munich, it displays a little image, and it also displays the start date.

Now, for the blogs, it displays who that blog is by and when it was posted. As you can see, based on the context or based on the schema that we’ve assigned to it, the search results are displayed very differently. This is really important when we’re displaying site-wide searches. There are tools in Drupal, such as Views, which people are starting to explore to build their search pages on, but that’s not really geared towards heterogeneous content.

When you have a mix of contents, then it’s really important that you’re able to display that effectively inside your search page. Whereas views, it gets really, really tricky to say, “Okay, for this content type, display it this way. For this content type or this schema, display it another way.”

That’s the first thing that the Rich Snippets module will give us, is a nice display. Now we’ll talk about how to actually say, okay, this is a date, this is the start date, that sort of thing.

There is a module called schema.org. It’s just schemaorg, one word. It’s a very simple module that doesn’t require a lot of configuration, but you effectively download it, install it, and it allows you to effectively tag your fields and your content with the type of schema that denotes what that content actually is.

If you download and install this module, what it does is pretty simple. If I go to my structure, go to my content types, edit my content type, it gives us this new vertical tab that says schema.org settings, and this allows us to actually specify what type of content this is.

If I said, okay, this is a blog, I could start typing, and it would give me the options that are available. All the options are on the schema.org website, and I'm not going to go over them in detail because there are a lot of them. Just to give you an idea of how many there are: you start off with your basic top-level types, like an event or an organization, that sort of thing, and then each of these has various properties that say, okay, for this event, this is the end date, these are the attendees, so there's a lot of structured information there.

Each one has a lot. Let's see if I can get the documentation here. Okay, that's not what I want to show. The full list. Again, this highlights why this is a great tool for this type of search result display, because as we scroll down, this is the nested hierarchy of schema.org types and properties, so you can see there are a ton of them. The module right now supports a subset of them, but it's going to support more.

As we’re building our content, it’s really important that you use this module and explain what your fields are. When I actually create a field, if I click Edit and I scroll down to the edit settings, you see here that I also have schema.org mapping so I can say the property. I could say this is the start date. Then what the Rich Snippets module will do is based on your schema and properties, it will display your content differently.

Because this is the start date, if I go back to my DrupalCon search, it knows to display the actual start date up here, because with this result being an event, that's probably what people are going to be interested in. It gives them some context about the content that's being returned, so they can see what's going on without necessarily having to click on the piece of content itself.

I’m going to stop there and take a couple of questions for two minutes, and then we’re going to move on to statistics and then stop for general questions.

Speaker 1: Okay. We have two questions. Can the custom Solr search results page be used in panels? This might be from the last section.

Chris Pliakas: Yes, I believe it can be. The reason why I say that is because the Acquia Commons distribution is making heavy use of panels and is using Apache Solr for its search engine. I say with confidence that yes, it can be used with panels.

Speaker 1: There is one more. Where is the extension to add to Apache Solr autocomplete which allows for statistics to be involved and not just keywords?

Chris Pliakas: That's one thing that's not available just yet, but it's on the road map for the statistics module that we're going to demo next. This is one of those items where I wanted to make people aware of the different trends that are emerging. This is one case where it hasn't been implemented yet, but it's going to be. As you look at your search solutions over the next three to six months, look for this as an option.

Speaker 1: We have one more question. Why do we need Acquia Search when everything seems doable from Drupal Search?

Chris Pliakas: Yes, and that's a great question. The first thing is that Drupal Search won't scale. Drupal is built on relational database technology, and relational databases simply won't scale for full-text searching. They're really geared towards queries like "find me all blogs" or "find me all users." But when you start to enter free-form keywords into the mix, it will take your entire site down pretty quickly because it bogs the database down.

Regarding Acquia Search, you can run Solr locally, and we've contributed a lot of these add-ons back to the community. However, the value that Acquia adds right now is that we have Solr configured in a highly available cluster with master/slave replication, so if one server goes down, end users can continue to search. We also integrate the tools that allow for file attachment indexing, and we've applied a security mechanism on top of Solr.

Solr actually doesn't have security out of the box, so you can do a Google search and find a lot of Solr instances that are unprotected. You could delete the index. You could add content to the index. We've added a security layer on top of Solr that allows you to connect securely and makes sure that you, and only you, have access to your server. Also, we manage it 24x7.

One of the things I do want to talk about, as we get into statistics and contextual computing, is that there are things we're experimenting with in Acquia Search that will adjust relevancy based on user actions. This will be a set of tools that integrates with Drupal and with various other tools to provide more relevant results to your users beyond just keywords. There is going to be a lot of value and a lot of focus on contextual computing in Acquia Search that's really going to differentiate it not only from core search but from running Solr locally.

Speaker 1: There’s a few more, but we can get to them at the end.

Chris Pliakas: Yes, sure. What I’m going to do is just wrap up really quickly with the statistics. There is one point that I want to hit home, and I’ll try to stop by 1:55 to save some time for some questions afterwards.

There is an Apache Solr Statistics module. Let me clear out some of these tabs here. I think that’s it. Or maybe it’s Apache Solr Stats. It’s probably Apache Solr Stats. There we go.

There is an Apache Solr Statistics module that you can download, and it works for Drupal 6 and Drupal 7. It gives you some information in terms of how many requests there are and what kinds of things people are searching for, but it's geared more towards site administrators than search page builders. The reason I say that is because if I go to my search page and search for DrupalCon, it's going to count DrupalCon as the keyword being searched.

If I click on events, the page reloads and actually queries Solr again, so that statistics module is going to say, okay, DrupalCon was searched again. What that really captures is how much people have to click around to find what they're looking for; it's not necessarily indicative of what people are actually looking for on the site.

One of the branches being worked on, actually a sandbox project right now, will be merged back into the Apache Solr Statistics module by Q1 of next year … I can't find it here … there is a sandbox that's a fork of Apache Solr Statistics that's used to experiment with this stuff, and that's what I'm going to show you today. The important thing is that it's geared more towards search page builders, and it also tracks what people do after they search for something. It allows you to track what we call click-throughs.

If somebody searches for DrupalCon, we can see what pieces of content people are actually selecting, so we can make informed decisions about how to configure our search and how to modify the relevancy.

What I'm going to do is click on Modules, go to the search toolkit, and enable Apache Solr Statistics. When I click on Apache Solr Search, I now have a new tab that says Statistics.

What I want to do is enable the query log, which captures information about what searches are being executed. I also want to enable something called the event log. In order to enable that, you have to copy a file from the module into your Drupal root so that it can capture the information as users click on results.

We're also going to capture user data. By default, that's off, but you can capture not only what people are searching for but also who is searching for it. Depending on your privacy policy, you can enable or disable that setting.

There are also settings for the log retention policy and the logging backend. By default, everything is logged to the database, but for busier sites, again, there is going to be the ability to send that to different destinations.

I'm going to save the configuration, and I'm going to execute another search. If I search for DrupalCon and then click on DrupalCon Portland, and I go to Reports and then Apache Solr Statistics, it gives me some interesting things. It gives me the top keywords, so it shows me what people are actually searching for. Equally important, it gives me the top keywords with no results, so you can see what people are searching for and not finding any results for.

If people don't find the content they're looking for, they're going to leave your site, so this is a really important metric. There are also top keywords with no click-throughs: if people are searching for things and getting results but not clicking on anything, then there are probably some modifications to make to ensure the correct results are being displayed.

Here, we see the top keywords. We also have click-through reports. If I click on that, it shows me the pieces of content people are selecting, along with the count for each. As you start to gain more traffic on your site, this gives you some transparency in terms of what people are doing on your search page and, more importantly, what they're doing after.

As we talked about with contextual computing, it's really important that you monitor what people are looking for, and this is a great way to do it. Again, it's what people are looking for on your site and what they're selecting, what they find relevant. The search page is a great tool to help you modify your experience and tailor it to your users.

We have a couple of slides to wrap up, but that's really what I wanted to highlight: contextual computing is the trend, and there are tools you can employ now that are going to be improved upon in the future to make sure that Drupal is the best search solution available for serving relevant content to your users. Search is really becoming a big data problem, and search is also becoming a solution to that problem.

Big data is capturing a lot of information and then making sense of it, doing something with it. As your sites begin to amass a lot of data, search is a great tool to help your users sift through that data and find the relevant content that they’re looking for, and that’s really where the trend of computing is going over the next five years, so definitely pay attention to search as a tool to help make sure your site is keeping up with the latest trends and desires of your end users as they look for engaging experiences.

I went over, but we'll take some more questions.

Speaker 1: Okay, great. Would you recommend using these modules on a Drupal 6 site using domain access?

Chris Pliakas: Domain access is a little bit tricky, especially with search. Let's take a step back. The way domain access works is that it builds on the Drupal node access system, and that adds some challenges for search. Not only does the search solution have to be domain-access-aware, but everything around your site has to be domain-access-aware.

Theoretically, you can use these modules on a Drupal 6 site with domain access. It's just that it gets a little tricky because your index is logically separated rather than physically separated, so there is always the chance of your content either lagging behind in picking up that access information or accidentally being exposed to other sites when it shouldn't be. It can be done, but it takes a lot of thought and careful planning to make sure it's implemented properly.

Speaker 1: The next question is, does the schema.org module also expose the extra info to search engine spiders?

Chris Pliakas: Yes, it does. That's actually what the module is geared towards: the external use case. It provides that metadata so that Google will pick up the images and the additional information. What the Rich Snippets module does is take that information and use it inwardly. By default, the schema.org module is geared more outwards, but the work being done right now takes that same markup and also applies it to your site search, so it's a win-win.
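To make that concrete, here is a rough, hand-written illustration of the kind of RDFa-annotated markup Drupal 7 can emit once a content type is mapped to schema:Event. The exact attributes, wrappers, and classes depend on your theme and mappings, so treat this as a sketch rather than literal module output.

```html
<!-- Illustrative only: actual markup varies with theme and mappings. -->
<article about="/content/drupalcon-portland" typeof="schema:Event">
  <h2 property="schema:name">DrupalCon Portland</h2>
  <span property="schema:startDate" content="2013-05-20T09:00:00-07:00">
    May 20, 2013
  </span>
</article>
```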

Speaker 1: The next question is, what if the non-Drupal content consists of dynamic pages? How do you import that content? If not, is there a federated search solution?

Chris Pliakas: I think it's important to first say that a federated search solution might not be exactly what's being asked for. When we think of a federated search solution, we think of things like Kayak or other engines that query out to different data sources and compile the results together.

There are tools in Drupal that allow you to query different sources simultaneously. However, that's probably not what you're looking for. You're probably looking for a unified search solution that displays results instantly.

In order to do that, you can leverage crawlers such as Nutch, which integrates with Solr; a rough sketch follows. The key is getting that data into a format that Drupal can recognize, and the trick is using those tools to crawl your external data and expose it to Drupal.
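As a rough sketch of the crawler approach: Nutch 1.x of this era shipped an all-in-one crawl command that could post crawled pages straight into a Solr index. The exact flags vary by version, so check the documentation for the release you're running; the URL and depth values below are placeholders.

```
# Hypothetical invocation: crawl the seed URLs in the urls/ directory
# and index the fetched pages into a local Solr instance.
bin/nutch crawl urls -dir crawl -depth 3 -topN 50 \
  -solr http://localhost:8983/solr/
```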

There are also ways you can programmatically connect to a third-party data store and index it into Solr using the Drupal APIs. But again, that's more of a developer task and something that has to be coded.
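As a minimal sketch of that developer approach, assuming the Drupal 7 apachesolr module: you can build an ApacheSolrDocument for an external record and send it to the default Solr environment. The $record array and the "external/" ID prefix here are hypothetical; a real integration would also need to handle updates, deletions, and how these documents appear in results.

```php
<?php
// A minimal sketch, assuming the Drupal 7 apachesolr module is enabled.
// $record is a hypothetical item pulled from a third-party data store.
$record = array(
  'id' => 42,
  'title' => 'External press release',
  'body' => 'Full text of the external document...',
);

// Build a Solr document. The id must be unique across the index;
// apachesolr normally namespaces IDs per site and entity type.
$document = new ApacheSolrDocument();
$document->id = 'external/' . $record['id'];
$document->label = $record['title'];
$document->content = $record['body'];

// Send it to the default Solr environment and commit.
$solr = apachesolr_get_solr();
$solr->addDocuments(array($document));
$solr->commit();
```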

But with Acquia Search, definitely look for an offering sooner rather than later to index external content and bring it into your Acquia Search index.

Speaker 1: All right. We’ll take one more. How can you make information more important based on the statistics? What ways to set this up are available?

Chris Pliakas: Can you repeat that question one more time? Sorry.

Speaker 1: How can you make information more important based on the statistics? What ways to set this up are available?

Chris Pliakas: Sure. I'll give one example from Acquia.com. We have an offering called Dev Desktop, which is a local stack installer for Drupal. A long time ago, it used to be called DAMP: Drupal, Apache, MySQL, PHP. What we've noticed, based on our statistics, is that people still search for DAMP more than they do for Dev Desktop. We noticed that trend, and the way we modified our search results was to take advantage of some of the capabilities Apache Solr has: we added a synonym mapping DAMP to Dev Desktop, so that when people search for DAMP, they actually get the content that's relevant to Dev Desktop, which is what the product is called now. A sketch of the mechanism follows.
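As a rough illustration of that mechanism, stock Solr of this era supports a synonyms.txt file wired into the analysis chain through a SynonymFilterFactory in schema.xml. The entries below are a sketch, not Acquia's actual configuration, and note that multi-word synonyms can behave differently at query time, so check the Solr documentation for your version.

```
# synonyms.txt -- map the legacy product name to the current one.
DAMP => Dev Desktop
```

```xml
<!-- In the relevant field type's analyzer in schema.xml. -->
<filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
        ignoreCase="true"/>
```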

This is what Google does, and it's why Google's results are so relevant. They have hundreds of full-time engineers analyzing their search and doing things like saying, "Okay, if you search for a FedEx tracking number, we're going to show you the FedEx webpage." Now it's automated, but it started with analyzing the statistics, and those are the types of techniques you can employ on your site based on what your users are actually trying to do.

Speaker 1: Okay, great. I think we’re going to have to end it here. Thank you, Chris, for the great presentation, and thank you everyone for participating and asking all these wonderful questions. Again, the slides and recording of the webinar will be posted to the Acquia.com website in the next 48 hours. Thank you.
