
Webinar

Automated Tool Provides Instant Insight into Drupal Performance, Security [January 22, 2013]

Community Box 2.0, Multilingual Communities with Commons [January 10, 2013]

Ensuring Success When Migrating Your Content to Drupal [January 8, 2013]

Integrating a CDN with Acquia Cloud [December 20, 2012]

Video transcript

Moderator: Today’s webinar is Integrating a CDN with Acquia Cloud. Our Acquia presenters are Kieran Lal, Wim Leers and Alex Jarvis. We also have Christopher Meyfarth from Akamai; we’re very excited to have him on.

Kieran Lal: So a few months ago, actually starting back in June of 2012, I started really exploring how we could go beyond what Acquia Cloud had in terms of performance, and the most obvious answer, once we had tuned the stack, got the reverse proxy right, tuned the PHP stack, tuned the database and got everything together, was to start looking at some of the really great web performance optimization tools that were available, and that got me thinking about using a CDN. So I started talking to a lot of people at Acquia and collecting more and more information, looking at all the different CDNs that had been integrated with Acquia Cloud, and started asking people about the different scenarios and use cases that came together. I had this on the shelf for a while, and then we were lucky enough to see that Wim Leers, who I thought was going to work somewhere else, actually ended up joining the Office of the CTO. That was really exciting, and I think within a few days of him joining, I reached out and told him that I wanted to do a webinar at some point in the future on CDNs. Wim I’ve known for many years, and we hung out together when I was over for a conference in Hungary, so I knew about his interest in CDNs and, more recently, some of the work that he’d been doing through his master’s and/or an internship that he had done at Facebook. So it was really great to have what I consider to be one of the leading experts on CDN integration with Drupal here at Acquia, and we worked together and put together some outlines. I showed him my notes and we planned to do this, and as we started getting close to the webinar, I was able to reach out to some new partners at Akamai. So I’m really excited to have Christopher Meyfarth from Akamai join us; he’s one of the sales engineers, but a real expert in Akamai, and so we’re able to really focus on what is the premier CDN in the space.

Then more recently, I was at BADCamp and bumped into Alex Jarvis, who’s one of our senior technical account managers. He was talking to me about some work that he was doing, and as we were going back and forth I brought up the idea of Akamai, and he started telling me about all the expertise and some of the notes and training that he’d been doing internally. So I was really excited to bring together what I think is one of the strongest teams of technical contributors that we’ve had in our webinar series to show off the advantages of using a CDN with Acquia Cloud, and in particular using Akamai. So we’ll see three different perspectives, but they’ll keep it pretty thorough and we’ll have a lot of great content. I’m really looking forward to some excellent Q&A, and I know a lot of the staff at Acquia are really excited to learn about this because it’s a tier of knowledge that they really want; almost 50 people on our support team are keen to learn about it so that they can use Akamai and use Chris’ knowledge of CDNs to improve the performance of their sites. So without further delay, I’m going to hand it over to Christopher, who will go first, and then we’ll have Wim talk about CDNs and some specific expertise that he’s got, and then Alex will finish off and talk about specific configurations of Drupal with Akamai, and then we’ll try to pause. If we see questions that are relevant or particularly topical while our speakers are talking, we’ll take the time to answer the questions in context, but for most of them, if they’re generic enough questions, we’ll hold them until the end where possible and then try to leave enough time to do a Q&A. So Christopher, without further delay, I’m going to hand the slides over to you.

Christopher Meyfarth: Great. Thank you, Kieran. I’ll just make sure we’ve got the presenter view here. So as Kieran mentioned, I’m Chris Meyfarth. I’m a lead senior sales engineer working with our channel program at Akamai. I’m going to talk a little bit today about the inherent problems of the internet. We’ll talk a little bit about the Akamai technology that we’ve created to help combat those inherent problems, and we’re going to look at some test results from tests we ran last week on Acquia.com. So without further ado, I’ll jump right into it and talk a little bit about the inherent problems of delivering traffic over the public internet. I think it’s important to understand why you need a CDN. Acquia is spending a lot of time honing Drupal and putting a lot of time into the data centers, and regardless of whether it’s Drupal or an application that you have in-house, you don’t want to spend all that time on the application and all that time on the data center and then just cross your fingers as that information goes out across the wilds of the public internet. When you look at the public internet, it’s a network of networks, and there are three major protocols that control it: BGP, TCP and HTTP. The issue here is, first and foremost, BGP, which controls the route; it’s based on the number of steps that you take, not on performance. TCP of course is an antiquated protocol with a slow start, and then you have HTTP and HTTPS, which is just the sheer volume of information that we’re sending across the line. A great example is commerce websites: at Akamai, the average commerce customer has about a two-meg footprint for their commerce website. I can remember 10 years ago when we were willing to wait 10 or 15 minutes to download a two-meg MP3, and now, according to Forrester and Gartner, we have the two-second rule. We’re looking to download a site in two seconds or else we start seeing exponential abandonment. We’re doing in two seconds today what we were willing to wait 10 or 15 minutes to do 10 years ago. So when you step back and look at the public internet, you really have a compounding inefficiency: you have a ton of data going over a pipe that has a slow start, going over a route that isn’t based on performance but simply on the number of steps that it needs to take.

So I’m going to talk about some of the newer Akamai technologies. I’m going to leave a lot of the caching and some of the traditional CDN stuff to Wim and to Alex, but to combat this compounding inefficiency, Akamai has developed some technology called SureRoute. SureRoute leverages our 125,000 servers in 2,000 different locations. They basically all talk to each other and build a virtual weather map of internet traffic conditions, so any time an end user connects to an Akamai server, we query that weather map in real time and pull down the three fastest routes given his exact location and where he’s going. We’ll actually take it a step further: we’ll send a little piece of information across each one of these routes. We’ll run a race, if you will, and the one that comes in first, we’ll start passing traffic across that route. Now, we’ll continue to run those races, so if at any time internet traffic conditions change, say people are coming back from lunch or there’s congestion on different lines, we detect if that’s no longer the fastest route, fail over to the second fastest route and then go back to that weather map to get three new routes.

Now, it’s an extreme example, but to see SureRoute in action, this is a cable cut that happened. As I said, it’s an extreme example of internet traffic patterns shifting very quickly. What you see in red is the public internet; it took the public internet about two weeks to recover, and it took Akamai about 20 minutes. So SureRoute is just one technology that we have; of course Akamai is known for doing caching. We continue to push a lot of that caching logic out to the edge of the internet so that we can make caching decisions based on languages or cookies, and we can make those as far away from the application origin as possible. In addition, the last technology feature we’ll talk about is a feature called prefetch. Prefetch is a really interesting technology because as the end user requests an HTML file and it is being returned, our Akamai edge server will hold on to that HTML file and start to parse it. We will send out requests and refresh our cache: if anything has expired, we will refresh those cached objects or request them from the origin before the end user has even received that HTML file, so by the time that end user has parsed the HTML file and starts requesting those objects, they’re located on the edge server 10 milliseconds away.

So, a quick overview of some of the newer Akamai technologies in action: we did a test last week on Acquia.com. We chose six steps, just generic web pages: the homepage, the Support and Cloud Services page, Solutions for Marketers, the Network Trial page, the Careers page and a PDF. The way we conduct these trials is we’ll actually build a fully functional Akamai configuration and send it out over the public internet, and then we spin up either Gomez or Keynote, one of the major leading performance testing suites, and we’ll have them hit both the Akamai configuration as well as Acquia.com as it was being accessed on the public internet. So what we have here is the comparison results. This was over the course of about three days from many cities all over the world, and we have the public internet results and then the Akamai results. What’s interesting about this, and I’ll let everyone consume the data points on their own, is that step number five really stands out. If you look at the average improvement, step five is nowhere close to everyone else, so I wanted to highlight step five and dig in a little bit deeper. Step five is the Careers page, and this was a very heavy page that was pulling information from a few different queries. One of the major advantages of using Akamai as a CDN is that you can build rules into Akamai to handle the caching. So what we did with step five was say, “Well, let’s rerun these tests, and rather than just cache the images and do the SureRoute acceleration, let’s take a snapshot of that page. Let’s cache the full HTML of that page, not for very long, maybe just for a few hours. We want the page to stay dynamic, but in this case we would much rather have a highly performing page that we could clear out from cache on request. Akamai has the ability to interact with an API to purge things from cache, so let’s cache that full page.” These are the results. You can see that the public internet was about five and a half seconds. We turned on Akamai and it’s about four and a half, but then when we started doing that full page caching, it dropped down to two. Those results will only get better the more people you have requesting that page: it will stay in cache longer and stay more current.

So before I hand things over to Wim, just to summarize quickly for Akamai, the trial we did for Acquia.com is very easy to do. We do it for many customers, so if you want to get set up with the CDN, we strongly advise you to talk to an Acquia rep. We can do one of those trials, we can set up the steps, build that configuration, send it out over the public internet and in just a couple of weeks we can get some real world results for you. So at that point, if there are any questions, we can open it up quickly or we can pass it over to Wim.

Moderator: There’re no questions at this point, so I’m going to pass it over to Wim.

Christopher Meyfarth: Great. Thanks guys!

Wim Leers: Hi everyone, my name is Wim. As Kieran explained, I’m a senior software engineer with Acquia and I’ll be talking a bit about Drupal plus CDN, but before I really begin, I just want to make sure that everybody actually knows what a CDN is. If you’re not quite sure yet what that is, please raise your hand right now. [Pause] Okay, I do see that at least one or a few people are raising their hand, so a CDN is essentially a short name for content delivery network. In a nutshell, what it does is move content closer to the end user, meaning that to get the data for a website, for example, there will be less physical distance to cover, meaning that the data will be faster at the end user’s computer. So essentially, move the data closer to the end user so that everything on the web becomes faster. I hope that is clear; if not, I recommend the explanation on Wikipedia. So let’s dive into the real stuff.

First of all, you should know about page loading performance. I’ll keep this also very short. I think most of you have already noticed, but in any case, when you load a webpage, then only about 10% of the time is actually spent on loading the HTML, the webpage itself, the textual content of the webpage, and about 90% is actually spent on loading the resources referenced by the HTML. These are the JavaScript files of course, but also images, fonts, videos and so on. So when we’re trying to make websites faster, it’s really important that we keep in mind that the biggest gains are to be gained in the resources department, not on the HTML side, so improving websites in the sense of making the rendering work faster, that’s nice, but it will never have the same kind of impact that you can achieve by making sure that the resources are loaded and served fast to the end user. This is what we’ll keep in mind and this actually brings us to the next aspect of using CDN.

You can actually use a CDN in two very different ways. The first is using a CDN for serving resources, static files, and the second is using a CDN for everything, including the HTML. Serving resources should always be the first step since this is the thing that affects 90% of the page load time. It’s easy to do, especially with origin-pull CDNs – more about that later – but in any case, the most important aspect is that this will give you the most benefit with the least amount of effort. You can also go for serving everything from a CDN; this is an optional second step. It makes sense that this is a second step because the first part you can do without many changes to your application logic and it’s very easy to do, so it makes sense to always do the resources [Audio Gap], but if you go for the full approach, then you will affect 100% of your page load time since everything is being served from the CDN, minus of course third party scripts. The downside is that you will have to carefully plan the integration; it takes quite a bit of effort, especially if you have a website where you are serving authenticated users a lot of content. If you have, for example, a news website where the typical user is not logged in, then you can serve the exact same content to millions of users, but imagine the case where you have a news website where each user can set his individual preferences for topics of interest. Then things become a lot more complex because the same content cannot be served to every single user, meaning that you have to do far more complex page building to achieve the same performance. To serve this kind of authenticated-user, and thus personalized, content through a CDN, what is typically used is Edge Side Includes, and Akamai is, I think, the company that pioneered this.

More about that in just a second, but the one thing that you should also know is that in both scenarios there will be fewer requests to the app server, exactly because static files are no longer served by your app server, your web server. That actually means that there is less load on your web servers. If you have many of those web servers, then it’s possible that you can even retire some of them, meaning that overall your costs for your app servers should go down. So CDNs may cost some money, but they may also save some, to some degree at least. I just mentioned Edge Side Includes, which is really important if you want to serve the HTML of your website from a CDN while still having the ability to serve personalized content to every user. In Drupal 5, 6 and 7, it has been pretty difficult, or rather complex and not well supported out of the box, to do Edge Side Includes, but I’m happy to report that in Drupal 8, it will become much easier because of the Blocks and Layouts Everywhere Initiative, codenamed SCOTCH. The keyword that is really important here is Blocks Everywhere. Blocks you can regard as the atoms in the Drupal universe, as it were, the atoms of a Drupal webpage. In Drupal 5, 6 and 7, many things are blocks, but not everything is, so if we make everything a block, every single compartmentalized piece of content, then we can do much more interesting things.

In Drupal 8, we are adopting Symfony, which is another open source project, so we’re sharing code and sharing the effort there. We’re adopting the HttpKernel component from Symfony. What we’ll be doing is making sure that each part of the page, each block, each atom can be requested individually, and when you, for example, ask a Drupal 8 website to render the front page, what it will do is use in-process subrequests to build the page. So it will actually fire subrequests within the PHP process to render each of those blocks individually. As you can tell, this very closely resembles ESI and is actually pretty much the exact same thing. Drupal 8 will make it much easier, if not trivial, to support Edge Side Includes. So ESI plus CDN will become trivial in Drupal 8, but please remember that not every CDN supports Edge Side Includes.
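For illustration, a minimal sketch of what such an in-process subrequest looks like with Symfony’s HttpKernel component; the block path here is hypothetical, since the exact Drupal 8 routes were still being designed at the time:

```
<?php
// Minimal sketch of an in-process subrequest with Symfony's HttpKernel,
// the component Drupal 8 adopts. '/block/sidebar-login' is a hypothetical
// route for rendering a single block.
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpKernel\HttpKernelInterface;

function render_block_via_subrequest(HttpKernelInterface $kernel, Request $main_request) {
  // Build a request for just one "atom" of the page.
  $sub_request = Request::create('/block/sidebar-login', 'GET');
  // Reuse the main request's session so the block can be personalized.
  if ($main_request->hasSession()) {
    $sub_request->setSession($main_request->getSession());
  }
  // SUB_REQUEST tells the kernel this is not a full page load; the response
  // holds only the block's HTML, much like what an ESI fragment would return.
  $response = $kernel->handle($sub_request, HttpKernelInterface::SUB_REQUEST);
  return $response->getContent();
}
```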

So now we have an idea of the two big different ways of using a CDN, and I’ve clarified that one of them will become significantly easier in Drupal 8, so let’s move on to the key properties of a CDN. The single most important property, I believe, is the geographical spread, the points of presence. PoP stands for “point of presence.” A point of presence is essentially a location where the CDN has edge servers, as Chris pointed out already in his presentation. At a PoP, a CDN connects with local ISPs, so the more connections with local ISPs around the world or in different regions, the closer that CDN is to the end user. What you need to do is make sure that the latency between your [Audio Gap] and your content is as low as possible, globally speaking, meaning that for as many users as possible, you want to have the lowest latency possible because then web pages will be rendered faster. So if you put those things together, what really matters is that you choose your CDN to match your audience. If you’re a company that’s mostly oriented towards European users, for example, then you would probably choose a CDN that has a strong European presence, that has many PoPs in Europe. It could also be that you go with a global CDN provider, provided that they do have a lot of presence in Europe.

So as you can see, it really depends on your needs and your target audience which CDN is the best choice from a geographical, lowest-latency point of view. The second most important property is the way you get files onto the CDN. There are two types: origin-pull CDNs and push CDNs. For origin-pull CDNs, there is not really a special transfer protocol involved. Your web server obviously speaks HTTP, because otherwise it would not be a web server, so what origin-pull CDNs actually do when they receive a request for a certain file is go back to the origin server, which is your web server, and get the file over there. The upside is of course that there’s virtually no setup; you do very, very little and everything will work just fine. The downside, exactly because it works almost automatically, is that there is very little flexibility in terms of what kind of preprocessing or optimization you can do before getting the file onto the CDN. Another downside can be redundant traffic. For most websites, this will not be a problem, since re-requesting files from the origin is not really an issue when they’re so small, but imagine a case where you have multi-gigabyte video streams, for example. As Chris already explained, specific edge servers, the PoPs of a CDN, tend to clear out files that have been requested least often in a certain period of time, so it is possible that your video file has only been requested a few times in a given timeframe, meaning that it will be removed from the CDN temporarily until it’s requested again, but that means that your origin server – meaning your web server – will have to serve the file again to your CDN. Multiply that by the number of edge servers and you might be looking at significant traffic, but it really depends of course on your specific site and your specific goals.

That’s exactly where a push CDN can be advantageous. Exactly because it’s a push CDN, you have to push the files onto the CDN, so you control when a file is transferred from your origin server onto the CDN, and there is no redundant traffic since it’s exactly you who controls that. You also have more flexibility in terms of potential preprocessing, exactly because you push onto the CDN, but the big downside here is that there is a lot of setup involved. You have to write the script or code or some sync layer, whatever it is going to be; you have to somehow get the files onto the CDN and that might be a lot of work. I failed to add this on the slide, but this is exactly what I tried to solve with my bachelor’s thesis. It’s called FileConveyor.org and it’s a Python daemon that is intended to simplify the setup of a push CDN. Essentially, its job is to abstract away the differences between different CDNs, so it makes it easy to switch from one to the other or to do advanced preprocessing and whatnot, but in any case, this is another component that you have to deal with, which you don’t have to do with an origin-pull CDN.

That actually brings us to the last thing which is lock-in. Depending on the kind of CDN that you choose and its features, you may be looking at some or significant lock-in. For example, if you’re using Amazon S3 as a CDN, then you’re actually using a custom transfer protocol which means that if you want to switch to another CDN, you have to rewrite your sync layer, your script, whatever it is, so that is some degree of lock-in. Some CDNs offer unique features – for example, some kind of statistics – and you may not have the same in another CDN and so on, so you have to be careful there. You have to weigh the differences to figure out which things are most important to you.

All of this is about the differences between the different CDNs. So how do you select a CDN? Which CDNs should you choose? The first and foremost reason to go with a CDN is of course performance. That’s the whole reason we’re having this webinar. The most important thing is low latency. This is again about matching your target audience well to your CDN and making sure that the PoPs the CDN has correspond well with the geographical location of your visitors. However, this is not always the single most important thing. If you’re serving a lot of small files, for a typical website, it definitely is, but in the case of serving video streams for example or large software downloads or whatnot, what might matter more is high throughput - some call this bandwidth – meaning that it doesn’t really matter if you have to wait 0.1 or 0.2 seconds for your video to start streaming, but what you really want is that your video stream does not get interrupted. That’s the difference that you have to weigh there.

The other aspect based upon which you might want to choose a CDN is the type, origin-pull versus push, because of the inherent differences in how you get files onto the CDN and how you integrate with them. Also, advanced CDN-specific features such as, for example, automatic lossless image optimization, real time statistics, authentication in the sense that, for example, you have an ecommerce website and you sell digital goods, meaning that you don’t want everybody to be able to access the files, so then you want some kind of signed URL or [Audio Gap]. So it really depends on your use case. You have to make sure that the CDN offers the features that you really, really need.

Finally, of course, there are also the support aspects, since different CDNs come with different kinds of support, and the costs. Now, how can you maximally exploit a CDN for resources, meaning for static files? These are simple tricks that can have a significant performance impact, so it’s really recommended that you always use them. The first one is actually the one with probably the least performance impact, but it’s also the simplest one: DNS prefetching. It’s just adding a small tag to the HTML head of every webpage where you use a CDN, or even where you don’t use a CDN, and what you do there is reference the DNS name, the domain name, of your CDN so that the web browser can already look up the IP address for your CDN. This tag is at the top of the page, and only after it are the scripts, images and so on referenced, so by the time the browser gets to parsing the actual resources, the DNS lookup will already have happened and you don’t incur the DNS lookup wait time.
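For illustration, a minimal Drupal 7 sketch of adding that tag; the module name and CDN hostname are placeholders:

```
<?php
/**
 * Implements hook_init() (Drupal 7).
 *
 * Adds a DNS prefetch hint so the browser can resolve the CDN hostname while
 * it is still parsing the HTML. "cdn.example.com" is a placeholder.
 */
function mymodule_init() {
  $prefetch = array(
    '#tag' => 'link',
    '#attributes' => array(
      'rel' => 'dns-prefetch',
      'href' => '//cdn.example.com',
    ),
  );
  // Renders <link rel="dns-prefetch" href="//cdn.example.com" /> in <head>.
  drupal_add_html_head($prefetch, 'mymodule_cdn_dns_prefetch');
}
```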

The second trick is auto-balancing over multiple CDN domain names, also known as domain sharding. This is very useful in cases where you have a lot of resources on a single webpage. For example, say that you have an ecommerce web shop and you have 100 product images on a single webpage. The typical web browser will do between 6 and 10 simultaneous HTTP requests to a single domain name, meaning that only, say, about 10 downloads are happening at the same time. Only when one of those 10 is ready can the next one start, and so on. So as you can see, there is a lot of waiting going on for no good reason, and the way you can solve that is by having multiple CDN domain names, multiple host names, and balancing over them automatically. For example, say that you have four CDN domain names. You could automatically balance those 100 images over the four different domain names so that each has approximately the same amount, in this case 25, and then what you get is not 10 images being downloaded at the same time, but 40 images being downloaded at the same time. So in this theoretical case, this would accelerate the download of these 100 images fourfold, which is a very significant increase, but again, it’s really only useful when you have a lot of images, or in any case a lot of resources, on a single page.
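For illustration, a minimal sketch of that kind of balancing in PHP; the four hostnames are placeholders, and the key point is that the choice is deterministic per file, so each file always maps to the same hostname and stays cacheable:

```
<?php
/**
 * Pick one of several CDN hostnames for a file (domain sharding).
 *
 * The hostnames are placeholders. crc32() gives a cheap, stable hash of the
 * path, so a given image always ends up on the same hostname.
 */
function shard_cdn_url($file_path) {
  $shards = array(
    'cdn1.example.com',
    'cdn2.example.com',
    'cdn3.example.com',
    'cdn4.example.com',
  );
  $index = abs(crc32($file_path)) % count($shards);
  return '//' . $shards[$index] . '/' . ltrim($file_path, '/');
}

// Example: the same path always yields the same shard.
// print shard_cdn_url('sites/default/files/product-42.jpg');
```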

The last trick is actually the one with the most impact in my experience, also for very small websites, and it’s called Far Future expiration. It’s actually really simple, really logical. Essentially, browser caches are always faster than a CDN unless you have an extremely, extremely slow hard disk drive. If you need to get a file from a CDN, then you’re always going to incur some level of latency: you have to download the file, you have to save the file to disk and so on. Browser caches avoid that. If the file is in the browser cache, you don’t need to go to the CDN. How do you make sure that files remain in the browser cache for as long as possible? Well, you need to mark them to expire many years from now, and that’s why it’s called Far Future expiration. The one downside to this is that if you change, for example, your web shop logo, then if the file is cached in the browser cache it will not be downloaded again, exactly because it is still in the browser cache, so some subset of users is still seeing the old logo. The way you can solve that is by using unique file URLs. You need to make sure that each file, whenever its contents change, is served from a different, unique file URL, and that will of course cause the browser to retrieve the new file. One funny, or interesting, side aspect to this is that if you implement it, it will actually cause your CDN costs to go down, because there will be fewer requests to the CDN since the browser cache retains the file longer. So yes, these are free tricks that are really useful.
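As an illustrative sketch only (the CDN hostname is a placeholder), one common way to build such a unique file URL is to append a token derived from the file’s contents, paired with a far-future Expires or Cache-Control header set on the web server or CDN:

```
<?php
/**
 * Build a unique file URL so far-future expiration is safe.
 *
 * The query string changes whenever the file's contents change, so browsers
 * holding the old URL never show stale content. "cdn.example.com" is a
 * placeholder; the far-future Expires / Cache-Control header itself is set
 * on the web server or CDN.
 */
function unique_cdn_url($local_path, $cdn_host = 'cdn.example.com') {
  // A short content hash; the file's mtime would also work and is cheaper.
  $token = substr(md5_file($local_path), 0, 8);
  return '//' . $cdn_host . '/' . ltrim($local_path, '/') . '?v=' . $token;
}

// Example: logo.png?v=3f2a9c1d becomes logo.png?v=<new hash> after an update.
```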

Now that I’ve explained the different types of usage and general CDN information, let’s move on to the Drupal CDN module. I maintain this module, and if you have any questions, I look forward to seeing you in the question queue. As you can see, this is the default admin screen for the CDN module, and there is a message at the top that says “If you install the advanced help module, the CDN module will provide more and better help.” So if you’re using the CDN module for the first time and still finding your way around it, please install Advanced Help; it will give you examples and technical background information. In any case, there is very little to configure, as you will soon see. This is the General tab, and here you can either disable or enable the integration with the CDN, or you can enable the testing mode. In testing mode, you can play around with it without harming your actual users: you can give specific users access to the files on the CDN by giving them a specific permission that allows them to do so, so this is great for giving it a try.

The second tab is called “Details” and this is where the bulk of the work happens, the bulk of the configuration. Essentially, there are two modes: origin-pull CDNs, and File Conveyor for integrating with the project that I mentioned earlier for doing more advanced CDN integration. We are going to go with origin-pull; this is actually exactly how it’s configured on my personal website. All you have to do really is copy the CDN domain name that you get from your CDN provider, paste it in the CDN mapping field that you can see at the bottom of the current slide, and that’s it. Just hit save and you will have CDN integration. I really tried to make it as simple as possible; however, you can also enable Far Future expiration. The CDN module automatically applies all the aforementioned tricks for maximally exploiting a CDN. Far Future expiration obviously involves the unique file URL aspect, and the CDN module ships with several methods for generating unique file identifiers. You can even add your own.

The Drupal CDN module is great, I hope and believe, for simple use cases, or for many use cases, even more advanced ones; it also does domain sharding. But there are also times when you should not use it: when every millisecond matters. It’s really designed for ease of use and frontend performance, i.e., serving from a CDN in the easiest manner possible. It’s not designed to have the absolute lowest overhead possible. In that case, you should just write an implementation of “hook_file_url_alter”, which I added to Drupal 7, and that will allow you to easily do what you need to do without the overhead of an entire Drupal module. It’s also feasible or reasonable to use your own code when you have a very complex CDN mapping, but even there the CDN module has support, in the sense that it has a callback in which you can implement custom logic to determine which CDN should be used for which file.
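For illustration, a minimal Drupal 7 implementation of that hook might look roughly like this; the module name and CDN hostname are placeholders, and real code would typically also filter by file extension, as the CDN module does:

```
<?php
/**
 * Implements hook_file_url_alter() (Drupal 7).
 *
 * Serves public files from a CDN hostname instead of the origin.
 * "cdn.example.com" and the module name are placeholders.
 */
function mymodule_file_url_alter(&$uri) {
  // Only rewrite public files; leave private and external URLs alone.
  if (file_uri_scheme($uri) == 'public') {
    $wrapper = file_stream_wrapper_get_instance_by_scheme('public');
    $path = $wrapper->getDirectoryPath() . '/' . file_uri_target($uri);
    $uri = '//cdn.example.com/' . drupal_encode_path($path);
  }
}
```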

So now I’ve explained the different ways you can use a CDN, how you can maximally exploit it and how you can use it with Drupal, but what you really want to do is actually prove that the CDN integration you’ve done is having a positive performance impact on your website. That’s what these last two slides will be about. Ideally you are already doing continuous integration for your website or application, which is essentially making sure that no new bugs enter the website and that everything continues to work correctly. In the best case, you also have continuous performance monitoring that makes sure that whenever you add new features or make improvements and so on, you’re not actually introducing performance regressions. You want to make sure that your website stays as fast or gets faster, or at least, if it got slower, that you actually know it got slower.

So there are two ways to do that. The first is synthetic user monitoring, which is essentially a test script, but it only works in a few browsers and it’s a very controlled environment in terms of networking and OS and so on, so it’s not really realistic. It doesn’t give a realistic picture of what your end users are seeing or experiencing, but it is really great as a reference point for tracking the performance of your site as it evolves. Exactly because it is a controlled environment, the only variable is really the changing code, so for internal use, this is excellent. But for making sure that your website as it is experienced by your actual visitors improves, what you need is real user monitoring. This is actually measuring your actual visitors, hence in all browsers, hence in real world environments, and it does give very realistic results. It actually shows you how fast your website is for real users. It also gives you the ability to see in which specific locations or browsers site performance is good or bad, so that you have more useful information for reproducing the problem and improving it for those specific cases.

So if we look at synthetic user monitoring, I’m going to show you how you can do synthetic user monitoring for free as well as real user monitoring for free. The first step here is to configure a dev or staging app or web server. Maybe you want to do this on production traffic; that’s also possible, but I’m assuming that you want to do this internally so that you can track the performance internally before it is pushed live. What you need to do is ensure that your pages’ resources are served from the CDN domain. Then you can perform a test with the free and open source WebPageTest.org with a node, i.e., a test server, a browser, that is far away from your origin app server. Why one that is far away? Well, we want to make sure that the CDN is having a positive performance impact, and if the node is far away from your app server, then by definition the latency is higher, and in theory, if the CDN is working well, it should have a lower latency. Hence, this should show a significant difference with the case where you’re not using a CDN.

So point three is running the test with the CDN, and in point four, what we’re going to do is use WebPageTest’s scripting engine to point to your origin server instead of the CDN: we make the CDN domain name point to your origin server by remapping it to a fixed IP address, run the test again, and then all you have to do is compare the results. Of course this is a single measurement; you can repeat it, but you have to make sure that what you’re looking at is actually representative of the real world, so I recommend repeating it a few times to at least make sure that the largest variance is mitigated.
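As a rough illustration of that scripting step (hostnames and the IP address are placeholders, and WebPageTest script fields are tab-separated), the script only needs to remap the CDN hostname to the origin’s IP before loading the page:

```
setDns    cdn.example.com    203.0.113.10
navigate    http://www.example.com/
```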

Finally, real user monitoring. What you need to do then is ensure your production site’s pages’ resources are served from the CDN domain. Why production? Well, exactly because otherwise you’re not really testing production with real users, unless of course you have some kind of mechanism where you’re serving the newest version of your website to a subset of your users; then you can use that instead. In either of those cases, what you need to do is make sure that you’re using a CDN only for a subset of your users, for example 50% of the users that you’re testing this against, because we want to compare non-CDN traffic versus CDN traffic. What you need to do then is install some kind of real user monitoring performance measurement tool; New Relic RUM and Torbit Insight both have free packages, so you can try either of those. Then, again, compare the results.

So that’s all I wanted to share with you. I hope it was clear. If there’re any questions, then I’m sure I’ll hear about it later. I’ll now pass it on to Alex who will talk about Drupal plus Acquia.

Alex Jarvis: Hi everyone. Thanks, Wim. My name’s Alex Jarvis. I’m a senior technical account manager at Acquia and it’s been my good fortune over the last two years to help some of Acquia’s biggest customers adopt Akamai and integrate Akamai tools to improve their sites. I’m going to go fairly quickly, at a high level, over the steps you’ll want to take on a Drupal site to take advantage of the Akamai network.

So the first rule of a CDN and an Akamai integration is really that every site is unique. As Wim was pointing out, there are a lot of factors and you need to customize the experience to what your site needs. I’m going to go over some best practices, but work with Akamai or your CDN provider to really understand the feature sets that they’re offering and to figure out how those work for your site, because that’s going to be the most important thing. That said, for Akamai in particular, there are certainly some best practices and some initial steps that you need to take to really be able to leverage the Akamai integration. At the most basic level, those changes are in settings.php. Drupal, as I hope you know, already has some CDN reverse proxy flags in settings.php and you’ll want to uncomment those; they’re commented out by default. In particular, you’ll want to uncomment and change the “reverse_proxy”, “reverse_proxy_header” and “omit_vary_cookie” config lines. What that’s going to do is tell Drupal that it is now behind a reverse proxy, the CDN, so that it will treat its traffic differently; in particular, the reverse proxy header setting tells it to look at a different HTTP header for the real IP address of the users that are accessing it.

Akamai, for example, sends the real user’s address in an HTTP True-Client-IP header. The reason you want to do that is, for example, that in Drupal 7 there’s automatic throttling in place for things like failed login attempts. If you’re behind the CDN, all of your users’ IPs that are coming from the same edge servers will look the same, so if you have a university and a city that are all hitting the same edge network and one person in the city fails to log in, suddenly users at the university nearby are getting blocked even though they have a different IP, because it all looks the same to Drupal. So that’s the note there. Similarly, for Akamai and I believe others as well, the CDNs do not handle the Vary headers that Drupal sends out by default. Those are seen as cache busters and they will break the cacheability of your site, so setting omit_vary_cookie will solve that, because the Vary: Cookie header will then not be included on responses.
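As a sketch, the Drupal 7 settings.php lines Alex is referring to look roughly like this; the header key shown matches Akamai’s True-Client-IP header, but confirm the exact header name with your CDN provider:

```
<?php
// settings.php (Drupal 7): sketch for running behind a CDN such as Akamai.
// Tell Drupal it sits behind a reverse proxy.
$conf['reverse_proxy'] = TRUE;
// Trust the CDN-supplied header for the real client IP. For Akamai this is
// the True-Client-IP header; confirm the exact header with your provider.
$conf['reverse_proxy_header'] = 'HTTP_TRUE_CLIENT_IP';
// Stop Drupal from sending "Vary: Cookie", which CDNs treat as a cache buster.
$conf['omit_vary_cookie'] = TRUE;
```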

Finally, another issue that comes up fairly frequently in basic configuration is the base URL for your content. Drupal will try to use the URL that the request came in on for all of the static assets (your aggregated CSS and JS, and often your embedded content like images as well) and it will preserve that URL. But when you’re behind Akamai, when Akamai makes requests back to origin, it makes them against an Akamai origin name, so the Akamai name may be “origin-akamaidomain.com”, and you don’t want users making requests past Akamai. You want them to go through Akamai for everything, and to ensure that that happens, you need to check what the incoming request is, that’s the line about forwarded hosts there, and grab the name of the server that is being requested. If that matches the origin domain, that means the request is coming to your origin from Akamai, and you then want to rewrite your base URL to be the public Akamai-fronted domain so that all those static assets are served from that domain and come through Akamai, not to your origin directly.
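A hedged sketch of that base URL logic in settings.php; both hostnames are placeholders, and which header you inspect (Host or X-Forwarded-Host) depends on how your Akamai configuration forwards requests:

```
<?php
// settings.php: if this request reached the origin via the Akamai origin
// hostname, generate asset URLs on the public, Akamai-fronted domain.
// Both hostnames below are placeholders.
$origin_domain = 'origin-akamaidomain.com';
$public_domain = 'www.example.com';

$incoming_host = isset($_SERVER['HTTP_X_FORWARDED_HOST'])
  ? $_SERVER['HTTP_X_FORWARDED_HOST']
  : (isset($_SERVER['HTTP_HOST']) ? $_SERVER['HTTP_HOST'] : '');

if ($incoming_host == $origin_domain) {
  // Aggregated CSS/JS and other static assets now point back through Akamai.
  $base_url = 'http://' . $public_domain;
}
```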

Alright, so we got a quick question here; I think we’ll keep that for later. Two other best practices for Drupal and Akamai. Akamai provides a staging network that they set up, and by default they will point that at your production site along with their production network, so when you’re doing testing, you’re testing against production. A much more helpful configuration is to have Akamai point their staging network at your staging tier so that you can test things, change configurations and work with Akamai to make changes without impacting production at all or having any risk of impacting your users. The second piece is that you should really have separate domains: a public, CDN-facing domain, which is where users come in, and an edit domain for your administrative users, where the people log in that are really creating the content and administering the site. This is a best practice with a CDN in general, in my experience, because you don’t want your administrators coming to the same site that the public is using, for a variety of reasons. Chief among those is that the CDN is specifically designed to cache as much as it can. If you’re optimizing the site, you want it to be as performant as it can be and give the best experience to the users, and that’s not really the same objective that you have for your administrators who, by definition, need to see an uncached version of the page, or have access to see the latest and greatest that’s happening on the site before anyone else. For that reason, they shouldn’t be going through the front door, as it were, because the objectives of those two experiences are quite different. With Akamai especially, you will have unpredictable experiences where administrators will not see the content that they expect or they’ll be seeing cached things, or even worse, Akamai or whatever network you’re using can cache administrative content on the public-facing site, so a user comes to a page that the administrator was just on and sees your admin bar or other undesirable content.

Similarly, by splitting out an edit domain from the rest of the site, you can take advantage of a whole bunch of additional security measures to make sure that the only thing talking to your public domain is your CDN environment and the only things talking to your edit domain are your privileged users. I’ll talk about that a little bit more in a minute. Finally, as I already mentioned, you really want to make sure that you’re able to maximize cacheability and that you don’t need to put in any special rules or needlessly complicate your caching strategy by trying to use the same domain for both purposes. As I mentioned a moment ago, since you’re securing access, the way that I typically recommend approaching that is, if at all possible and if your company is running an internal DNS server, to have the edit domains entirely on your internal DNS so that no one outside of your network has access to them, and the few people that do need outside access are able to set up their own hosts file entries for accessing that domain. Similarly, you can implement an IP whitelist that says “I don’t allow any access to my edit domain except from this IP range inside my network, my administrators’ home IP addresses” – hopefully they’re using a VPN and that may not even be necessary – “and my CDN or Akamai edge servers.”

So those are the best practices, and there’s a whole lot more that you can do; I talked briefly here about some of the other things that you can take advantage of once you have those split domains. One that I really enjoy, and which adds a lot of security to the site, is a blank user roles table. Drupal by default stores all of the users’ role mappings, so user ID 19 has these roles, in the users_roles table. If you create a copy of that table and, if you’re using MySQL, use the BLACKHOLE storage engine, then everything written to it goes into a black hole and nothing is actually stored. With a separate edit domain, you can give the users_roles table a database prefix of “blank_” on the public-facing domain, and then Drupal will use this newly BLACKHOLE’d table when it does its role lookups. As a result, everyone on the public site will have no roles assigned, so even someone who’s an administrator on the site, if they accidentally log in on the public-facing domain, will not have any roles assigned to them and there is no risk that they can access administrative content and have that cached for the public. It’s fairly easy to set up and it gives you a really nice benefit, and then you only have to worry about content that you have for authenticated users: maybe you have some content like comments that are only available to authenticated users; those users will still be authenticated, but they will not have any other privileged roles assigned to them.
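A sketch of that setup, assuming Drupal 7 on MySQL: the empty BLACKHOLE copy of users_roles is created once, and the per-table prefix is set only in the settings used by the public-facing domain:

```
<?php
// settings.php for the PUBLIC-facing domain only (sketch).
// Create the empty shadow table once in MySQL, for example:
//   CREATE TABLE blank_users_roles LIKE users_roles;
//   ALTER TABLE blank_users_roles ENGINE = BLACKHOLE;
// Then point role lookups at it with a per-table prefix. The rest of the
// connection definition is assumed to exist already (as on Acquia Cloud).
$databases['default']['default']['prefix'] = array(
  'default' => '',
  'users_roles' => 'blank_',
);
```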

Similarly, you can mask that you’re even a Drupal site, or that your site allows login or has any administrative functions. By modifying your .htaccess file for the site, you can again check the incoming request’s domain, and if it matches the public-facing site, you can 403 or even 404 administrative paths, the user path, and the directories that are in the site by default but are not typically needed for serving site content. If you really want to mask that it’s Drupal, for example, you can disallow all access to node paths and only allow your aliases to be accessed. This gives you a lot of control over customizing the user experience and securing the site in a way that isn’t generally convenient otherwise.
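For illustration, a minimal .htaccess sketch of that idea; the hostname and the list of blocked paths are placeholders, and the rules belong inside the existing mod_rewrite block before Drupal’s front-controller rule:

```
# .htaccess sketch: block Drupal admin/login paths on the public, CDN-facing
# hostname only. "www.example.com" and the path list are placeholders.
# [F] returns a 403 Forbidden response.
RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
RewriteRule ^(user|admin|node/add) - [F,L]
```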

So some other things; again, this is fairly high level and site-specific, but there are some other cool options you can use with something like Akamai, or a similarly good CDN, in front of your site. You can work with them and set up exclusion paths for things that may not be safe to cache in your CDN, so things to look out for are login and lockout paths, AJAX callbacks, and obviously any content creation paths; any of those need to be excluded. Another issue that comes up frequently, if you’re caching content on your site for a long time with a long TTL (time to live) because certain pieces of content don’t change frequently, is that Drupal’s aggregated CSS and JS files by default go away after a period of time. That can be problematic if the CDN still has the content of the page in cache but no longer has all the static assets, and then comes back to your origin and requests aggregated files that haven’t existed for a week or however long. There are a couple of options for that. In Drupal 7, there’s a variable, “drupal_stale_file_threshold”, that you can increase to be as long as your maximum TTL on your CDN, and it will ensure that those files are preserved for at least that length of time. In Drupal 6, that variable is not present; however, the advanced aggregation module (“advagg”) is available and can similarly be set to preserve these files for an extended period.
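Setting that variable is a one-liner in settings.php; the value below (31 days) is only an example and should match or exceed your longest CDN TTL:

```
<?php
// settings.php (Drupal 7): keep stale aggregated CSS/JS files around at least
// as long as the longest CDN TTL. 2678400 seconds = 31 days (example only).
$conf['drupal_stale_file_threshold'] = 2678400;
```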

Some other things that often come into play are issues around cookie domains, making sure that sessions are handled as you’d expect; you may need to change the cookie domain in your settings. And if you’re leveraging Akamai in particular, Akamai has a lot of other really nice options for using their services, including NetStorage, where you can switch Akamai from being a pull-based system, where it’s coming to origin for its requests, to a push-based system, and Wim covered the benefits of those earlier, where you push the assets that you want to be served at the edge.

Moderator: Alex?

Alex Jarvis: Yes?

Moderator: I’m sorry to interrupt, but I just wanted to say thank you to anyone that has to sign off now. If anyone wants to stay on and finish the presentation and ask questions, we’ll continue.

Alex Jarvis: I’m sorry; we’ve run a little bit over. I’m almost done here, so I’ll wrap it up really quickly. So NetStorage has nice advantages for going from a pull-based to a push-based system. Similarly, for securing the site, Akamai SiteShield allows you to restrict access back to your origin web servers, and if you really hone the advantages of CDNs and Akamai, you can do some crazy things like allow users to sign in, get details about what roles they have on a site, and then send them over to your Akamai domain and give them customized content as an anonymous user, content that’s still customized to what roles they have. It’s cool things like that, which Akamai and Acquia have done together, that we’d be happy to help you with, and Acquia Cloud is set up to do this. Aside from the initial configuration that I already mentioned, we work out of the box with Akamai on these things, and the sky is the limit for some of the ways that you can optimize and enhance the experience on your website.

So that’s all I had. Thank you so much and I believe I may give it back to Kieran here and we’ll have some Q&A.

Kieran Lal: Great. Thanks, Alex. That was really great content from everybody. One of the things we seem to have found through the presentation was that people were either asking questions directly of the speakers or maybe there’s something going on with WebEx. As the organizers, we only saw some of the questions, so I’d like to ask the panelists, Chris and Wim and Alex, if you’ve seen any additional questions that have shown up in your windows, if you could assign them to me. We’re going to go ahead and take a couple of questions that were directed at Wim earlier, and Wim has had a chance to read them; they were around talking a little bit more about targeting assets that are pushed to a CDN versus ones that are not. So Wim, do you want to take that away?

Wim Leers: Sure. So essentially the question was “Can you talk a bit more about targeting what assets are pushed to a CDN versus which ones are not?” Looking at the exact timestamp, I think this question relates to the Drupal CDN module. In general, you can implement any kind of logic you want to determine which files are served by the CDN or not. In the case of a push CDN, you control which files are pushed. In the case of an origin-pull CDN, you control the URL, and if you control the URL, then you determine when it’s served from the CDN or not, because, for example, if your CDN domain were mysite.CDN.com and you use that domain name, then the origin-pull CDN will come back to your origin and get the file, so it’s served from the CDN. If you decide to serve it from mysite.com, then it won’t be. So that’s the general answer, and in the specific case of the Drupal CDN module, what you do there is essentially say, “Hey, these specific file types should be served from the CDN.” So you list the domain name and then you list file extensions, for example CSS, JS, JPEG, PNG, GIF, whatnot. Those files are then served from the CDN, so it’s based on file extension in the case of the CDN module, because that’s the easiest, most understandable and most performant solution there.

The other question was “Can dynamic responses (versus static assets) be targeted for the CDN?” The answer is yes, but it really depends on what you mean by dynamic responses. Essentially anything is possible; take, for example, a thumbnail that shows the latest promoted product and changes every day – that is something that can definitely be a use case. The tricky thing is always to ensure that it’s not cached for too long, or to make sure that it’s cleared when it needs to be cleared. In the case of Akamai, for example, they support explicit purging, so there you can execute a purge command on that specific file and the CDN will then come and get the new version. That’s one way of doing it, but the most HTTP standard-like way is by using proper cache control headers. Essentially what you do there is say this file can be cached for X hours or X days or X seconds, and if your CDN listens to or takes that specific header into account, then you can rely on it. But you should make sure that your CDN actually honors those correctly, because that’s not the case for every single CDN, or maybe they ignore any caching duration that is below, for example, five minutes or one minute, because otherwise the caching doesn’t make sense from a CDN point of view.

So the answer depends there, but if you’re talking about dynamic responses in the sense of page caching, serving the entire website from the CDN, then I think that maybe Alex or Chris from Akamai can answer the question better because I have little experience in that area.

Alex Jarvis: Sure, I’ll pick up on that. I’m going to jump back real quickly to the first question about pushing content to the CDN and just say, from an Akamai perspective, for example if you’re using NetStorage, that your Akamai configuration will control what kinds of assets you push and what the lookup order is: it should look for these sorts of assets in that storage first, and then you explicitly control yourself which files you push up to that environment. Speaking more to the second question about dynamic content, in most cases, as Wim indicated, you have some options there by being creative with your cache control, but if it’s something where you really need it to be completely dynamic and it cannot be cached every single time, then what you’re usually looking at doing is some form of ESI, where you expose that dynamic piece in some individually-chunkable way that can be requested by ESI, so that when the page renders, hopefully three-fourths of the page, or as much of it as possible, comes from the cache, and the dynamic pieces make a callback to some service or some portion of the site that can deliver that dynamic content very efficiently and very quickly, because you’re bypassing the CDN for that. In my experience, it’s not particularly feasible to have the CDN directly cache or directly hold dynamic content, because the CDN is going back to the origin for anything that it doesn’t have on itself. It’s not processing things. In order to be fast, it needs to be able to just serve something itself and not be concerned about computing it, so if you need to do that, you really need to be looking at how to provide a way for origin to provide that quickly.

Kieran Lal: Okay, great. Thanks, Alex. We’ve got one more question and let me just pull it up here so that I can read it. So it was from Kai and he said “We’ve seen examples with which media files are hosted independently from the site itself. When or why would this be used instead of a traditional CDN?”

Alex Jarvis: I guess I’m not quite following the question. It sounds to me like that basically is a CDN, or at least that approach to sharding the assets so that they can be requested separately from the site content. I’d be curious if perhaps I’m misunderstanding the question slightly.

Kieran Lal: Kai, feel free to jump in and clarify your question, but I guess my thought would be when do I put my videos on YouTube and have them embedded and delivered as part of my site from YouTube? Then when do I host them, but have the videos delivered directly from the CDN or something along those lines?

Alex Jarvis: So in a case like that, basically anything that you can offload from your origin is a win. The advantage of running your own CDN versus something like YouTube is that you have a lot more control over where that content is coming from and how it’s being handled. If you’re relying on a third party, then you’re also relying on their content delivery system, so you don’t know how YouTube distributes access to their videos. In this case, with YouTube specifically, we’re talking about Google, and we can probably assume that it’s going to be pretty good, but you don’t know precisely what’s happening, whereas with a CDN, you by definition have a lot more control, even if it’s of an abstract flavor. You don’t know where every single edge server in Akamai is, for instance, but you do know that you can look at the statistics, you can get the information about what’s being served, and you can get a sense of where in the world those things are coming from and how they’re being used. So it really comes down to having a better sense of where your content is, how it’s being accessed, by whom, and how it’s being served. That’s where having a CDN that you’ve set up and you understand empowers you in ways that a third party service, which may not provide the same level of information, can’t.

Kieran Lal: Great. Thanks, Alex. Kai, if you need to – okay, so one of the things he was saying, the example he was thinking about, was, say, an audio file hosted independently, but I think he understands based on your response, so great.

Okay, we’re a good chunk over, but as promised, we had some really outstanding content and some real depth and expertise from all of our speakers, from Chris and from Wim and from Alex, so thanks everybody for attending. Hannah, was there anything else you wanted to say to wrap up?

Moderator: No. Thanks everyone for attending. We’ll send you slides and the recording within the next 48 hours.

Alex Jarvis: Alright. Thank you all very much and happy holidays to everyone.

Accessible Theming in Drupal [December 19, 2012]

Click to see video transcript

Hannah: Today's webinar is Accessible Theming in Drupal with Dan Mouyard, who's a Senior Interface Engineer at Forum One Communications, and I'm Hannah Corey, a marketing specialist here at Acquia.

Dan: Hello, I'm hoping everybody can see the slides, Accessible Theming in Drupal. As Hannah said, my name is Dan Mouyard, Senior Interface Engineer at Forum One Communications. I've been working in Drupal for about four years now, I've been involved in Drupal 8 core development, I've been involved with the HTML5 Initiative and the Mobile Initiative, and I've been a member of the accessibility team for the past couple of years, where we get together about once a month to chat and talk over some of the accessibility issues going on in Drupal core and contrib. Interface engineer is basically a fancy way of saying themer, so I have a lot of experience with HTML, CSS and JavaScript as well as PHP.

As a personal note, I'm also legally blind and I wear hearing aids, so a lot of the accessibility issues that I work with every day affect me on a personal level, and I hope I'll be able to share some of that insight. The focus for today's talk is primarily on theming, especially in Drupal 7, although for some of the topics I'll also bring in techniques that we've been working on for Drupal 8.

Hopefully you have a good grasp of HTML, CSS and maybe a little bit of JavaScript, as well as basic theming topics such as template files and stuff like that. Finally, the focus is on accessibility, and with a title like Accessible Theming most of you are probably already familiar with accessibility, so I won't go into too much detail about what it is or why it's important. But I did want to point out that my view on accessibility is a little bit different, and here's a great quote from Tim Berners-Lee about the World Wide Web: "The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect." Note that "everyone," because to me that is the focus of what we do when we build websites: we want to build them in such a way that everyone can access them regardless of whatever limitations they may have. Those could be disability limitations, they could also be device limitations or internet connection limitations, and there's also what's called situational impairment, such as being in a bar where it's very noisy and you're trying to watch TV. That situation can hamper things, so something like closed captioning can really help, and the same thing occurs on the web. There are things we can do that are vitally important to those with disabilities but also help everyone.

Before I get into some of the nitty gritty details of the techniques for advanced theming I just want to highlight really quickly my overall philosophy and that’s through this idea called progressive enhancement and the whole idea of progressive enhancement is just layer on functionality from the simplest most basic to more advanced.

First you start off with HTML that most devices and browsers can understand, then you layer on CSS, and then you layer on behavior with JavaScript. Even within those different areas you can layer on more complexity: with HTML you can layer on some of the newer HTML5 elements, and with CSS you can layer on some of the more advanced CSS3 effects like rounded corners and stuff like that, and the same again with JavaScript.

The reason for this ground-up approach is that, regardless of how you're accessing the page, you still get a usable interface and a usable way to consume the content. The main things I'll be covering today: first I'll go over some of the accessibility work in Drupal core that you should really be familiar with when theming, then I'll cover some basics of creating accessible HTML and how you can implement that in your theme. I'll also go over quite a few different CSS techniques to make things more accessible, and then finally I'll go over ARIA, which is a way to add metadata and semantics to HTML that help assistive technology understand it.

First, Drupal core: what have we done in Drupal core, particularly Drupal 7, that makes things more accessible and that you really should be familiar with when you're theming? The first thing is that we've added three helper classes: element-hidden, element-invisible and element-focusable.

The first one, element-hidden, just applies the CSS property display: none. What this does is tell browsers and any other devices, "Hey, just ignore this part of the HTML," and so browsers, devices and assistive technologies such as screen readers won't present that information. There are times, however, when you want something to be hidden visually but still have that information available to devices such as screen readers, and that's where we use element-invisible. You can add this class to anything in Drupal and it will essentially hide it from view, yet it's still there, so screen readers do read aloud whatever the content inside it is. And finally, working together with element-invisible, there's the element-focusable class.

Here's the CSS. Basically, for any element hidden with element-invisible, if you also add the element-focusable class, then whenever keyboard users or other devices tab through the markup and that element receives focus, it's brought back into view. A good example of where this is used is skip links.
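For reference, the three helper classes look approximately like this (a sketch based on Drupal 7's system.base.css, slightly simplified):

    /* Hidden from everyone, including screen readers. */
    .element-hidden {
      display: none;
    }

    /* Hidden visually, but still read by screen readers. */
    .element-invisible {
      position: absolute !important;
      clip: rect(1px, 1px, 1px, 1px);
      overflow: hidden;
      height: 1px;
    }

    /* Pops back into view when it receives keyboard focus. */
    .element-invisible.element-focusable:active,
    .element-invisible.element-focusable:focus {
      position: static !important;
      clip: auto;
      overflow: visible;
      height: auto;
    }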

Skip links are the first links on every page, and they allow users to skip directly to other sections of the same page, most often to the main content. Drupal itself uses only one skip link, which links to the main content, but you can have others, such as a link to the main navigation or a link to the search box. These are the first links that occur on the page, so you really want to limit how many there are. You should never have more than three.

Here's an example from the Bartik theme, and you can see at the very top where it says "Skip to main content"; that's the skip link. Normally you don't see it because it has the element-invisible class, but when a keyboard user tabs to it, because it also has element-focusable, it pops into view and allows people to jump down to the main content.

That's also used in the Seven theme, which is the default admin theme for Drupal 7; here's an example. Again, if you tab, the first thing that pops up is "Skip to main content." How this is done is in the HTML template, which is provided by the core System module: you add just a little bit of HTML at the very top, right after the body tag. The important thing to focus on here is the target of that link, where it says main-content, and the reason it's important is that it needs to be a valid link target. Sometimes when new themes are created they ignore the skip link, or in the page template they don't provide the target for that link.

Then in the page template you should have one of two things: either an anchor tag with the ID main-content, or a div or some other HTML element with the ID main-content. That enables the skip link to function correctly. Finally, in Drupal core we've done a lot of work on forms, and one thing that's really important about forms and accessibility is that all form elements need to have either a label or a title attribute, and Drupal's Form API uses the #title property to set these.
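Sketched out, the skip link and its target fit together roughly like this (the wrapper markup is simplified):

    <!-- In html.tpl.php, right after the opening <body> tag -->
    <div id="skip-link">
      <a href="#main-content" class="element-invisible element-focusable">Skip to main content</a>
    </div>

    <!-- In page.tpl.php: either an anchor... -->
    <a id="main-content"></a>
    <!-- ...or the ID placed directly on the content wrapper -->
    <div id="main-content">
      <!-- page content -->
    </div>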

Now, Drupal's Form API is very powerful. It uses arrays to help developers build these forms, and here's a little snippet of PHP. We're creating a new form element, and just for the sake of this demo it's a text field, for something like a name, and in the #title property you can see we're setting it to "Description." For those of you who aren't familiar, the t() function just allows that string to be translated.

What this will do is create a text field form element, the label for that element will be "Description," and the markup will be output in such a way that the label is correctly linked to that element. But there are times when theming when you might not want to show that label, or you might want it as an attribute instead, or you might want to move it around. In Drupal 7 the Form API has the #title_display property, and there are four possible options.

The default is 'before', where the label shows up before the element. Next is 'after', where the label comes after the element in the markup order. Then there's 'invisible', which just adds the element-invisible class to the label; it's still there, and screen readers can still hear the association between that label and the element. And finally there's 'attribute', which, instead of outputting a label for the form element, sets that title as the title attribute on the element.

Here's an example, the same one we had before, where we've added the #title_display property and set it to 'invisible'. Again, this will output a text field, "Description" will be the label for that element, and the label will have the element-invisible class, so it won't be visible but it will still be read by screen readers.
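A minimal sketch of that Form API snippet (the field name and label are illustrative):

    <?php
    // A text field whose label is hidden visually via element-invisible,
    // but is still announced by screen readers.
    $form['description'] = array(
      '#type' => 'textfield',
      '#title' => t('Description'),
      '#title_display' => 'invisible',
    );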

Next is HTML theming and what you really need to focus on for accessibility. The first one is just a general best practice: you want semantic markup. Here's some example code for an article, perhaps a node or something like that. You can see it uses some of the HTML5 elements: we have an article, we have the heading, and the footer element, which is new in HTML5 and doesn't necessarily have to be at the bottom; it's just semantics saying, "Hey, this is information about this part of the markup."

There's also the figure element for adding images and their captions, and a paragraph where you can emphasize things and even mark up an abbreviation. You could build all of this with unsemantic code, just divs and classes and stuff, and make it look exactly the same in theming, but you lose some of that semantic meaning. Some of that meaning doesn't make a difference accessibility-wise right now, but in the future, especially with some of the HTML5 elements, newer assistive technology can take advantage of this information. You can change this markup, usually in template files in Drupal.
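A sketch of the kind of semantic markup being described; the content and file paths are made up, and the element choices are the point:

    <article>
      <header>
        <h2>Article title</h2>
      </header>
      <footer>Posted by <a href="/user/1">admin</a> on <time datetime="2012-12-19">December 19, 2012</time></footer>
      <figure>
        <img src="/sites/default/files/photo.jpg" alt="Short description of the photo" />
        <figcaption>A caption for the photo.</figcaption>
      </figure>
      <p>Body text with <em>emphasis</em> and an abbreviation like <abbr title="Cascading Style Sheets">CSS</abbr>.</p>
    </article>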

It's pretty easy: you can just open up a template and rename a div to an article, something like that. Where it gets really tricky, however, is when you have custom content types with, say, a bunch of fields on them. By default, the markup Drupal outputs for those fields is pretty cluttered, because it has to be usable across a wide variety of use cases.

You can use individual field templates to override that markup however you want; however, it's very difficult to create field templates that will work across all projects. What I recommend is the Fences module, at drupal.org/project/fences, which allows you to define the markup when you create those content types and fields.

Here is a content type where you can manage fields. If you go under operations and click edit next to one of those fields, that's where you normally set things like "Is this required? What's the maximum length? What's the default value?" What the Fences module adds is the ability to define the wrapper markup for that particular field.

Usually when you're creating these content types is when you have the best idea of the semantic meaning of those individual fields. The next thing to really focus on for HTML is that the document order is the tab order. If you go to any HTML page and view the source, you see all the markup, and machines, assistive technology and screen readers will read through it in the order of that code. That's very important because keyboard users, who might have vision or mobility issues and can't use a touch device or a mouse, have to use the Tab key or some other input device. They tab through the different areas on the page, and it goes in the same order as the code.

Here's an example, if you look at the page template. This is the default order of things: it has the header, the navigation, the breadcrumb, the main area and then the footer, and within the header there's the logo, the site name and slogan, and then if you have any blocks assigned to the header region they come after that. You can see the order of things in the code. Whenever somebody is tabbing through on a keyboard, you have to be mindful of this order, because as they tab through the page this is the order in which they will hit things.

Now you also notice that the header and the navigation they come at the top. You have to go through quite a bit before you actually reach the main content and that’s the big benefit of the skip link because the skip link is in the HTML template which is a wrapper for the page template so it would come first and so if you hit that you can quickly skip and jump down to the content. You don’t always have to continue tabbing through all of the logos and the navigation and that stuff.

Another thing with HTML, in addition to the source order, is the markup that you use. In Drupal 8, with the HTML5 Initiative, we went through all the templates and tried to use the best HTML5 elements we could. In this example, the page template, we can use the new HTML5 elements: the header element, the nav element for the navigation, the section element so you can create different sections, and the footer element.

The next thing for HTML that you really want to be mindful of when theming, accessibility-wise, and pretty much the one thing you have to focus on for images, is that you need text alternatives for that content. Just like with videos you need some text so that people who can't see the video, can't hear it, or perhaps have Flash blocked and can't see the video at all still have some other way to get the same information, the same thing goes for images, and the primary way to give that alternative text for images is the alt attribute.

The key things to keep in mind with the alt attribute: one, every image element needs to have an alt attribute. For decorative images, perhaps a flower that's part of the design but not really part of the content, you can just leave the alt attribute empty. If it's a content image, then you want that alt text to be a short, concise description of the image, and if that image happens to be inside a link pointing somewhere, then the alt text should describe the destination of that link. The reason you even want an empty alt attribute on decorative images is that most screen readers, if they see an empty alt, will ignore the image, which is what you want. Otherwise someone on a screen reader will hear "blank image, blank image," or "this is an image," or "unknown image," repeated over and over, which really interrupts the flow of the content.
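For example (the paths and text here are illustrative):

    <!-- Decorative image: empty alt, so screen readers skip it entirely -->
    <img src="/sites/default/files/flower.png" alt="" />

    <!-- Content image: a short, concise description of the image -->
    <img src="/sites/default/files/growth-chart.png" alt="Bar chart of visitor growth in 2012" />

    <!-- Linked image: the alt text describes the link destination, not the picture -->
    <a href="/contact"><img src="/sites/default/files/envelope.png" alt="Contact us" /></a>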

Also with HTML, as we move more and more beyond the desktop to tablets and mobile devices, you want to keep the viewport in mind. The viewport is essentially what the screen is for that particular device and how it renders the page. What I've seen all too often are these two viewport meta tags.

The first one sets an initial scale, a minimum scale and a maximum scale, all set to one, and that prevents zooming, which you really don't want, because on mobile devices and tablets the text is sometimes too small for people with vision problems. They want to zoom in with a touch gesture and they can't, so this prevents them from being able to read that text.

The second one prevents scrolling: sometimes they might have zoomed in but they can't scroll to the part of the page that is now outside the viewport. What I recommend is to just use width=device-width. This works very well on all devices and just says, "Hey, whatever the width of the device is, set that as the width of this page." It allows zooming, scrolling and all that kind of stuff.
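In markup, the difference looks like this:

    <!-- Avoid: locking the scale prevents users from zooming in -->
    <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1" />

    <!-- Avoid: user-scalable=no also blocks zooming (and scrolling to zoomed content) -->
    <meta name="viewport" content="width=device-width, user-scalable=no" />

    <!-- Recommended: match the device width, leave zooming and scrolling alone -->
    <meta name="viewport" content="width=device-width" />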

Okay, the next big section is CSS and all the styles, and there's a bunch of different stuff you can do with CSS to make sure your site is accessible. The first one is that you should be familiar with image replacement techniques. Just like we had images in HTML with alt text, sometimes you might want to add decorative images as CSS backgrounds, but you still want screen readers and search engines to be able to get that text and that information.

Here's an example where you have a header whose text is "Visit the Capital." You still get that semantic information, but maybe you want to show an image instead, so you can use CSS to define a background image for that header, and then it's important that you set the height and the width to the dimensions of the image. There are a couple of different image replacement techniques you can use, and the one I've shown here is one of the newer ones: it hides the text while showing only the background image.
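One of the newer replacement techniques looks roughly like this; the selector, image path and dimensions are placeholders:

    h1.site-name {
      /* Show the decorative image as a background... */
      background: url(../images/visit-the-capital.png) no-repeat 0 0;
      width: 320px;  /* match the image width */
      height: 80px;  /* match the image height */
      /* ...and push the text out of view without using display: none,
         so screen readers and search engines still get it. */
      text-indent: 100%;
      white-space: nowrap;
      overflow: hidden;
    }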

The next important area for CSS is styling links, and there are three main ideas to keep in mind when you're styling links. One is you want to make them obvious: if you have links on the page, you want people to know right away, "Hey, that's a link I can interact with." Next, you want to design all the states of links: the default state when the page first comes up; something different for hover and focus, for whenever somebody tabs through and is focused on a link; the active state for when somebody actually clicks the link; and visited, so people know where they've been. And finally you want to make them easy to click.

This is just general usability best practice, and it's also very helpful for people with mobility problems who have a hard time being very accurate with their mouse movements. It's also very helpful on tablets and touch devices, where people have to use their fingers, and fingers are really big compared to the links they're touching.

One cool trick I've done with CSS and links: if you don't have enough time to design the focus state, which would be ideal, a really good default is that whenever you set CSS properties on hover, do the same thing for focus. That way, if you add a background, a different color or stuff like that when the mouse hovers over a link, the same styles get applied when somebody is tabbing through on a keyboard, and they can easily see where they are and what's highlighted.

Another problem I often see is the link outline. Quite often in reset style sheets you'll see outline: 0 applied globally to all anchors, because in certain browsers, when you click a link you get this outline that sometimes doesn't look good. However, that outline is very important for keyboard users who are tabbing through, so a better default for your reset is to set the outline to zero for the hover and active states, but make sure it's still there for the focus state. That way keyboard users still get the benefit of the outline.

Finally, you want a big target area for people to click on, and what I like to use as the default for anchor tags is to set the margin to negative two pixels and the padding to two pixels. This works on all the inline links and basically adds two pixels of clickable area around each link. Also, if you're tabbing through, the outline won't be right next to the text anymore; it'll be two pixels away, which makes that text a lot more readable.
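Putting those three link tips together, a sketch of the defaults might look like this (the colors are illustrative):

    /* Whatever you style on hover, style on focus too, so keyboard users see it */
    a:hover,
    a:focus {
      background-color: #ffffcc;
      text-decoration: underline;
    }

    /* Drop the outline only for mouse interaction; keep it for keyboard focus */
    a:hover,
    a:active {
      outline: 0;
    }

    /* Add two pixels of clickable area around every inline link */
    a {
      margin: -2px;
      padding: 2px;
    }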

The one caveat, if you have this in your reset, is that whenever you're styling other links, such as menu links and stuff like that, you want to make sure you're overriding the margins and the paddings so it doesn't screw things up.

A big part of making themes accessible with CSS is to really pay attention to typography. It's been said that web design is like 90% typography, because what people go to websites for is to get content and read that information, and you want to make that as easy as possible for people to do.

Here's a great quote from Emil Ruder, who was the head of the Swiss design school and a big influence on a lot of typography work: "A printed work which cannot be read becomes a product without purpose." The essential purpose of text is to be read. So some general things to keep in mind with typography: first of all, you want to choose the right fonts, make sure they're legible, and do as much as you can to make sure they're readable and easy for people to see.

Some of the display fonts may look cool, but they're a lot harder to read, so you have to balance the design against how readable it is. Another thing is you want to pay attention to the measure.

The measure is the typographic term for how wide a line of text is. If you have a block, a paragraph, the measure is how wide it is. You don't want the measure to be too wide, because as you read down a line, get to the end and come back to the beginning, if that distance is too great you miss your landing spot and it's hard to see where the next line is. But at the same time you don't want the measure to be too narrow, because then your eyes become like a pinball machine, jumping back and forth very quickly from one line to the next.

Another thing is to use appropriate leading. Leading is another typographic term, and it basically means how much space there is between lines of text; in CSS this is the line-height property. Again, you want to get it right: you don't want too much space between lines, and you don't want so little space that they crowd together.

Another good general rule of typography is to create a nice hierarchy, in the sense of, perhaps, heading structure. You want the heading structure to be such that you know this chunk of text belongs in this area. You also want to create hierarchy with links and such to make things easier to scan, and you want to use font sizes to make that hierarchy more noticeable and make things stand out. Then there's some more detailed stuff to pay attention to.

The first big one is font size. For any of the body copy, like when you go to an article page, you want that text to be the default font size; in CSS you want it to be 100%, which is basically 16 pixels in all browsers. Browsers set it at that size for a reason: it's very readable, and back when the spec was first defined, 16 pixels on a monitor was about the same size as the print you would see in a book. One of the good things about using the default font size for body copy is, as I mentioned before about hierarchy, you get a much broader spectrum of font sizes to use for some of the smaller detail stuff: the byline, the "more information" link, that kind of stuff.

I remember several years ago people would set the body copy at like 13 pixels or 12 pixels, and because it was very, very small it was hard to read, and everything looked the same because you couldn't really get much smaller than that. The byline and the "more information" link were all the same size and very small, and that small text creates eye strain, having to read something that small, especially on monitors, because it's not as crisp as it is in a printed magazine or newspaper. One final good hint for when you're designing with font sizes is to use actual text rather than just Latin lorem ipsum, because with actual text there, as you're designing, you can tell right away whether something is readable or not.
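A rough CSS sketch of those typography defaults; the class names and exact values are illustrative and should be tuned to the design:

    body {
      font-size: 100%;    /* the browser default, normally 16px */
      line-height: 1.5;   /* comfortable leading between lines */
    }
    .content {
      max-width: 33em;    /* keep the measure neither too wide nor too narrow */
    }
    h1 {
      font-size: 2em;     /* use size to reinforce the heading hierarchy */
    }
    .byline,
    .more-link {
      font-size: 0.875em; /* smaller detail text that is still readable */
    }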

The other thing to pay attention to, typography-wise, is text alignment. In general you want to left-align the main chunk of body text, or right-align it if you have a right-to-left language, and you want to avoid fully justified text because it creates rivers. Rivers are the larger spaces between words that flow down through the document, and they can really throw off the rhythm of the text; they're especially hard for people who are dyslexic. Having left-aligned text also makes it easier to scan, easier to jump from paragraph to paragraph and scan through links and stuff like that.

Finally, you want to make sure there's enough color contrast between the text and what's behind it, which is very good at increasing visibility and making things easier for people to read. There's a great website, contrast.com, that's a great example of how high-contrast design can still be beautiful, and it also has a lot of good facts about why contrast is important. For color contrast, smaller text needs higher contrast because it's smaller; you need to make it stand out a little more. One caveat, however, is that you can have too much contrast. I don't see this mentioned often when people talk about accessibility and contrast, but for people who are dyslexic, if there's too much contrast it can actually make things harder to read. Some good tools I recommend: one is a really good contrast ratio checker that came out recently; it uses the W3C WCAG AA and AAA contrast ratios, and you can easily input color values and see how readable they are.

There are also online color filters, like graybit.com and colorfilter.wickline.org, which let you put in the address of whatever website you're looking at and see what that text looks like in black and white and under different types of color blindness. And then my favorite is the color blindness simulator Color Oracle, at colororacle.org. It's a program you install on your computer, with versions for Windows, Mac and Linux, and while you're working, whether you're on a website or designing in Photoshop, it will switch the entire monitor to simulate a certain type of color blindness so you can check that the color contrast works for whatever you're designing.

Finally, for CSS techniques, there's the responsive design paradigm, where you build and theme things in such a way that they look good across all devices. The big idea behind it is the layout: we use media queries to adjust the layout, and media queries are basically just CSS that says, "Hey, if this condition is true, apply this CSS." And then you want to set the breakpoints based on the content rather than on devices.

Breakpoints are the points in the queries where things change. For example, if you have a mobile layout where everything is a single column, at some point you have enough room for two columns, and where that changes is the breakpoint. You want to set the breakpoints based on how the content looks, making sure it's still readable, rather than changing at specific devices, and that's for two main reasons.

One is that device sizes are going to change. In five or ten years there will be a monitor on your refrigerator where you can see the grocery store's website, for example, and we don't know what size that screen is going to be, so you're better off setting these breakpoints based on content.

The other reason you don't want to set breakpoints on device size is that these media queries can be triggered when people zoom in in their web browser. People who have vision problems can zoom in and trigger those media queries. One caveat: that doesn't currently work in all web browsers, but there are bugs filed for it and they are going to be working on it.

Here's an example of a media query. In the first chunk of CSS, for the body we're setting the margins, the max-width and the padding, and that's what every browser and device will use. Below it, where it has @media screen and min-width, that says: if the width is 35em or larger, apply the CSS inside, and that's where we can set a different max-width on the body and change things around.
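That example looks something like the following; the exact widths are illustrative:

    /* Defaults that every browser and device gets */
    body {
      margin: 0 auto;
      max-width: 30em;
      padding: 0 1em;
    }

    /* Once the viewport (or zoom level) allows at least 35em, widen the layout */
    @media screen and (min-width: 35em) {
      body {
        max-width: 45em;
      }
    }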

Finally, one more area we can work on is WAI-ARIA. WAI is the Web Accessibility Initiative, which was created by the W3C to tackle accessibility, and ARIA stands for Accessible Rich Internet Applications. It's essentially metadata that you can apply to HTML to help assistive technology understand the purpose of that HTML; just another layer of semantics that gives more information.

What's it used for? Basically, it defines a way to make web content and web applications more accessible to people with disabilities. It especially helps with dynamic content and advanced user interface controls developed with AJAX, HTML, JavaScript and related technologies. There are different roles, landmarks and attributes that you can apply to make things more usable for assistive technology.

The only thing we really need to worry about in theming, as far as getting started, is the landmark roles. If we look at the page template, we'll see these big areas: a div with an ID of header, the navigation, the main content and the footer. What we did for Drupal 8 is use the new HTML5 tags, the header tag, the nav, the footer, and in the future assistive technology will be able to take advantage of those tags and know what they mean.

Some assistive technologies, however, can't understand those yet, so as a stopgap we can use ARIA. For example, for the header we give it the role of banner, and for the nav we give it the role of navigation. For the main content we give it the role of main, and for the footer we give it the role of contentinfo. There's even a new element called the main element, which is being discussed at the W3C; it looks like it's going to gain traction and be used, but it's not quite ready yet. We could use the main element instead of a section for the main content, and assistive technology that understands it would then be able to jump to that main content rather than having to use a skip link.
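Layered onto the HTML5 elements, the landmark roles look roughly like this (the IDs are illustrative):

    <header id="header" role="banner"> ... </header>
    <nav id="navigation" role="navigation"> ... </nav>
    <section id="main-content" role="main"> ... </section>
    <footer id="footer" role="contentinfo"> ... </footer>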

Okay, before we stop for questions I just wanted to quickly demo the responsive layout. For the past seven months or so at Forum One we've been working on EPA.gov, which will be moving over to Drupal, so we've been building the Drupal platform for them, and I was the front-end architect for the project and the theming. One of their big wishes was responsive design, and also accessibility; they really wanted to focus on that.

Here's an example of one of their learning pages. One of the benefits of these responsive layouts is that, in addition to the screen readjusting itself at different device widths, they also help people with vision impairments who might need to zoom in, because the media queries handle that too. So, for example, we can zoom in, but first let's look at the default desktop view and suppose somebody needs to zoom in to be able to read better.

As they zoom in, it triggers the media queries and you'll see the layout and things change. You'll see, for example, the search field and the banner adjust so that everything is still readable and usable even for people with vision problems who need to zoom in; things rearrange and fit.

Maybe people really need to zoom in even more, and if they keep going they can even get to the mobile layout, what people might see on a mobile device. You can see how things change: links get a little more padding so that it's easier for people to touch things with their fingers. Again, all of this makes things usable not just for people on different devices but also for people with accessibility needs who might have to zoom in. Okay, we have some time for questions.

Hannah: Hi everyone, if you have any questions please ask them in the Q&A pod in WebEx. Okay, we have one question coming in: what web accessibility checking tools would you recommend now that Bobby is no longer around?

Dan: Yes, as far as automated tools, one of my favorites currently is the WAVE toolbar, and the new beta, version five of WAVE, is actually very good. There's also a new bookmarklet: basically you add it to your bookmarks and it'll go through and check all the markup for you.

Hannah: Okay, the next question is do you recommend using Zen theme?

Dan: Yes, basically when it comes to themes you want to use whatever you're most comfortable with, and a really good thing, if you're choosing themes and you're really concerned about accessibility, is that a theme might have the tag D7AX, which says its maintainers really paid attention to accessibility issues. I know John has put accessibility work into the Zen theme, and we also have others such as Adaptivetheme, and I know Omega has done some as well.

Hannah: One more: is there a jQuery book that focuses on accessible design?

Dan: As far as accessible design for jQuery, I haven't seen one, but there's actually a very good book, I think it's called Progressive Enhancement with JavaScript, that was put out, I think, within the last two years, and it talks all about progressive enhancement and how you add on jQuery to create widgets that are accessible.

Hannah: Okay, the next one is: can you go over how zooming is handled in media queries?

Dan: Sure. With media queries, say here we have @media screen, and then inside the parentheses you can test for different things such as aspect ratio, device width, device height, that kind of stuff, and in general we tend to use the width as the thing we test for when doing layout. Especially if you use ems rather than pixels, then whenever you zoom in, those media queries get triggered. So on this site, zooming in and out triggers the same media queries just as if you were resizing the screen. Any other questions?

Hannah: None are coming through, so I want to say thank you so much, Dan, for the great presentation, and thank you everyone for attending. Again, the slides and recorded webinar will be posted to the Acquia.com website in the next 48 hours. Dan, do you want to close with anything?

Dan: Sure, you can reach me at dmouyard@forumone.com if you need to email me any questions, and you can also follow me on Twitter at DCmouyard; that's also my Drupal.org username, my IRC nick and my Skype handle.

Hannah: Great, thanks. Everyone have a great day.

Building a Common Drupal Platform for Your Organization Using Drupal 7 [December 18, 2012]

Constructing a Fault-Tolerant, Highly Available Cloud Infrastructure for your Drupal Site [December 12, 2012]

Click to see video transcript

Hannah: Today's webinar is: Constructing a Fault-Tolerant, Highly Available Cloud Infrastructure for your Drupal Site.
First speaking we have Jess Iandiorio, who is the Senior Director of Cloud Products Marketing, and then we have Andrew Kenney who is the VP of Platform Engineering.

Jess, you take it away now.

Jess: Great, thank you very much, Hannah. Thanks everybody for taking the time to attend today, we have some great content, and we have a new speaker for our Webinar Series. For those of you who attend meetings you know we do three to five per week.

Andrew Kenney has been with the organization since mid-summer, and we are really excited to have him. He comes to us from ONEsite, and he is heading our Platform Engineering at this point, so he is the point person on all things Acquia Cloud. He'll speak in just a few minutes.
Thank you, Andrew.

Just to key up what we are going to talk about today: we want our customers to be able to focus on Web innovation. Creating killer websites is hard, so we want you to be able to spend all of the time you possibly can figuring out how to optimize and create a really, really cool experience on your website. Hosting that website shouldn't be as much of a challenge.

The topic today is designing a fault-tolerant, highly available system and the point of the matter is, if your site is mission-critical how do you avoid a crisis, and why do you need this type of infrastructure?

Andrew has some great background around designing highly-available infrastructure and systems, and he's going to go through best practices and then I'll come back towards the end just to give a little bit of information about Acquia Cloud as it relates to all the content he's going to cover, but he's just going to talk generally about best practices and how you could go about doing this yourself.

Again, please ask those questions in the Q&A tab as we go, and we'll get to them as we can. For the content today: first Andrew is going to discuss the challenges that Drupal sites can have when it comes to hosting, what makes them complex and why you would want a tuned infrastructure in order to have high availability. He'll go through the types of scenarios that can cause failure, how you can go about creating high availability and resiliency, and the resource challenges some organizations may incur, and then he'll go through practical steps and best practices around designing for failure and how you can actually architect and automate the failover as well. He'll close with some information on how you can test failure as well.

With that, I'm going to hand it over to Andrew, and I'm here in the background if you have any questions for me, otherwise I'll moderate Q&A and I'll be back towards the end.
Andrew: Thanks, Jess. It's nice to meet you, everyone. Feel free to ask questions as we go or we can just take those at the wrap-up; I'm more than willing to be interrupted, though.

Many of you may be familiar with Drupal and its status as a great PHP content management system, but even though it's well engineered and has a decade-plus of enhancements, there are a number of issues with hosting Drupal. These issues were always present if you were hosting in your own datacenter or on a server at, let's say, RackSpace or SoftLayer, but they're even more challenging when you're dealing with cloud hosting.

The Cloud is great at a lot of things, but some of these more legacy, very complex and extensive applications have issues which you can solve with modules, solve with great platform engineering, or just work around in other ways.

One of these issues is that Drupal expects a POSIX file system. This essentially means that Drupal and all of its file I/O calls were designed around the fact that there's a hard drive underneath the web server, and if not a hard drive, then an NFS server or a Samba server; there's some sort of underlying file system. This is in contrast to some newer applications that may be built by default to store files inside Amazon S3, inside Akamai NetStorage, or inside a document-oriented database like CouchDB.

Drupal has come a long way, especially in Drupal 7, in making it so that you can enable modules that will use PHP file streams instead of direct fopen-style legacy Unix file operations, but there are a number of different versions of Drupal and they don't all support this, and there are not a lot of great file system options inside the cloud. At the end of the day Drupal still expects to have that file system there.

A number of other issues: you may make five queries on a given page, you may make 50 queries on a given page, and when you're running everything on a single server this is not necessarily a big deal; you may have latency in the hundredths of milliseconds. When you're running something in the cloud it may be the same latency on a single server, but now, even within the same availability zone in Amazon, you may have your web server on one rack and your database on a rack that is a few miles away within that same availability zone.

This latency, even if it's only one millisecond or ten milliseconds per query, can dramatically add up. One of the key challenges in dealing with Drupal, both at the horizontal scaling layer and in the cloud in general, is how you deal with high-latency MySQL operations. Do you improve the efficiency of the overall page and use fewer dynamic modules, or fewer 12-way left joins in Views and different modules? Do you implement more caching? There are a lot of options here, but in general Drupal still needs to do a lot of work to improve its performance at the database layer.

One other related note is that Drupal is not built with partition tolerance in mind, so Drupal expects to have a master database that you can commit transactions to. It doesn't have any automatic sharding built in, where if you lost, let's say, the articles on your website, your article section might go down but you'd still have your photo galleries and your other node-driven elements.

Some newer-generation applications may be able to deal with the loss of a single backend database node, maybe because they're using a database like Riak or Cassandra that has great partition tolerance built into it, but unfortunately MySQL doesn't do that unless you're sharding manually. We can scale out and scale up Drupal at the MySQL layer and we can have highly available MySQL, but at the end of the day, if you lose your MySQL layer you are going to lose your entire application, essentially.

One of the other issues with Drupal hosting is that there's a shortage of talent, a shortage of people who have really driven Drupal at massive scale. There are companies running Top 50 Internet sites powered by Drupal, and there's talent like the White House giving back to Drupal, but there's still a lack of good dev ops expertise when it comes to scaling that out to an organization that runs hundreds of Drupal sites, whether that's deploying on internal infrastructure in, say, a university IT department, or deploying on Rackspace or a traditional datacenter company.

Drupal has its own challenges, and one of those challenges is: how do you find great engineering, operations and dev ops people to help you with your Drupal projects?
Now there are a number of ways, as you may all be aware, that an application can die in a traditional datacenter. It may be someone tripping over a power cord, it may be that you lose your Internet access or one of your upstream ISPs, or you have a DDoS attack.

Many of these carry over to the cloud, but the cloud also introduces other, more complex scenarios. You can still have machine loss; Amazon exacerbates this in that machine loss may be even more random and unpredictable. Amazon may announce that a machine is going to become unavailable on a given day, which is great notice and probably something that your traditional IT department or infrastructure provider didn't give you unless they were very good at their jobs.

There's still a chance that at any given moment an Amazon machine may just go down and become unavailable, and you really have no introspection into why this happened. The hypervisor layer, all the hardware abstraction, is not available to you; Amazon shields it from us, Rackspace Cloud shields it from us. All these different clouds shield you from knowing what's going on at the machine layer. Or there may just be a service outage: Amazon may lose a datacenter; Rackspace, just this weekend, had an issue in the Dallas region with its cloud customers.

You never know when your infrastructure or service provider is going to have a hiccup. There may just be network disruption: this could be packet loss, this could be routes being malformed going to different regions or different countries. There are a lot of different ways the network can impact your application, and it's not just traffic coming to your website, it's also your website talking with its caching layer, talking with its database layer, all these different things.

One of the key points about Amazon's cloud specifically is its file system. If you're using Elastic Block Store, there have been a lot of horror stories out there about EBS outages taking down Amazon's EC2 systems or anything that's backed by EBS. In general it's hard to have an underlying, as I said before, POSIX file system at scale, and EBS as a technology is still in its infancy. Amazon, although it's focused on reliability and performance for EBS, has a lot of work to do to improve it, and even people like Rackspace are just now deploying their own EBS-like subsystems with OpenStack.

Your website may fail just from a traffic spike. This traffic may be legitimate, maybe someone talked about your website on a radio program or TV broadcast, or maybe you get linked from the homepage of TechCrunch or Slashdot, but a traffic spike could also be someone intentionally trying to take down your website. The cloud doesn't necessarily make this any worse, other than the fact that you may have little to no control over your network infrastructure: you do not have access to a network engineer who can point out exactly where upstream all this traffic is coming from and implement routing changes or firewall changes, so the cloud may make it harder for you to control this.

Your control plane, your ability to manage your servers in the cloud, may go down entirely; this is one of the issues that crops up on Amazon when they have an outage. The API may back up so far that you can't do anything, or it may go down entirely, and you have to be able to engineer around this and ensure that your application will survive even if you can't spin up new servers or adjust sizing and things like that.

Another route to system failure is that your backups may fail. It's easy enough to do backups of servers and volumes and all these different things in the cloud, but you have no guarantee, even when the API says that a backup is completed, that it's actually done. This may be better than traditional hosting, but there's still a lot of engineering progress to be made to accommodate this.

In general, everyone wants to have a highly available and resilient website. There are obviously different levels of SLAs: some people may be happy if the website can sustain an hour of downtime; other organizations may feel their website is mission-critical and even a blip of a few minutes is too much, because it's handling financial transactions, or just because of the publicity if the website is down.

In general, your Drupal hosting should be engineered with high availability and resiliency in mind. To do this you should plan for failure, because in the cloud, at any given time a server may die, and you should have either a hot standby or a process in place to spin up a new server. This means you want to make sure your deployment and your configuration are as automated as possible.

This may be a Puppet configuration, it may be a CFEngine configuration, it may just be a Chef recipe or a bash script that says, "This is how I spin up a new machine and install the necessary packages. This is how I check out my Drupal code from GitHub." But at the end of the day, when you're woken up by a pager at 2:00 in the morning, you don't want to have to think about how you built the server; you want a script to spin it up, or ideally you want tools that fail over automatically, so that you have no blips at all.

Obviously, having no blips means you need to have this configured automatically. You should have no single points of failure; that's the ideal in any engineering organization, for any consumer-facing or internal-facing website or application. In a traditional datacenter that would mean having dual UPSs, dual upstream power supplies and network connectivity, two sets of hard drives in a machine or RAID, multiple servers, and having your application load-distributed across geographic regions.

There are lots of single points of failure out there. The cloud abstracts a lot of this, so in general it's a great idea to run in the cloud because you don't have to worry about the underlying network infrastructure; you can spin a server up in one of the five Amazon East Coast availability zones and you don't have to worry about any of the hardware requirements or the power or any of those things. Having no single points of failure means you have to have two of everything, or, if you can tolerate some downtime, you can use Amazon's CloudFormation along with CloudWatch to quickly spin up a server from one of your images and boot it that way, but it's definitely good to have at least two of everything.

You will want to monitor everything. As I said before, you can use CloudWatch to monitor your servers, you can use Nagios installations, you can use Pingdom to make sure that your website is up, but you want everything monitored. Is the website itself, Drupal, returning the homepage? Do you want to actually submit a transaction to create a new node and validate that the node is there, using tools like Selenium?

Do you want to just make sure that MySQL is running, do you want to see what the CPU health is, or how much network activity there is? One other thing is you want to monitor your monitoring system. Maybe you trust that Amazon's CloudWatch isn't going down, maybe you trust Pingdom not to go down, but if you're running Nagios and your Nagios server goes down, you can't sustain an outage like that. You don't want it to happen at 2:00 in the morning and then have someone tell you on Monday morning that your website has been down all weekend, so it's a good idea to monitor the monitoring servers.

Backing up all your data is key for resiliency and business continuity: ensuring that your Drupal cloud system is backed up, your MySQL database is backed up and your configurations are all there. This includes not just backing up but validating that your backups are working, because many of us may have been in an organization where, yes, the DBA did back up a server, but when the primary server failed and someone tried to restore from the backup, they found out that it was missing one of the databases or one of the sets of tables. Or maybe the configuration or the password wasn't actually backed up, so there's no way to even log in to the new database server.

It's a very good idea to always test all of your backups, and this also includes testing emergency procedures. Most organizations have business continuity plans, but no plan is flawless, and plans have to be continually iterated on, just like software. The only way to ensure that the plan works is to actually engage that plan and test it, so it's always my recommendation that if you have a failover datacenter, or a way to fail over your website, you test those failover plans.
Maybe you only do it once a year, or maybe you do it every Saturday morning at a certain time, if you can engineer it so there's no hiccup for your end users, or maybe your website has little traffic at a certain point in the week, but it's a great idea to actually test those emergency procedures.

In general, there are challenges with Drupal management, and just resource challenges. The cloud tells you that your developers no longer have to worry about all the pesky details that are necessary to launch and maintain a website; the idea that you don't need any operations staff anymore is, I think, over-invested in hype. A lot of engineers always felt that the operations team is just a bottleneck in their process, and once they have validated that their code is good, either in their own opinion or by running their own system tests or unit tests, they want to just push it live; that's one of the principles of continuous integration.

The reality is that developers aren't necessarily great at managing server configurations, or at engineering a way to deploy software without any hiccup for the end user, who may load a page and then have an AJAX call that refreshes another part of it. We want to make sure that there's no delay in the process, that code doesn't impact the server, and that the server configurations are maintained.

Operations staff are still very, very necessary, and you have to plan for failure and plan your deployment process around reality. It's very hard to find people who are great at operations as well as understanding an engineer's mindset, and so dev ops is a resource challenge.

Here's an example of how we design for failure. Here at Acquia, we plan for failure; we engineer different solutions for different clients' budgets to make sure we give them something that will make their stakeholders, internal and external, happy. We have multi-availability-zone hosting, so for all of our Managed Cloud customers, when we launch one server we'll then have another backup server in another zone.

We replicate data from one zone to the other. If there's any service interruption in one zone, we serve data from the other zone, and this includes the actual web node layer, the Apache servers that are serving the raw Drupal files, as well as the file system. Here we use GlusterFS to replicate the Drupal file system from server to server and from availability zone to availability zone.

It's also the MySQL layer: we'll have a master database server in each region, and we may have slaves against those master database servers, but the point is ensuring that all the data is always in two places, so any time there's a hiccup in one Amazon availability zone it won't impact your Drupal website.

Sometimes that's not enough. There have been a number of outages in Amazon's recent history where maybe one availability zone goes down, but due to a control system failure or other issues with the infrastructure, multiple zones are impacted. We have the ability to do multi-region hosting, so this may be out of the East Coast and the West Coast, U.S. West, and maybe our own facilities.

It really depends on what the organization wants, but the multi-region hosting gives businesses the peace of mind and the confidence that if there is a natural disaster that wipes out all of U.S. East, or if there's a colossal failure that’s a cascading failure in the U.S. East, or one of these different regions that your data is always there, your website is always in another region, and you're not going to experience catastrophic delays in bringing your website up-to-date.

During Hurricane Sandy there were a number of organizations that learned this lesson when they had their datacenters in, let's say, Con Edison's facilities in Manhattan and maybe they're in multiple datacenters there, but it's possible for an entire city to go and lose power for, potentially a week, or to have catastrophic damage by water to the equipment. It's always important to have your website available for multiple regions and we offer that for our clients.

One of the other key things … to prevent failure is making sure that you understand the responsibilities and the security model for all the stakeholders in your website. You have the public consumer, who is responsible for their browser, for how they engage with the site, and for making sure they don't give their passwords to unauthorized people.

You have Amazon, who is responsible for the network layer: ensuring that two different machine images on the hypervisor don't have the ability to disrupt each other, making sure that the physical medium of the servers and the facilities is locked down, and that customers using the security groups can't get from one machine to another, or make a database call from one rack to another across different clients.

Then you have Acquia, who is responsible for the software layers up through the platform-as-a-service layer with Drupal hosting. We are in charge of the operating system patches, we are in charge of configuring all of the security modules for Apache, and we are in charge of recommending to you, through the Acquia Network Insight tools, which Drupal modules you need to update to maintain high security. We do all these things, but that brings it back to you: at the end of the day you're responsible for your application. Your developers are the ones who make changes to it and implement new architectural pieces that may need to be security tested, or who choose not to update a module for one reason or another.

There's a shared security model here which covers security, availability, and compliance. There may be a Federal customer who has to have things enabled a certain way just to comply with a FISMA or FedRAMP accreditation. Obviously security can impact the overall availability of your website: if you don't engineer for security up front, an attack can take down your machine or compromise your data, and you won't want your website back online until you've validated exactly what has changed.

That's what's very important to understand in the shared security model as you're planning for failure. Another thing I briefly touched on before was monitoring. This includes monitoring your infrastructure and application as well as monitoring for the security threats I just mentioned. At Acquia we use a number of different monitoring systems, which I'll go into in detail, including Nagios and our own 24/7, 365 operations staff, but we also use third-party software to scan our machines to ensure that they are up to date and have no open ports that may be an issue, no daemons running that are going to be an issue, and no other vulnerabilities.

This includes Rapid7 and OSSEC, monitoring the logs, and flagging any issues found during security scans. It's important to monitor your infrastructure both to make sure the service is available and to make sure there are no security holes.
Back to monitoring: we have a very robust monitoring system. It's one of the systems we have to have, because we run 4,000-plus servers in Amazon's cloud, so all the web servers and database servers and the Git and SVN servers, all these different types of servers, are monitored by something we call a mon server, and these mon servers check to make sure the websites are up, check to make sure that MySQL and Memcache are running, all these different things.

The mon servers also monitor each other; you can see that from the line from mon server to mon server at the top, so they monitor each other in the same zone. They may also monitor a mon server in another region, just to ensure that if we lose an entire region we get notified about it.

The mon servers may also sit outside of Amazon's cloud; we may run them with someone like Rackspace, just to have our own business continuity and best-of-breed monitoring, to ensure that if there is a hiccup or service interruption in one of the Amazon regions we catch it. It's important to have external validation of the experience, and we may also use something like Pingdom to ensure your website is always there.

We ensure that it is operating within the bounds of its SLA. There are all sorts of ways to do monitoring, but it's important to have the assurance that your monitoring servers are working, and that each monitor that goes down has something else alert you that it's down, so you don't hamper your support or operations team in trying to recover from an issue.
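
For a sense of what one of these checks looks like, here is a hedged sketch of a Nagios service definition of the kind a mon server might run; the host name, URL, and expected string are hypothetical, not Acquia's actual configuration.

    define service {
        use                     generic-service
        host_name               web-1234
        service_description     Drupal front page
        check_command           check_http!-u / -s "Welcome"
    }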

Building high availability and resiliency into your monitoring infrastructure is very important. One of the other things is being able to recover from failure. This includes having database backups; it includes having file system snapshots so you can recover all the Drupal files; making sure that all your EBS volumes are backed up; pushing those snapshots over to S3; and making sure the process is replicated using a distributed file system technology like Gluster. With all of this you can recover from catastrophic data failure, because having backups is important.

You can choose whether you want these backups to be live (live replication of MySQL or the file system), or just hourly snapshots, or weekly snapshots; that depends on your level of risk and how much you want to spend on these things.
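
As a minimal sketch of the kind of scheduled backup being described (not Acquia's actual tooling), a crontab could dump the database nightly and push the archive to off-instance storage such as S3; every name and path here is hypothetical.

    # Nightly at 03:00: consistent dump of the Drupal database, compressed.
    0 3 * * *  mysqldump --single-transaction drupal | gzip > /backups/drupal-$(date +\%F).sql.gz
    # At 03:30: copy the dump off the instance (s3cmd shown as one option).
    30 3 * * * s3cmd put /backups/drupal-$(date +\%F).sql.gz s3://example-backups/drupal/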

In terms of preventing failure, we utilize a number of these different possibilities. You can use Amazon Elastic Load Balancers, with multiple servers behind an ELB, and these servers can be distributed across multiple zones. For example, we use ELBs for a client like the MTA of New York, where they wanted to ensure that if Hurricane Sandy wiped out one of the Amazon availability zones, we could still serve their Drupal website from the other availability zones.

We also use our own load balancers in the backend to distribute traffic between all the different web nodes, so one availability zone may forward requests to the other availability zone. You can do round robin, and there's additional logic in there to distribute requests to all the healthy web nodes and to make sure we don't send any traffic to an unhealthy web node while our operations team and automated systems recover it from whatever made it unhealthy.

We also have the ability to use a DNS switch to take a database that has catastrophically failed, or has replication lag or some other problem, out of service. At Acquia we always choose to ensure that all your data transactions are committed: we would rather incur a minimal service disruption than lose data. What you would potentially be losing is a file upload, or a user account being created, or some other write, and we have people building software-as-a-service businesses on top of us, so that protection against loss is very important to us. So we utilize a DNS switch mechanism to make sure that the database traffic all flows to the other database server.

For the larger, multi-region sites, we actually use a manual DNS switch to switch from one region to the other. This prevents flapping on an issue and turning a caching problem into something even worse, where you may have data written to both regions. The DNS switch allows us, and allows our clients, to fail their website over when they choose to, and then, when everything is status quo again, they can fail back.

As I said before, it's very important to test all of your procedures, and this includes your failover process. It should be scripted so you can fail over to your secondary database server, or shut down one of your web nodes and have it auto-heal. People like Netflix are brilliant about this with their Simian Army, as they call it: they can shut down random servers, even shut down entire zones, and ensure that everything recovers.

There are a lot of best practices out there for actually testing failover, and these failover systems, plus the extra redundancy you've added to eliminate single points of failure, are also key in non-disaster scenarios. Maybe you're upgrading your version of Drupal, or you're rolling out a new module and you need to add a new database table or alter a table as you go through that process within Drupal.

You can fail over to one of your database nodes and then apply the module or schema changes to the other node without impacting your end users. There are ways to use these systems in your normal course of business to make sure you use the available nodes to their full capacity and minimize the impact on your stakeholders.

Jess, do you want to talk about why you wouldn't want to do everything yourself?

Jess: Sure, yeah. Thank you so much, Andrew. I think that was a really good overview, and hopefully people on the phone were able to take some notes and think about, if you want to try this yourself, what are the best practices that you should be following.

Of course, Acquia Cloud exists, and as I'm in marketing I would be remiss not to talk about why you'd want to look at Acquia. The reasons why our customers have chosen to leave DIY mainly fall into three groups. One is that they don't have a core competency around hosting, let alone high availability, and if that core competency doesn't exist, it's much easier and much more cost effective to work with a provider who has it as their core competency and can provide the infrastructure resources as well as the support for them.

Another main reason people come to Acquia is that they don't have the resources, or have no desire to have the resources, to support their site with 24x7 staff available to make sure it is always up and running optimally. Acquia is in a unique position to respond to both Drupal application issues and infrastructure issues. We don't make code changes for our customers, but we are always aware of what's going on with your site, and we can help you very quickly identify the root cause of an outage and resolve it with you.

Then one of the other reasons is that it can be a struggle when you're trying to do this yourself, whether you're hosting on premise with servers you've purchased from someone or you've gone straight to Amazon or Rackspace. Oftentimes people have found themselves in a blame game with a lot of finger-pointing: if the site goes down, their instinct is to call the provider, and if that provider says, "Hey, it's not us, the lights are on, you have service," then you have to turn around and try to work out with your application team what's wrong. There can be a lot of back and forth and a lot of time wasted, and what you really want is your site up and running.

Those are reasons not to try to do this yourself. Of course you're welcome to, but if you've tried and haven't had success, the reasons to go forward with Acquia are our White Glove service, again, fully managed on a 24x7 basis covering Drupal application support as well as infrastructure support, and our Drupal expertise: we have about 90 professionals employed here at Acquia across operations who are able to scale your application up and down.

We have engineers and we have Drupal support professionals, and they can help you either on a break-fix basis or in an advisory capacity, to understand what you should be doing with your Drupal site, between the code and the configuration, to make it run more optimally in the Cloud, so that's a great piece of what we offer. And of course there's everything covered today in terms of our high availability offerings and our ability to create full backups and redundancy across availability zones as well as Amazon regions.

We are getting to the close here, if you have some questions I'd encourage you to start thinking about them and put them into the Q&A.

The last two slides here just showcase the two options we have if you would like to look at hosting with Acquia. Dev Cloud is a single-server, self-service offering: you have a fully dedicated single server, you manage it yourself, and you get access to all of our great tools that allow you to implement continuous integration best practices.

The screenshot you're seeing here is just a quick overview of what our user interface looks like for developers. We have separate dev, staging, and prod environments pre-established for our customers, and very easy-to-use drag-and-drop tools that allow you to push code, files, and databases across the different environments while implementing the necessary testing in the background, to make sure you never make a change to your code that could potentially harm your production site.

The other alternative is Managed Cloud, and this is the White Glove service offering where we promise your best day will never become your worst day, with something like a traffic spike ending up taking your site down. We'll manage your application and your infrastructure for you; our infrastructure is Drupal-tuned with all the different aspects that Andrew has talked about. We use exclusively open source technologies as part of what we add on top of Amazon's resources, and we've made all the decisions that need to be made to ensure high availability across all the layers of your stack.

With that, we'll get to questions, and we have one that came in. "Can you run your own flavor of Drupal on Acquia's HA architecture?"

Andrew: The answer is yes. You can use any version of Drupal, and I think we are running Drupal 6, 7, and 8 websites right now. You can install any Drupal modules you want; we have a list of which modules we support on the HA architecture, and we support most of the popular modules out there. There's always the odd day when some random security module or media module needs something, and we may need to vet it for you or recommend alternatives. People have taken Drupal and, for lack of a better word, bastardized it, built these kinds of crazy applications on top of it or rewritten chunks of it, and that also works with our HA architecture.

Our expertise is in core Drupal, but our professional services and our technical account managers are great at analyzing applications and understanding how to improve their performance, so by now the platform can host pretty much any PHP application, or a static application. It's optimized for Drupal, but the underlying MySQL, file system, Memcache, and all these other requirements for a Drupal website have HA capabilities that work across the board.

Jess: And we do have multiple cases where customers have come to us with their application running fine in our Cloud environment, but they came to us from hosting directly with Rackspace or Amazon, and they found that to be either unreliable or just not cost-effective because of the amount of resources that had to be thrown at the custom code.
Another good thing about Acquia is that by becoming a customer you get access to all of these tools that help test the quality of your code and configuration, so when you bring extensive amounts of custom code into our environment we can help you quickly figure out how to tune it, and if there are big issues that are the culprit for why you would need to constantly increase the amount of resources you're consuming, we can let you know what those issues are and do a site audit through PS, like Andrew mentioned.

Our hope, and our promise to our customers, is that you're lowering your total cost of ownership by giving us the hosting piece along with the maintenance. If there's a situation where, for any of our customers, we are continually having to assign more resources because of an issue with the quality of the application, that's when we'll intervene and suggest, as a cost-savings measure, working with our PS team to do a site audit, so we can help you figure out how to make the site better and therefore use fewer resources.

Andrew: In a lot of cases we could just throw more hardware at a problem and have that be a Band-Aid, but it's in both our best interest and the customer's best interest, in terms of both their bills and having an application that will last for many, many more years, to have our team explain what you should not have done, how you can best use this module, or some other recommendation, so you have a more highly optimized website for the future.

Jess: The next question is, "Why did Acquia choose Amazon to standardize its Cloud software on?"

Andrew: Acquia has been around for the past four or five years, and Amazon was the original Cloud company. I was at the Amazon re:Invent conference a couple of weeks ago, and one of the keynote speakers there said, "Amazon is number one in the Cloud and there is no number two." We chose Amazon because it was the best horse at the time, and we continue to choose Amazon because it's still the best choice.

Amazon's release cycle for new product features and new price changes and all these things is accelerating. They continue to invest in new regions, and Amazon is still a great choice to reduce your total cost of ownership by increasing your agility and your velocity in building new websites, deploying new things, and moving things off your traditional IT vendor to the Cloud, so we are very, very strong believers in Amazon.

Jess: "Does Acquia have experts in all the tiers of the Drupal architecture: the database, the OS, caching?" Then the marketing person is going to take a stab at this … [crosstalk 00:40:50]

Andrew: We definitely have experts at all the different levels. On the OS side we have some Red Hat experts, we have … a bunch of experts, so they can recommend different options for people who don't host with us; internally we have standardized on one base for hosting, and that is where our operations staff's expertise lies. For the database, we have operations staff dedicated just to MySQL, and we have support contracts with key MySQL consulting or software companies for any questions that we can't handle.

It's one of the ways that we scale: you don't have to pay someone on the other side of the world a ten-grand fee for something we can just go ask about. For caching, we have people who have helped design some of the larger Drupal sites out there and lived through them being under heavy traffic storms, people who contribute to Drupal core and to contributed caching modules, be it Memcache or Redis caching and all these different capabilities. As for Aegir, we don't use Aegir internally, but we do interact with it and support it; a lot of our big university or government clients may be using Aegir in their internal IT department and may choose to use us for some of their flagship sites or for some other purpose. So yes, we do have expertise across the board.

Jess: [Ken 00:42:22], unless you have any questions that came in directly to you, that looks like the rest of the questions we have for the day. Hopefully you found this valuable, and you've got 15 minutes back in your day; hopefully you can find good use for that.
Thank you so much, Andrew, for joining us. I really appreciate it, and the content was great.

Andrew: Thank you.

Jess: Again, thanks everybody for your time; and the recording will be available within 48 hours if you'd like to take another listen, and you can always reach out directly to Andrew or myself with any further questions. Thank you.

Andrew: Thanks everybody.

University Shares Tips for Migrating Thousands of Sites With One Install Profile [December 5, 2012]

Click to see video transcript

Female: Thanks for joining today. Today’s webinar is University Shares Tips for Migrating Thousands of Sites with One Installation Profile with Tyler Struyk who’s the Drupal Developer at the University of Waterloo and was on the Drupal Implementation Team there.

Tyler Struyk: Today I'm presenting on what I've been doing for the last six months at the University of Waterloo and what they've been doing for the last three years.

A little bit about myself. A lot of people know me as iStryker on drupal.org. I've been using Drupal since 2007. Most of that time I've been working as a freelancer, and as I said, I've been working at the University of Waterloo full-time for the last six months.

We actually have quite a few websites. On this slide, the two things I want you to note are uwaterloo.ca and pilots.uwaterloo.ca: the uwaterloo.ca figure is the number of websites that are currently live, and the Pilots figure is the number of websites currently in migration, waiting to be put into production.

A little bit of history. Previously, the University of Waterloo used Dreamweaver templates to give a common look and feel to all their websites. This was quite a bit of trouble, because if they ever needed to make a change, they would have to push the new template up to uwaterloo.ca, and then everyone who owned a website would have to pull that new template down and push it up to their own existing website. This whole process could sometimes take quite a few months for all the changes to be the same across all the websites on campus.

Then, I think three years ago, right after DrupalCon San Francisco, the University of Waterloo started moving to Drupal, and what they started doing was creating one-off Drupal 6 websites. This was fine, this was great; the only problem is there was no way to keep track of them all. Sometimes if you had a new feature you could push it out to one or two, but once we eclipsed 25 websites it was very hard to maintain them all.

Here at the University of Waterloo, what we call our website is the uWaterloo Content Management System. It's one installed profile, and, at least in production, all the websites are on one server. It's used by all Faculties across the campus, and they are all running the same features for the most part. Some websites have one or two one-off customization modules, but overall they all have the same modules and features.

Now, here is an example of what the websites look like right now. This is actually our homepage. Our homepage is a little special; it's running on Drupal, and we relaunched it at the end of August. It has the same features and modules, et cetera, but it's a little trimmed down; it doesn't need as many features as the Content Management System running on the other websites. As you can see, it's running a different theme, and that's the main difference.

Here at the University of Waterloo, the team is actually 15 people or fewer. We have eight people doing content migration and training. We have five doing the actual development, pushing out new features and working through the bugs. We have one person pretty much devoted to accessibility: the government has come down with an accessibility guideline that we have to meet, every public-sector website must be accessible, and I think the standard we have to meet comes into effect in 2014. We're getting a little ahead of ourselves, but yes, every new piece of content we publish has to meet these goals. Then we have a system administrator who does all the backend work for our websites; I'll go into more detail about what he does near the end of the presentation.

As I said, there are eight people working on content migration and training, broken into two groups. We have two full-time employees, and they work on everything: the migrations, the migration meetings, Q&A, training, support, and communication. Now, the University of Waterloo is the school known for co-ops; nine percent of our courses here have a co-op component. I don't know if people in the States or around the world know what co-op means; you might think of it as internships. Every four months new co-op students come in, and we break the six co-ops into two groups: four of them do content migration, each one working on a separate site at a time, and two of them do the Q&A and drop-in labs. I'll talk more about the drop-in labs in a couple of slides.

The process of migration: if you want your website on campus to be migrated to the new Content Management System, you first put in a request. Here at the University of Waterloo we use the Request Tracker system, I don't know if you know it, and that's what the request goes into. After the request is in, we set up a review meeting and go over the requirements. That's pretty much it: we do some training and also determine whether they're a good candidate to actually move into the system.

Now, sometimes there are websites that are not good candidates, and a good example is Athletics. They have quite a bit of customization, and they'll probably get into our new Content Management System once we roll out more features, but that's probably a couple of years down the road. Just to list a couple of their customizations: they have an e-commerce component for selling tickets, they have custom content types for sports and teams, they keep track of sport scores, they have advertisements and sponsorships, and as you can see on the page here, they have a custom layout.

After this meeting, we create a website for them on pilots.uwaterloo.ca to start the migration, and we assign a co-op student to help them out. Now, this co-op student might do everything for them, the whole migration process of moving the content from the old website to the new one, or they might just do a small portion and the person who owns the actual content does the rest themselves. We set the launch dates: for simple websites the launch date might be two weeks away, but for complicated websites it might be a couple of months away. In the ten days before we launch the site, we do a lot of Q&A, and one tool I want to mention is the WAVE accessibility test. WAVE is a browser add-on for Firefox that analyzes your page and tells you if you have any accessibility problems on your current website.

Here at the University of Waterloo we have quite a bit of training to support the whole Content Management System. Some of these courses are mandatory: if you want to maintain the content on your site, you have to take the Content Maintainer course, and there's a more advanced version, the Site Manager course. The difference between the two is that the Content Maintainer course covers the basics, so you can create new content, review content, create new drafts, add new images, things like that, while the Site Manager course covers more advanced things, like changing the layout. The other course we have is on webforms: if you want a webform on your website, you have to take the Webform course. The reason we force people to take it is that there are a lot of things you can do with webforms, such as collecting credit card information, et cetera, so for privacy reasons we make them take this course.

Then there's a variety of additional support. Twice a week there's a drop-in lab that runs all day; if you have questions, you come in and ask. Once a month we have a developers' drop-in lab, so you can ask the five people who do the development and bug fixing more advanced questions and have one-on-one time with them. We also make quite a few training videos. I'll mention that I use Camtasia Studio 7 to do the recordings, but I recommend using version 8: in version 8 you can have many more layers, whereas in Camtasia 7 you can only do one or two. An example is here; I just pulled this off a website. It gives you a basic idea of what it looks like and the tools you have.

Now, there are other courses we offer, like advanced webforms; there are quite a few things you can do there, more advanced stuff like regular expressions and filtering. And there are other supporting courses you'll need to take, such as writing for the web and writing accessible content.

The other thing the Content Migration Team does is communication, and there are lots of ways they do it. We have multiple mailing lists set up to reach different departments, and we also have the web resources website. On that website we communicate everything: upcoming courses, new features, and common things like what colors you should be using. Here at the University of Waterloo each faculty has its own colors, and you can come to this website and ask, "Hey, what color should I be using for my faculty?" There are also accessibility tips, and we push out news such as term reminders, for example reminding you to take departed co-op students off your website, because quite a few people forget to do this.

Back in November of last year we pushed out version 1.0. Since then we've reached version 1.5.8. The changes between those two include things such as live streaming, better slideshow integration, and quite a few more tools that have changed from that version to this one.

Websites and servers: these are two websites we use. One is Request Tracker, which I mentioned; we use it for bug tracking. We also have an in-house agile project management website that we use for people to request new features, and I'll talk about that later in a couple of slides. We also have quite a few servers here that we use for development and testing; I'll switch to the next slide to give you a better diagram.

For testing, I want to go through each one of these and why we use each one for different scenarios. On your local dev environment you usually have a standard install profile, just plain Drupal 7, or maybe in your case Drupal 6. If there's a new bug, we test against plain Drupal to see whether it's actually Drupal that's broken or whether it's our installed profile. We also do a lot of testing locally: if you're working on something that could break something huge, you don't want that on a shared server, so you test it locally first.

Now, we have a web server, the Sandbox, and the version of the profile on there is the trunk version, so any new changes you push get pushed up there nightly and it gets rebuilt. If you want to test what's upcoming, that's where you do it. Then we have Dev, which runs the current release that's in production. It's mostly for bugs: if you want to do bug testing against the production release, this is where you go. It's pretty self-explanatory.

At first we had only one server separate from Sandbox and production, but now it's broken into two: we introduced the Profiles server. It's identical to the Dev server, but we use it specifically to test other install profiles such as Commerce Kickstart and CiviCRM. The reason we do this is that these install profiles can break a lot of things, and having two servers makes it easier to troubleshoot what the actual problem is: is it the other profiles, the extra modules installed on the server, or our own installed profile?

Then we have two more servers, as I mentioned before, Pilots and Production, and these are two servers we don't touch for testing. We do have one exception, though: on Production we have a test website, and the reason is that in production there are a lot of external components you won't see on the other servers, such as caching. We use Varnish caching pretty extensively here at the University of Waterloo and we can't mimic that on the other servers, so if we need that kind of testing we push to Production and test it there. There are two things that will probably change moving forward. On Sandbox we will probably switch over to Jenkins, and by switching to Jenkins, instead of pushing changes up once a day and rebuilding once a day, we can rebuild the server, let's say, every five minutes.

For production, we want to start using Vagrant. What Vagrant is good for, and I'm just reading the questions here, I'll get to them in a sec, is that you can copy the production settings into multiple virtual servers, so you can set up a virtual server with caching configured like production and do the testing on there. That way we won't have to touch production at all in the future.

Someone asked whether we're using Aegir. We are not using Aegir at all; for creating websites in development there were too many bugs and we weren't able to use it efficiently. Do we use Drush? Yes, we use Drush extensively, and I'll get into more detail about Drush later, in a couple of slides, when I show you how we push a release out.

A typical bug fix, I'll just quickly go over this: someone creates a ticket in RT, and then the Implementation Team works on it locally, again against the standard profile to see whether it's Drupal that breaks; on Sandbox, testing against trunk; and on Dev and Profiles for the current release that's in production. Now, if a bug is breaking production, we'll roll a new version right away. If it's not urgent, we'll commit it to trunk and roll it out in the next release.

Our installed profile is actually pretty simple. It's made up of four, I think it's five, files: you've got the .profile, the .install, and the .make file. The .make file, I don't know if you're familiar with it, lets you pull down all the modules, themes, everything from their repositories, and it looks like this. If you don't know what a make file is, just remember it's a grocery list: it says grab this theme and pull it down.
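
The slide itself isn't reproduced here, but a minimal .make file sketch gives the flavor of that grocery list; the project names and versions below are hypothetical, not the actual uWaterloo profile.

    ; Drush make file for Drupal 7: core plus a few contributed projects.
    core = 7.x
    api = 2
    ; Drupal core itself.
    projects[] = drupal
    ; Contributed modules, pinned to specific releases.
    projects[views][subdir] = contrib
    projects[views][version] = 3.7
    projects[features][subdir] = contrib
    projects[features][version] = 2.0
    ; A contributed theme.
    projects[zen][type] = theme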

When you're committing to trunk, there are two things you've got to do: you commit to trunk, and you create a new tagged version. When creating a new tagged version, we use an increment system based on whatever version is currently in production. In the example down here you see 1.24: if that's in production, then you create increments such as 1.24.1 and so on. The other thing I want to point out is about re-rolling a feature: the one problem we found is that our features are not just configuration.

Let me explain. Our features don't just contain configuration components; we also utilize the .module file in our features, and we tie in JavaScript and CSS stylesheets, and currently there's a bug in Features where re-rolling a feature strips the stylesheets from the .info file. So every time we commit, if that feature has stylesheets attached to it, we have to make sure to re-add them.

Creating a new release: as I mentioned before, we roll the increments up, so if we're at 1.24.2 we increment to 1.25. It's pretty self-explanatory. Sometimes creating a new release takes quite a bit of time, because you're making a lot of incremental changes, so you might be incrementing quite a few times, testing, and deploying back down; other times it's a quick fix and you push it up. I'll touch on that more later on.
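
With SVN, which the team was still using at this point, rolling that release is essentially a tag copy; the repository URL below is hypothetical.

    # Tag the current trunk as release 1.25 after testing the 1.24.x increments.
    svn copy https://svn.example.uwaterloo.ca/uwcms/trunk \
             https://svn.example.uwaterloo.ca/uwcms/tags/1.25 \
             -m "Tag 1.25 for production"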

I'll get to a couple of questions here. Someone asked if we're using Git. No, we're not using Git right now; we do want to move to it. Currently we're using SVN, but yes, in the next month or so we're going to move to Git. How many themes are you running in your installed profile? We're currently using one theme, except for the homepage, which uses its own theme. How often do you add new features to production sites? Pretty regularly: as for brand-new features, maybe one or two per release.

As for making changes to the current features, we make quite a few: between releases we might be updating 10 features. How many features do we currently have? We have quite an extensive list; I don't know the number off the top of my head, but we have over 50 features broken down into different components. The other question was whether this is set up as a multisite. The answer is yes, and I'll show you an example of that indirectly later on.

Okay, so here's an example of pushing out a new release. Now, it's not every time that things run smoothly; the majority of the time they don't. You do all this work on the Sandbox, where you have the trunk version. You take the trunk version and push it to Dev and Profiles for testing, and you test it on the sites that are on Dev and Profiles; I think there are around 20-odd websites on there right now. If those sites don't break, then you push up to Pilots.

On Pilots there are usually about 60 websites at any given time, so then you're testing those websites to see whether anything breaks or everything is running smoothly. If something breaks on Pilots, you might pull it back down to Dev or Profiles to test again there and then push the next release back up to Pilots, or if something is really big you might have to pull it right down to local and test it there.

As for how we push it up from Dev and Profiles to Pilots and then Production, we have Drush scripts that we built ourselves. It's actually pretty simple: you run the make file to pull everything down, you run the database updates, and then you do a feature revert for everything.
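
The actual scripts are the team's own, but a rough sketch of the sequence just described, using stock Drush commands with hypothetical paths and aliases, might look like this:

    #!/bin/sh
    set -e
    # Rebuild contributed code from the grocery list in the .make file.
    drush make --no-core uwcms.make /var/www/drupal/profiles/uwcms
    # Run pending database updates on every site in the multisite install.
    drush @sites updatedb -y
    # Revert features so the exported configuration wins over any overrides.
    drush @sites features-revert-all -y
    # Clear caches everywhere.
    drush @sites cache-clear all -y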

As a note, when we're creating new releases, sometimes we create beta releases and sometimes we create release candidates; it's the same thing you'd do on drupal.org with modules.

Someone asked whether we're currently running on Drupal 6. We are actually running on Drupal 7; everything is Drupal 7.

I touched on this earlier; I forgot that I had a slide on it. The installed profile, here are all the files that are part of it: .info, .install, .profile, .make, and then rebuild.sh, which is a script. Every time you pull down the repository, you run the rebuild script; it deletes the entire profile, then runs the .make file and pulls everything back down. Then we have a special server; back on slide number one, I think, there was a server called wms-aux.uwaterloo.ca, and you can see there are only two sites on there.

On this server we have a custom Drush command that we run to create a website for us. We can specify a lot of different parameters: which installed profile to use, or which server to install the new website on. When we run it, it creates all the files you need and generates the database on our database server, and poof, we have a website up and running. The details of the script I can't tell you; I don't really know them. That's our system administrator's work; he spent quite a few months building the script, perfecting it, tweaking it, et cetera.
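
That provisioning command is custom and isn't shown, but a rough stock-Drush equivalent of the idea, with a hypothetical profile name, sites directory, and credentials, would be:

    # Install a new site into the multisite tree using the shared install profile.
    drush site-install uwcms -y \
      --sites-subdir=uwaterloo.ca.physics \
      --db-url=mysql://uwcms:secret@db.example.internal/uw_physics \
      --site-name="Physics"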

I'll answer a couple of questions before I jump to the next topic. Actually, the questions coming in right now I'm going to leave to the end.

I'll talk a little bit about the agile project management website. This is a website we use to take in new feature requests that people want to see in our Content Management System. I'm going to go over the agile process quickly; I could do a full presentation on what agile is and how to run successful agile sprints, so I'll just touch on the topics. People make feature requests, we turn those requests into user stories, and then we do sprints on those user stories. If you make a feature request, you become a stakeholder of that user story, and in our system we rank them.

As a person, you can be a stakeholder of multiple issues, and you have to rank the issues that are most important to you using a drag-and-drop module. The module I custom-made for this is actually on drupal.org; it's called User Priority Ranking. If you don't want to become a stakeholder, you can also just follow a user story, and through email communication, if anyone posts a new comment on it, you'll get an update on its current status.

Here's the ranking system, and there's the module. We use it as a point system: if you're ranked number one you get 10 points, if you're ranked number two you get 9 points, and so on. Then, based on the points across users, this gives us an idea of what people actually want, and those are the feature requests and user stories we work on first.
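
In other words, the score presumably works out to something like 11 minus the rank; a tiny hypothetical helper makes the arithmetic explicit.

    <?php
    // Rank 1 => 10 points, rank 2 => 9 points, and so on, with a floor of zero.
    function uwcms_priority_points($rank) {
      return max(11 - $rank, 0);
    }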

Why do we use our agile management website? It's actually the only way to keep yourself sane. At first we didn't need it much, because we only had a few websites in the Content Management System. But once your numbers hit, at least for us, over a hundred, we felt the development team had become "maintenance only": instead of working on new features we were only working on bugs, and new features just came to a halt. That was a bad situation, so by using the agile process people can see and track the new features and know when they're going to be rolled out, and it helps the development team keep on schedule and keep its sanity.

Now we'll talk about system administration. I'll look at the questions first, right here, before I jump to the next topic. The questions that came in I'll leave to the end.

All right, so how we have it set up is that if you go to uwaterloo.ca, that's one website. If you go to uwaterloo.ca/physics, that's a different website, and things like /environment and /chemistry are all separate websites running on the same domain.

Now, there are two ways to control how this works. One is to use symbolic links, and the other way is to use Apache. We chose method number two, and I'll go into why: the main reason is that symbolic links make redirects very messy.

Apache includes a directory, and in that directory there's a special configuration file for each website, as you see there, one for environment, one for applied-health-sciences, and so on. If you go to, let's say, uwaterloo.ca/about, that stays with the uwaterloo.ca site. If you go to uwaterloo.ca/environment/about, it doesn't redirect; it goes to the environment website and pulls the content from that site and its database. Then you can go an additional level deep, such as ecology, which is under environment, and you can see on the slide the structure we have for our files.
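
A hedged sketch of that per-site include approach (paths are hypothetical): the main virtual host pulls in a directory of small .conf files, each one aliasing a path such as /environment onto the shared Drupal docroot, while Drupal's own multisite convention (a sites/uwaterloo.ca.environment directory) maps that path to the right database.

    # In the main vhost:
    #   Include /etc/apache2/uwcms.d/*.conf
    #
    # /etc/apache2/uwcms.d/environment.conf:
    Alias /environment /var/www/uwcms
    <Directory /var/www/uwcms>
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>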

Now, here at the University of Waterloo we use three types of servers: Pound, Varnish, and Apache. This is a little different from how other people set things up; it's very common to have just Apache and Varnish. If you don't know it, Varnish is a front-end cache, so the Apache server doesn't get hit as much. If you have questions about Varnish, I recommend you look it up.

We use Pound for a couple of reasons. One of the main ones is to terminate HTTPS and pass the request on as HTTP, so that you can still cache things. Here at the University of Waterloo we want to have all the websites switched over to HTTPS, and the only way to do that is to have Pound in front, so that traffic doesn't go straight past the cache: it stops at Varnish, which serves up cached content if it can. I went over the details of this already; the issue is that Varnish does not handle HTTPS traffic well, which is why we strip it off in front.
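
A rough, unverified sketch of a Pound listener doing that HTTPS termination, with a hypothetical certificate path and ports: Pound answers on 443 and hands plain HTTP to Varnish, which in turn talks to Apache.

    ListenHTTPS
        Address 0.0.0.0
        Port    443
        Cert    "/etc/pound/uwaterloo.pem"
        Service
            BackEnd
                Address 127.0.0.1
                Port    6081    # Varnish; Varnish's backend is Apache
            End
        End
    End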

Now, the other problem we have is with the legacy Dreamweaver websites here at the University of Waterloo. To get the Dreamweaver templates, those other websites go to uwaterloo.ca/css to grab the CSS files, as well as /images to grab all the image files. Before uwaterloo.ca was a Drupal website this was no problem and it was easy for the system administrator to handle, but since uwaterloo.ca became a Drupal website, when you hit that link Drupal wants to take over and do its own thing. We use Pound to route around this, so if you hit uwaterloo.ca/css, Pound serves up those stylesheets to you instead of the request ever hitting the Drupal website. This is very important because, like I said, there are over 800 websites out there that still use the Dreamweaver templates and clearly have not been migrated to the new system.

One of the other problems we have is with redirects. How the current system works is that if you're not in our new web Content Management System, you have your own subdomain. When you come into the new system, sometimes there's content you can't migrate, and you need a redirect back to the old system; sometimes content sits on your old site and never gets migrated for quite a long time. The other thing is that you need a one-to-one mapping from old content to new content. That might be straightforward, page one to page one, or it might be some odd legacy name mapping to a clean Drupal path.

The way we handle this is that we have a single file that Apache looks at, and it just goes through it: okay, you hit this page, great, then you need to be sent to this page. It's just a one-to-one, sequential lookup file.
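
That lookup file maps cleanly onto Apache's RewriteMap feature; a hedged sketch with hypothetical file names follows. The map file holds one old-path-to-new-path pair per line, and the rewrite rules issue a 301 whenever a request matches an entry.

    # /etc/apache2/legacy-redirects.map (one mapping per line):
    #   science/oldpage.html    /science/about
    #   physics/faculty.htm     /physics/people
    RewriteEngine On
    RewriteMap legacy txt:/etc/apache2/legacy-redirects.map
    RewriteCond ${legacy:$1} !=""
    RewriteRule ^/?(.+)$ ${legacy:$1} [R=301,L]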

The other thing that becomes unmanageable when you have this many websites is the settings.php file that exists for each one, so what we've done is break it out into multiple components. I'll show you an example here. The image on top is the settings.php file for each individual website; as you can see, it's only eight lines. What it does is pull in the other settings from additional files that are usually pretty static. It pulls in some of the files you see down below: one holds the standard database settings, and then it pulls in other files with other settings, as you can see on the screen now.

There are separate settings include files for the database, the host, the network, and the other settings. If you ever change the domain for your database, you only have to change one file and, poof, every other website works. The same goes if you change the IP address: you change one file and it gets pushed out to all the other websites.
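
A minimal sketch of that split, with hypothetical file names and paths: the per-site settings.php stays tiny and pulls the shared pieces in, so a database host or IP change is made in exactly one place.

    <?php
    // sites/uwaterloo.ca.environment/settings.php
    $site_db = 'uw_environment';   // the only per-site value
    require '/var/www/shared/settings.database.inc';
    require '/var/www/shared/settings.common.inc';

    // /var/www/shared/settings.database.inc (one copy shared by every site):
    //   $databases['default']['default'] = array(
    //     'driver'   => 'mysql',
    //     'host'     => 'db.example.internal',
    //     'database' => $site_db,
    //     'username' => 'uwcms',
    //     'password' => 'secret',
    //   );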

All right, so what's coming up? Here at the University of Waterloo we're going to work on proper high availability, and we want to look at NetApp filers for storage, as well as putting up multiple servers to handle MySQL. The other thing we want to look into is Nginx, which might be able to replace the system we have now with Pound, Varnish, and Apache. The one thing holding us back there is the Nginx rewrite module. Sorry, how do I put this?

Nginx handles URL rewrites quite differently from Apache. Apache gives you quite a lot of control over what you can do, whereas Nginx has a smaller, slimmed-down feature set and can't do as much, so the switch will take some work; it's not a one-to-one swap, and it'll take some time. I should state that I'm not a system administrator here at the University of Waterloo, so some of the stuff I'm mentioning now is a little over my head, but I wanted to give you an overview of some of the problems we've had. As I mentioned before, we want to switch all the websites over to HTTPS; we're not doing that right now, but in the next few months we will be.

The other thing I want to talk about is a little off topic, but it's how we handle search here at the University of Waterloo. We purchased a Search Engine Appliance from Google; it's a server by itself, it does exactly what Google does, but it just indexes your own websites. We have about a thousand websites and it indexes everything for us for site search. You can do the same thing using Apache Solr, which is an open source project; we just decided to go with the Search Engine Appliance because it does all of this out of the box with very little configuration. Comparing costs, I think when we set it up it cost us about $50,000 for each one, to use that server for two years, and we'll probably purchase a second one; we felt that was the better way to go instead of hiring or contracting someone to set up Apache Solr for us.

The other thing, moving into the future, is that we want to segment how our content shows up in the search engine. For example, google.com has different sections: a section for videos, a section for images, and so on. We have different kinds of content too, and the Google Search Appliance can be set up so we can do the same for ourselves: we can have a special search tab just for events, or one for news, or for videos and podcasts. It's very easy to set all these things up with little system administration support, whereas with Apache Solr there's quite a bit more work needed to set that up.

Here's a question: someone asked how we interface the GSA with the Drupal websites. I've actually asked someone, my boss here, and I'll get him to answer that question for you in a sec. We're almost done here. Just a couple of notes: we don't have any e-commerce integration with our Content Management System. One of the reasons is that here at the University of Waterloo we have a strict policy; we don't want the liability that comes with handling credit cards and things like that.

We actually have a special server set up just to handle e-commerce, and that server sits behind strict firewalls, et cetera, and right now we don't have a need to add e-commerce to the CMS. If you want to set up e-commerce, you'll have to do it yourself: you can set up your own Drupal website and install Drupal e-commerce, but you won't be part of the Content Management System and we won't be giving you support for it.

All right, so that’s what I have. That’s all the slides that I want to go through. I can take any questions you have now.

Female: Hi everyone, if you have any questions, please ask them in the Q&A tab.

All right, we have a few coming in. The first one is: how do you update multiple databases for the multisite setup?

Tyler: Currently we actually have only one … I could be wrong. Sorry, we only have one database server; well, we have two servers, one that's a backup and one that's our primary. On that database server, every website has its own database. There you go.

Female: Okay, great. Are your databases set up as master-slave?

Tyler: Yes.

Female: Okay, great. The next one is: for the approximately 60 websites, are they all on the same domain or are they running on different domains?

Tyler: Sorry, the 60 websites? The …

Female: It must have been your slide three, I think, that they were referring to. I'm not sure.

Tyler: Oh yes. Are you referring to the Pilots? If you're referring to the Pilots, the Pilots run an identical setup to uwaterloo.ca, so they're all on the same domain.

Female: Okay, great. How do you open a new site? What is the procedure?

Tyler: Okay, so this is a little outside my area. I know we have a custom script that goes in and creates the file system for us, creates the database for us, and pulls down all the modules and files you need from the .make file that's inside the installed profile.

Female: Okay, great. The next one is, how many centers do you have running all your courses and what department do they belong to?

Tyler: How many courses? We actually don’t have any courses online. It’s not part of our Content Management System. It’s just content itself.

Female: Okay, great. The next one is, what Agile Project Management tool are you using?

Tyler: As I mentioned, we're using our own in-house-built agile site to handle everything.

Female: Okay: "We're currently using a contextual, home-grown portal system that serves about 250 different groups at our university. Is Drupal well suited to create a similar system?"

Tyler: I didn’t quite understand it. Contextual?

Female: We can move on to the next question if that person will clarify.

Tyler: I can just mention that with the current system we have now, we have 178 websites, and moving forward over the next two to three years you'll see that number grow substantially. We have over 1,000 websites here at the University of Waterloo, and in the next three or four years the number on the CMS will probably jump to 500-plus. Right now we think our current system will be able to handle that, no problem.

Female: I believe the next question wants you to show the Apache config screenshot for your vhost again.

Tyler: Okay, this is the slide. I kind of went over it as an overview of everything; to understand all of it, this slide is really geared to a system administrator, someone who can look at it right away and go, "Oh, I get that."

Female: Okay, our last question is, have you looked into Squid cache system, if so, what is your opinion?

Tyler: No, we haven't looked at it at all. To tell you the truth, I don't know what that is.

Female: Okay, great. Thank you Tyler. Thank you for the great presentation and thanks everyone for your questions and attending today. Again, the class …

Tyler Struyk: I did talk to my boss, and that one question the person had about search, I can answer it now.

Female: Okay, great.

Tyler: For search, currently we send everyone to uwaterloo.ca/search, which is a custom-themed site that searches across our Content Management System, but that's just the appliance itself serving the results. Once we switch over to version 7 of the GSA, we'll be able to pull the content right into Drupal. For now we're just sending people to the actual appliance to display the information.

Female: Okay, great. I think that's it for questions. Again, thank you Tyler. The slides and recording of the webinar will be posted to the acquia.com website in the next 48 hours. Thanks everyone.

Tyler: Thanks everyone.

Bâtir votre Réseau Social d'Entreprise avec Drupal Commons 3 [November 29, 2012]

Calculating the Savings of Moving Your Drupal Site to the Cloud [November 28, 2012]
