Visualizing and Solving Drupal Performance Issues [November 8, 2012]
Want to learn more about Acquia’s products, services, and happenings in the Drupal Community? Visit our site: http://bit.ly/yLaHO5.
Page load times and latency are critical to your Drupal site's success online, both for impressing new users and for retaining your existing visitors. Learn how to use full-stack application tracing for your Drupal website, using our specialized tools for Drupal, to visualize which requests and activities on your site are slow and costing you money. Presented by Dan Kuebrich, TraceView Co-Founder & Product Manager, this session will leave you with exact knowledge of how to view and troubleshoot your site's performance.
In this webinar, you'll learn how to:
• Troubleshoot your site’s performance with causal data
• Visualize latency and performance throughout your stack
• Monitor errors in your stack with detailed traces and data
The TraceView product is being offered free for two months in partnership with Acquia, so that you can see what your performance data looks like.
Jess Iandiorio: With that, we will get to today's content. Again, I'm Jess Iandiorio, and I do product marketing for both the Acquia Network and Acquia Cloud. I'd like to introduce Dan Kuebrich, who is my co-presenter. Dan, if you could say hi?
Dan Kuebrich: Hi, everybody. Thanks for joining us.
Jess Iandiorio: Great. I'm going to go through a little bit of information upfront about the Acquia Network, for those of you who aren’t aware, and then I’ll turn it over to Dan, who’s going to do the bulk of the presentation, as well as a great demonstration for you.
For those of you who received the invitation through us, you probably heard the heads-up that TraceView has joined the Acquia Network. We're really excited about it. For those of you who don't know what the Acquia Network is, it's Acquia's subscription service, where you can obtain support and tools to help your Drupal site perform better, as well as access to our extensive knowledge library.
We like to think the library has the answers to all of your burning Drupal questions. There are about 800 articles, a thousand frequently asked questions, and a really extensive library of podcasts, webinars and videos. We also have a couple of partnerships, with drupalize.me through Lullabot, as well as Build a Module, for other training resources that you can get access to.
In terms of our support team, we have a 24/7 safety net, and our support offering follows the sun, so wherever you're located, you'll have a local resource that can respond to your request. We also perform remote administration, which means for customers, we can go in and make Drupal module updates for you, as well as security patches. We have about 60 people across the world on our Drupal support team, so it's the best concentration of really high-quality Drupal talent you can find, if you do happen to have Drupal support needs. We encourage you to learn more about that on our Web site.
The last area that Acquia Network provides is all the tools, and we refer to it as the Acquia Network Marketplace. Some of the tools we built ourselves, like Acquia Insight. If you’re not familiar, it’s about 130 tests we run proactively against your Drupal code and configuration to tell you how your site’s performing across security, overall performance, as well as Drupal best practice implementation. It’s a really great tool that customers use, probably on a daily basis, to get them a to-do list to figure out how they can enhance their site.
SEO Grader is a very similar tool that we built with our partner Volacci, and it has the same UI as Insight. You get a score, you get proactive alerts for tests that have passed and failed, and recommendations for fixing. It's just looking at different criteria than Insight does: the things that help improve your site's search engine optimization.
Acquia Search is our hosted version of Apache Solr. That's the webinar we have next week; if you want to learn more about that, please feel free to sign up. On the right-hand side, we get to third-party tools that our great partners provide to the Acquia Network customer base. I mentioned drupalize.me and Build a Module already, and those are tools that help you learn more about Drupal.
When it comes to optimizing your site, we have a variety of other partnerships: with Blitz and BlazeMeter for load testing, Yottaa for site speed optimization, and TraceView and New Relic for looking at your application, actually taking you through the stack and figuring out other areas for performance enhancement. That's what you're going to hear about today from TraceView.
Lastly, we have partnerships that help you extend the value of your site. Oftentimes, these are most valuable to a marketing audience, but it could be to someone technical as well. Mollom, for instance, is spam blocking. The technical person would implement it, but at the end of the day, the marketing person typically cares about spam and how it could harm your site and your brand.
Visual Website Optimizer is A/B testing, for when you want to figure out whether one promotion or call to action on your Web site performs better than another. Chartbeat is real-time analytics, figuring out where your site visitors are coming from and what they're engaging with on your site. It's a really great, easy-to-use tool, similar to Google Analytics, with a little more focus on social activity: where people come from and what their social behavior is.
Lingotek is translation/localization services, so you can work with Lingotek to bring your site into a new geography, localize the content and they have a variety of different ways that you can work with them. You can have a machine translate, you can tap into an extensive community of people who can help with translation or you can actually have a moderator that you can hire through Lingotek, to watch all of the translation process and ensure its success.
That’s a really quick overview of the Acquia Network. I’ll be on the webinar the whole time, monitoring Q&A and able to take questions at the end, but at this point I would love to turn it over to Dan for the remainder of the presentation. Dan …
Dan Kuebrich: Thanks for that introduction, Jess. Again, I'm Dan Kuebrich, one of the co-founders of TraceView, and what we provide is a SaaS-based performance management solution for Web applications. As part of being included in the Acquia Network, we're actually providing a free 60-day trial of our best plan. You can sign up with no credit card required and try it out, but I'm going to talk a little bit more about what we actually do and show you a quick demo, and then you can see if you want to sign up after that. Without further ado … can you see my screen all right, Jess? Great.
Thanks again for tuning in. What is TraceView? As I just mentioned, we provide a SaaS-delivered performance insight service for PHP-based Web applications. We also support Ruby, Python and Java, but we're really excited to work with the Acquia Network, and one of the reasons they selected us to come onboard is our particularly strong Drupal insights.
That comes from a combination of our work, as well as community support in the form of a Drupal module that provides very deep insights. I'll get into that in a minute. The types of users that really enjoy and take advantage of TraceView are developers, people in ops for Web applications, support, and also engineering management.
The goal of TraceView is to provide low-overhead performance management and insights for production environments. What this means is: you have a Web application, and I'm sure you've run into problems, as I have in the past, where due to production load, production hardware or production datasets, it behaves very differently from development, whether performance-wise or in terms of throwing errors. Because users really perceive the performance of your production Web application, you need to be monitoring it all the time.
TraceView provides a very low-overhead solution for extremely deep insights, continuously and in real time, into your application. Our secret sauce, which differentiates us a little bit from other APM solutions, is what we call full-stack application tracing. Basically, what this means, and I'll dig into it in a second, is that we're watching the request from the moment it leaves your user's browser, as it goes through the Web server, the application layer, out to the database and caches, and ultimately returns HTML that then gets parsed and rendered in the user's browser. This provides the true end-user experience, as well as great diagnostic detail for getting into what's actually going on in your application.
Finally, we take this data and put it into a slice-and-dice interface that's really designed to provide the most actionable and clear insights from your performance data, and that means going beyond averages into very precise distributions, helping find outliers and slow queries, and ultimately getting down to the line of code within a request.
How does this all work? Let's take a look at full-stack application tracing for a minute. What we're going to be getting in the browser is the network latency for communication between the browser and your Web servers, the time it takes to process the DOM once the HTML is returned, and finally the time to fully render the page and everything that goes on with it, up until it's document-ready.
On the server side of this, be that virtual or physical hardware, we can start looking at the request, starting at the load balancer or the Web server to see, are the requests getting queued up before they hit the application backend?
What's our app-layer performance like? What endpoints in the code are particular hotspots? Are we throwing exceptions, and if so, from where? How are we using the database? What queries are being run? Are they taking a long time? Cache performance: how many cache requests are we making per page load? What's our hit rate? Finally, down to the actual hardware underlying all of it: what's the I/O latency like? Do we have excessive CPU utilization, or are we barely touching the cores at all? We take all this data and provide not only visualizations and insights, but also proactive alerts based on it.
To make this a little bit more concrete, let’s look at an example of a Web application you might be familiar with. This is a simple LAMP stack, Apache, PHP and MySQL and I’ve also added in memcached and an external API, maybe you’re using Mollom, maybe it’s your payment gateway, whatever else. As a request comes into this system, it makes requests down to the different components of your application, calling out to memcache. Perhaps it’s a cache miss, so you go to the database and pull back some results out to the API and ultimately, you return HTML to a user.
After installing just a couple of debs or RPMs, which is the form of installation for TraceView (or actually a single click if you're hosted by Acquia), we can put instrumentation at various points throughout your application, requiring no code modification, that reports data back to us in the cloud in real time. The cool thing about our instrumentation is how lightweight it is. A tunable fraction of requests coming into the top level of your stack is selected for tracing.
At each interesting point, we fire off a UDP packet, non-blocking, to a daemon running on localhost. This daemon does the work of forwarding the data over a secure channel to us, and what this means is that throughout the request path your application is serving, there are actually no blocking calls, so there's no chance for a request to get held up. Additionally, the overhead is completely configurable. In production environments for our customers, we see one percent or less overhead from tracing that is, at the same time, providing very deep application insights.
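The reporting path described above can be sketched roughly as follows. This is a hypothetical illustration in Python, not TraceView's actual agent code or wire protocol; the port number and event format are made up. A tunable sample rate decides at the top of the stack whether a request is traced, and each event is fired as a non-blocking UDP datagram to a collector daemon on localhost:

```python
import json
import random
import socket
import time

COLLECTOR_ADDR = ("127.0.0.1", 7831)  # hypothetical local daemon port
SAMPLE_RATE = 0.1  # trace 10% of incoming requests (tunable)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setblocking(False)  # never let reporting stall the request path

def should_trace():
    """Decide at the top of the stack whether this request is traced."""
    return random.random() < SAMPLE_RATE

def report_event(layer, label, **data):
    """Fire-and-forget one trace event; errors are swallowed so the
    application request can never be held up by instrumentation."""
    event = {"ts": time.time(), "layer": layer, "label": label, **data}
    try:
        sock.sendto(json.dumps(event).encode(), COLLECTOR_ADDR)
    except OSError:
        pass  # daemon down or buffer full: drop the event, not the request

# Usage: inside a request handler that was selected for tracing
if should_trace():
    report_event("php", "entry", url="/node/42")
    report_event("mysql", "query", sql="SELECT 1", ms=1.5)
```

Because the send is a connectionless, non-blocking datagram, the request path pays only the cost of serializing one small event, which is where the "one percent or less" overhead figure comes from.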
The final thing I should mention here is that our insight isn't really limited to the interactions between the components in your stack. Though we can start observing at the browser and proceed through the load balancer and so on, there's actually a great deal of Drupal-internal insight that we're able to provide, and this is largely thanks to a custom Drupal module that's available on drupal.org. What you're going to get from that is the ability to break down your performance by menu item.
For instance, if you have many URLs that really map to the same underlying code path, you might want to look at particular URLs or you might want to look at that code path within your application. Being able to filter on menu item is pretty interesting. I’ll show all these in a minute.
The second interesting piece of functionality is the ability to partition your performance data by the type of user. Oftentimes, the same code path will exhibit different characteristics for authenticated versus anonymous users, depending on how many customizations there are on the page. There may be administrative pages that are slower, or whose performance you don't care about, and the module also picks up work done by Drush, so it's nice to be able to filter out all of those populations separately in the performance data, so you can optimize what you really care about.
In terms of the Drupal internals, there are two interesting things. The first one is time spent in Drupal hooks. You can see, in hook_init and watchdog and so on, how your time is really being spent throughout the stack, as well as viewing individual node loads during the processing of requests. This module is very cool, and probably the best way to explain what's really going on here is to dive into a quick demo.
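To illustrate the kind of per-hook timing breakdown the module surfaces, here is a hedged sketch in Python (the real module instruments Drupal's PHP internals; the hook names and sleep durations below are stand-ins):

```python
import time
from collections import defaultdict

hook_timings = defaultdict(float)  # total seconds spent per hook name

def timed_hook(name):
    """Decorator that accumulates wall-clock time spent in a named hook,
    mimicking the per-hook breakdown the Drupal module reports."""
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                hook_timings[name] += time.perf_counter() - start
        return inner
    return wrap

@timed_hook("hook_init")
def hook_init():
    time.sleep(0.01)  # stand-in for real bootstrap work

@timed_hook("hook_watchdog")
def hook_watchdog(message):
    time.sleep(0.002)  # stand-in for logging work

# Simulate one request passing through both hooks
hook_init()
hook_watchdog("page served")
```

After a run, `hook_timings` holds exactly the kind of "time spent in Drupal hooks" table the demo shows, aggregated per hook name.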
What we're looking at right now is the performance overview for several different environments of one of our customers, who's been generous enough to share their data with us today. The company is AllPlayers, and they make a product for groups to track and engage in common activities. We're going to dive into the staging environment here and look at the data for the past 24 hours.
What we're looking at is the performance of the average request, broken down by time spent in each layer of the stack over the past 24 hours. We can see that, on average, we're spending a fair amount of time processing requests in PHP, as well as in the database through these two separate interfaces. Additionally, Apache, our Web server, and Nginx, our load balancer, are on top. Courtesy of the Drupal module, we're also getting insight into the time spent in various different Drupal internals, and we'll dive into this a little bit more in a minute. We can see that, on average, it looks like PHP's MySQL calls are contributing a lot to the latency of our application here.
In addition to just figuring out the time that's spent in each layer of the stack, TraceView is also pulling out interesting features of the requests themselves: for instance, the domains and URLs being requested, and the menu items and menu item parameters being requested as they pass through the application. Cache performance: in this case, because it's a staging environment, we can see that our hit ratio is not very good. The traffic partitions that I was just mentioning, as well as the queries and RPC calls that may be expensive to make in your app.
Now, all of these tables are filters for our data, so if you wanted to see the performance of a particular endpoint here, in this case, it looks like our REST API, we can select that, and we've now filtered the data so we're only looking at the performance there. We can see that for this particular code path, it looks like there's a lot of query latency on average, and in fact, here are the top two queries that are usually coming out of it. It's almost exclusively accessed by authenticated users as well.
Now, where's all this data coming from? We've been looking at aggregate data, but I mentioned our unique data source, the trace, so I'm going to switch over to the second tab here, which is like a view-source, if you will, for the data we were just looking at, and now we can see a list of traces. Traces are snapshots of individual requests, in full detail, as they go through your application.
Let's take a look at a trace. For this particular trace, we're looking at a request to this URL. We can see the time spent by this request in each layer of our stack, and we can also see this visualization up here, which is the flow of control of the request through the application. I'm just going to zoom in a little bit here, because there's a lot going on in this particular request.
The first thing that happens is the request enters Nginx, which is our load balancer here, and we can see that we've gathered some information about how the HTTP request came in and how it was proxied through to Apache, which is the second tier of the stack here, and finally into PHP underneath it. PHP starts to do the Drupal bootstrap, so the first thing that happens here is we're looking something up in memcache. We can see where it's coming from in the application code, and a number of small queries start to fire.
As you proceed through this request, we can see, for instance, details about the different queries being executed, where they're coming from within the code, how long each one took (this one only took one and a half milliseconds), and what exactly the code was trying to do.
Here's the [boot hook 00:18:05], and what we're seeing is that overall, this is taking about 85 milliseconds, and as part of it, doing a number of sub-actions, including hook_init here, which then triggers this query, and so on. With the individual trace details here, you can drill down on what's going on: in what sequence did the events happen for a particular request, and what was the slow thing that really bogged it down. There are some really interesting details down in here.
One of the cool things in PHP, that you can't get in some of the other languages we instrument, is the sandboxed notion of memory usage. We can actually see, throughout a request, the memory use at the beginning and at the end of this particular query, the peak memory at any point in the request, and so on, and this can be really useful for debugging problems where you're hitting a memory limit for an individual request. There's a lot of great detail down here in individual traces, but let's actually go back up a level and come back and look at our aggregates.
In addition to being able to drill down on the endpoints we're interested in optimizing, we might also want to view the data in a more precise manner. We're looking at our averages here; I'm going to switch over to a different view of this exact same data, which we call a heat map, and I'm going to overlay the average on it again here.
What we're looking at is like a scatter plot: on the X axis, we still have time; on the Y axis, latency. The density of color in each grid square indicates how many requests had a particular latency at a certain time over the past 24 hours. We can see that while this red line indicating our average tracked this middle path, there are actually two distinct bands in the data here: some faster requests, some slower ones. The heat map is interactive, so I'm going to grab a little patch of these outliers from the second band and see what's going on exactly.
These are actually all requests for the same endpoint, some resource here viewed by anonymous users, so it's not surprising that they're clustered like this. This is pretty interesting, because a lot of times, when you have numerous endpoints with distinct performance characteristics, an average really just blends them together. Similarly, we've got the request volume on this bar underneath, and you can see that there's a large request volume around this time of day. Those are actually requests for relatively fast-loading pages, which brought our average down. You can see that it wasn't that our application got faster overall; it was just that the distribution of requests being made changed.
We can think about optimizing in a different way when we see that there's this constant population of relatively slow requests here, spending from 6 to 10 seconds on the server side. The heat map is a powerful tool for drilling down on these types of performance problems.
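The point that an average can blend two distinct bands together is easy to demonstrate. A minimal sketch of the heat map idea, with made-up numbers: bin (time, latency) points into a grid and count requests per cell, then note that the overall average falls between the bands, where almost no real request lives:

```python
from collections import Counter

# Synthetic requests: (minute, latency_seconds) with two distinct bands,
# a fast one around 0.3s and a slow one around 8s, as in the demo.
requests = [(m, 0.3) for m in range(60)] * 3 + [(m, 8.0) for m in range(60)]

def heatmap(points, t_bin=10, lat_bin=1.0):
    """Count requests per (time-bucket, latency-bucket) grid cell;
    the cell count is what the UI renders as color density."""
    cells = Counter()
    for t, lat in points:
        cells[(t // t_bin, int(lat // lat_bin))] += 1
    return cells

cells = heatmap(requests)
avg = sum(lat for _, lat in requests) / len(requests)
# avg is about 2.2s, yet no request actually takes ~2.2s: the
# average sits between the two bands, which the heat map reveals.
```

This is exactly why the red average line can "track a middle path" that no real request follows.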
In addition to providing the interface for slicing and dicing this data and filtering down to what you're really interested in optimizing, we also provide alerting based on it, so you don't have to be watching your application 24 hours a day. It's pretty easy to set up alerts. They're based on latency, on the performance of different hosts in your application, or on the error rate, and you can actually filter each of these down to particular layers of the stack you're interested in, or even URLs or menu items.
For instance, it turns out that latency is actually a pretty good predictive alert, but maybe latency for the application overall is kind of noisy, so instead you decide to restrict it to a particular URL, like your checkout page, and then you can get alerted if pages that are important to you start to perform outside of your standards.
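A latency alert restricted to one URL, as suggested above, amounts to filtering measurements before applying a threshold. A minimal sketch with hypothetical URLs and numbers:

```python
def check_alert(measurements, url, threshold_s, window=5):
    """Fire an alert if the mean latency of the last `window`
    measurements for one URL exceeds the threshold. Keeping the
    filter narrow (e.g. /checkout) cuts noise from the site overall."""
    relevant = [lat for u, lat in measurements if u == url][-window:]
    if len(relevant) < window:
        return False  # not enough data for this URL yet
    return sum(relevant) / window > threshold_s

# Hypothetical samples: (url, latency_seconds)
samples = [("/", 0.2), ("/checkout", 0.9), ("/", 3.0),
           ("/checkout", 1.1), ("/checkout", 1.0),
           ("/checkout", 1.2), ("/checkout", 1.3)]
fired = check_alert(samples, "/checkout", threshold_s=1.0)
```

Note that the slow 3.0s request on "/" never influences the checkout alert, which is the whole point of scoping the alert to a URL.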
The last thing I'll mention on the server side is our host monitoring capabilities. Latency and the behavior of applications are obviously very important, but sometimes what's really underlying it, i.e. the hardware, is something that you need to keep an eye on. We're also gathering machine data in real time about the performance of the different hosts in your stack.
You can see there's a view here where we can look at all the different machines we're monitoring, but actually, sometimes it's useful to be able to correlate that host data with the application's performance itself. We can overlay the host metrics on our performance data here, so what I'm doing is, we're looking at the past day again, and I'm pulling up the CPU utilization on our frontend node. We can see that as our request volume spiked yesterday afternoon, so did our CPU usage.
The other thing you can get out of TraceView is end-user monitoring. You may already be doing this with something like WebPagetest or even with the Chrome inspector, but it's useful to get not only the point of view of your test sessions, but of real users around the internet.
I'm switching over to a different customer of ours here that runs some high-traffic blogs. We can see that they've actually done a pretty good job of optimizing their server-side performance, with the average request taking about a quarter-second on the server side, yet the full page load is actually close to 11 seconds on average.
Let's drill down on the end-user performance data. We can see that, on average, we're spending a lot of time in DOM processing, so getting together all the elements of the page, and also in doing the page render, so getting to document-ready. There's a little blip of network latency here, but other than that, it's behaving pretty well.
In addition to getting latency here again, we're also associating it with features of the request. That includes geographically where the requests are being made from, the browsers being used, the URLs requested, and the code paths within the application. If we wanted to figure out what our performance is like in the United States, or maybe in British Columbia, we can filter down to data from that region.
We can see the URLs being requested, and which ones are performing well or poorly, as well as the browsers being used. We can get comparative browser usage, and finally associate all of this down again to individual requests and individual browser sessions, so that we can get into the performance data in a highly granular way.
That's TraceView in a nutshell. I'd like to hand it back over to Jess and open it up for questions.
Jess Iandiorio: Thanks, Dan. Sorry, we’re on mute here. That was a great demo. We really appreciate it. The first question we have is, are you related to Stanley Kubrick?
Dan Kuebrich: No, but thank you for asking.
Jess Iandiorio: Sure. We have one question. Would you mind reading it? No, I can't see them. Do you support Drupal 6 and 7?
Dan Kuebrich: Yes. We support Drupal 6 and 7, and the community module does as well.
Jess Iandiorio: Okay. That person also asked about 8, but that's not available yet. I assume once it's available next year, you guys will be supporting it as well.
Dan Kuebrich: Definitely.
Jess Iandiorio: Which Linux distributions do you support?
Dan Kuebrich: Currently, we provide debs and RPMs for Red Hat, CentOS, Debian and Amazon Linux-based environments. I should also mention, if I didn't earlier, that it's a one-click install for Acquia's hosted members of the network.
I see there's another question about the setup in general. After you register for a free trial, you actually get walked through the install process within the application. For most users, it's basically just installing three components: a base package, a package that installs Web server instrumentation (say, an Apache module), and a package that installs a PHP extension.
After that, as you install each component, you'll get immediate visual feedback within the application, which will prompt you to continue. In the future, because we're providing packages, it's actually very easy to use either Puppet or Chef to work this into your automated deploy.
Jess Iandiorio: All right. We've got about 10 questions in the queue here, so hopefully we can get through all of these. The next is: do you support cached versus non-cached, CDN versus non-CDN analytics? Can you break it down at that granularity?
Dan Kuebrich: We currently don't have visibility into requests that go to the CDN, except to the extent that they speed up your end-user performance. Getting more statistics on full-page caching behavior is something we're interested in for the future.
Jess Iandiorio: We have two questions on the difference between TraceView and New Relic. Could you speak to that at a high level?
Dan Kuebrich: Sure. We get asked about this pretty frequently, and there are basically three main differences. The first one is our full-stack application tracing. The same technology that allows us to follow requests starting in Nginx or Apache or Lighttpd also allows us to cross the wire for subsequent RPC calls if you're using backend services, maybe with RESTful APIs. We can actually piggyback a unique identifier across those, so you can associate the work being done in your frontend and backend services as well, which is pretty cool.
The second thing is our data analysis and visualization: the granularity of the individual request view, particularly the Drupal internals, as well as the analysis you can do with the heat map, is pretty unique compared to New Relic.
The last thing is actually our pricing model. Instead of pricing per host, TraceView is priced per trace, and the number of traces you send us is configurable via your sample rate. What this means is that you don't have to worry about buying a number of licenses to cover your entire deployment, or having auto-scaled nodes not covered by your application performance instrumentation. You can actually set it up and use consumption-based pricing.
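The consumption-based model works out to simple arithmetic. A back-of-the-envelope sketch with made-up traffic numbers, not TraceView's actual rate card:

```python
def traces_per_month(requests_per_day, sample_rate, days=30):
    """Traces sent = incoming requests x tunable sample fraction.
    Halving the sample rate halves consumption, regardless of how
    many (or how few) auto-scaled nodes served the traffic."""
    return int(requests_per_day * sample_rate * days)

# e.g. 1M requests/day traced at a 1% sample rate
monthly = traces_per_month(1_000_000, 0.01)
```

The key property is that the host count never appears in the formula, which is why auto-scaling in and out does not change what you pay.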
We offer plans that start at just $95 a month, but there's a two-month free trial, so you can definitely get your feet wet without having to worry about it. A lot of people find that our pricing model is interesting for their environment because of the lack of per-host pricing.
Jess Iandiorio: Great. Just for the folks on the phone who might not be Acquia Network customers, can they do a trial directly through you guys if they’re not an Acquia Network customer?
Dan Kuebrich: Yes. Unfortunately, it's not nearly as lengthy, but you can head to appneta.com/starttracing, or just go to appneta.com and follow the links to sign up for a free trial.
Jess Iandiorio: Okay. Does TraceView work in an environment where Varnish is used?
Dan Kuebrich: TraceView does work with Varnish, but we don't provide any Varnish-specific insights.
Jess Iandiorio: Okay. We got a question on mobile. How can this be used to monitor performance on tablets and other mobile devices?
Dan Kuebrich: As far as applications on mobile devices go, those would be monitored from the perspective of the API calls they're making, to say a RESTful backend service; actual browser page views on mobile devices are completely covered by our real-user monitoring instrumentation. You'll find, just looking at the real-user monitoring data we gather, that there are some very long page loads from mobile devices, which is pretty cool to be able to separate out there. Our instrumentation works on all mobile devices, but mobile applications are viewed from a server-side perspective.
Jess Iandiorio: Okay. Where is the performance data stored, and how much storage do you need to store it? Any metrics you can provide, or …
Dan Kuebrich: We actually take care of all the storage, as a SaaS-based service, so you don't have to worry about any storage, scaling the storage on your side, or maintaining upgrades. What you do as a TraceView user is install the instrumentation that gathers the data, and we take care of the rest.
Jess Iandiorio: Great. This question is lengthy, asking for more information: do you collect PoPs or source-point breakdowns? Which geo has the slowest response time? I know you showed some geo stats earlier.
Dan Kuebrich: In terms of geography, what we're basically doing is using the IP to look up the origin of the request. In terms of actual network conditions on the path between your servers and the end user, TraceView doesn't provide insights into network connectivity, but AppNeta has other SaaS-delivered solutions that provide a great deal of insight into network performance, even from just a single side of the connection.
If you're interested in that, feel free to shoot me an email afterwards to inquire, or head to appneta.com. TraceView will tell you the latency and the fraction of it spent in the network, but not great detail about hop-to-hop performance.
Jess Iandiorio: Okay. Are there any HTTPS implications, or loss of fidelity of metrics or details?
Dan Kuebrich: No. We're not putting any proxies in between; HTTPS works fine with TraceView.
Jess Iandiorio: Okay. You may have already answered this one. Is there a specific node under the load balancer's identification or instrumentation, HTTP or MySQL daemons?
Dan Kuebrich: Sorry. Can you repeat that question?
Jess Iandiorio: Is there a specific node under the load balancer's identification or instrumentation, HTTP or MySQL daemons?
Dan Kuebrich: I'm not sure I completely understand the question, but in order to get installed, we actually observe many of the components from the application layer itself, and we live inside the Web server and the application layer as far as instrumentation goes, so you don't have to worry about modifying your database servers or anything else, if that's what the question was.
Jess Iandiorio: Okay. If that person is not …
Dan Kuebrich: Could you ask that one again?
Jess Iandiorio: Okay, here's some clarity. Let's say there are five nodes under a load balancer, wherein one node performs differently than the others. Can TraceView help identify the outlying node?
Dan Kuebrich: Yes. Because we're gathering metrics on a per-host basis, especially if that shows up as one node thrashing or using more CPU, that's something you can identify using TraceView.
Jess Iandiorio: Okay. You're doing a great job. We only have two questions left in this first batch, so if anybody else has questions, please feel free to submit them now. The last two: first, does this also monitor Solr search server performance?
Dan Kuebrich: We watch connections made to Solr and view the performance there, and we also have Java instrumentation that can look into Solr’s internals to some degree, mostly CPU and load-wise, but a little bit inside Solr as well.
Jess Iandiorio: Okay. Are there any issues installing this on Amazon, on elastic load-balanced servers with EC2 instances and an RDS database?
Dan Kuebrich: No. We have tons of customers running in EC2. The only caveat is, if you’re using RDS, you can't actually install our system agent on that RDS machine, so we’ll just be observing the queries and query latency for RDS.
Jess Iandiorio: Okay. What about Windows and IIS support?
Dan Kuebrich: Windows is on the roadmap for 2013, but today we only support Linux-based environments.
Jess Iandiorio: Okay. Does TraceView also track affiliate program performance codes on third-party sites?
Dan Kuebrich: Not out of the box. You can add custom instrumentation that will allow you to track the metrics that you’re interested in for your application, but that’s not one of the things that we have automatic instrumentation for.
Jess Iandiorio: Okay. Someone heard Apache being mentioned a couple of times. Is Nginx supported as well?
Dan Kuebrich: Yes. We provide a module for Nginx, as well as a number of packaged versions of Nginx that contain the module, so yes.
Jess Iandiorio: Okay. Great. We have a question about whether you can use New Relic and TraceView at the same time. I’ll answer from the Acquia Network customer perspective, and then Dan may have something else to add. If you are an Acquia Network customer and you’re currently using New Relic, you cannot run New Relic and TraceView at the same time.
You would need us to turn off your New Relic agents in order to enable the TraceView ones, and then we would need to turn the New Relic ones back on for you after the TraceView trial, if that was your preference, or you could move forward just with TraceView. That’s for Acquia Network customers right now, and I don’t know if that’s different for you, Dan, for other people who might want to work directly with you guys who aren’t Acquia Network customers.
Dan Kuebrich: We can’t control what you do, but we don’t recommend it. Both the New Relic extension and TraceView hook into PHP’s internals, and we can't always be on top of the releases that New Relic is putting out, and they’re not always keeping in step with us, so we don’t advise customers to go down both roads at the same time. What we do have, especially during evaluations, is often a customer will try New Relic on one or two machines and TraceView on one or two machines as well. That’s the route I’d go.
Jess Iandiorio: Okay. Great. Well, that’s actually all of the questions we have. Nice work. That was a lightning round of questions. It’s really nice to see people engaged and asking lots of questions as someone who does two or three of these per week sometimes. We really appreciate all of the interest and attention and questions.
If anybody has any last questions, I'm just going to flip to the last slide here. It’s just contact information, if you’d like to get in touch with either Acquia or New Relic or TraceView. Any other questions? We’ll just hang out a couple of minutes here. Let’s see here. Is there a raw data extraction capability?
Dan Kuebrich: Currently, we provide an API for exporting some of the data in the interface, namely the slowest queries, the slowest URLs, and the slowest code paths in the application. We don’t have a full read API, but some of it is extractable.
Jess Iandiorio: Great. All right. Well, that was very productive. Everybody has 15 minutes back in their day. Thank you so much, Dan. Really appreciate your presentation, a great overview and lots of good answers to the questions. You can get in touch with AppNeta, the company that owns TraceView, if that wasn’t clear. It used to be Tracelytics; the product is now called TraceView and it’s owned by AppNeta, just to clarify that. You can get in touch there, or you can get in touch with Acquia. Please feel free to send us any more questions you might have on TraceView and/or the Acquia Network.
Dan Kuebrich: Great. Thanks so much, Jess, and thanks, everybody.