
How to optimise JavaScript delivery on WordPress / Apache / AWS?


I'm down the rabbit hole of website speed optimization. I have a site that's getting terrible marks from all of the usual suspects (PageSpeed and GTmetrix, specifically; it looks OK on Pingdom Tools).

My setup is a single t3.medium instance running Apache and WordPress, behind an AWS ELB, with CloudFront in front as the CDN.

My first attempts to improve performance included:

  • Upgrading to a t3.medium (the server runs about a half-dozen websites, but they're tiny; collective traffic across all of them is only a few thousand visits a day. Nevertheless, a "Small" instance was too sluggish on even a single hit because of memory constraints)
  • Installing the WP Super Cache plugin (I was already running behind CloudFront, but installed the plugin to cache the generated pages themselves)
  • Adding Cache-Control headers to the CloudFront behaviour
  • Removing query strings from the cache key (this isn't a traditional e-commerce site, and Twitter Ads was appending a unique string per user, which made essentially every page view a cache miss)
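
For reference, the origin-side Cache-Control piece of the above can be expressed in Apache roughly like this (a sketch using mod_expires/mod_headers; the exact max-ages are illustrative, not necessarily what I'm running):

```apache
# Hypothetical .htaccess sketch: long-lived caching for static assets,
# so CloudFront (and browsers) can cache them aggressively.
<IfModule mod_expires.c>
  ExpiresActive On
  ExpiresByType text/css               "access plus 1 year"
  ExpiresByType application/javascript "access plus 1 year"
  ExpiresByType image/jpeg             "access plus 1 year"
  ExpiresByType image/png              "access plus 1 year"
</IfModule>
<IfModule mod_headers.c>
  # HTML stays revalidatable so page updates propagate quickly
  <FilesMatch "\.html$">
    Header set Cache-Control "max-age=300, must-revalidate"
  </FilesMatch>
</IfModule>
```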

However, even with all of that, the default performance is unacceptable. The culprit seems to be JavaScript delivery (and execution). Here are some results from GTmetrix, depending on how much manual intervention I perform:

(All times in milliseconds; three test runs per condition, separated by slashes.)

| Metric | Default (miss) | Default (hit) | Hammer YouTube (miss) | Hammer YouTube (hit) | Sledgehammer (miss) | Sledgehammer (hit) |
|--------|----------------|---------------|-----------------------|----------------------|---------------------|--------------------|
| TTFB   | 1,100 / 91 / 75 | 66 / 75 / 72 | 74 / 74 / 122 | 74 / 54 / 75 | 76 / 86 / 104 | 86 / 63 / 45 |
| FCP    | 2,200 / 1,100 / 4,200 | 494 / 519 / 383 | 1,100 / 2,700 / 1,400 | 710 / 415 / 622 | 1,200 / 1,200 / 1,200 | 388 / 338 / 256 |
| LCP    | 3,300 / 2,000 / 6,300 | 1,900 / 1,700 / 1,800 | 1,800 / 3,500 / 2,000 | 2,700 / 881 / 1,600 | 1,900 / 2,000 / 2,500 | 684 / 617 / 490 |
| Onload | 4,600 / 3,200 / 7,600 | 2,300 / 3,000 / 2,700 | 2,900 / 4,900 / 2,700 | 3,100 / 1,500 / 2,500 | 2,000 / 2,000 / 2,400 | 766 / 824 / 489 |
| TTI    | 4,700 / 3,300 / 7,600 | 2,500 / 2,900 / 3,200 | 3,000 / 8,000 / 2,800 | 3,400 / 1,700 / 3,000 | 8,000 / 6,800 / 1,200 | 6,700 / 6,700 / 6,400 |

As you can see, if I don't do anything (the two "Default" conditions), the site takes nearly 3 seconds to load even with a cache hit, and often north of 4 on a cache miss.

This is really the root of the question. What should I do from here? I'll describe what I'm now doing, below, but I don't believe what I'm about to describe is standard practice on the Internet, and clearly the Internet isn't, in general, suffering from this kind of performance.

Applying a Sledgehammer

I'm not sure which of those metrics matter most from a user-experience perspective, but I suspect in my case it's LCP and onload (I can imagine for some websites it's TTI, but in my case there's nothing to do above the fold, so the old First CPU Idle metric would have been great, if Lighthouse were still reporting it).

What I'm doing now is what's in the "Sledgehammer" section of the table. I wrote a script that strips out all non-essential JavaScript `<script src>` tags and only puts them back after either 5 seconds or the first user interaction. You can see the results: even on a cache miss, the site loads in around 2 seconds, and on a cache hit it's well under 1 second (we can ignore the TTI number, because that's just my 5-second delay, and it doesn't affect above-the-fold appearance or interactivity).
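
The core of the "sledgehammer" looks roughly like the sketch below; this is an illustrative reconstruction, not my exact code. It assumes non-essential scripts have been rewritten as inert placeholders (`type="text/plain"` with a `data-src`), which are then swapped back to real `<script>` tags after the delay or on first interaction, whichever comes first:

```javascript
// Sketch of the delayed-load approach. Assumes the HTML emits placeholders like:
//   <script type="text/plain" data-src="https://example.com/widget.js"></script>
// All names here (DEFER_MS, activateDeferredScripts, ...) are illustrative.

const DEFER_MS = 5000; // the original 5-second delay

// Swap every inert placeholder for a real <script> tag so it actually loads.
function activateDeferredScripts(doc) {
  for (const placeholder of doc.querySelectorAll('script[type="text/plain"][data-src]')) {
    const real = doc.createElement('script');
    real.src = placeholder.dataset.src;
    placeholder.replaceWith(real);
  }
}

// Trigger activation after DEFER_MS, or on the first user interaction,
// whichever happens first (and only once).
function scheduleActivation(doc, win) {
  let done = false;
  const run = () => {
    if (done) return;
    done = true;
    activateDeferredScripts(doc);
  };
  win.setTimeout(run, DEFER_MS);
  for (const evt of ['scroll', 'click', 'keydown', 'touchstart']) {
    win.addEventListener(evt, run, { once: true, passive: true });
  }
}

// In the browser you'd call: scheduleActivation(document, window);
```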

OK, the site is "working", but not only does it seem like one shouldn't need to do this in the abstract, there's also some JavaScript I'd really love to have running from the beginning.

Digging in, the problems all seem to be third-party JavaScript (i.e., stuff that isn't / can't be on my CDN). Some of it is just egregious, and I can deal with that internally (e.g., I can tell the marketing people, "Only use Hotjar when you really need it, then turn it off; it adds a full second to page-load time"). But some of it is just "Internet standard": Stripe and Google Analytics are each adding ~500ms between load time and run time.
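
For the scripts I do want running from the start, the gentler, standard mitigations are `async`/`defer` attributes plus `preconnect` hints, so the DNS/TLS handshake to the third-party origin happens before the script fetch. A sketch (the gtag ID is a placeholder):

```html
<!-- Open the TLS connections early, so the script fetches don't pay for them -->
<link rel="preconnect" href="https://js.stripe.com">
<link rel="preconnect" href="https://www.googletagmanager.com">

<!-- async/defer keeps these off the critical rendering path -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-XXXXXXX"></script>
<script defer src="https://js.stripe.com/v3/"></script>
```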

I can continue to fine-tune my sledgehammer, as Tim suggested in the comments, but this still doesn't feel right. There has to be something I'm missing in terms of this set-up and (particularly third-party) JavaScript.

Tim
Reading your update, the whole 5-second delay thing is really unusual. You can download JS hosted anywhere and put it on your server / CDN if you want to. I suggest you try my idea, as you haven't given us enough information to really help you out. If you really want help, post the URL.
philolegein
@Tim, I can't find the post where I got the original idea for the delay, but I think the idea was, "Wait a long time to start loading, but do it sooner if the user scrolls/clicks/etc." However, I have taken your suggestion and turned the delay down to 650ms, based on the data I put in the revision. You can see the page in question here: [link](https://www.chrisrichardson.info/lp/prague-b/)
Tim
First load took a few seconds for me in New Zealand; second and later loads were very, very fast, so the CDN is working. WebPageTest says it's fine: https://www.webpagetest.org/result/220531_BiDcPJ_7TB/ . GTmetrix is a bit slow on first load (https://gtmetrix.com/reports/www.chrisrichardson.info/8JdbxgoE/), but once it's cached on the CDN node near the testing site it's super fast (https://gtmetrix.com/reports/www.chrisrichardson.info/4jXER8rm/). So I think first load is your problem. I don't think this is an infrastructure problem; variation is likely due to geography. I'd lose the delay entirely.
Tim
I would remove the five-second JS loading delay and make sure caching headers for the static resources (js / jpg / etc.) are set so they can be cached by CloudFront.

