The PageSpeed Insights API is a very powerful tool: it can give us speed data for many pages in bulk, and we can even store this data in a database to analyze how speed evolves over time as we make improvements. The only things we need to get the most out of PageSpeed Insights are being aware of all the data we can extract and being able to manipulate JSON files.

In today’s post we are going to cover extensively all the metrics that the PageSpeed Insights API can provide to analyze our speed performance and detect areas with room for improvement. For this purpose, I am going to show you how to make a request and the different JSON keys that you need to find the data of interest. Concretely, I am going to walk you through:

  1. Core Web Vitals.
  2. Overall performance score.
  3. Number of long tasks.
  4. Audits for each of the 43 metrics: network requests, dom size, critical request chains, redirects, render blocking resources…

Let’s get this started!

1.- How to make a request with PageSpeed Insights API?

Making a request to get data from the PageSpeed Insights API with Python is pretty straightforward: we only need the urllib.request module and a request to the endpoint “https://www.googleapis.com/pagespeedonline/v5/runPagespeed”, specifying the page that we want to analyze and the device with the parameters “url” and “strategy”. So, if for instance we would like to get data for the URL https://www.danielherediamejias.com/, we would only need to make the following requests:

For Mobile device: “https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=https://www.danielherediamejias.com/&strategy=mobile&locale=en”

For Desktop device: “https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=https://www.danielherediamejias.com/&strategy=desktop&locale=en”

If we need to make many requests, it is advisable to get an API key to avoid 403 response errors. This key can be obtained on this page, and once you have your key, you only need to append it to the request URL with the following parameter: “&key=yourAPIKey“.

Once the request is made, we can parse the PageSpeed Insights audit and read it as JSON. For this, we only need the json module.

If we put everything together, the Python code would look like something like this:

import urllib.request, json
url = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=https://www.danielherediamejias.com/&strategy=mobile&locale=en&key=yourAPIKey"
# Replace the "url" parameter with your own page and set "strategy" to desktop if you would like desktop data.
response = urllib.request.urlopen(url)
data = json.loads(response.read())  

Note that we save the parsed JSON under the variable “data”, which we are going to use throughout the rest of the post to manipulate the response. If you would like to get the insights for both mobile and desktop devices, a helpful technique is creating a list with the values “mobile” and “desktop” and iterating over it, inserting each value into the request as the strategy parameter.
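The iteration described above could be sketched as follows (`build_psi_url` is a hypothetical helper name, not part of the API; the endpoint and parameters are the ones used earlier in the post):

```python
def build_psi_url(page, strategy, key=None):
    # Build the runPagespeed request URL for a given page and device strategy.
    base = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
    url = base + "?url=" + page + "&strategy=" + strategy + "&locale=en"
    if key:
        url += "&key=" + key
    return url

# One request URL per device type.
request_urls = {}
for strategy in ["mobile", "desktop"]:
    request_urls[strategy] = build_psi_url("https://www.danielherediamejias.com/", strategy)
# Each URL can then be fetched with urllib.request.urlopen and parsed
# with json.loads, exactly as in the snippet above.
```

This keeps the fetching code identical for both devices; only the strategy value changes per iteration.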

2.- Finding the data we are searching for

2.1.- Core Web Vitals

To get the Core Web Vitals we will need to go into the “loadingExperience” key and select the percentile for each metric. We need to be careful, as all the metrics are returned in milliseconds, so the values can look different at first sight from the results shown on the normal interface, where First Contentful Paint and First Input Delay are displayed in seconds (if needed, you can divide by 1,000 to convert milliseconds into seconds).

In the code below we get the percentile value for each Core Web Vitals metric: First Contentful Paint (FCP), First Input Delay (FID), Largest Contentful Paint (LCP) and Cumulative Layout Shift (CLS):

fcp = data["loadingExperience"]["metrics"]["FIRST_CONTENTFUL_PAINT_MS"]["percentile"] # in milliseconds; divide by 1000 for seconds
fid = data["loadingExperience"]["metrics"]["FIRST_INPUT_DELAY_MS"]["percentile"] # in milliseconds; divide by 1000 for seconds
lcp = data["loadingExperience"]["metrics"]["LARGEST_CONTENTFUL_PAINT_MS"]["percentile"]
cls = data["loadingExperience"]["metrics"]["CUMULATIVE_LAYOUT_SHIFT_SCORE"]["percentile"]/100 # percentile is the CLS value multiplied by 100

If we just wanted to get the category for each of the Core Web Vitals, we can also get this information by replacing the “percentile” key with the “category” key. It will return one of three performance categories: “Slow“, “Moderate“ and “Fast“. Below you can find an example of how to get this info easily.

fcp_score = data["loadingExperience"]["metrics"]["FIRST_CONTENTFUL_PAINT_MS"]["category"]
fid_score = data["loadingExperience"]["metrics"]["FIRST_INPUT_DELAY_MS"]["category"]
lcp_score = data["loadingExperience"]["metrics"]["LARGEST_CONTENTFUL_PAINT_MS"]["category"]
cls_score = data["loadingExperience"]["metrics"]["CUMULATIVE_LAYOUT_SHIFT_SCORE"]["category"]

2.2.- Overall Performance Score

With PageSpeed Insights API we can also get the overall performance score that the normal interface returns. The only difference is that in this case the API will return a value which ranges from 0 to 1, so we will need to multiply it by 100 to have the same format as the one we get from the normal interface.

overall_score = data["lighthouseResult"]["categories"]["performance"]["score"] * 100

2.3.- Long Tasks Report

Long tasks are basically JavaScript code that monopolizes the main thread for a long period of time and makes the UI “freeze”. They mainly affect the “Time to Interactive” metric, which we can get from PageSpeed Insights as well.

If you are curious about Long tasks, you can read much more about it on this post.

A long task is any task that takes over 50 ms to run. With PageSpeed Insights we can get the total number of tasks, the total task time and the number of tasks which take over 50 ms:

total_tasks = data["lighthouseResult"]["audits"]["diagnostics"]["details"]["items"][0]["numTasks"]
total_tasks_time = data["lighthouseResult"]["audits"]["diagnostics"]["details"]["items"][0]["totalTaskTime"]
long_tasks = data["lighthouseResult"]["audits"]["diagnostics"]["details"]["items"][0]["numTasksOver50ms"]

Additionally, you can also get the number of long tasks with the code shown below, although sometimes the number of tasks which take over 50 ms does not match the display value returned by the specific long-tasks audit:

long_tasks = data["lighthouseResult"]["audits"]["long-tasks"]["displayValue"]
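As a cross-check, you can also count the entries listed by the long-tasks audit directly instead of parsing the display value (a minimal sketch, assuming `data` is the parsed JSON from earlier; `count_long_tasks` is a hypothetical helper name):

```python
def count_long_tasks(data):
    # Count the tasks listed by the "long-tasks" audit directly,
    # rather than relying on the displayValue string.
    items = data["lighthouseResult"]["audits"]["long-tasks"]["details"]["items"]
    return len(items)
```

Comparing this count against "numTasksOver50ms" from the diagnostics audit makes it easy to spot the mismatch mentioned above.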

2.4.- Audits for each metric

Finally, we can get from PageSpeed Insights an audit for each metric. Audits usually deliver a normalized score and a display value whose meaning depends on the specific metric: time, potential savings, duration, etcetera. In addition, in cases where there is room for improvement on specific resources, we can also get those URLs along with some of their particulars, such as total bytes, wasted bytes, total milliseconds or wasted milliseconds.

In the next section, we are going to go over each of these 43 metrics to show how to work with them and what you can get from them. These 43 metrics are: Network requests, Mainthread Work Breakdown, Use of Passive Event Listeners, Dom Size, Offscreen Images, Critical Request Chains, Total Byte Weight, Use of Responsive Images, Render Blocking Resources, Uses of Rel Preload, Estimated Input Latency, Redirects, Unused Javascript, Total Blocking Time, First Meaningful Paint, Cumulative Layout Shift, Network Rtt, Speed Index, Use of Rel Preconnect, Use of Optimized Images, Unminified Javascript, Font Display, First Cpu Idle, Long Tasks, No Document Write, Use of text Compression, Third Party Summary, Largest Contentful Paint Element, Efficient Animated Content, Unused Css Rules, Screenshot Thumbnails, Network Server Latency, Layout Shift Elements, Use of long Cache Ttl, First Contentful Paint, Main Thread Tasks, Max Potential Fid, Server Response Time, Largest Contentful Paint, Interactive, Unminified Css, Final Screenshot, Bootup Time and Use of Webp Images.
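Since every one of these audits lives under the same “audits” key, you can also sweep all of them generically before diving into individual ones (a sketch; `summarize_audits` is a hypothetical helper name, and `data` is the parsed JSON from the request above):

```python
def summarize_audits(data):
    # Collect (audit id, score, display value) for every audit in the report.
    # Not every audit has a displayValue, so .get() returns None where missing.
    summary = []
    for audit_id, audit in data["lighthouseResult"]["audits"].items():
        summary.append((audit_id, audit.get("score"), audit.get("displayValue")))
    return summary
```

This gives a quick overview of which audits score poorly, so you know which of the sections below to focus on.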

2.4.1.- Network Requests

  • Key: data[“lighthouseResult”][“audits”][“network-requests”]
  • Description: Lists the network requests that were made during page load.
  • What you can get: a list with the URL of these resources, end time, start time, transfer size and resource size.
listrequests = []
for x in range (len(data["lighthouseResult"]["audits"]["network-requests"]["details"]["items"])):
    endtime = data["lighthouseResult"]["audits"]["network-requests"]["details"]["items"][x]["endTime"]
    starttime = data["lighthouseResult"]["audits"]["network-requests"]["details"]["items"][x]["startTime"]
    transfersize = data["lighthouseResult"]["audits"]["network-requests"]["details"]["items"][x]["transferSize"]
    resourcesize = data["lighthouseResult"]["audits"]["network-requests"]["details"]["items"][x]["resourceSize"]
    url = data["lighthouseResult"]["audits"]["network-requests"]["details"]["items"][x]["url"]
    list1 = [endtime, starttime, transfersize, resourcesize, url]
    listrequests.append(list1)

The loop will return a list which contains the requests with their start times, end times, transfer sizes and resource sizes.

2.4.2.- Mainthread Work Breakdown

  • Key: data[“lighthouseResult”][“audits”][“mainthread-work-breakdown”]
  • Description: Consider reducing the time spent parsing, compiling and executing JS. You may find delivering smaller JS payloads helps with this. Learn more
  • What you can get: Normalized score (0 is the worst, 1 is the best), mainthread work breakdown duration in seconds and a list with the duration of each process (Style & Layout, Rendering, Other, Parse HTML & CSS, Script Evaluation and Script Parsing & Compilation).
mainthread_score = data["lighthouseResult"]["audits"]["mainthread-work-breakdown"]["score"]
mainthread_duration = data["lighthouseResult"]["audits"]["mainthread-work-breakdown"]["displayValue"]

#Iteration to get each process duration
listprocesses = []
for x in range (len(data["lighthouseResult"]["audits"]["mainthread-work-breakdown"]["details"]["items"])):
    duration = data["lighthouseResult"]["audits"]["mainthread-work-breakdown"]["details"]["items"][x]["duration"]
    process = data["lighthouseResult"]["audits"]["mainthread-work-breakdown"]["details"]["items"][x]["groupLabel"]
    list1 = [duration,process]
    listprocesses.append(list1)

The loop will return a list which contains the different processes with their durations.

2.4.3.- Use of Passive Event Listeners

  • Key: data[“lighthouseResult”][“audits”][“uses-passive-event-listeners”]
  • Description: Consider marking your touch and wheel event listeners as passive to improve your page’s scroll performance. Learn more.
  • What you can get: Normalized score (0 is the worst, 1 is the best) and a list with the event listeners which could be marked as passive.
event_listeners = data["lighthouseResult"]["audits"]["uses-passive-event-listeners"]["score"]

listevents = []
for x in range (len(data["lighthouseResult"]["audits"]["uses-passive-event-listeners"]["details"]["items"])):
    url = data["lighthouseResult"]["audits"]["uses-passive-event-listeners"]["details"]["items"][x]["url"]
    line = data["lighthouseResult"]["audits"]["uses-passive-event-listeners"]["details"]["items"][x]["label"]
    list1 = [url, line]
    listevents.append(list1)

The loop will return a list which contains the events that could be marked as passive and the line where they are located on the code.

2.4.4.- Dom Size

  • Key: data[“lighthouseResult”][“audits”][“dom-size”]
  • Description: A large DOM will increase memory usage, cause longer style calculations, and produce costly layout reflows. Learn more.
  • What you can get: Normalized score (0 is the worst, 1 is the best) and Dom size elements.
dom_size_score = data["lighthouseResult"]["audits"]["dom-size"]["score"]
dom_size_elements = data["lighthouseResult"]["audits"]["dom-size"]["displayValue"]

2.4.5.- OffScreen Images

  • Key: data[“lighthouseResult”][“audits”][“offscreen-images”]
  • Description: Consider lazy-loading offscreen and hidden images after all critical resources have finished loading to lower time to interactive. Learn more.
  • What you can get: Normalized score (0 is the worst, 1 is the best), potential savings and a list with the images which could be lazy-loaded.
offscreen_images_score = data["lighthouseResult"]["audits"]["offscreen-images"]["score"]
offscreen_images = data["lighthouseResult"]["audits"]["offscreen-images"]["displayValue"]

listoffscreenimages = []
for x in range (len(data["lighthouseResult"]["audits"]["offscreen-images"]["details"]["items"])):
    url = data["lighthouseResult"]["audits"]["offscreen-images"]["details"]["items"][x]["url"]
    totalbytes = data["lighthouseResult"]["audits"]["offscreen-images"]["details"]["items"][x]["totalBytes"]
    wastedbytes = data["lighthouseResult"]["audits"]["offscreen-images"]["details"]["items"][x]["wastedBytes"]
    wastedpercent = data["lighthouseResult"]["audits"]["offscreen-images"]["details"]["items"][x]["wastedPercent"]
    list1 = [url, totalbytes, wastedbytes, wastedpercent]
    listoffscreenimages.append(list1)

The loop will return a list which contains the images which could be lazy-loaded, their wasted bytes, total bytes and wasted percentage of bytes.

2.4.6.- Critical Request Chains

  • Key: data[“lighthouseResult”][“audits”][“critical-request-chains”]
  • Description: The Critical Request Chains below show you what resources are loaded with a high priority. Consider reducing the length of chains, reducing the download size of resources, or deferring the download of unnecessary resources to improve page load. Learn more.
  • What you can get: number of critical requests chains and a list with the resources which comprise the critical request chains with their start times, end times and transfer size.
critical_requests = data["lighthouseResult"]["audits"]["critical-request-chains"]["displayValue"]

listchains = []
for keys in data["lighthouseResult"]["audits"]["critical-request-chains"]["details"]["chains"].keys():
    try:
        for values in data["lighthouseResult"]["audits"]["critical-request-chains"]["details"]["chains"][keys]["children"].values():
            url = values["request"]["url"]
            starttime = values["request"]["startTime"]
            endtime = values["request"]["endTime"]
            transfersize = values["request"]["transferSize"]
            list1 = [url, starttime, endtime, transfersize, keys]
            listchains.append(list1)
    except KeyError:
        # Some chains have no "children" key, so we skip them.
        continue

The loop will return a list with the URLs which comprise the critical request chains, their start times, end times, transfer sizes and the chain they belong to, as there can be more than one critical request chain.

2.4.7.- Total Byte Weight

  • Key: data[“lighthouseResult”][“audits”][“total-byte-weight”]
  • Description: Large network payloads cost users real money and are highly correlated with long load times. Learn more.
  • What you can get: Normalized score (0 is the worst, 1 is the best), total bytes weight and a list with the bytes weight for each resource.
bytes_weight_score = data["lighthouseResult"]["audits"]["total-byte-weight"]["score"]
bytes_weight = data["lighthouseResult"]["audits"]["total-byte-weight"]["displayValue"]

listbytes = []
for x in range (len(data["lighthouseResult"]["audits"]["total-byte-weight"]["details"]["items"])):
    url = data["lighthouseResult"]["audits"]["total-byte-weight"]["details"]["items"][x]["url"]
    bytes_total = data["lighthouseResult"]["audits"]["total-byte-weight"]["details"]["items"][x]["totalBytes"]
    list1 = [url, bytes_total]
    listbytes.append(list1)

The loop will return a list which contains the different resources with their bytes weights.

2.4.8.- Use of Responsive Images

  • Key: data[“lighthouseResult”][“audits”][“uses-responsive-images”]
  • Description: Serve images that are appropriately-sized to save cellular data and improve load time. Learn more.
  • What you can get: Normalized score (0 is the worst, 1 is the best), potential weight savings and a list with the images which could be responsive with their wasted bytes.
responsive_images_score = data["lighthouseResult"]["audits"]["uses-responsive-images"]["score"]
responsive_image_savings = data["lighthouseResult"]["audits"]["uses-responsive-images"]["displayValue"]

listresponsivesavings = []
for x in range (len(data["lighthouseResult"]["audits"]["uses-responsive-images"]["details"]["items"])):
    url = data["lighthouseResult"]["audits"]["uses-responsive-images"]["details"]["items"][x]["url"]
    wastedbytes = data["lighthouseResult"]["audits"]["uses-responsive-images"]["details"]["items"][x]["wastedBytes"]
    totalbytes = data["lighthouseResult"]["audits"]["uses-responsive-images"]["details"]["items"][x]["totalBytes"]
    list1 = [url, wastedbytes, totalbytes]
    listresponsivesavings.append(list1)

The loop will return a list which contains the different images with their wasted and total bytes.

2.4.9.- Render Blocking Resources

  • Key: data[“lighthouseResult”][“audits”][“render-blocking-resources”]
  • Description: Resources are blocking the first paint of your page. Consider delivering critical JS/CSS inline and deferring all non-critical JS/styles. Learn more.
  • What you can get: Normalized score (0 is the worst, 1 is the best), potential savings and a list with the resources which are blocking the rendering and their wasted milliseconds.
blocking_resources_score = data["lighthouseResult"]["audits"]["render-blocking-resources"]["score"]
blocking_resources_savings = data["lighthouseResult"]["audits"]["render-blocking-resources"]["displayValue"]

listblockingresources = []
for x in range (len(data["lighthouseResult"]["audits"]["render-blocking-resources"]["details"]["items"])):
    url = data["lighthouseResult"]["audits"]["render-blocking-resources"]["details"]["items"][x]["url"]
    totalbytes = data["lighthouseResult"]["audits"]["render-blocking-resources"]["details"]["items"][x]["totalBytes"]
    wastedms = data["lighthouseResult"]["audits"]["render-blocking-resources"]["details"]["items"][x]["wastedMs"]
    list1 = [url, totalbytes, wastedms]
    listblockingresources.append(list1)

The loop will return a list which contains the resources which are blocking the rendering with their total bytes and wasted milliseconds.

2.4.10.- Use of Rel Preload

  • Key: data[“lighthouseResult”][“audits”][“uses-rel-preload”]
  • Description: Consider using <link rel=preload> to prioritize fetching resources that are currently requested later in page load. Learn more.
  • What you can get: Normalized score (0 is the worst, 1 is the best), potential savings and a list with the resources which could be preloaded.
rel_preload_score = data["lighthouseResult"]["audits"]["uses-rel-preload"]["score"]
rel_preload_savings = data["lighthouseResult"]["audits"]["uses-rel-preload"]["displayValue"]

listrelpreload = []
for x in range (len(data["lighthouseResult"]["audits"]["uses-rel-preload"]["details"]["items"])):
    url = data["lighthouseResult"]["audits"]["uses-rel-preload"]["details"]["items"][x]["url"]
    wastedms = data["lighthouseResult"]["audits"]["uses-rel-preload"]["details"]["items"][x]["wastedMs"]
    list1 = [url, wastedms]
    listrelpreload.append(list1)

The loop will return a list with those resources that can be preloaded and their wasted milliseconds.

2.4.11.- Estimated Input Latency

  • Key: data[“lighthouseResult”][“audits”][“estimated-input-latency”]
  • Description: Estimated Input Latency is an estimate of how long your app takes to respond to user input, in milliseconds, during the busiest 5s window of page load. If your latency is higher than 50 ms, users may perceive your app as laggy. Learn more.
  • What you can get: Normalized score (0 is the worst, 1 is the best) and estimated input latency milliseconds duration.
eil_score = data["lighthouseResult"]["audits"]["estimated-input-latency"]["score"]
eil_duration = data["lighthouseResult"]["audits"]["estimated-input-latency"]["displayValue"]

2.4.12.- Redirects

  • Key: data[“lighthouseResult”][“audits”][“redirects”]
  • Description: Redirects introduce additional delays before the page can be loaded. Learn more.
  • What you can get: Normalized score (0 is the worst, 1 is the best), potential savings with no redirects, a list with the redirects and their wasted milliseconds.
redirects_score = data["lighthouseResult"]["audits"]["redirects"]["score"]
redirect_savings = data["lighthouseResult"]["audits"]["redirects"]["displayValue"]

listredirects = []
for x in range (len(data["lighthouseResult"]["audits"]["redirects"]["details"]["items"])):
    url = data["lighthouseResult"]["audits"]["redirects"]["details"]["items"][x]["url"]
    wastedms = data["lighthouseResult"]["audits"]["redirects"]["details"]["items"][x]["wastedMs"]
    list1 = [url,wastedms]
    listredirects.append(list1)

The loop will return a list which contains the redirects and wasted milliseconds on each redirect.

2.4.13.- Unused JavaScript

  • Key: data[“lighthouseResult”][“audits”][“unused-javascript”]
  • Description: Remove unused JavaScript to reduce bytes consumed by network activity. Learn more.
  • What you can get: Normalized score (0 is the worst, 1 is the best), potential savings and a list with the unused Javascript files and their wasted bytes.
unused_js_score = data["lighthouseResult"]["audits"]["unused-javascript"]["score"]
unused_js_savings = data["lighthouseResult"]["audits"]["unused-javascript"]["displayValue"]

listunusedjavascript = []
for x in range (len(data["lighthouseResult"]["audits"]["unused-javascript"]["details"]["items"])):
    url = data["lighthouseResult"]["audits"]["unused-javascript"]["details"]["items"][x]["url"]
    totalbytes = data["lighthouseResult"]["audits"]["unused-javascript"]["details"]["items"][x]["totalBytes"]
    wastedbytes = data["lighthouseResult"]["audits"]["unused-javascript"]["details"]["items"][x]["wastedBytes"]
    wastedpercentage= data["lighthouseResult"]["audits"]["unused-javascript"]["details"]["items"][x]["wastedPercent"]
    list1 = [url, totalbytes, wastedbytes, wastedpercentage]
    listunusedjavascript.append(list1)
    

The loop will return a list with the unused Javascript files, their total number of bytes, wasted bytes and wasted percentage of bytes.

2.4.14.- Total Blocking Time

  • Key: data[“lighthouseResult”][“audits”][“total-blocking-time”]
  • Description: Sum of all time periods between FCP and Time to Interactive, when task length exceeded 50ms, expressed in milliseconds. Learn more.
  • What you can get: Normalized score (0 is the worst, 1 is the best) and total blocking time duration.
blocking_time_score = data["lighthouseResult"]["audits"]["total-blocking-time"]["score"]
blocking_time_duration = data["lighthouseResult"]["audits"]["total-blocking-time"]["displayValue"]

2.4.15.- First Meaningful Paint

  • Key: data[“lighthouseResult”][“audits”][“first-meaningful-paint”]
  • Description: First Meaningful Paint measures when the primary content of a page is visible. Learn more.
  • What you can get: Normalized score (0 is the worst, 1 is the best) and first meaningful paint time.
fmp_score = data["lighthouseResult"]["audits"]["first-meaningful-paint"]["score"]
fmp = data["lighthouseResult"]["audits"]["first-meaningful-paint"]["displayValue"]

2.4.16.- Cumulative Layout Shift

  • Key: data[“lighthouseResult”][“audits”][“cumulative-layout-shift”]
  • Description: Cumulative Layout Shift measures the movement of visible elements within the viewport. Learn more.
  • What you can get: Normalized score (0 is the worst, 1 is the best) and number of visible moving elements.
cls_score = data["lighthouseResult"]["audits"]["cumulative-layout-shift"]["score"]
cls = data["lighthouseResult"]["audits"]["cumulative-layout-shift"]["displayValue"]

2.4.17.- Network RTT

  • Key: data[“lighthouseResult”][“audits”][“network-rtt”]
  • Description: Network round trip times (RTT) have a large impact on performance. If the RTT to an origin is high, it’s an indication that servers closer to the user could improve performance. Learn more.
  • What you can get: network RTT milliseconds.
network_rtt = data["lighthouseResult"]["audits"]["network-rtt"]["displayValue"]

2.4.18.- Speed Index

  • Key: data[“lighthouseResult”][“audits”][“speed-index”]
  • Description: Speed Index shows how quickly the contents of a page are visibly populated. Learn more.
  • What you can get: Normalized score (0 is the worst, 1 is the best) and speed index.
speed_index_score = data["lighthouseResult"]["audits"]["speed-index"]["score"]
speed_index = data["lighthouseResult"]["audits"]["speed-index"]["displayValue"]

2.4.19.- Use of Rel Preconnect

  • Key: data[“lighthouseResult”][“audits”][“uses-rel-preconnect”]
  • Description: Consider adding preconnect or dns-prefetch resource hints to establish early connections to important third-party origins. Learn more.
  • What you can get: Normalized score (0 is the worst, 1 is the best) and warnings for the number of elements which use preconnect.
rel_preconnect_score = data["lighthouseResult"]["audits"]["uses-rel-preconnect"]["score"]
rel_preconnect_warning = data["lighthouseResult"]["audits"]["uses-rel-preconnect"]["warnings"]

2.4.20.- Use of Optimized Images

  • Key: data[“lighthouseResult”][“audits”][“uses-optimized-images”]
  • Description: Optimized images load faster and consume less cellular data. Learn more.
  • What you can get: Normalized score (0 is the worst, 1 is the best), potential savings and a list with the images that could be optimized.
optimized_images_score = data["lighthouseResult"]["audits"]["uses-optimized-images"]["score"]
optimized_images = data["lighthouseResult"]["audits"]["uses-optimized-images"]["displayValue"]

listoptimisedimages = []
for x in range (len(data["lighthouseResult"]["audits"]["uses-optimized-images"]["details"]["items"])):
    url = data["lighthouseResult"]["audits"]["uses-optimized-images"]["details"]["items"][x]["url"]
    wastedbytes = data["lighthouseResult"]["audits"]["uses-optimized-images"]["details"]["items"][x]["wastedBytes"]
    totalbytes = data["lighthouseResult"]["audits"]["uses-optimized-images"]["details"]["items"][x]["totalBytes"]
    list1 = [url, wastedbytes, totalbytes]
    listoptimisedimages.append(list1)

The loop will return a list which contains the images which can be optimized and their wasted and total bytes.

2.4.21.- Unminified JavaScript

  • Key: data[“lighthouseResult”][“audits”][“unminified-javascript”]
  • Description: Minifying JavaScript files can reduce payload sizes and script parse time. Learn more.
  • What you can get: Normalized score (0 is the worst, 1 is the best), potential savings and a list with those Javascript resources which can be minified.
unminified_javascript_score = data["lighthouseResult"]["audits"]["unminified-javascript"]["score"]
unminified_javascript_savings = data["lighthouseResult"]["audits"]["unminified-javascript"]["displayValue"]

listunminifiedjavascript = []
for x in range (len(data["lighthouseResult"]["audits"]["unminified-javascript"]["details"]["items"])):
    url = data["lighthouseResult"]["audits"]["unminified-javascript"]["details"]["items"][x]["url"]
    wastedbytes = data["lighthouseResult"]["audits"]["unminified-javascript"]["details"]["items"][x]["wastedBytes"]
    totalbytes = data["lighthouseResult"]["audits"]["unminified-javascript"]["details"]["items"][x]["totalBytes"]
    wastedpercent = data["lighthouseResult"]["audits"]["unminified-javascript"]["details"]["items"][x]["wastedPercent"]
    list1 = [url, wastedbytes, totalbytes, wastedpercent]
    listunminifiedjavascript.append(list1)

The loop will return a list which contains the Javascript resources which can be minified, the total number of bytes, wasted number of bytes and wasted percentage of bytes.

2.4.22.- Font Display

  • Key: data[“lighthouseResult”][“audits”][“font-display”]
  • Description: Leverage the font-display CSS feature to ensure text is user-visible while webfonts are loading. Learn more.
  • What you can get: Normalized score (0 is the worst, 1 is the best) and a list with those font files and their wasted milliseconds.
font_display_score = data["lighthouseResult"]["audits"]["font-display"]["score"]

listfonts = []
for x in range (len(data["lighthouseResult"]["audits"]["font-display"]["details"]["items"])):
    url = data["lighthouseResult"]["audits"]["font-display"]["details"]["items"][x]["url"]
    wasted_ms = data["lighthouseResult"]["audits"]["font-display"]["details"]["items"][x]["wastedMs"]
    list1 = [url, wasted_ms]
    listfonts.append(list1)

The loop will return a list which contains the font resources with their wasted milliseconds.

2.4.23.- First CPU Idle

  • Key: data[“lighthouseResult”][“audits”][“first-cpu-idle”]
  • Description: First CPU Idle marks the first time at which the page’s main thread is quiet enough to handle input. Learn more.
  • What you can get: Normalized score (0 is the worst, 1 is the best) and First CPU idle time.
first_cpu_idle_score = data["lighthouseResult"]["audits"]["first-cpu-idle"]["score"]
first_cpu_idle = data["lighthouseResult"]["audits"]["first-cpu-idle"]["displayValue"]

2.4.24.- Long Tasks

  • Key: data[“lighthouseResult”][“audits”][“long-tasks”]
  • Description: Lists the longest tasks on the main thread, useful for identifying worst contributors to input delay. Learn more
  • What you can get: number of long tasks and a list with those tasks with their durations.
long_tasks = data["lighthouseResult"]["audits"]["long-tasks"]["displayValue"]

listlongtasks = []
for x in range (len(data["lighthouseResult"]["audits"]["long-tasks"]["details"]["items"])):
    url = data["lighthouseResult"]["audits"]["long-tasks"]["details"]["items"][x]["url"]
    duration = data["lighthouseResult"]["audits"]["long-tasks"]["details"]["items"][x]["duration"]
    list1 = [url, duration]
    listlongtasks.append(list1)

The loop will return a list which contains the long tasks and their durations.

2.4.25.- No document write

  • Key: data[“lighthouseResult”][“audits”][“no-document-write”]
  • Description: For users on slow connections, external scripts dynamically injected via document.write() can delay page load by tens of seconds. Learn more.
  • What you can get: Normalized score (0 is the worst, 1 is the best) and resources which are injected via document.write().
no_document_write_score = data["lighthouseResult"]["audits"]["no-document-write"]["score"]

listnodocumentwrite = []
for x in range (len(data["lighthouseResult"]["audits"]["no-document-write"]["details"]["items"])):
    url = data["lighthouseResult"]["audits"]["no-document-write"]["details"]["items"][x]["url"]
    line = data["lighthouseResult"]["audits"]["no-document-write"]["details"]["items"][x]["label"]
    list1 = [url, line]
    listnodocumentwrite.append(list1)

The loop will return a list with resources which use document.write() and the code lines where they appear.

2.4.26.- Use of Text Compression

  • Key: data[“lighthouseResult”][“audits”][“uses-text-compression”]
  • Description: Text-based resources should be served with compression (gzip, deflate or brotli) to minimize total network bytes. Learn more
  • What you can get: Normalized score (0 is the worst, 1 is the best).
text_compression_score = data["lighthouseResult"]["audits"]["uses-text-compression"]["score"]

2.4.27.- Largest Contentful Paint Element

  • Key: data[“lighthouseResult”][“audits”][“largest-contentful-paint-element”]
  • Description: This is the largest contentful element painted within the viewport. Learn More
  • What you can get: number of found elements and a list with the paths to these elements and their selectors.
lcp_elements = data["lighthouseResult"]["audits"]["largest-contentful-paint-element"]["displayValue"]

listpath_selector = []
for x in range (len(data["lighthouseResult"]["audits"]["largest-contentful-paint-element"]["details"]["items"])):
    path = data["lighthouseResult"]["audits"]["largest-contentful-paint-element"]["details"]["items"][x]["node"]["path"]
    selector = data["lighthouseResult"]["audits"]["largest-contentful-paint-element"]["details"]["items"][x]["node"]["selector"]
    list1 = [path, selector]
    listpath_selector.append(list1)

The loop builds a list of the elements with their paths and selectors.

2.4.28.- Efficient Animated Content

  • Key: data[“lighthouseResult”][“audits”][“efficient-animated-content”]
  • Description: Large GIFs are inefficient for delivering animated content. Consider using MPEG4/WebM videos for animations and PNG/WebP for static images instead of GIF to save network bytes. Learn more
  • What you can get: Normalized score (0 is the worst, 1 is the best).
animated_content_score = data["lighthouseResult"]["audits"]["efficient-animated-content"]["score"]

2.4.29.- Unused CSS rules

  • Key: data[“lighthouseResult”][“audits”][“unused-css-rules”]
  • Description: Remove dead rules from stylesheets and defer the loading of CSS not used for above-the-fold content to reduce unnecessary bytes consumed by network activity. Learn more.
  • What you can get: Normalized score (0 is the worst, 1 is the best), potential savings and, for each flagged stylesheet, its total bytes, wasted bytes and wasted percentage (the snippet below reads the first item).
unused_css_score = data["lighthouseResult"]["audits"]["unused-css-rules"]["score"]
unused_css_savings = data["lighthouseResult"]["audits"]["unused-css-rules"]["displayValue"]
css_total_bytes = data["lighthouseResult"]["audits"]["unused-css-rules"]["details"]["items"][0]["totalBytes"]
css_wasted_bytes = data["lighthouseResult"]["audits"]["unused-css-rules"]["details"]["items"][0]["wastedBytes"]
css_wasted_percentage = data["lighthouseResult"]["audits"]["unused-css-rules"]["details"]["items"][0]["wastedPercent"]
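Since `items[0]` only covers the first flagged stylesheet, summing over the whole `items` list gives the totals across every flagged stylesheet. A minimal sketch with a made-up sample (with real data you would use `data["lighthouseResult"]["audits"]["unused-css-rules"]["details"]["items"]`):

```python
# Illustrative only: sample_items mimics the "items" list of the
# "unused-css-rules" audit with invented values.
sample_items = [
    {"url": "https://example.com/a.css", "totalBytes": 50000, "wastedBytes": 30000},
    {"url": "https://example.com/b.css", "totalBytes": 20000, "wastedBytes": 5000},
]

# Sum total and wasted bytes across every flagged stylesheet
total_css_bytes = sum(item["totalBytes"] for item in sample_items)
wasted_css_bytes = sum(item["wastedBytes"] for item in sample_items)
print(total_css_bytes, wasted_css_bytes)  # 70000 35000
```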

2.4.30.- Screenshot Thumbnails

  • Key: data[“lighthouseResult”][“audits”][“screenshot-thumbnails”]
  • Description: This is what the load of your site looked like.
  • What you can get: thumbnail images encoded as Base64 strings, showing the phases in which your page loads.
import base64

for x in range (len(data["lighthouseResult"]["audits"]["screenshot-thumbnails"]["details"]["items"])):
    img_data = data["lighthouseResult"]["audits"]["screenshot-thumbnails"]["details"]["items"][x]["data"].replace("data:image/jpeg;base64,","")

    with open("Thumbnail" + str(x) + ".jpg", "wb") as fh:
        fh.write(base64.b64decode(img_data))

The loop downloads the sequence of thumbnails as JPEG files (the data URI declares image/jpeg), naming them “Thumbnail” plus the loading phase number.

2.4.31.- Network Server Latency

  • Key: data[“lighthouseResult”][“audits”][“network-server-latency”]
  • Description: Server latencies can impact web performance. If the server latency of an origin is high, it’s an indication the server is overloaded or has poor backend performance. Learn more.
  • What you can get: milliseconds wasted due to poor network server latency.
network_server_latency = data["lighthouseResult"]["audits"]["network-server-latency"]["displayValue"]

2.4.32.- Layout Shift Elements

  • Key: data[“lighthouseResult”][“audits”][“layout-shift-elements”]
  • Description: These DOM elements contribute most to the CLS of the page.
  • What you can get: number of elements and a list with their paths and selectors.
layout_shift_elements = data["lighthouseResult"]["audits"]["layout-shift-elements"]["displayValue"]

listpath_selector = []
for x in range (len(data["lighthouseResult"]["audits"]["layout-shift-elements"]["details"]["items"])):
    path = data["lighthouseResult"]["audits"]["layout-shift-elements"]["details"]["items"][x]["node"]["path"]
    selector = data["lighthouseResult"]["audits"]["layout-shift-elements"]["details"]["items"][x]["node"]["selector"]
    list1 = [path, selector]
    listpath_selector.append(list1)

The loop builds a list of the elements with their paths and selectors.

2.4.33.- Use of cache memory

  • Key: data[“lighthouseResult”][“audits”][“uses-long-cache-ttl”]
  • Description: A long cache lifetime can speed up repeat visits to your page. Learn more.
  • What you can get: Normalized score (0 is the worst, 1 is the best), the number of resources that could be cached, and a list of those resources with their total bytes, wasted bytes and cache lifetime in milliseconds (based on max-age).
cache_memory_score = data["lighthouseResult"]["audits"]["uses-long-cache-ttl"]["score"]
resources_to_cache = data["lighthouseResult"]["audits"]["uses-long-cache-ttl"]["displayValue"]

listcache = []
for x in range (len(data["lighthouseResult"]["audits"]["uses-long-cache-ttl"]["details"]["items"])):
    cachelifetime = data["lighthouseResult"]["audits"]["uses-long-cache-ttl"]["details"]["items"][x]["cacheLifetimeMs"]
    totalbytes = data["lighthouseResult"]["audits"]["uses-long-cache-ttl"]["details"]["items"][x]["totalBytes"]
    wastedbytes = data["lighthouseResult"]["audits"]["uses-long-cache-ttl"]["details"]["items"][x]["wastedBytes"]
    url = data["lighthouseResult"]["audits"]["uses-long-cache-ttl"]["details"]["items"][x]["url"]
    list1 = [cachelifetime, totalbytes, wastedbytes, url]
    listcache.append(list1)

The loop builds a list of the cacheable resources with their total bytes, wasted bytes and cache lifetime in milliseconds.
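Because `cacheLifetimeMs` comes in milliseconds, converting it to days makes short TTLs easier to spot. A minimal sketch with made-up sample rows (each row mirrors the `[cachelifetime, totalbytes, wastedbytes, url]` lists appended to `listcache` above):

```python
# Illustrative only: sample rows mirror the [cachelifetime, totalbytes,
# wastedbytes, url] lists built from the "uses-long-cache-ttl" audit.
MS_PER_DAY = 1000 * 60 * 60 * 24

sample_cache_rows = [
    [0, 12000, 12000, "https://example.com/no-cache.js"],
    [604800000, 30000, 4000, "https://example.com/weekly.css"],  # 7-day TTL
]

# Keep only the resources cached for less than one day
short_ttl = [(url, ms / MS_PER_DAY) for ms, _, _, url in sample_cache_rows if ms < MS_PER_DAY]
print(short_ttl)  # [('https://example.com/no-cache.js', 0.0)]
```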

2.4.34.- First Contentful Paint

  • Key: data[“lighthouseResult”][“audits”][“first-contentful-paint”]
  • Description: First Contentful Paint marks the time at which the first text or image is painted. Learn more.
  • What you can get: Normalized score (0 is the worst, 1 is the best) and first contentful paint time.
fcp_score = data["lighthouseResult"]["audits"]["first-contentful-paint"]["score"]
fcp_time = data["lighthouseResult"]["audits"]["first-contentful-paint"]["displayValue"]

2.4.35.- Main Thread Tasks

  • Key: data[“lighthouseResult”][“audits”][“main-thread-tasks”]
  • Description: Lists the top-level main thread tasks that executed during page load.
  • What you can get: list of tasks with their duration and start time.
listmainthreads = []
for x in range (len(data["lighthouseResult"]["audits"]["main-thread-tasks"]["details"]["items"])):
    starttime = data["lighthouseResult"]["audits"]["main-thread-tasks"]["details"]["items"][x]["startTime"]
    duration = data["lighthouseResult"]["audits"]["main-thread-tasks"]["details"]["items"][x]["duration"]
    list1 = [starttime, duration]
    listmainthreads.append(list1)

The loop builds a list of the tasks with their start times and durations.

2.4.36.- Max Potential FID

  • Key: data[“lighthouseResult”][“audits”][“max-potential-fid”]
  • Description: The maximum potential First Input Delay that your users could experience is the duration of the longest task. Learn more.
  • What you can get: Normalized score (0 is the worst, 1 is the best) and max potential FID value.
max_potential_fid_score = data["lighthouseResult"]["audits"]["max-potential-fid"]["score"]
max_potential_fid_value = data["lighthouseResult"]["audits"]["max-potential-fid"]["displayValue"]

2.4.37.- Server Response Time

  • Key: data[“lighthouseResult”][“audits”][“server-response-time”]
  • Description: Keep the server response time for the main document short because all other requests depend on it. Learn more.
  • What you can get: Normalized score (0 is the worst, 1 is the best) and server response time value (how many milliseconds the root document took).
server_response_time_score = data["lighthouseResult"]["audits"]["server-response-time"]["score"]
server_response_time = data["lighthouseResult"]["audits"]["server-response-time"]["displayValue"]

2.4.38.- Largest Contentful Paint

  • Key: data[“lighthouseResult”][“audits”][“largest-contentful-paint”]
  • Description: Largest Contentful Paint marks the time at which the largest text or image is painted. Learn More
  • What you can get: Normalized score (0 is the worst, 1 is the best) and largest contentful paint time.
lcp_score = data["lighthouseResult"]["audits"]["largest-contentful-paint"]["score"]
lcp = data["lighthouseResult"]["audits"]["largest-contentful-paint"]["displayValue"]

2.4.39.- Time to Interactive

  • Key: data[“lighthouseResult”][“audits”][“interactive”]
  • Description: Time to interactive is the amount of time it takes for the page to become fully interactive. Learn more.
  • What you can get: Normalized score (0 is the worst, 1 is the best) and time that takes for the page to be interactive.
time_interactive_score = data["lighthouseResult"]["audits"]["interactive"]["score"]
time_interactive = data["lighthouseResult"]["audits"]["interactive"]["displayValue"]

2.4.40.- Unminified CSS

  • Key: data[“lighthouseResult”][“audits”][“unminified-css”]
  • Description: Minifying CSS files can reduce network payload sizes. Learn more.
  • What you can get: Normalized score (0 is the worst, 1 is the best) and potential overall savings.
unminified_css_score = data["lighthouseResult"]["audits"]["unminified-css"]["score"]
unminified_css_savings = data["lighthouseResult"]["audits"]["unminified-css"]["details"]["overallSavingsMs"]

2.4.41.- Bootup Time

  • Key: data[“lighthouseResult”][“audits”][“bootup-time”]
  • Description: Consider reducing the time spent parsing, compiling, and executing JS. You may find delivering smaller JS payloads helps with this. Learn more.
  • What you can get: Normalized score (0 is the worst, 1 is the best), the total bootup time, and the time spent on scripting and on parsing and compiling the JS.
bootup_time_score = data["lighthouseResult"]["audits"]["bootup-time"]["score"]
bootup_time = data["lighthouseResult"]["audits"]["bootup-time"]["displayValue"]
scripting_time = data["lighthouseResult"]["audits"]["bootup-time"]["details"]["items"][0]["scripting"]
parsing_compiling = data["lighthouseResult"]["audits"]["bootup-time"]["details"]["items"][0]["scriptParseCompile"]

2.4.42.- Use of WebP Images

  • Key: data[“lighthouseResult”][“audits”][“uses-webp-images”]
  • Description: Image formats like JPEG 2000, JPEG XR, and WebP often provide better compression than PNG or JPEG, which means faster downloads and less data consumption. Learn more.
  • What you can get: Normalized score (0 is the worst, 1 is the best), potential savings, and the images that could be converted to WebP with their total bytes and the bytes that would be saved by serving them in WebP.
webp_images_score = data["lighthouseResult"]["audits"]["uses-webp-images"]["score"]
webp_images_savings = data["lighthouseResult"]["audits"]["uses-webp-images"]["displayValue"]

listwebpimages = []
for x in range (len(data["lighthouseResult"]["audits"]["uses-webp-images"]["details"]["items"])):
    url = data["lighthouseResult"]["audits"]["uses-webp-images"]["details"]["items"][x]["url"]
    wastedbytes = data["lighthouseResult"]["audits"]["uses-webp-images"]["details"]["items"][x]["wastedBytes"]
    totalbytes = data["lighthouseResult"]["audits"]["uses-webp-images"]["details"]["items"][x]["totalBytes"]
    list1 = [url, wastedbytes, totalbytes]
    listwebpimages.append(list1)

The loop builds a list of the images that could be converted to WebP, with their total and wasted bytes.
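Aggregating the wasted bytes gives a single figure for the potential saving. A minimal sketch with made-up sample rows (each mirrors the `[url, wastedbytes, totalbytes]` lists appended to `listwebpimages` above):

```python
# Illustrative only: sample rows mirror the [url, wastedbytes, totalbytes]
# lists built from the "uses-webp-images" audit.
sample_webp_rows = [
    ["https://example.com/hero.png", 150000, 400000],
    ["https://example.com/logo.jpg", 25000, 60000],
]

# Total potential saving in kilobytes if all images were served as WebP
total_wasted_kb = sum(wasted for _, wasted, _ in sample_webp_rows) / 1024
print(round(total_wasted_kb, 1))  # 170.9
```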

2.4.43.- Third Party Summary

  • Key: data[“lighthouseResult”][“audits”][“third-party-summary”]
  • Description: Third-party code can significantly impact load performance. Limit the number of redundant third-party providers and try to load third-party code after your page has primarily finished loading. Learn more.
  • What you can get: Normalized score (0 is the worst, 1 is the best), how many milliseconds the third party code has blocked the main thread and a list with the third party codes and their blocking times.
third_party_score = data["lighthouseResult"]["audits"]["third-party-summary"]["score"]
third_party_blocking_time = data["lighthouseResult"]["audits"]["third-party-summary"]["displayValue"]

listthirdparty = []
for x in range (len(data["lighthouseResult"]["audits"]["third-party-summary"]["details"]["items"])):
    blockingtime = data["lighthouseResult"]["audits"]["third-party-summary"]["details"]["items"][x]["blockingTime"]
    url = data["lighthouseResult"]["audits"]["third-party-summary"]["details"]["items"][x]["entity"]["url"]
    text = data["lighthouseResult"]["audits"]["third-party-summary"]["details"]["items"][x]["entity"]["text"]
    list1 = [blockingtime, url, text]
    listthirdparty.append(list1)

The loop builds a list of the third-party providers with their blocking times, URLs and names.
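Sorting that list by blocking time immediately surfaces the worst offenders. A minimal sketch with made-up sample rows (each mirrors the `[blockingtime, url, text]` lists appended to `listthirdparty` above):

```python
# Illustrative only: sample rows mirror the [blockingtime, url, text]
# lists built from the "third-party-summary" audit.
sample_third_party_rows = [
    [120.5, "https://connect.facebook.net", "Facebook"],
    [350.0, "https://www.google-analytics.com", "Google Analytics"],
    [0.0, "https://fonts.googleapis.com", "Google Fonts"],
]

# Heaviest third parties first
worst_first = sorted(sample_third_party_rows, key=lambda row: row[0], reverse=True)
print(worst_first[0][2])  # Google Analytics
```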

2.4.44.- Final Screenshot

  • Key: data[“lighthouseResult”][“audits”][“final-screenshot”]
  • Description: The last screenshot captured of the pageload.
  • What you can get: the final screenshot encoded as a Base64 string.
import base64

img_data = data["lighthouseResult"]["audits"]["final-screenshot"]["details"]["data"].replace("data:image/jpeg;base64,","")

with open("FinalScreenshot.jpg", "wb") as fh:
    fh.write(base64.b64decode(img_data))

The code downloads the final screenshot as a JPEG file (the data URI declares image/jpeg), naming it “FinalScreenshot.jpg”.