How to use Google’s PageSpeed Insights API with Python

There’s been renewed interest in website performance in the last year or two. I think the trend started with Google announcing that page speed is a ranking signal for both search and ads (Quality Score).

If you know anything about how the SEO industry works, it’s pretty obvious what happened next. Everyone jumped on the bandwagon of optimizing for Google PageSpeed Insights scores, trying to squeeze out every little bit of competitive edge.

Why does Google care about website performance?

Good user experience affects Google’s bottom line. Website performance is really all about mobile performance. Desktops have (generally speaking) access to better internet connections with no data caps. On the other hand, mobile users are often limited by their data plans and slower connections depending on location and carrier coverage.

With more and more searches and visits (and thus search ads, display ads, etc.) from mobile devices, Google wants to make sure users can have a decent experience when they visit a website, even with a poor 3G connection.

If you click on a search ad from your phone, and it takes 30 seconds to load the website (you can check yours here), chances are you won’t stay around that long. This also means that the advertiser wouldn’t pay for this click.

Moreover, inefficient websites take more resources to crawl. Google spends billions each year crawling the web, so crawling slower websites also affects its bottom line.

This has created a situation where Google’s economic objectives are aligned with user needs.

What is the Google PageSpeed API?

The pagespeed API can help you get pagespeed scores at scale in an automated way. Sometimes, it’s useful to check the pagespeed score for each page of your website to uncover potential issues that hinder the performance of specific pages.

Maybe it’s a plugin you can do without, some unnecessary JavaScript specific to a page, or something else altogether – evaluating the pagespeed score of each page is a great starting point for deciding what to do next in terms of performance optimization.

How to use the PageSpeed API with Python

There are a few different versions of the API, but the script below uses Version 5, which is the latest one as of the time of writing.

To use it, you will need to create your own credentials (API key) here.

The script itself is straightforward. After importing a few necessary libraries and parameters (e.g., our API key), it reads the list of URLs we want to measure from a CSV file, gets the pagespeed score, and writes the results in a new CSV file that it creates.

You can easily create such a CSV file with a list of URLs by crawling your website with Screaming Frog and exporting the crawl data in CSV.
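For reference, the script below expects the page URL in the second column of url_list.csv (index 1), under a header cell named URL. Here is a minimal sketch of creating and reading such a file; the ID column is just an illustrative assumption about what a crawl export might contain:

```python
import csv

# Create a small illustrative url_list.csv in the format the script expects:
# a header row, with the page URL in the second column (index 1)
rows = [
    ["ID", "URL"],
    ["1", "https://example.com/"],
    ["2", "https://example.com/blog/"],
]
with open("url_list.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

# Read it back the same way the main loop does, skipping the header row
with open("url_list.csv", newline="") as f:
    urls = [row[1] for row in csv.reader(f) if row[1] != "URL"]
print(urls)
```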

Importing the libraries and initializing

import requests
import csv
from time import sleep

# Replace with your own API key
API_Key = "qwertyuiopasdfghjklzxcvbnm"
# Base endpoint of the PageSpeed Insights API, version 5
baseURL = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url="

Creating the request function

This is where the bulk of the work is done. The function first builds the request URL, “response_url,” then uses the requests library to fetch it into the “response” object, which we then parse as JSON.

The API response contains a lot of detailed data, so we only extract the mobile and desktop pagespeed scores as numbers on a scale of 0 to 100 (the same scale the PageSpeed Insights web page uses).
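To make that nesting concrete, here is a trimmed sketch of the v5 response with an illustrative score value (a real response contains far more fields):

```python
# Trimmed sketch of the v5 API response (illustrative values only)
sample_response = {
    "lighthouseResult": {
        "categories": {
            "performance": {"score": 0.87},
        },
    },
}

# The same traversal the function below performs; the API reports
# the score as a fraction between 0 and 1
score = sample_response["lighthouseResult"]["categories"]["performance"]["score"]
print(round(score * 100))  # 87
```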

def get_pagespeed(API_Key, page_URL, baseURL, strategy):
    # Build the request URL from the endpoint, page URL, API key, and strategy
    response_url = baseURL + page_URL + '&key=' + API_Key + '&strategy=' + strategy
    response = requests.get(response_url)
    json_data = response.json()
    # The score sits under lighthouseResult -> categories -> performance
    score = json_data["lighthouseResult"]["categories"]["performance"]["score"]
    # The API reports the score as a fraction (0-1); scale it to 0-100
    return score * 100

Putting together the response data

This is the part of the code where the execution happens. We read the list of URLs from the CSV file and, for each row, run the “get_pagespeed” function we defined above, once per strategy. Then we write the results to a new CSV file with three columns: URL, Desktop Pagespeed, and Mobile Pagespeed.

myFile = open('url_list.csv', 'r')
outputFile = open('results/pagespeed_results.csv', 'w', newline='')
outputWriter = csv.writer(outputFile)
reader = csv.reader(myFile)
outputWriter.writerow(["URL", "Desktop Pagespeed", "Mobile Pagespeed"])
for row in reader:
    url = row[1]
    if url != "URL":  # skip the header row
        try:
            desktop_pagespeed = get_pagespeed(API_Key, url, baseURL, "desktop")
            mobile_pagespeed = get_pagespeed(API_Key, url, baseURL, "mobile")
            print(url, desktop_pagespeed, mobile_pagespeed)
            outputWriter.writerow([url, desktop_pagespeed, mobile_pagespeed])
        except Exception:
            outputWriter.writerow([url, "Error", "Error"])
            print('ERROR WITH URL: ', url)
        sleep(1)  # pause between requests to stay within the API quota
myFile.close()
outputFile.close()

If you are looking for an alternative way to do this that doesn’t involve running Python code on your computer, you can also use Screaming Frog. The crawler can use a few different services through their APIs and pull associated metrics for every page it crawls. However, keep in mind that the API functionality is limited to the paid version, which comes at £149 per year.

If you’d like to run the code, you can copy-paste it to your code editor or download the files from GitHub to prevent any formatting errors.