Introduction
Recently, I started setting up a monorepo project using Nx.
Every time I create a pull request (PR), I trigger a CI process using GitHub Actions. This process involves linting, running tests, and building the application.
However, I encountered an issue with the duration of this CI process, even on my small, basic project. It was taking up to 1 minute and 30 seconds to complete, which seemed excessively long for a project of its size. This was concerning, as it would only worsen as the project scaled.
In this article, I’d like to talk about how I managed to significantly reduce the CI runtime to around 30-40 seconds, along with some insights and tips I discovered during the process, particularly relevant to the NestJS, Prisma, and GraphQL stack.
Previous CI process
My CI configuration file, named ci.yml, was structured as follows:
```yaml
jobs:
  code_quality_and_build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - run: yarn install --frozen-lockfile
      - uses: nrwl/nx-set-shas@v3
      - run: git branch --track dev origin/dev
      - run: npx nx run api:gen-types --no-cloud
      - run: npx nx affected -t lint,test,build --parallel=3 --no-cloud
```
And here are the results of the execution:
As observed, the steps that consume the most time are the installation of dependencies and the subsequent execution of linting, testing, and building. Therefore, we will address each of these steps individually.
Cache the dependencies
Add the following lines to ci.yml:
```yaml
- uses: actions/setup-node@v4
  with:
    node-version: 20
    cache: 'yarn'
- name: Restore cached yarn dependencies
  uses: actions/cache/restore@v4
  with:
    # ~/.cache/Cypress is needed for the Cypress binary
    path: |
      node_modules
      ~/.cache/Cypress
    key: yarn-dependencies-${{ hashFiles('yarn.lock') }}
- run: yarn install --frozen-lockfile
- name: Cache yarn dependencies
  uses: actions/cache/save@v4
  with:
    # ~/.cache/Cypress is needed for the Cypress binary
    path: |
      node_modules
      ~/.cache/Cypress
    key: yarn-dependencies-${{ hashFiles('yarn.lock') }}
```
With this step, if the yarn.lock file remains unchanged, we reuse the cached node_modules directory instead of reinstalling the dependencies.
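To see why an unchanged yarn.lock always restores the same cache entry, here is a toy illustration of content-hash cache keys. This is not the exact algorithm of GitHub's `hashFiles()` expression, but the principle is the same: identical file contents always produce an identical key, and any change produces a new one.

```shell
# Toy illustration of content-hash cache keys (NOT the exact hashFiles()
# algorithm, but the same principle: same contents => same key).
lockfile=$(mktemp)
printf 'lodash@4.17.21\n' > "$lockfile"
key1="yarn-dependencies-$(sha256sum "$lockfile" | cut -d' ' -f1)"
key2="yarn-dependencies-$(sha256sum "$lockfile" | cut -d' ' -f1)"
# Unchanged lockfile -> identical key -> actions/cache/restore hits
[ "$key1" = "$key2" ] && echo "cache hit"
# Any change to the lockfile -> new key -> cache miss, fresh install
printf 'lodash@4.17.20\n' > "$lockfile"
key3="yarn-dependencies-$(sha256sum "$lockfile" | cut -d' ' -f1)"
[ "$key1" != "$key3" ] && echo "cache miss after change"
```

This is also why the key should hash the lockfile rather than, say, package.json: the lockfile pins the exact dependency tree that ends up in node_modules.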
Result:
The steps setup-node@v4, Restore cached yarn dependencies, and Cache yarn dependencies serve this purpose. With them in place, we save approximately 20 seconds in total on this step! 😋
Note: If you use Prisma as your ORM
Because Prisma generates the Prisma client inside the node_modules folder, we need to adjust ci.yml slightly.
Run the command npx nx run api:gen-types before saving the cache:
```yaml
- uses: nrwl/nx-set-shas@v4
- run: git branch --track dev origin/dev
- run: npx nx run api:gen-types # run this first
# cache after the Prisma client has been generated inside node_modules
- name: Cache yarn dependencies
  uses: actions/cache/save@v4
  with:
    # ~/.cache/Cypress is needed for the Cypress binary
    path: |
      node_modules
      ~/.cache/Cypress
    key: yarn-dependencies-${{ hashFiles('yarn.lock') }}-${{ hashFiles('apps/src/prisma/schema.prisma') }}
```
Include the hash of the schema.prisma file in the cache key:
```yaml
key: yarn-dependencies-${{ hashFiles('yarn.lock') }}-${{ hashFiles('apps/src/prisma/schema.prisma') }}
```
Since updating the schema may change the generated Prisma client, the cache must be invalidated whenever the schema changes. Here's the complete code:
```yaml
- uses: actions/setup-node@v4
  with:
    node-version: 20
    cache: 'yarn'
- name: Restore cached yarn dependencies
  uses: actions/cache/restore@v4
  with:
    # ~/.cache/Cypress is needed for the Cypress binary
    path: |
      node_modules
      ~/.cache/Cypress
    key: yarn-dependencies-${{ hashFiles('yarn.lock') }}-${{ hashFiles('apps/src/prisma/schema.prisma') }}
- run: yarn install --frozen-lockfile
- uses: nrwl/nx-set-shas@v4
- run: git branch --track dev origin/dev
- run: npx nx run api:gen-types
# cache after the Prisma client has been generated inside node_modules
- name: Cache yarn dependencies
  uses: actions/cache/save@v4
  with:
    # ~/.cache/Cypress is needed for the Cypress binary
    path: |
      node_modules
      ~/.cache/Cypress
    key: yarn-dependencies-${{ hashFiles('yarn.lock') }}-${{ hashFiles('apps/src/prisma/schema.prisma') }}
```
Connect to Nx Cloud
Follow this tutorial to connect to Nx Cloud.
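In recent Nx versions this typically amounts to a single command (the exact command may differ by Nx version, so treat the tutorial as authoritative):

```shell
# Provisions a workspace on Nx Cloud and wires it into the repo config
npx nx connect
```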
Test the cache
- Run npx nx run-many -t lint,test,build --parallel=3
- Run npx nx reset to clear the local cache
- Re-run npx nx run-many -t lint,test,build --parallel=3 and check the logs
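The same check as a single local session, with a note on what each step verifies (this assumes Nx Cloud is already connected, so the second run can be served from the remote cache):

```shell
# Populate the local and remote caches
npx nx run-many -t lint,test,build --parallel=3
# Clear the local cache only; the remote (Nx Cloud) cache survives
npx nx reset
# This run should be served from the remote cache -- check the logs
npx nx run-many -t lint,test,build --parallel=3
```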
Make sure you hit the cache as expected
You can check whether your commands are hitting the cache by reviewing the action logs or accessing the Nx Cloud dashboard.
If the cache is hit, the logs will show either remote or local; otherwise, they will indicate a cache miss.
If something goes wrong, you can use the Troubleshoot cache misses feature.
Enable Nx cache for necessary commands
By default, Nx enables caching for the lint, test, and build commands. However, if we execute any additional commands during each CI run, we need to enable caching specifically for those commands. In my case, that is npx nx run api:gen-types.
To enable caching, add "cache": true for that command's target in the project.json file:
```json
{
  "gen-types": {
    "cache": true,
    "command": "npx prisma generate --schema=./src/prisma/schema.prisma",
    "options": {
      "cwd": "apps/api"
    }
  }
}
```
Enable PR Integration
Typically, Nx will provide a link in the action logs directing you to the Nx Cloud logs, as shown here:
```shell
View logs and investigate cache misses at https://cloud.nx.app/runs/n6LsQrNjtY
```
For easier access to Nx logs, Nx provides an Nx Cloud app that generates a helpful comment on the pull request, like this.
Install git hooks
By using Git hooks to encourage developers to run the lint, test, and build commands on their local machines before pushing code to the remote repository, we ensure that the CI process almost always hits the cache, keeping the runtime minimal.
Installation instructions for Husky can be found in this guide.
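As an illustration, a pre-push hook could look like the following. This is a sketch assuming Husky v9's .husky/ layout; check the guide for the setup matching your version.

```shell
# One-time setup (assuming Husky v9; see the linked guide otherwise)
yarn add --dev husky
npx husky init
# Create a pre-push hook that runs the same tasks CI runs, so their
# results are already in the Nx Cloud cache when the CI run starts
echo "npx nx affected -t lint,test,build --parallel=3" > .husky/pre-push
```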
In conclusion
With these modifications, I've managed to cut down the CI runtime by 50%, and I anticipate significant time savings as the project scales up. Here are the final results.