
Next Generation Agility with Salesforce DX

Article by David Dawson

Salesforce DX? Continuous Integration? Source Controlled Development? Where to start??

Salesforce introduced its new Developer Experience (DX) in October 2017. The purpose of DX is summed up very nicely on the DX homepage:

“Welcome to the modern developer experience that will change the way you build Salesforce applications. Whether you’re an individual developer or working in a large team, Salesforce DX provides you with an integrated, end-to-end lifecycle designed for high-performance agile development. And best of all, we’ve built it to be open and flexible so you can build together with tools you love.”

Let’s break this down.

The first thing apparent to us is that Salesforce is introducing a new way to build applications on the Salesforce platform. The next two sentences are the real kicker: a development experience designed for high-performance agile development, built without prescriptive tooling so you can use whatever tools float your boat. Most developers don’t like to be told which tools they must use; some like Macs and some like Windows, some like IDEs and some like text editors, some like Git clients and some like Git Bash. We do our best work in a comfortable environment, and if we can strike a balance where everyone contributes to the same build pipeline using whatever tools they like, we are looking at happy developers doing their best work.

Our goal of building CI on the Salesforce platform required two major changes that we hoped DX would provide where traditional Salesforce development could not. We needed developers working in their own environments so their changes were siloed (preferably without the overhead of maintaining separate sandboxes for each developer), and we needed the metadata represented in a better way than a single 2,000+ line XML file for every major piece of configuration.

What did our deployments look like before we started?

We had a third-party tool that operated largely the same way as change sets, although in this tool the packages were called patches. The developers shared a sandbox, and when they finished their work they created a patch (think change set) by declaratively selecting and adding their changes (fingers crossed you remembered everything you changed and included any dependencies like profiles and record types). How were you sure your patch only had your changes? Well… you weren’t, which is why patches and change sets only deploy profile configuration for the metadata that sits alongside the profiles within the package.

So we have a patch and we deploy it to the test environment. The metadata is packaged up, so how do you peer review the code and configuration without opening up the package or jumping into the org and inspecting the build? Again… no dice. Even if you went to that length to perform a code/build review, where is the traceability of the approval, or better still the trail of requested changes and actions?

On the third-last day of the iteration we would take all the patches created that iteration and flagged for deployment, and combine them into one big patch. This combined patch would then be deployed to staging, and the team would spend the day deploying and performing Staging Verification Testing (SVT, a sanity check that features have been deployed correctly). If SVT found no issues, we would push the combined patch to Production and perform Production Verification Testing (the same as SVT, but PVT).

Now that doesn’t sound so bad, right? Deploying every second week would be an improvement for a lot of Salesforce implementations in the market, but for context, our peers not only have automated CI but also CD: where we can drop once an iteration, they can drop the second a feature is done (built and tested). Salesforce is just another component in our stack. Looking at the purpose of DX, the flexibility it gives our developers to work independently, and the granular access to metadata a DX project provides, we firmly believed we could close the gap between our completely manual packaging-and-deploying-once-an-iteration process and our peers’ ability to drop as soon as something is ready.

First things first, we need a DX Project

The first thing we needed to do was convert Production into a DX project. If you are starting a new project you have the luxury of creating and working from a fresh DX project, but if, like us, you have one (or multiple) existing Salesforce orgs, you need to convert those orgs into a DX project. The Salesforce command line tool (Salesforce CLI) provides commands to convert a metadata package into a DX project, specifically sfdx force:mdapi:convert. Of course, our Production org has certain features and packages that need to be enabled/installed/configured as dependencies in any scratch org, otherwise the metadata will not compile. So, to get started we did three things (a rough command sketch follows the list):

  1. Generated a package.xml using Eclipse and used Workbench to download the entirety of Production as a metadata package
  2. Used sfdx force:mdapi:convert to create a new DX project with the converted contents
  3. Specified the features we needed enabled in the scratch org definition files, based on the features we knew were enabled in Production.
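
To make this concrete, here is a minimal command sketch of the conversion, assuming an org alias of prod and the package.xml from step 1 (aliases and paths are illustrative, not our actual setup):

    # retrieve everything listed in package.xml from Production as a metadata package
    sfdx force:mdapi:retrieve -u prod -k package.xml -r ./mdapi-out
    unzip ./mdapi-out/unpackaged.zip -d ./mdapi-out

    # create an empty DX project, then convert the metadata package into DX source format
    sfdx force:project:create -n my-dx-project
    cd my-dx-project
    sfdx force:mdapi:convert -r ../mdapi-out/unpackaged -d force-app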

We also needed to create a new scratch org based on the output above. Our scratch org creation process covers some automated steps and some manual ones, due to current DX limitations (a sketch of the commands follows the list).

  1. Create a new scratch org and manually enable any dependent features that cannot be specified in the scratch org definition files
  2. Install (and configure, if applicable) all dependent packages using sfdx force:package:install
  3. Push the DX project using sfdx force:source:push
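
In practice, the bring-up looks roughly like this; the definition file contents, alias and package ID below are placeholders rather than our real configuration:

    # config/project-scratch-def.json (edition and features are illustrative)
    cat > config/project-scratch-def.json <<'EOF'
    {
      "orgName": "DX Scratch",
      "edition": "Enterprise",
      "features": ["MultiCurrency", "AuthorApex"]
    }
    EOF

    # create the scratch org and set it as this project's default
    sfdx force:org:create -f config/project-scratch-def.json -a feature-scratch -s -d 30

    # step 2: install dependency packages (the 04t... ID is a placeholder;
    # flag spellings vary between CLI versions)
    sfdx force:package:install -i 04tXXXXXXXXXXXX -u feature-scratch -w 30

    # step 3: push the converted source
    sfdx force:source:push -u feature-scratch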

Once you have a scratch org created and the project converted, the next little while is a case of trial and error. You push the source, look at the compile (deployment) issues (and expect a lot of them to begin with) and determine how to fix them. Did you miss turning on a feature? Are you missing a package? Has your conversion given you metadata that cannot be deployed, or are you missing metadata entirely? These issues just need to be worked through until the compile errors drop from the thousands to 0 and the push succeeds. For context, we have converted two existing orgs; one had 1,600+ errors on first compile and the other had 1,200+ errors, and each took roughly 3 days to work through (the latter has Salesforce CPQ, which added some complexities).
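
The loop itself is nothing fancy; roughly (with our illustrative scratch org alias):

    # push, read the error table, fix a batch of issues, push again
    until sfdx force:source:push -u feature-scratch > push.log 2>&1; do
      less push.log    # missing features, packages and undeployable metadata all surface here
      read -rp "Fixed a batch of errors? Press enter to push again... "
    done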

UPDATE: As of October 2018 the Salesforce CLI has the ability to retrieve from a non-source-tracked org (aka a sandbox, Developer Edition or Production). See the Salesforce CLI release notes for the new sfdx force:source:retrieve command.
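
For example, this lets you pull metadata straight from a sandbox into the DX project (the alias and metadata types below are illustrative):

    # retrieve specific metadata types directly from a non-source-tracked org
    sfdx force:source:retrieve -m "ApexClass,Profile" -u MySandbox

    # or retrieve everything described in a manifest
    sfdx force:source:retrieve -x package.xml -u MySandbox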

Set up source control

Once you have a scratch org, it is time to look at setting up your source control. We use GitHub Enterprise at REA so we marched on with this, but almost any source control solution would work. Uploading the files into the repository is the easy bit; you also need to agree on a branching strategy and a deployment process driven by features proceeding through the various branches. We use a modified version of GitHub Flow, with a persistent develop branch added before the master branch. It looks as follows:

[Diagram: Salesforce Git Strategy]

We decided as a squad on the following flow (a git-level sketch follows the list):

  1. Developers would create a feature branch
  2. When the feature is complete, and unit tested, the developer would raise a PR to merge their feature branch into develop
    1. This would be where code review/build review takes place
    2. It would be good if the CI solution could validate the changes
  3. If the feature passes review, it is merged to develop
    1. This deploys to our test environment
  4. When the feature is ready for deployment, we merge develop into master
    1. This triggers a Staging deployment, after which the CI would pause, allowing us to test Staging
    2. If Staging testing is successful, we could tell the CI tool to continue and it would deploy to Production.
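
In day-to-day git terms, the developer’s side of that flow looks something like this (branch names are illustrative):

    # start a feature from the latest develop
    git checkout develop && git pull
    git checkout -b feature/my-change

    # ...build and unit test in a scratch org, committing as you go...
    git push -u origin feature/my-change

    # raise a PR feature/my-change -> develop (review and CI validation happen here);
    # merging to develop deploys to Test, and a later develop -> master PR drives
    # the Staging and Production deployments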

Our branching strategy has its positives and negatives, as all do. For example:

Positives

  • Our develop branch gives us a merged buffer between the feature branches and master.
  • Without a persistent develop branch, it would be difficult (if not impossible) to have a persistent integrated test sandbox.
  • The develop to master push is a like-for-like environment push, minimising the differences between orgs and increasing the chances of successful deployments.

Negatives

  • Can’t pick and choose features (without manual reversion, everything in develop goes to master).

We have done some additional work to lock down our persistent branches, as well as setting up CODEOWNERS and other features that can be enabled in GitHub to make the overall CI/CD process smoother.

Continuous Integration Setup

Now that your DX project and your source control have given you source-controlled development, we can cap this off with CI. There are several CI tools on the market, and while some are better than others, the tooling used is not terribly important. We use Buildkite, but I have also seen working DX implementations using Jenkins, and the Salesforce documentation demonstrates Travis CI, so it is entirely a matter of choosing a tool that is right for you. We took the rough git branching outcomes outlined above and set to work creating our CI triggers and processes.

We wanted Buildkite to be notified on the following 3 events:

  1. A commit is made on a feature branch
  2. A merge has occurred to develop
  3. A merge has occurred to master.

So, we created webhooks from GitHub into Buildkite that would surface these 3 events (the hooks push all commits and merges, but you can choose what to react to within your CI tool).

Now that Buildkite was aware of the events, we created a pipeline specific to each of them (a sketch of the third follows the list):

  1. When a commit is made to a feature branch, create a deployment package (and a destructive changes manifest if applicable) and validate both against the Test environment.
  2. When a merge is made to develop, create a deployment package (and a destructive changes manifest if applicable) and deploy both against the Test environment.
  3. When a merge is made to master, create a deployment package (and a destructive changes manifest if applicable), deploy both against the Staging environment, then pause and await a user confirming that Staging testing was successful before carrying out the same deployment steps in Production.
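
As a rough illustration, the third pipeline could be expressed like this in Buildkite, where the script names are hypothetical and the block step is Buildkite’s built-in pause mechanism:

    # .buildkite/pipeline.yml (a sketch, not our production pipeline)
    cat > .buildkite/pipeline.yml <<'EOF'
    steps:
      - label: "Build deployment package"
        command: scripts/build-package.sh
        branches: "master"
      - label: "Deploy to Staging"
        command: scripts/deploy.sh staging
        branches: "master"
      - block: "Verify Staging (SVT)"    # the pipeline pauses here until a human unblocks it
      - label: "Deploy to Production"
        command: scripts/deploy.sh production
        branches: "master"
    EOF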

Our packages are generated dynamically based on the commits in git, including the destructive changes manifest outlining files that have been deleted. How do we do this? (A simplified sketch follows the list.)

  1. Use the git diff command to determine the changed files and copy them into a new temporary directory, all whilst maintaining the DX project structure.
  2. Use sfdx force:source:convert to convert this DX project containing only the diff files into a Metadata API package.
  3. Carry out the same process in reverse to determine the files that have been deleted, but this time rename package.xml to destructiveChanges.xml.
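
A simplified sketch of the additive half, assuming the DX source lives under force-app and using hypothetical temp paths (our real implementation handles more edge cases):

    # copy added/changed files since the branch point into a temporary DX project
    BASE=$(git merge-base origin/develop HEAD)
    mkdir -p /tmp/diff-project
    cp sfdx-project.json /tmp/diff-project/    # convert needs a valid DX project root
    git diff --name-only --diff-filter=ACM "$BASE" HEAD -- force-app |
      while read -r f; do
        mkdir -p "/tmp/diff-project/$(dirname "$f")"
        cp "$f" "/tmp/diff-project/$f"         # companion *-meta.xml files need the same treatment
      done

    # convert the diff-only project into a Metadata API package
    (cd /tmp/diff-project && sfdx force:source:convert -r force-app -d /tmp/deploy-package)

    # deleted files (--diff-filter=D) go through the same convert step, finishing with:
    # mv /tmp/destructive-package/package.xml /tmp/destructive-package/destructiveChanges.xml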

Once we have the package(s) created as artefacts, we can push them through the Apache Ant Salesforce library (also referred to as the Force.com Migration Tool). The conversion process as explained above is very high level, and there are specifics that have to be handled in order to convert the DX diffs and push the resulting package(s) through the Metadata API, but overall this is the process that permits us to deploy multiple times per iteration, completely automated.
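
A minimal sketch of that last hop, with credentials in environment variables and illustrative attribute values (ant-salesforce.jar must be on Ant’s classpath):

    # build.xml defining a Force.com Migration Tool deploy target
    cat > build.xml <<'EOF'
    <project name="sf-deploy" xmlns:sf="antlib:com.salesforce">
      <target name="deployPackage">
        <!-- set checkOnly="true" for the validation-only feature branch builds -->
        <sf:deploy username="${sf.username}" password="${sf.password}"
                   serverurl="${sf.serverurl}" deployRoot="/tmp/deploy-package"
                   testLevel="RunLocalTests" rollbackOnError="true"/>
      </target>
    </project>
    EOF
    ant deployPackage -Dsf.username="$SF_USERNAME" \
        -Dsf.password="$SF_PASSWORD$SF_TOKEN" \
        -Dsf.serverurl="https://test.salesforce.com"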

Our CI solution has a lot of features that we have added as we have gone along (hypothetical sketches of a couple of these follow the list):

  1. Dynamically removing permissions from profiles that are required in the scratch org but not compatible with sandboxes/production.
  2. The ability to run the destructive package post deployment (default), pre-deployment or not at all.
  3. Dynamically adding files/directories into .forceignore to prevent them from being deployed along with other changes.
  4. Special consideration for the *-meta.xml companion files required by things like classes, pages and components.
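
Hypothetical examples of items 1 and 3, where the path and permission names are purely illustrative:

    # 3. keep scratch-org-only files out of the deployment package
    echo "force-app/main/default/appMenus" >> .forceignore

    # 1. strip a scratch-org-only permission from every profile before packaging
    find /tmp/diff-project -name '*.profile-meta.xml' -print0 |
      xargs -0 -r perl -0777 -pi -e \
        's|\s*<userPermissions>\s*<enabled>true</enabled>\s*<name>ManageSandboxes</name>\s*</userPermissions>||g'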

Was it worth it?

There was a steep learning curve for our teams, especially since the large majority are declarative developers. The huge amount of work it took to get this CI solution up and running also cannot be overstated. Building and maintaining your own product comes with significant overheads versus a paid product, but the flexibility to upgrade our product to suit our delivery habits has made it a significantly beneficial investment.

The automation and overall simplification of our release pipeline speaks for itself in this pre- and post-simplification diagram:

[Diagram: release pipeline before and after simplification]

To summarise:

The Good

  • From 1 deployment per iteration to roughly 3 or 4 drops.
  • Manual package building and pushing to automated deployment flows.
  • Complete source-controlled development including change history tracking.
  • Much better control gates utilising GitHub Pull Requests, Code Owners and locked branches.
  • From roughly 10 to 20 manual deployment steps per deployment to 0 (as Salesforce CLI does all of the source change tracking for you).
  • 25% to 50% increase in velocity.
  • From 2 days for the whole team to push to staging and prod, down to a complete deployment in 15 minutes by one person (plus testers).
  • Complete flexibility with self-maintained deployment solution.
  • Developers no longer need Sys Admin (minimising the number of people with Sys Admin access, in line with least privilege).

The Bad

  • Adoption learning curve was steep and required a lot of pairing and mentoring.
  • Our git strategy means that we cannot deploy individual features; everything in the develop branch must go together unless manually reverted through a PR first.
  • We are far beyond any other known implementation, including those within Salesforce. Especially considering the boundaries we are pushing with DX + CPQ, we are very much without documented support and rely on our own teams to support the solution going forward. Salesforce support, whilst responsive, do not support a lot of the features (more so the combination thereof) that we are using in our CI solution.
  • DX as a technology, whilst really exciting, is very new and still has a way to go in stability and overall feature maturity, so there are some imperfections (like profile extractions not working) that come as part and parcel of being early adopters.

So what’s next?

  • CPQ CLI: We are currently building a Salesforce CLI extension to extract CPQ product configuration data into the git repository and treat it as we do metadata, including deploying configuration changes via the CI pipeline.
  • Automated Regression: We are building an automated regression test suite powered by Selenium that we will include as an integral part of our PRs and overall release management process.
  • Automated Code Reviews: Software like CodeScan (a plugin for SonarQube) would allow us to do automated code reviews.
  • DX Shapes: We are watching emerging DX features and improvements to improve our scratch org creation and maintenance, with DX Shapes being a very interesting development.

Where do I start?

Here are a few helpful pointers for getting your hands dirty with Salesforce DX and understanding the shift in development process, as well as the doors that DX opens. Remember, DX is a set of tooling which can change the way your developers access Salesforce metadata, but it is not an off-the-shelf product in itself. Your processes, tools, source control and CI pipeline need to be either bought or built separately, but the official Salesforce DX documentation and the Trailhead DX modules are a good place to start.
