Content last updated December 2020. Roadmap corresponds to Spring '21 projections. Our forward-looking statement applies to roadmap projections.

Guide Overview

You want to customize Salesforce. It may be something simple, like adding a field; or maybe it is much more complex, like a custom application with dozens of objects and thousands of lines of code. How do you deploy into production? The answer, of course, is "it depends." The method you choose for moving a change into production will depend on many factors, including the urgency of the change, its complexity, the size of your team, and the metadata involved.

This decision guide explores seven different deployment options, ranging from simple but not-so-scalable techniques to significantly more complex yet highly scalable approaches:

- Manual Changes in Production
- Change Sets
- Metadata API: Direct Deployments
- Metadata API: Deployment with Source Control + Continuous Integration
- Org-Dependent Packages
- Unlocked Packages
- Managed Packages

For each option, we discuss the limitations of the approach, why you might choose it, why you might not, and (where appropriate) how to mitigate some of its potential downsides. This guide also covers some hurdles you may encounter as you begin using the more complex techniques, deploying changes other than metadata, the various Salesforce environments involved in moving changes to production, an example deployment that combines multiple approaches, and third-party tooling you can use.

You can read this guide straight through, or just jump to the parts that you need. If you do decide to skip around, we've summarized the key points that you'll want to take away with you no matter what:

Takeaway #1: The options available to you for deploying changes exist on a spectrum, and each has its own pros, cons, and limitations. In choosing which option to use for a specific deployment, you'll need to weigh your objectives, your team's skills, and limitations in Salesforce. You may use different approaches for different projects, or even blend approaches on a single project.

Takeaway #2: Salesforce is investing heavily in improving this experience, so change is happening quickly. Stay up to date on roadmaps and report issues.

Takeaway #3: Don't think of deployments only in terms of metadata; often, other items must change in an org for a deployment to be complete.

Takeaway #4: Don't use first-generation managed packages.

Takeaway #5: Prefer permission sets over profiles.

This guide focuses on moving changes between environments with the goal of eventually deploying to a production environment you control. In other words, if you're a customer or developing changes on behalf of a customer, then this guide is for you. If you are an ISV or AppExchange partner who is building assets for multi-customer or AppExchange distribution, then you should follow the documentation for second-generation managed packages.

Deployment Techniques, from Simplest to Most Scalable

First, let's get familiar with the deployment options and the technology behind them.
|                     | Manual        | Change Sets   | Metadata: Direct | Metadata: Single Source | Packages: Org-Dependent¹ | Packages: Unlocked | Packages: Managed |
|---------------------|---------------|---------------|------------------|-------------------------|--------------------------|--------------------|-------------------|
| Deletion            | Manual        | Not Available | Manual, Scripted | Manual, Scripted        | Built-In                 | Built-In           | Built-In          |
| Apex Changes        | Not Available | Available     | Available        | Available               | Available                | Available          | Available         |
| Reversion           | Manual        | Not Available | Manual           | Manual                  | Built-In                 | Built-In           | Built-In          |
| Scratch Org Support | Supported     | Not Available | Supported        | Supported               | Supported                | Supported²         | Supported²        |
| Sandbox Support     | Supported     | Required      | Supported        | Supported               | Supported                | Supported          | Supported         |
| Dependencies        | Any           | Any           | Any              | Any                     | Any                      | Packaged³          | Packaged³         |
| Repeatability       | Low           | Low           | Medium           | High                    | High                     | High               | High              |
| Delay               | None          | Medium        | None             | None                    | Medium                   | Long⁴              | Long⁴             |

¹ Org-dependent packages are Beta in the Winter '21 release.
² Even if you don't use a scratch org to create your package, it must be able to deploy to a scratch org or package creation will fail.
³ Every dependency must either be in the package or in another package.
⁴ You can skip validation on package version creation to reduce package build time via a flag on the command.

These options exist on a spectrum. As you move from left to right on the spectrum, you:

- give up some simplicity but gain more scalability
- move into areas of the platform that are changing faster
- increase the level of technical skill required
- spend more time on the process than the product
- encounter barriers to further motion rightward on the spectrum

For the purposes of this document, scalability means support for:

- larger deployments of more changes
- more teams or larger teams working on more projects simultaneously
- more testing and automation enabling more frequent deployments
- more consistent and reliable deployments

Manual Changes in Production

Deletion: Manual | Apex Changes: Not Available | Reversion: Manual | Scratch Org Support: -- | Sandbox: -- | Dependencies: Any | Repeatability: Low | Delay: None

The simplest option for deploying to production is to make a manual change, directly in production. Yes, some of you just had a panic attack. But hear us out.

Limitations of Manual Changes

You cannot modify Apex code in production. You must write Apex somewhere else and migrate it to production. If your project involves only Apex, then move along, this is not the option you're looking for.

Why You Might Choose Manual Changes

- For some metadata types, it's the only supported option.
- For some metadata types, it's a reasonable, low-risk enabler of business agility. For example, reports are a metadata type. You might even be deploying folders of reports that are used on a Lightning page or really important reports used on your operational dashboard. But you may also encourage users to modify reports in their own private folders without going through any sort of deployment process. Many orgs treat ListViews similarly.
- For small changes, it's very fast. You may need to make an emergency change to turn off a validation rule, even if you deployed it via a more complex technique to the right on the spectrum. You need to give someone permission to do something temporarily, or figure out what's wrong with the permissions. In this case, an admin can create a new permission set, assign it to a user, get the work handled, and then remove and delete that permission.
- It's convenient if you're doing a first-time set up of Salesforce before go-live, when the exposure is lower and you have a serious testing plan in place before users get access to the system.
- You enjoy similar high-risk activities like free-climbing and fielding unexpected calls from executives.
Why You Wouldn't Choose Manual Changes

- Many types of changes can be extremely dangerous. That simple validation rule where you accidentally used > instead of < can bring your company to a halt. Even worse, your automation might accidentally email customers and create a PR mess.
- Production changes can be difficult to test. The data in your org is your real data, so an automated testing process can do a lot of damage that is hard to reverse. Even if you have a backup, you're going to have a lot of work to clean it up. And even if everything goes well, you're still finding and cleaning up records like "TestOpportunity15".
- It's difficult to scale for a team. People are working on top of each other. If you've got someone working on Service and someone else on Sales, but both of them are changing the Contact object, there's potential for some conflicts.
- It's harder to reverse or abandon changes. Imagine you've updated your application, and let your users try it out. They give you some feedback and you realize it's completely the wrong approach. Before you can create your new version, you have to get rid of the old version, backing out each change you made in the correct order.
- It's difficult to deploy large, complex changes while people are on the system. If your change includes adding three new fields, plus layout changes, plus validation rules, plus changes to several existing Flows based on the new fields, plus permissions, then there will be some point in time where your change is only partially deployed. Users might be entering data without validation, or without the proper processes running. You either have to lock them out or work unusual hours. You're now working under a time crunch and trying not to make a mistake.

Mitigation for Risks You May Face with Manual Changes

If you have changes that can only be done manually, then you can verify them by first completing the steps to make the changes in a sandbox, testing the results, and then repeating the same series of steps in production.

For example, Prediction Builder currently (Winter '21) doesn't support any sort of deployments other than manual. Prediction Builder lets you configure a model and look at its accuracy. You can tweak the model until you're happy with it. But eventually, you'll have Einstein start writing predictions to fields, and you may start using those fields for process automation. Before that starts, you might want to test how it works in a sandbox. Once you've done that, you do have to create and enable the model again in production (for example, with production on one monitor, sandbox on the other, carefully making them match). This approach is not foolproof, but at least when you move to production, you'll sleep more soundly.

Change Sets

Deletion: Manual | Apex Changes: Supported | Reversion: Not Available | Scratch Org Support: Supported | Sandbox: Supported | Dependencies: Any | Repeatability: Low | Delay: Medium

Change sets are a point-and-click way to move changes. They let you choose which items to move, with a checkbox for each field, object, layout, class, and so on. The org admins can decide which environments can send and receive change sets from other environments via deployment connections.
For example, consider the following scenario:

- Three orgs exist: Dev (a developer sandbox), QA (a full sandbox), and Production
- Dev sends changes to QA and receives changes from Production
- QA can send changes to Production
- Production can send changes to Dev and can only receive changes from QA

Once you make your selections with all those checkboxes, you "upload" them to an allowed destination org. Some time later, the change set appears in Inbound Change Sets in the destination org and you can deploy or validate the change set.

Limitations of Change Sets

- Change sets only work with sandboxes, and all the sandboxes have to be created from the production org.
- You can click the View/Add Dependencies button to find dependencies, but it may not catch everything. (For example, you may have an Apex class that doesn't have a formal dependency on its test class, but you generally do have to have test coverage, so a change set may fail to deploy if you don't include the tests.)
- You're limited to 10,000 files (items represented by a checkbox).
- Sometimes, a sandbox is on a different release than the destination org. When that happens, some metadata types can't be deployed because they've changed between releases, and you have to either spin up a new sandbox on the correct version or wait until the orgs are on the same version.
- Change sets can't remove any metadata or configuration.

Why You Might Choose Change Sets

- Change sets have existed for a long time, so they're well known by almost everyone.
- They're admin-friendly (no local tools, code, or terminal commands required).
- Unlike changes performed manually, all the changes hit production simultaneously.
- You can validate that a deployment would succeed while everyone is using the system on Friday, but actually deploy the change during off-hours.
- If your change fails to deploy or validate, you can clone the change set and add what you left out.
- If you've made one of those "emergency" changes in production, change sets can also send the change to sandboxes.
- Deployment connections provide good control over how changes move through environments.
- Permissions control who can create and deploy change sets.

Why You Wouldn't Choose Change Sets

- As you're building, you have to track your changes. People who have been doing this for a long time have elaborate spreadsheet templates so that when it's time to upload a change set, they know everything that needs to go in it.
- If you move a change set from Dev to QA and want to move it from QA to Production, you need to create another one, with all those checkboxes again.
- Not every metadata type is supported. See API Support for details.
- The delay between when you upload a change set and when it arrives and becomes deployable in production is indeterminate. Sometimes it's a minute, sometimes it's an hour. And there's no real way to know when it's going to arrive. Once it arrives, sometimes deploying will result in a message that it's not ready yet.

Moving from Change Sets to Metadata Deployments

If your team has been using change sets but is considering moving to source-based deployments, there is an option to retrieve change sets via the Salesforce CLI. This enables you to create a change set from your sandbox. Then a CLI user or a script can retrieve the change set by name using the CLI retrieve command and extract the source (give the command the change set name as the package name).
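As a rough sketch of what that looks like (the change set name, org alias, and target directory here are hypothetical), the retrieve might be:

```sh
# Retrieve the contents of a change set named "Service Updates" from a sandbox.
# The change set name is passed to the Metadata API as a package name.
sfdx force:mdapi:retrieve --packagenames "Service Updates" \
  --targetusername DevSandbox --retrievetargetdir ./retrieved --wait 10

# Unzip the result; the extracted folder (with its own package.xml) can be
# committed to source control or converted to source format with force:mdapi:convert.
unzip ./retrieved/unpackaged.zip -d ./retrieved
```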
In this case, you're not really using change sets for deployment, but are using them to extract source to enable a deployment technique further to the right.

Metadata API: Direct Deployments

Deletion: Manual, Scriptable | Apex Changes: Supported | Reversion: Manual | Scratch Org Support: Supported | Sandbox: Supported | Dependencies: Any | Repeatability: Medium | Delay: None

The Salesforce Metadata API lets you migrate metadata. Few people use it directly; if your process has been around for a long time, you may have used Ant scripts. More recently, the Salesforce CLI, the Salesforce Extensions for Visual Studio Code, and many other tools make use of the Metadata API to retrieve and deploy metadata. Scenarios here include a developer retrieving metadata from a sandbox, making changes on their machine, and deploying it back to production. You can also perform deployments that look like, "take the metadata described in this package.xml (the traditional Salesforce manifest file) and move it from QA to Production." In metadata-based deployments, you're adding or modifying only what's specified. The deployment doesn't delete or change any files that are omitted.

Limitations of Direct Metadata Deploys

- Similar to change sets, only 10,000 files are allowed per transaction.
- The total unzipped size of the files cannot exceed 400MB.
- The Metadata API doesn't support all metadata types (see API Support).

Why You Might Choose Direct Metadata Deploys

- The deployment is repeatable. Each deployment of the same set of files should result in the same state. It's also easy to repeat the same deployment to multiple targets (e.g. QA, then production).
- You can specify deletions. The Metadata API includes support for destructive changes: you can specify metadata that should not exist in the target and remove components as the new metadata is deployed.
- You can deploy settings. Imagine you need to deploy something that you haven't activated in production (for example, a chatbot or path). Within the Metadata API there are also Settings types that represent actions you would normally do manually in the Setup UI.
- Scriptability. Later, we'll discuss items that need to deploy with your metadata. You can create repeatable deployment scripts to make sure these items are in the proper state before and/or after the deployment.

Why You Wouldn't Choose Direct Metadata Deploys

- It's hard to trace. Metadata deployed from someone's local filesystem looks just like metadata modified in production. And it's difficult to trace back what was part of the deployment if you need to reverse something.
- It's hard to control. If multiple developers can deploy changes to production, they may deploy over each other's versions. Also, your policy may say they should test before deploying, but they may not.

Mitigations for Risks You Might Face with Direct Metadata Deploys

Companies tend to have a single person who has access to make these deployments (e.g. a "Release Manager"). While this helps ensure control, it can become a bottleneck.

Metadata API: Deployment with Source Control and Continuous Integration

Deletion: Manual, Scriptable | Apex Changes: Supported | Reversion: Manual | Scratch Org Support: Supported | Sandbox: Supported | Dependencies: Any | Repeatability: Medium | Delay: None

Most developers are comfortable working with a source control system (also referred to as a version control system or VCS), like git. Such systems have all sorts of useful features (like branches, pull requests, diffs, file history, and more).
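To make the direct Metadata API option above concrete, here is a minimal sketch of a manifest-driven deployment; the org alias, file paths, and test level are hypothetical:

```sh
# Validate first (check-only), running local tests, without changing the target org.
sfdx force:source:deploy --manifest manifest/package.xml \
  --targetusername Production --checkonly --testlevel RunLocalTests --wait 30

# If validation passes, run the same command without --checkonly to deploy.
sfdx force:source:deploy --manifest manifest/package.xml \
  --targetusername Production --testlevel RunLocalTests --wait 30

# Deletions are expressed separately: a Metadata API deploy directory can include
# a destructiveChanges.xml alongside package.xml to remove the listed components.
```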
Source control is a solved problem, with high-quality tooling outside of Salesforce, and we recommend using it.

Back to deployments: the idea here is to use source repository branches as the source for a deployment. Developers are prohibited from deploying source directly beyond their dev environment; only by merging into a branch does their code deploy anywhere else. Typically, these branch merges are deployed by a system (e.g. CI) rather than a person. The steps are:

- Get the source from the repo
- Authenticate to the org
- Deploy

Because the Metadata API and CLI are available, you're free to use the tools you prefer. For example, GitHub acts as a source control system but also handles things like code review, and GitHub Actions support automation based on events (like merging a branch).

When you begin using this deployment option, you may find yourself facing some challenging questions. What do my repos look like? Am I always deploying the whole thing, or is there some subset of metadata? Do I have one repo with one directory, multiple package directories in a single repo, or multiple repos? Multiple projects within a larger monorepo? How do I express and control dependencies between them? This challenge is not exclusive to this method; you'll be addressing the same questions when you begin moving to packages. There's some preliminary discussion about modularization on the Developer blog, but it's a few years old. In particular, it doesn't account for some newer options like Org-Dependent Packages that help with this problem, and the CLI is now much better at working with multiple source directories within a single project.

Limitations of Metadata Deploys with Source Control

See Limitations of Direct Metadata Deploys.

Why You Might Choose Metadata Deploys with Source Control

- Source control is a solved problem. Many companies have built high-quality tooling for source control and CI.
- Developers know it. This is the default operating model for developers outside of Salesforce.
- Source control supports automation. Deploy from GitHub Actions or have your CI system subscribe to webhooks to take those actions. Besides just deployments, this allows automated testing, code analysis, and linter/style checks on pull requests.
- It scales better for larger teams. You can use multiple repos to break up the codebase. Developers are merging in source control and not into the org; they know when they are conflicting with changes they don't have. Imagine you've got several internal teams plus various contractors and multiple SIs working on Salesforce-related projects. The reduced finger-pointing alone is a good reason to keep them from directly changing the org.
- Branches help multiple projects happen simultaneously. Even on a small team, you may have a simultaneous mix of small features, emergencies, release checks, bug fixes, experiments, and large projects. Keeping them organized helps your team work faster.
- Feature branches allow for partial deployments. Imagine two features merged into the final QA environment. Users are good with the first but don't like the second. You can merge the first feature into main and deploy it to production while the second gets some more attention.

Why You Wouldn't Choose Metadata Deploys with Source Control

- Your team's customizations largely consist of metadata types that don't deploy well.
- Source control is unknown territory for your team.
  Perhaps you have mostly Salesforce Admins or "Adminelopers" (who write some code but don't come from a traditional developer background), or you have seasoned Salesforce developers who've worked exclusively on the platform. This type of process may be difficult for them to adapt to quickly. See People and Skills for mitigations.
- Your company's releases are usually large and infrequent.
- You're not able to invest the time setting up this kind of tooling. Sometimes people see the value here but can't take a team off of "primary tasks" during implementation of the new process.

Interlude on Packages

The next three deployment techniques dive into Second-Generation Packages.

Background on Packaging

Salesforce has used packages since the launch of AppExchange, and most admins are familiar with installing managed packages into their orgs. First-generation managed packages were designed primarily for this ISV use case. Managed packages are very restrictive; once you release a package, there are many changes that are no longer allowed, because the developer can't know what a customer org may have built dependencies on. There were also some customers using unmanaged packages, which were impossible to upgrade.

We recommend not using first-generation managed packages, also called Classic Packaging, at all. If you're not an ISV currently using them, you have no reason to start now. Second-generation packages are where it's at: managed packages primarily for ISVs, and unlocked packages primarily for customers. Second-generation packages are created from source, and not from the contents of an org.

The naming of the various kinds of packages can be confusing and hasn't been consistent over time. In this guide, unless we're speaking of first-generation managed packages, we'll drop the "generation" label and refer to packages as unlocked or managed. Note: This also follows the naming used in the Metadata Coverage Report.

Package Basics

The idea of a package is to have a subset of metadata that is versioned. You can upgrade to a newer version of a package, or in some scenarios revert to the previous version. You can cleanly uninstall a package without having to know everything that was in it. You can remove some metadata from a package, and when the package is installed, the metadata is removed from the org. Packages can be built on top of other packages and have explicitly declared dependencies. Packages make it easy to share code across multiple orgs.

Controls

Packages also offer some deployment controls. When you create a package version, the version begins in Beta status. You can install the package in scratch orgs and sandboxes, but not in a production org. To deploy in production, you must first promote a package to Released status. By controlling that phase, packages enable easy distribution for testing but a formal, controlled release.

How code becomes a package

1. Specify a folder of source code that you want to become the package.
2. Create a package using the CLI. This package is owned by a Dev Hub.
3. Create a version of that package. This is a snapshot of source code at a point in time.

(A CLI sketch of these steps follows below.) The packaging process and the strictness of its requirements depend on which type of package you're using. The next three sections describe package types.
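A minimal sketch of those three steps with the Salesforce CLI; the package name, path, and Dev Hub alias are hypothetical, and the exact flags you use will depend on your project:

```sh
# 1. The folder of source that should become the package (e.g. force-app)
#    is listed as a package directory in the project's sfdx-project.json.

# 2. Create the package once; it is owned by your Dev Hub.
sfdx force:package:create --name "Service Core" --packagetype Unlocked \
  --path force-app --targetdevhubusername DevHub
# (Adding --orgdependent here creates the org-dependent variant covered next.)

# 3. Create a package version: a snapshot of the source at a point in time.
sfdx force:package:version:create --package "Service Core" \
  --installationkeybypass --codecoverage --wait 30 --targetdevhubusername DevHub
```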
Org-Dependent Packages

Deletion: Supported | Apex Changes: Supported | Reversion: Supported | Scratch Org Support: Supported | Sandbox: Supported | Dependencies: Any | Repeatability: High | Delay: Medium

Org-dependent packages are technically unlocked packages created with a special flag (--orgdependent); the packaging process then skips dependency validation during version creation. They allow dependencies outside of the package that aren't in another package; in other words, they depend on something in your org. For example, let's say you're building a package that includes a Flow, and that Flow refers to a custom notification type (NotificationTypeConfig). That metadata type is supported in the Metadata API as of Winter '21, but it can't be packaged. When you review the Metadata Coverage Report, keep in mind that org-dependent packages are unlocked packages; the supported types will be the same. An org-dependent package lets you package your Flow and optimistically assume that the Custom Notification Type will be present in the destination org. It'll throw an error on installation if that Custom Notification Type is not present.

You'll want to use a sandbox that supports source tracking so that:

- it contains all the metadata you might depend on that's outside the package
- your changes are tracked, enabling you to pull them into source control

Limitations of Org-Dependent Packages

- Org-dependent packages are currently Beta, with plans to be GA in Spring '21.
- Other packages cannot depend on an org-dependent package.
- Org-dependent packages can't depend on other packages (to be more specific, Salesforce won't check that dependency).

Why You Might Choose Org-Dependent Packages

- You want to create a package that depends on something without packaging support.
- You have some metadata in the org that isn't ready to be packaged. For example, it has some tangled circular dependency that makes that process difficult.
- You want some of the benefits of packaging but don't control the metadata you depend on (for example, it's owned by another team at your company, or an AppExchange app).
- You want some of the benefits of packaging but you can't modularize your existing metadata.
- You can deploy over existing unpackaged metadata. For example, let's say your current org has Whatever__c deployed to it. If you deploy a package that includes Whatever__c, then that metadata will be recognized within the org as being part of the package from then on, regardless of how it was originally deployed.
- You are unable to create a scratch org that supports the contents of your package, even if it has no external dependencies. Because org-dependent packages skip the step that validates packages in a scratch org, you can use them to work around this limitation.

Why You Wouldn't Choose Org-Dependent Packages

- If your package can include/declare all of its dependencies, prefer an unlocked package. You'll avoid surprise deploy-time errors.
- You want to be able to deploy the package to a scratch org. For example, you have automatic CI testing using scratch orgs. The org-dependent package has to go to some type of sandbox where the dependency is met, which can take much longer to create and cannot be destroyed immediately.
- All packages take significant time to create, release, and install.

Unlocked Packages

Deletion: Supported | Apex Changes: Supported | Reversion: Supported | Scratch Org Support: Supported | Sandbox: Supported | Dependencies: Packaged | Repeatability: High | Delay: Long

If you're a customer using packages, unlocked packages should be your primary deployment option.
Unlike org-dependent packages, unlocked packages have all dependencies either inside the package or inside another package explicitly declared in the package's dependencies.

Unlocked means simply, "allows changes not via the packaging process." For example, imagine you've packaged a formula field that's causing problems. An unlocked package allows you to modify the formula in production! You can put out the fire immediately. But next time you deploy the package, whatever is in the source will deploy over any changes made in production; that is, whatever is in the package wins. The only way to make a permanent change is to remember to update the formula in the package so that subsequent deployments include the fix.

Limitations of Unlocked Packages

- You cannot have unpackaged external dependencies. Everything down the dependency graph must be packageable, packaged, and in the dependencies manifest.
- You must be able to configure a scratch org to support everything your package requires. Let's say your package depends on Person Accounts. That's OK, because that's a feature that can be configured in a scratch org. Underneath the covers, the packaging process deploys your source into a scratch org with a given configuration to build your package.
- 75% minimum Apex test coverage. If you worked with or read about unlocked packages prior to Winter '21, be aware that Salesforce has added code coverage requirements. Tests will run as part of the packaging process, so you can't rely on the target environment.

Why You Would Choose Unlocked Packages

- It offers a known, good state of your metadata. You know the exact state of metadata at any point in time. The org has a record of package version deployments, and packages are linked to source control. The package can be deployed to a scratch org for testing. You can revert to a previous version.
- Similar to org-dependent unlocked packages, you can deploy over unpackaged metadata.

Why You Wouldn't Choose Unlocked Packages

- Production changes are overwritten by new package deployments. If you find yourself frequently making production fixes within a package, and not getting those back into the package source, you'll be unhappy with your deployments re-breaking those patches. You'll want to create some process for tracking in-production changes to make sure they work their way back through your normal packaging process.
- Packages have formal ancestry requirements, so large refactorings can lead to situations where you can't upgrade. This may take developers some experience to get used to.
- All packages take significant time to create, release, and install.
- Salesforce previously announced a concept of locked packages, which were less strict about changes than managed packages but didn't allow manual changes in the org. This has been deprioritized.

Mitigations for Risks You Might Face with Unlocked Packages

For packages intended for non-production environments, you can skip the package validation step. This speeds up the packaging process so you can deploy and get test results sooner. If you're using automated tests and frequent builds, this can be useful. You'll still eventually need to validate the package and promote it before you deploy it to production.

Managed Packages

Deletion: Supported | Apex Changes: Supported | Reversion: Supported | Scratch Org Support: Supported | Sandbox: Supported | Dependencies: Packaged | Repeatability: High | Delay: Long

The workflow for this option is the same as Unlocked Packages, so that diagram is omitted here.
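To make the controls and the skip-validation mitigation above concrete, here is a hedged sketch of the version, promote, and install flow shared by unlocked and managed packages; the package alias, version number, and org alias are hypothetical:

```sh
# During development, skip validation for faster builds (these versions can't be promoted).
sfdx force:package:version:create --package "Service Core" --skipvalidation --wait 30

# Before release, create a fully validated version that runs tests and calculates coverage.
sfdx force:package:version:create --package "Service Core" --codecoverage \
  --installationkeybypass --wait 30

# Promote the validated version from Beta to Released status...
sfdx force:package:version:promote --package "Service Core@1.2.0-1"

# ...and only then install it in production (an 04t package version ID also works here).
sfdx force:package:install --package "Service Core@1.2.0-1" \
  --targetusername Production --wait 20 --publishwait 10
```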
Managed packages have more limitations than unlocked packages. They’re normally used by AppExchange partners who want to prevent customers from creating dependencies on code or components that aren’t designed to be depended on. Limitations of Managed Packages Once you expose something, it’s difficult to delete it (packaging assumes there may be dependencies you don’t know about). You’ll need a namespace associated to your Dev Hub, and anything referring to the package’s code will need to use that namespace in the reference. Why You Might Choose Managed Packages You're a partner looking to build and deliver a packaged solution on AppExchange. You’re working with multiple orgs and are creating a package to be used in them, and you have a compelling need to block changes in production that can't be accomplished through governance and permissions alone. You have a compelling need to formalize what a package exposes and better encapsulate some of the internals that cannot be met by the equivalent capability of Unlocked Packages. You have a compelling need to access to namespaces to help keep code organized and modular that can't be accomplished through governance and development standards alone, and have adequate engineering expertise to design for the added complexity this will add to things like LWC cross-namespace operations. Why You Wouldn't Choose Managed Packages You're not an AppExchange partner and have no compelling reason to use them. You’re not 100% sure of how metadata might be reused and don’t want to prevent reuse of everything. The package’s functionality changes frequently or might need to allow for major refactoring. For example, some custom Apex utilities for handling security or caching might be more suitable than dense business logic. Your team is not absolutely sure of how to design for additional namespace-related complexity for both developers and users of packages. This is especially true where dynamic code or configuration are used. All packages take significant time to create, release, and install. Barriers to Moving Rightward with Migration Techniques There are a few common constraints to moving to the right on the spectrum. People/Skills Metadata and packages require teams that have some familiarity with code and source control. Use of the Salesforce CLI is recommended, as is Visual Studio Code and the Salesforce Extensions. Even experienced Salesforce admins and developers may not be familiar with the latest tools. From an individual user’s perspective, the extra steps may feel like more work because it’s not always easy to see the eventual big-picture gains. Scratch orgs and sandboxes that support source tracking make it easy to retrieve what you’ve changed in an org. The Salesforce Extensions for VS Code make it possible to easily retrieve those changes (via shortcuts) and built-in GitHub integration means you don’t have to use the CLI, either. Even developers who adore terminals benefit from how few keystrokes this requires. The new Salesforce DevOps Center, currently in Developer Preview and planned for GA in 2021, will help admins work with source-controlled deployments. It offers a simpler way for non-developers to make changes in a developer environment, automatically track those changes (no spreadsheets for building change sets!), and commit them to source control (no IDE or CLI!). Join the Trailblazer Community Group for DevOps Center to stay informed. 
Tooling Assuming people are willing to learn all these new tools, there’s the issue of tooling and access itself. For example, some companies restrict installation of developer tools on machines. You may have to get exceptions to software restrictions, ports unblocked, a public npm registry allowed, and so on. And while we take for granted cloud-based source control (e.g., GitHub and GitLab) that may not be an option for some companies. The setup described below may involve several on-premises servers and be more complex than most of us experience. Or, your company may require a security review of the cloud services you choose. API Support Not everything can be deployed by every technique due to product gaps. The best resource for understanding these gaps is the Metadata Coverage Report. The following images and scenarios are as of Winter ’21 (api version 50) and subject to change. For example, let’s say you’ve set up and tested LiveAgent (Chat) in a sandbox. To move that to production, based on the chart above, you could: Deploy all the changes manually (hopefully you tracked them carefully!) Use change sets to deploy everything except the LiveAgentSettings, which would need to be done manually before deploying. Deploy all the changes via CLI or IDE. You could not use a package to do the deployment. Note: There are more objects related to deploying LiveAgent. We simplified this example so you don’t have to scroll all over the Metadata Coverage Report to look at all of them. If you can’t find what you’re looking for on the coverage report, for example predictions built with Prediction Builder or portability policies, then they aren’t supported in anything other than manual setup. Why doesn’t everything support all the deployment options? This is almost always a prioritization challenge. Products are built by teams and owned by a product manager who has to prioritize what their users want. Each team is responsible for adding support for deployment techniques to their product, and they balance this against new features, bug fixes, and other work. If you are using a feature that doesn’t deploy well, reach out to the product manager to make sure they know that you value deployability. Your process may dictate product adoption Eventually, you may have an amazing deployment process you love that’s running like a well-oiled machine. Then, Salesforce creates some killer new feature that your users are really excited about, but it lacks support for your preferred deployment process. At this point, you face an uncomfortable choice: Create a whole new process around deploying that one tool (adding complexity and reducing agility), or wait until the new feature has proper deployment support that lets you preserve your process. Ultimately this becomes a “greater business value” choice. This is another scenario in which it’s important that the product team understands that their deployability support is preventing your adoption of their feature. Downstream Effects Continuing the previous example, imagine you have additional functionality that depends on your LiveAgentButton (think of an Experience, formerly known as a Community, where that button is embedded). You would also not be able to use unlocked or managed packages for that community (ExperienceBundle), even though ExperienceBundle itself works with unlocked packages, because the ExperienceBundle package wouldn’t contain all the source without that button. 
You could, however, use an org-dependent package that assumes the button will exist in the target environment.

Platform Quality

As you look through the coverage report, you'll see links to known issues. Second-generation packaging is relatively new, as are the SFDX tools and features like Code Builder and DevOps Center. Additionally, Salesforce is making huge efforts to expand metadata coverage and is releasing new products and features. You are more likely to find blocking bugs and gaps in newer functionality. You may even find that within a metadata type, certain features don't behave as expected. For example, orgs with source tracking will register changes when adding a new field or renaming an object, but changes to an object's field history tracking are ignored. You can edit the object metadata by hand to change the value, and then it deploys as expected. Please report issues so that Salesforce gets them fixed and other customers are aware of them.

Deployments are More than Metadata

OK, so let's say you've chosen one or more options described in this guide for your project. You'll typically find "other stuff" that you need to deploy with your change. Here are some examples:

- You've added some new functionality and created a new permission set. Besides just deploying the app, you need to assign those permissions.
- You have things stored in data (records) that your code, Flows, or processes depend on. For example: a new Chatter group needs to be created, a custom setting needs to be populated, business hours need to be set, or CPQ rules need to be migrated.
- You're using something to the right of direct-in-production changes, but some features you rely on lack API support, so manual setup is the only option for them.

In these examples, if you're doing manual deployments, these are just additional manual steps. If you're using packages, you have the option to create Apex classes that run as a post-install script. They can verify the state of records in the org and modify them where necessary. There is no option to run a pre-install step as part of the package installation, so you'll need to make those changes manually or from a script outside the package. If you're doing deployments from metadata, you'll need to complete these steps either manually or via a script that runs before and/or after your deployment.

|             | Manual | Change Sets | Metadata API Deployments | Packages                |
|-------------|--------|-------------|--------------------------|-------------------------|
| Pre-Deploy  | Manual | Manual      | Manual, External Script  | Manual, External Script |
| Post-Deploy | Manual | Manual      | Manual, External Script  | Manual, External Script |

The advantage of using deployment scripts is testability. You can run the script on a non-production environment and verify the result, and modify the script as necessary before using it in production.

Permissions

Permissions are worth a special call-out. Historically, when people create new objects, fields, and so on, they often assign them to one or more profiles. With change sets, it's possible to deploy those profile changes. The scenario gets a little more complicated with metadata deployments. You can modify a profile and retrieve it from a source-tracking org, but it's going to be the entire, very large profile instead of just what you've added. The Metadata API tries to handle these modifications for you and may be successful. For example, if you retrieve from a change set or package.xml, the API will try to return the portion of the profile that corresponds to the other metadata in the retrieval scope.
Similarly, you can deploy a profile that contains only what's in your package, and the API will attempt to merge it with the existing profile that covers the rest of the org. You probably want finer-grained control than that.

Packages are even more interesting. On deployment, they assign (by default) permissions for everything in the package to the System Administrator profile. If you ever retrieve that profile, it's going to have references to all the installed packages. If you're using an org with source tracking to build, adding any object or field to a profile will cause the profile to be marked as "modified." When you retrieve the source, the entire profile downloads, not just the field you changed. The profile source probably refers to something in the org beyond the scope of your project, so you'll be manually cleaning those files to keep from polluting the profile source.

You likely see the point: profiles are not the right tool. Salesforce recommends using permission sets to assign permissions. They're more granular, aren't 1:1 with users, and eliminate dependencies on a profile. If you're not familiar, take a moment to read Migrating to Permission Sets for DX. You can also use permission set groups to reduce the work of managing common clusters of permission sets.

Environments

Once you've moved beyond manual changes in production, you're into the world of developer environments. This section covers the different types of environments available to you and how we recommend using them.

Sandboxes

A sandbox is an org that's created from your production org and remains connected to that org. Usually, the metadata matches the production org's metadata at the time of sandbox copy. It is possible to copy a sandbox from another sandbox. Your sandbox allocation is based on the edition of your production org. Salesforce will sell you more if you need them. Sandboxes can be deleted or refreshed (a fresh copy from production), but the frequency is limited. The minimum duration in the table below describes how long a sandbox must be active before it can be refreshed or deleted. The first four columns are sandbox types; the last is a scratch org, shown for comparison.

|                         | Developer          | Developer Pro      | Partial Copy       | Full                | Scratch Org                      |
|-------------------------|--------------------|--------------------|--------------------|---------------------|----------------------------------|
| Data Storage¹           | 200MB              | 1GB                | 5GB                | Matches Production  | 200MB                            |
| File Storage            | 200MB              | 1GB                | Matches Production | Matches Production  | 50MB                             |
| Minimum Duration (Days) | 1                  | 1                  | 5                  | 29                  | None²                            |
| Data Copy               | None               | None               | Per Template       | Per Template or All | None                             |
| Metadata Copy           | Matches Production | Matches Production | Matches Production | Matches Production  | None                             |
| Features/Licenses       | Matches Production | Matches Production | Matches Production | Matches Production  | Matches Definition File³, Shape⁴ |

¹ Salesforce has an unusual way of calculating storage based on record count.
² Scratch orgs have a maximum duration of 30 days.
³ Scratch org definition files allow for various feature enablements and license configurations.
⁴ Scratch org shape (Beta) allows scratch org feature and license enablements to match a source org.

Developer and Developer Pro Sandboxes

The main difference between Developer and Developer Pro sandboxes is their storage capacity. You should use them for creating most changes. Because they support source tracking, you can use the CLI or DevOps Center to capture your changes for you. Once you're done with changes, you can migrate those changes as described in the Deployment Techniques, from Simplest to Most Scalable section. The challenge with these sandbox types is that they contain no data when created.
As a result, you need to do one of the following:

- Manually create sample data needed to build and test your changes
- Manually load data through a data loading tool or the Salesforce CLI
- Script data loads via CLI commands
- Use a data migration tool to copy some subset of data from production

Partial Copy Sandboxes

Partial Copy sandboxes allow for more storage and let you create a sandbox template to copy a subset of production data. This dramatically simplifies the data setup and can be used for complex testing where a lot of test data is required. The main limitation of templates is that they work per object. If you have a small amount of production data, you can pick the objects that you want. But if you need Person Account records and have 3 million of them in production, you can't select which Accounts/Contacts the template should copy; it's all-or-nothing at the object level. Because each Person Account is 4KB of storage (one account at 2KB plus one contact at 2KB), 3 million of them will consume about 11GB of data storage.

Full

A full copy can be exactly what it sounds like: just copy everything. Because sandbox copy time is related to the amount of data to copy, you can limit the copy to the items you need using a template. Besides data, you can also choose to include or omit things like field history and Chatter to speed up the copy process. Full Copy sandboxes are best suited for:

- Final UAT, where users want to preview your changes using real data that feels like production
- Training users in a realistic environment before the changes are released
- Scale testing. Sure, your code or Flow worked fine with a few hundred records. But was your SOQL query not selective enough given how much data you have in production? Does your dashboard take four minutes to load?
- Debugging using an exact copy of production, after you've been unable to reproduce issues in simpler environments

Sandbox versioning

During Salesforce release windows, you will want to carefully time your sandbox refreshes to make sure sandboxes are on the version you want. The dates can change each release, as well as which instances receive the release preview before or after production, so always check the release information. You might want some developer sandboxes to remain on the same release as your production org for scenarios that involve hot-fixes or debugging. At the same time, you may be working in Developer sandboxes on a project that should go live after the next Salesforce major release. There, you'll want those orgs on preview so that you can test against the target release and use any features in that release that aren't currently available. There may be a period where you can't promote certain changes from your preview sandboxes until destination orgs receive the release. You may want to use another sandbox to prepare for releases with user training, creating release documentation, or similar activities. Sandbox planning requires keeping up with the release dates for not only production but also specific sandbox instances, and planning refreshes around release windows.

About those copy times

Sandbox copies on a large org can take several days. Large orgs will want to plan accordingly if they really need everything in their org in a Full sandbox. Salesforce has announced Quick-Create Sandboxes (both Developer and Full) that dramatically reduce the copy time: a large org might copy in less than 10 minutes instead of days.
Not only does this reduce the wait time when users manually create sandboxes, but it also allows for more automation in CI processes. This will be especially useful when changes can't be packaged or when using org-dependent packages. A Developer Preview for Quick-Create Sandboxes may occur in the Spring '21 release.

A word on sandboxes and security

Sandboxes do copy your users from the production org, including their permissions. If your sandbox owner is an administrator in production, their sandbox is ready to go; they can do what they need to. Some companies either limit developers' privileges in production or don't allow developers a production login at all. If that's the case, someone with production admin permissions will need to create the sandbox, log in, and then elevate the permissions of the sandbox user to whatever is required for their changes (usually full permissions).

For sandboxes that copy production data (Partial Copy and Full Copy types), or for organizations that are copying production data to Developer or Developer Pro sandboxes, this opens another potential security problem. Specifically, there may be production data developers should not have access to. Besides access by the developers themselves, developers may make changes to security policies or open temporary security gaps. Some examples:

- Removing SSO requirements or IP restrictions for more convenient access by local development tools and other apps
- Connecting off-platform apps they're building before those apps have been security tested
- Testing AppExchange packages before you've reviewed their security
- Making callouts to insecure external systems

This risk is mitigated by using a tool that masks production data, like Salesforce Data Mask. The production administrator can decide which data should be obfuscated or deleted as part of the sandbox creation process to prevent developer access or inadvertent exposure.

Scratch Orgs

Scratch orgs are very different from sandboxes. They are meant to be created quickly, destroyed quickly, and be more configurable. An example of configurability: let's say you want to experiment with a new feature like Salesforce CPQ. Sandboxes are created with the licenses, configuration, and metadata from your production org. You'd have to get CPQ licenses added to production, then create your sandbox (or sync licenses of an existing sandbox). But with scratch orgs, you can specify the org's features in a configuration file. You can spin up the org and do your proof-of-concept.

For developers building features, scratch orgs help with enforcing dependencies. Because scratch orgs don't start with your production metadata, you'll be able to capture in source control everything your changes require. If you forgot to include something, you won't be able to deploy it to a scratch org. Scratch orgs are the preferred option for creating non-org-dependent packages. And behind the scenes, they're where Salesforce builds your package from your source.

Some forward-looking statements

For some customers, the complexity of configuring and scripting the setup of scratch orgs has been a barrier to their use. They really are empty unless you specify the configuration, settings, metadata, and data. To make this process easier, scratch org shapes (Beta in Winter '21) provide two options: Export a configuration file that matches production. This lets users start with a production-based configuration and optionally add or remove features and settings.
Spin up new orgs based on the current shape of production without maintaining a file. This option also supports limits, features, and settings that aren’t otherwise available in configuration files and metadata API. Additionally, building scripts for scratch orgs are another challenge. First, you have to create and maintain the scripts. This can be especially challenging for users without shell-scripting experience. Second, they can take a long time to run, especially if you’re installing a lot of packages (your own or AppExchange). To accelerate this experience, there’s a pilot for scratch org snapshots that let you get an org to a known state (for example, installing all the packages and doing data setup) and then store the snapshot of it. Then, future orgs can start from that snapshot. Example Scenario: Blended Processes Remember, most companies can’t (or shouldn’t) use a single technique for all deployments. The following is a realistic scenario of a company trying to move to a packaging approach while running multiple techniques. In this scenario, you want to use packages because you have a complex enterprise environment where several teams (both internal and SI) are working on multiple projects that eventually deploy into a single org. Occasionally, these separate teams come into contention on shared objects like Account and Contact. Where you can’t use packages, you do metadata deployments from source (at least for now), and you have one project using its own process due to Salesforce limitations within specific features. The Preferred Approach Several parts of your org’s configuration are currently deployed using Unlocked Packages. This is your preferred option when possible. Any changes run through your CI process, create a new package version, and deploy. Each package contains a few permission sets that are sometimes modified, too. Process for the Preferred Approach Each package is stored in its own repo. A developer creates a Git branch and uses a scratch org to make changes. The developer pulls changes from the scratch org, commits to the repo, and creates a pull request (PR). The PR initiates various automated tests in a scratch org. Once reviewed and merged, CI automatically builds a new package version and installs to a series of orgs before deploying to production. Depending on the nature of the change, users may manually review the changes in the QA sandbox (UAT) before the package deploys to production. If users have problems with a release, the previous version of the package is deployed. With this approach, you may experience... Occasionally, someone needs to make a production change. Unlocked packages allow for this. It’s considered an “exception” so part of the exception process is creating a work item for that change to get into the next package. Almost all new projects are used this way, unless there are technical limitations preventing it. The defect rate is dramatically reduced by the automated test and dependencies caught by the packaging process itself. To support this process, you’ve created a few docs and videos that walk admins through the basics of using VS Code to connect to GitHub and to the orgs. They know how to create a branch, push/pull changes, commit, and open PRs. They’ve seen DevOps Center and think it might make their lives easier. To support both admins and developers, each project also maintains some scripts that set up an org with the required configuration, users, permissions, and basic data. Everyone knows how to run the script. 
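Such a setup script might look roughly like the following; everything here, from the org alias to the permission set and data plan names, is hypothetical:

```sh
#!/bin/bash
# Hypothetical org setup script; names, aliases, and paths are illustrative only.
set -e

# Create a scratch org from the project's definition file and make it the default.
sfdx force:org:create -f config/project-scratch-def.json -a feature-org -s -d 7

# Push the project source, assign the app's permission set, and load sample data.
sfdx force:source:push -u feature-org
sfdx force:user:permset:assign -n Service_Core_User -u feature-org
sfdx force:data:tree:import -p data/sample-data-plan.json -u feature-org

# Open the org in a browser, ready for work.
sfdx force:org:open -u feature-org
```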
You’ve opened several cases with Salesforce support around unexpected issues with packaging. There is often a lot of debate about packaging strategy. Should we split this one into two? Should these be combined? Can we break out part of one because it might be a shared dependency for a new project and an existing one? This is new territory for everyone and it’s the place where teams, who can usually work independently, tend to find conflict. You’re wondering if “Packaging Strategy” should be someone’s job to decide. A Secondary Approach Additional metadata exists in a single large GitHub repo per Metadata API: Deployment with Source Control + CI. You’d like to break part of it off into a few more packages, but haven’t had time yet. It’s not clear how it should be organized in a package because the dependencies are so tangled. You use GitHub Actions to deploy this between environments. Some parts of it probably end up being Org-Dependent Packages eventually, but you don’t like using non-GA features. Process for the Secondary Approach The source lives in a single repo. A developer creates (or refreshes) a Developer sandbox before beginning work. The developer then makes changes in the sandbox, pulls them to local source, commits to GitHub, and opens a PR to merge into the integration branch. CI deploys the entire repo to a Partial Copy sandbox named Integrate to verify and run larger tests. If new metadata is being created, the developer may also need to add new test data to the sandbox(es) manually or, preferably, by script. If everything looks good, the changes from the feature branch are merged into the QA branch, which initiates a metadata deployment to a Full Copy sandbox. Users can test larger changes there. You have options here (more on branches): Each change is merged from the feature branch into the main branch, which deploys to production. This is a lot of manual merges and a complicated branch operation, but does allow for granular changes to go when they’re ready. It may also create a lot of manual reviews in the QA sandbox. When everything in QA looks good, a production deployment happens. This is simpler and perhaps more predictable (“if we liked QA, we’ll like Production”) but does allow for a not-ready change to keep other things from deploying (since you’re all working in one big repo) and this can become a bottleneck. With this approach, you may experience... Because you’ve enjoyed your packaged projects so much, you’ve got a team testing org-dependent packages. You’re not using them for production deployments, but plan to as soon as it’s GA. You prioritize decomposing the remaining into packages based on how often the bottlenecks happen. Speeding up the different teams and reducing contention has real ROI. Several projects have tried to break off from this approach and used unlocked packages but ended up back here after a bit of wasted effort. You’ve set up a tracker for which metadata types you are using that aren’t supported in packaging and update it each release per the Metadata Coverage Report to plan future migrations to your preferred process, to help prevent those wasted efforts from happening. This has a lot more manual steps and is more error prone than your packages. You’ve created checklists and source reviews to prevent mistakes from happening. A Non-Standard Approach You have an Experience using Salesforce CMS that you work on in a Full Copy sandbox. 
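Stepping back to the secondary approach for a moment, the CI job's deploy step might look roughly like this; the secret names, key file path, org alias, and test level are hypothetical:

```sh
# Hypothetical CI deploy step: authenticate with a JWT-based connected app,
# then deploy the whole repo to the Integrate sandbox and run local tests.
sfdx force:auth:jwt:grant \
  --clientid "$SF_CONSUMER_KEY" \
  --jwtkeyfile assets/server.key \
  --username "$SF_INTEGRATE_USERNAME" \
  --instanceurl https://test.salesforce.com \
  --setalias Integrate

sfdx force:source:deploy --sourcepath force-app \
  --targetusername Integrate --testlevel RunLocalTests --wait 30
```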
You’ve had bad Experiences (pun intended) moving these via metadata because of product gaps and bugs, so you typically build LWC for the community in the full sandbox, deploy them to production via Metadata API containing just the LWC and supporting Apex classes, and let the experience administrators manually add them to the production version of the Experience rather than try to deploy the Experience. This entire process is owned by a single developer. Process for a Non-Standard Approach The production admin creates/refreshes a Full Copy sandbox. The developer connects to the sandbox using VS Code and retrieves selected LWC/Apex classes via Org Browser. The developer makes the changes locally, auto-saving to the sandbox on each change. The developer previews the changes in the Experience. Sometimes, large changes are previewed in the Full Copy sandbox (UAT) by someone else before deploying to production. The developer commits changes to source control for safety and reversibility, but the deployment of individual LWC and Apex occurs to production from local source. On the spectrum of techniques, this is a combination of: Metadata API: Direct Deployments for the developer even though they’re using source control because the deployment is not directly from the source control system. Manual Changes in Production for the community admin With this approach, you may experience... This works fine when it’s just LWC and Apex Classes that are being modified. The developer must be careful not to modify any Apex classes or LWC that do not belong to this Experience project. Occasionally, a change extends beyond the scope of this work (for example, creating some new fields on an SObject used internally and in the Experience). This tends to become a larger effort coordinating across project boundaries, where the Experience LWCs are waiting for changes in packages. There have been a few occasions where you choose to copy-paste some code from elsewhere in the org rather than create a dependency on existing code. It’s a trade-off accepted to keep this process more independent. You make notes of these in the code and eventually plan to eliminate these duplications. You don’t see this work becoming multideveloper or multiteam anytime soon. Every year or so, you check the progress of metadata types and ExperienceBundle deployments to see if it’s possible to improve this area and be more consistent with your other deployments. Third-party Tooling Source Control The SFDX command line tools are agnostic to your source, and the use of scripts should let you work with the tool of your choice. Early iterations of DevOps Center work with GitHub, so if you can use that for source control, you should. CumulusCI CumulusCI is a free, open-source CI tool used heavily by the Salesforce.org ecosystem (not-for-profits). Support for second-generation packaging is in progress, so we do not recommend using its default automation unless you’re an ISV. However, it is extensible to use any Salesforce CLI command (or any shell command in general). It includes some powerful features for automating UI tests (simulating a browser), loading test data, creating fake data, dealing with namespaces, and managing releases and release notes via GitHub. Salesforce Partners Several tooling vendors are working to solve some of the complexities of deployments. You should explore them as part of any company-wide deployment strategy initiative. 
These include IDEs, CI/CD providers, and release management partners. Closing Remarks Someday, API support may be so ubiquitous that you can select deployment mechanisms solely based on your team’s preferences. Until then, the Metadata Coverage Report is your friend. Give it a look anytime you’re planning to move rightward on the spectrum or introduce new metadata types into your deployments. Tell us what you think Help us make sure we're publishing what is most relevant to you: take our survey to provide feedback on this content and tell us what you’d like to see next.
tak4hir0 · 3 years
Link
Content last updated December 2020. Roadmap corresponds to Summer ’21 projections. Our forward-looking statement applies to roadmap projections. Guide Overview Looking to build forms on the Salesforce Platform? You’ve got multiple options, spanning the entire low-code to pro-code continuum. Representing low-code, Dynamic Forms in Lightning App Builder and Screen Flows in Flow Builder. Hanging out in the middle of the continuum is the ability to extend Screen Flows with LWCs. And representing pro-code is the LWC framework and its ever-growing library of base components. Options are great, but how do you determine which one (or which combination) is the right option? That’s where this doc comes in. Takeaway #1: For basic create/edit forms on a Lightning record page on desktop, use Dynamic Forms. Takeaway #2: Use Flow to build multi-screen forms. If you need to also meet precise UX requirements, layer in LWCs. Takeaway #3: If you need test automation, start with LWC. You can write unit tests for any LWC, regardless of where you plan to embed it. This doc focuses on form-building. You’ll see a similar assessment in Architect’s Guide to Building Record-Triggered Automation on Salesforce. A bit later, we’ll go into depth on these use cases and more, including how to choose between click-based tools and code-based tools (and when to combine them), but here are the major considerations to start with when choosing between these three options. Low Code Pro Code Dynamic Forms1 Screen Flow Screen Flow + LWC LWC Custom Object on Desktop Available Available Available Available Any Object on Desktop Roadmap Available Available Available Dynamic Visibility Available Available Available Available Multi-Screen Form Not Available Available Available Not Ideal Cross-Object Not Available Available Available Available Logic or Actions Behind the Form Not Available Available Available Available Dynamic Event Handling Roadmap Not Available Available Available Pixel-Perfect Styling Not Available Not Available Available Available Unit Testing Not Available Not Available Available Available 1Dynamic Forms is a feature of Lightning Pages. Lightning Pages are declaratively configured with the Lightning App Builder. As part of Summer ‘20, Dynamic Forms is a non-GA preview feature, and we aim to GA it in Winter ‘21. Available: works fine with basic considerations. Not Ideal: possible but consider an alternative tool Roadmap: estimated to support by Summer ’21 (targeted go-live mid-June 2021). Our forward-looking statement applies to roadmap projections. Not Available: no plans to support in the next twelve months. tl;dr Quickly, let’s elaborate on the takeaways and table above. If Lightning Pages and Dynamic Forms meet your requirements, use them. That means you need a create or edit form for exactly one object on desktop, and you need to control field visibility. Lightning App Builder may be a declarative tool, but this is where it excels. If your requirements aren’t met by those constraints, keep reading. Either Flow or LWC will be a better fit. If you need additional logic or actions behind the form, use Flow or LWC. Both tools offer ways for your solution to do more than create or edit a single record. That “more” might be more advanced logic, such as branching or iteration, and it might be more actions like integrating with external systems, sending emails, or pushing notifications to a user’s mobile app. If you’re building a multi-page form or a wizard, start with Flow. 
Flow provides a linear navigation framework for orchestrating multiple forms together. You could use LWC to construct your own framework for navigating between forms, but we recommend letting Flow do the hard work for you, so that you can focus on the forms themselves. Got sophisticated UX requirements? Need to dynamically handle more than visibility? Build that stuff in a LWC. If your requirements can be achieved with simple theming and column-based layouts, you can build your forms directly in a low-code builder. For more fine-grained control over your form’s style, you’ll need the ultimate flexibility of LWC. Keep in mind, your choice doesn’t have to be an either/or – you can combine the power of multiple options. For example, if you need both Flow’s built-in navigation system and the full styling flexibility that LWC offers, use them together. What About Page Layouts? You may notice that Page Layouts are missing from our comparison in this document. Moving forward, the recommended way to configure record detail pages is Dynamic Forms in Lightning App Builder using Lightning Pages. It’s been a long time since we enhanced page layouts, and that trend will continue. Here’s why. Dynamic Forms are more flexible – you can place fields and sections wherever you want directly in Lightning App Builder, where you can take advantage of sections, tabs, and accordions. And just like you can do with components on the Lightning page, you can control the visibility of your fields and sections without defining multiple page layouts or record types. With Accordion and Tab components, you can restrict the amount of fields that are displayed initially. Guess what that means? Faster page load times. Layout management is simpler with Lightning Pages, since you can manage everything about your pages from Lightning App Builder – whether that’s the contents of the page or which users have access to the page. It’s no longer necessary to make updates in your page layout to make a change happen in your Lightning page. Not to mention, with the power of component visibility rules, you no longer have to create multiple pages (or page layouts) to control who sees which fields when. And that also means you only need to assign users a Lightning page rather than doing that and also assigning the page layout. As of this non-GA preview, Dynamic Forms has a handful of limitations. We recommend using Dynamic Forms wherever possible, and falling back to Page Layouts only when necessary. For reference, here are the high-level gaps we’re aware of and when we plan to fill them. Timeline for Lightning Pages & Dynamic Forms Support for standard objects Spring '21 Support on the Salesforce mobile app Summer '21 Support in Community record pages Far Future Configure tab visibility Summer '21 Show, hide, and collapse section headers Spring '21 Conditional formatting of fields Summer '21 Conditionally make a field required Summer '21 Conditionally make a field read-only Summer '21 What About Performance? Any performance considerations related to Dynamic Forms, screen flows, and LWC center on what framework those technologies themselves sit on. The ones that are based in LWC (besides, of course, an LWC) are going to outperform ones that are based in Aura. The LWC framework offers better performance because core features are implemented natively in web engines instead of in JavaScript via framework abstractions. If you’re not familiar give this blog post a read. 
Back in 2019, we did a case study comparing the performance of the same functionality in Aura vs. in LWC. As a result of converting DreamHouse from Aura to LWC, not only was the development experience far more aligned with current web front-end development standards and patterns, but the performance gains are significant. Lab measurements showed gains in the range of 2.4 percent to 24.7 percent for cold cache and gains in the range of 31.83 percent to 63.32 percent for warm cache on the same two pages. Now, which framework are our form technologies using? In other words, which form technologies benefit from this superior performance? Dynamic Forms, which is integrated in the Lightning pages metadata, are built on a brand new foundation that uses the LWC stack, which will enable us to implement some long-requested features. Building everything from ground up takes time, which is why Dynamic Forms currently has some limitations – like standard object and mobile experience support. Screen flows are built on a mixed stack. Today, the flow runtime client uses an Aura container application, and most of the individual components you can display in a flow screen are Aura. A few have been converted to LWC so far: Text, Checkbox, Date, and DateTime. The Flow team is committed to converting the flow runtime client to use 100% LWC components instead of Aura, with the exception of customer-created (that’s you!) Aura components. We can’t convert those for you, but there is an excellent Trailhead module that explains how to do so: Lightning Web Components for Aura Developers. It goes without saying: if you’re thinking about building a custom component for a screen flow or any other container, always go LWC. LWC is built on ... LWC of course. This is a freebie. 🤓 Navigating the Low-Code to Pro-Code Continuum Most of this doc focuses on helping you understand what functionality and level of customization is possible with Dynamic Forms, screen flows, and LWC. LWC is the most customizable and robust option for building a form, but it has the fewest guardrails in place. It’s up to you to build a component in a way that ensures security and scalability. Dynamic Forms is the least flexible, but there are far fewer opportunities for missteps. Flow sits somewhere in the middle – more powerful than Dynamic Forms but not quite at the level of LWC. By the same token, it has fewer guardrails than Dynamic Forms but is harder to break than custom code. If multiple tools fit the bill, the decision comes down to which tool is the right one for your team. Introducing the Salesforce Architect Decision Guides on the Salesforce Architects blog introduces some aspects to consider when making that decision. We won’t go into the details of each of those aspects here, but what we will do is interpret them for the specific tools this doc is assessing. Specialized Skills: What percentage of your team is already an expert in the tools you’re comparing? How many makers are well-versed and familiar with LWC or Javascript? How about makers who are experts in Flow Builder or have expressed an interest in dipping their toes? Generally speaking, Dynamic Forms and Flow are more attainable for a broader population of makers. Dynamic Forms is the most declarative form-building tool and will always be easier to learn than Flow. That said, the Flow team is committed to getting that bar as low as possible. Delegation of Delivery: Just because part of your requirements require LWC doesn’t mean the entire solution needs to be built with LWC. 
Consider how you can build your solution modularly, such that the bits that require LWC are coded, and the bits that don’t are built in a low-code solution. Doing so maximizes the impact of a diverse team and ensures that makers are solving problems appropriate for their specialization. Maintainability & Long-Term Ownership: If you anticipate this form will be maintained in the future by pro-code makers and your current team is highly familiar with Javascript frameworks, it makes sense to choose LWC as your solution of choice. If, on the other hand, low-code makers will be responsible for maintaining the form, consider how you can make the solution as configurable as possible for that audience. Diving Deeper As promised, we’re diving deep into a variety of comparison points and functional differences between Dynamic Forms, Screen Flows, Screen Flows with embedded LWCs, and the LWC framework itself. Available: works fine with basic considerations. Not Ideal: possible but consider an alternative tool. Requires ...: possible with help, such as from Apex. Roadmap: estimated to support by Summer ’21 (targeted go-live mid-June 2021). Our forward-looking statement applies to roadmap projections. Not Available: no plans to support in the next twelve months. Low Code Pro Code Dynamic Forms1 Screen Flow Screen Flow + LWC LWC Object Scope Single Object Available (Custom Objects) Available Available Available Cross-Object Not Available Available Available Available Object-Agnostic Not Available Available Available Available Form Scope Single-Screen Form Available Available Available Available Multi-Screen Form Not Available Available Available Not Ideal Location Lightning Record Page Available Available Available Available Lightning Home or App Page Not Available Available Available Available Communities Not Available Available Available Available Embedded Snap-Ins Not Available Available Available Available Utility Bar Not Available Available Available Available Object-Specific Action Not Available Available Available Roadmap Global Action Not Available Not Available Not Available Roadmap Salesforce Mobile App Not Available Available Available Available Field Service Mobile Not Available Available (Object-specific action) Not Available Roadmap Mobile SDK Not Available Not Available Not Available Roadmap External Sites & Apps Not Available Available Available Available Custom LWC Not Available Not Available Not Available Available Controller Logic & Actions Not Available Available Available Available Operation Within One Transaction Not Available Available Available Requires Apex Operate Across Multiple Transactions Not Available Available Available Available Integration Not Available Available Available Requires Apex Modular Design & Reuse Not Available Available Available Available Validation Respect System-Level Validation Available Available Available Available Custom Field Validation Specific to this Form Available Available Available Available Custom Form-Level Validation Not Available Not Available Available Available Security Elevate User Permissions Not Available Available Available Requires Apex Restrict Who Can Access Available Available Available Available Restrict Allowed Locations Not Available Not Available Not Available Available Interaction Design Conditional Visibility Available Available Available Available Conditional Requiredness Roadmap Not Available Available Available Conditional Formatting Roadmap Not Available Available Available Conditional Read-Only State Roadmap Not 
Available Available Available Standard Event Handling (such as onblur, onfocus) Not Available Not Available Available Available Custom Event Handling Not Available Not Available Available Available Styling Org and Community Themes Available Available Available Available Pixel-Perfect Styling Not Available Not Available Available Available Layout 2 Columns Available Available Available Available 4 Columns Roadmap Roadmap Available Available Beyond 4 Columns Roadmap Not Available Available Available Tab and Accordian Containers Available Not Available Available Available Translation Labels Entered in the Builder Roadmap Available Available* Not Available Labels in the Code Not Available Not Available Available Available UI Test Automation Unit Tests Not Available Not Available Available Available End-to-End Automation Requires Code Requires Code Requires Code Available Metrics Page Views Available Available Available Available* Time Spent on Form Not Available Not Available Available Requires Apex Time Form Completion Not Available Not Available Available Requires Apex Track Success Rate Not Available Not Available Available Requires Apex 1Dynamic Forms is a feature of Lightning Pages. Lightning Pages are declaratively configured with the Lightning App Builder. As part of Summer ‘20, Dynamic Forms is a non-GA preview feature, and we aim to GA it in Winter ‘21. Object Impact What objects will the form operate against? Just one object? Multiple objects? Dynamic Forms Screen Flow Screen Flow + LWC LWC Single Object Available (Custom Objects) Available Available Available Cross-Object Not Available Available Available Available Object-Agnostic Not Available Available Available Available If your form operates against a single Salesforce object, any of the tools we’re comparing will work. Things get a little more complicated with cross-object or object-agnostic forms. By object-agnostic, we mean inputs that don’t map to any Salesforce object. Perhaps your form represents a data structure that you’ll send to an external service, like Stripe or Docusign. Or perhaps you’re using several inputs in your form to calculate a value, and then committing that value to the database. For both cross-object and object-agnostic forms, Flow is a solid option. The components available in flow screens are agnostic by nature, so you can choose what to do with that data behind the scenes. For example, use the data entered in one form to create multiple records behind the scenes, or use the data to perform other actions like generating Chatter posts, sending emails, or connecting to external services. For simple cases, using existing LWC components like lightning-record-form can be a simple way to lower code needed to provide a robust solution. However, for scenarios where multiple objects are involved, Flow provides cohesive control for all objects and removes complexities of developers having to traverse complex relationships and dependencies. Form Scope Do you need a single screen, or will the user need to navigate between multiple screens to complete a task? Dynamic Forms Screen Flow Screen Flow + LWC LWC Single-Screen Form Available Available Available Available Multi-Screen Form Not Available Available Available Not Ideal If you can get all of your user’s inputs from a single-screen form, start with Dynamic Forms. If you need more functionality than what Dynamic Forms offers, the choice between Flow and LWC depends on a few other questions. What skills does your team have? 
For a more admin-heavy organization, we recommend starting with Flow. For a more developer-heavy organization, start with LWC. Is it OK to display a navigation bar at the bottom of your form? If the flow navigation bar is undesirable UX, swing towards LWC. What needs to happen behind the form? If you need the behavior to be configurable by an admin, build a flow. Otherwise, build a LWC. If you choose Flow, you may need to build a LWC anyway to achieve the right UX. If you’re already building a LWC to style your form correctly, consider whether embedding that component in a flow is overkill. If, on the other hand, your solution looks like a wizard, where the user navigates between multiple screens, think Flow. Flows come with a built-in navigation model, so you don’t have to build and maintain that yourself. The navigation is linear in nature, with forward-moving actions (Next and Finish), backward-moving actions (Previous), and a mechanism for saving the form for later (Pause). You can also build a form with non-linear navigation if it suits your purposes. For a great example of that, check out this Salesforce Labs package: Digital Store Audit. Location Where do you want to embed the form? Dynamic Forms Screen Flow Screen Flow + LWC LWC Lightning Record Page Available (Desktop) Available Available Available Lightning Home or App Page Not Available Available Available Available Communities Not Available Available Available Available Embedded Snap-Ins Not Available Available Available Available Utility Bar Not Available Available Available Available Object-Specific Action Not Available Available Available Roadmap Global Action Not Available Not Available Not Available Roadmap Salesforce Mobile App1 Not Available Available Available Available Field Service Mobile Not Available Available (Object-Specific Action) Not Available Roadmap Mobile SDK Not Available Not Available Not Available Roadmap External Sites & Apps Not Available Available Available Available Custom LWC Not Available Not Available Not Available Available 1 Flows and LWCs are supported in the Salesforce mobile app, but the Salesforce mobile app doesn’t support all the ways you can embed flows and LWCs. For example, object-specific actions are supported in mobile, but utility bar items are not. Since they require a record context, Dynamic Forms are supported only in Lightning Record pages. However, Dynamic Forms aren’t supported in Lightning Community pages. This limitation is in place because Lightning communities don’t use the underlying framework that Dynamic Forms depends on: Lightning Pages. We are definitely evaluating this as we do often hear feedback of wanting Dynamic Forms in Communities. Roadmap! Today, Dynamic Forms are supported only for custom objects on desktop, but Salesforce is actively working on supporting them for standard objects and the Salesforce mobile app as well. You can build flows that require a record context or flows that work globally. As such, you can embed flows in a variety of locations. For the record-contextual flows, that might be a Lightning record page, a Community record page, an object-specific action, or an Actions & Recommendations deployment. For the global flow, that might be the utility bar, other Lightning or Community pages, a snap-in, or an external application. Flows aren’t currently supported as global actions, but as a workaround you can wrap the flow in an Aura component. 
LWC supports a high degree of reusability, since you create components and associate them with targets via metadata across Salesforce, Communities, and even open source projects. LWC components can also be embedded inside your own website via Lightning Out. LWCs aren’t currently supported as quick actions (object-specific or global), but – much like you can with flows – as a workaround you can wrap the LWC in an Aura component. None of the form technologies covered in this doc are officially supported in Mobile SDK templates today. If Mobile SDK is paramount to your use case, you’re better off building your form natively in your mobile application or building a Visualforce page. Roadmap! The Mobile SDK team is actively working on supporting LWCs within Visualforce pages. Controller What actions or logic do you want to be performed behind the scenes? Dynamic Forms Screen Flow Screen Flow + LWC LWC Logic & Actions Not Available Available Available Available Operate Within One Transaction Not Available Available Available Requires Apex Operate Across Multiple Transactions Not Available Available Available Available Integration Not Available Available Available Requires Apex Modular Design & Reuse Not Available Available Available Available Dynamic Forms is perfect if you need to use the values in your form to create or update a record. For anything beyond that capability, you’ll need to leverage Flow or LWC. That might be a layer of decisioning or iteration, or you might generate Chatter posts or emails using the inputs from the form. Flow offers standard actions for posting to Chatter, sending email, and interacting with Quip documents, so you don’t have to write code for the same operations. LWC offers rich interactions with single records and related objects through the use of wire adapters that interact with UI API. LWC can also interact with multiple records when using the wire for getListUi. Both Flow and LWC integrate with Apex, so you can easily close the gaps in whichever solution you choose. For example, if you require filtering records from an LWC, you can always use the wire adapter for Apex to create complex SOQL queries. If you’re swayed by the click-based story, consider Flow as a viable alternative to an Apex controller for your server-side needs. A secondary question to answer here about the actions is whether you want to immediately commit them, or defer them to a particular part of your form. This is especially relevant if you’re in a multi-page form. Flow makes it easy to combine inputs from multiple forms (flow screens) and use them much later in the wizard (flow) to perform some operations. In fact, we recommend designing flows in just that way – perform actions at the end – in case the user bounces back and forth between screens, thereby changing their answers. Do you need to control which operations occur in which transaction? Transactions and governor limits are a way of life on the Salesforce Platform. If your use case is fairly simple, it may not be as important to control what transaction a particular operation occurs in. However, there are a few use cases where you might want to combine multiple operations into a single transaction rather than performing them across multiple transactions. Some examples: To Roll Back Or Not to Roll Back: That is the question. Let’s say your form creates multiple records behind the scenes. If the third record fails to be created, should the first two records be rolled back? 
If each of your actions is independent of the others, feel free to execute them in separate transactions. If they’re dependent, however, and you want the failure of one to also roll back the others, implement them in a single transaction. Downstream Impact on Governor Limits: Especially when your form creates or updates a record, consider what the downstream implications of that operation are. What processes, workflow rules, flow triggers, Apex triggers, or other items in the save order are going to fire based on this record change? And how do those collective changes impact the governor limits being consumed in that transaction? If a particular record change will result in a lot of downstream changes that impact your limits, consider isolating that record change into its own transaction. Batch Processing: Even in a UI context, you may need to batch multiple updates together. Let’s say your multi-screen form iterates over a large group of records. Rather than committing a record update after each screen, wait until you’ve collected the updates for all of the records and then submit one request to update all the records. When you use a Dynamic Form to create or edit a record, you’re only ever performing one operation, and that operation is always the start of a net-new transaction. When building a screen flow, you have significant control over what happens in a given transaction. Screens and Local Actions act as boundaries between transactions. Here’s a high-level summary of how transactions are managed in the screen flow architecture. The end user interacts with a screen, and then clicks Next. The client posts a request to the API with the inputs. The API receives the request, and a transaction and database connection are opened. The API then calls the Flow engine to invoke the request. The Flow engine takes over and follows the appropriate path in the flow definition – until it hits a Screen or Local Action node. The engine then returns information about that node to the API. The API creates a response object that contains the details of the next screen to render, and returns that object to the client. At this point, database changes are committed (cue the save order execution), and the database connection and transaction are closed. The client uses the API response to render the next screen for the user to interact with. Rinse and repeat. In other words, screens “break” transactions. When that happens, any pending actions or DML are committed, the prior transaction is closed, and a new transaction is started. The right design – which operations you group into a given transaction – is your call. For example, one flow might collect inputs across multiple screens and then perform several actions in a single transaction, while another performs each operation in a separate transaction. For more details, check out these resources on Salesforce Help: Flows in Transactions and Flow Bulkification in Transactions. Your ability to control the transaction from an LWC comes down to the underlying services that LWC is using to perform its operations. If you’re using the lightning-record-form base component, the underlying operation (creating or updating the record) happens in a standalone transaction as soon as the form is submitted. In general, these rules apply: Each UI API call is isolated into its own transaction. If you need to perform multiple operations within a single transaction, send the inputs off to a server-side technology like an Apex controller or a flow, as in the sketch below. 
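For instance, an LWC can collect the inputs and hand them to a single imperative Apex call so that all of the DML happens in one transaction. This is only a sketch: the Apex class, method, and payload shape (OrderFormController.saveOrderBundle) are hypothetical, not something the guide prescribes.

```
// Hypothetical: one imperative Apex call carries all the inputs, so the Apex
// method can perform every insert/update in a single transaction.
import { LightningElement } from 'lwc';
import { ShowToastEvent } from 'lightning/platformShowToastEvent';
import saveOrderBundle from '@salesforce/apex/OrderFormController.saveOrderBundle';

export default class OrderForm extends LightningElement {
    draftHeader = {};
    draftLines = [];

    async handleSave() {
        try {
            // Each imperative Apex call is its own server transaction; an unhandled
            // exception in the method rolls back everything it attempted.
            await saveOrderBundle({ header: this.draftHeader, lines: this.draftLines });
            this.dispatchEvent(new ShowToastEvent({ title: 'Order saved', variant: 'success' }));
        } catch (error) {
            this.dispatchEvent(new ShowToastEvent({ title: 'Save failed', variant: 'error' }));
        }
    }
}
```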
The regular transaction rules for that technology apply. Do you need to integrate with external systems? Both Flow and LWC support API integrations with an assist from Apex. Additionally, Flow supports External Services – which enables you to declaratively generate process integration building blocks. Anyone can integrate with legacy systems or web apps, as long as those services can be described with an OpenAPI-compliant schema. For example, you can generate actions for integrating with Slack or Google Sheets, and then configure your flow to post to a Slack channel or add a row to a particular Google Sheet. The end-to-end solution involves touching zero lines of code. Regardless of whether you do so with custom Apex or an External Service, a callout is a callout. Here’s what you need to know: (1) A callout can take a long time. (2) When a callout is executed synchronously, it's performed while a database transaction is open. (3) Salesforce doesn't let you keep a database transaction open if you've got pending database operations. The main limitation to keep in mind is the danger of what we call a dirty transaction, where you perform a create, update, or delete operation and then, in the same transaction, execute a callout. This pattern isn’t allowed because of consideration #3, which of course exists because of considerations #1 and #2. You can work around this limitation by breaking the transaction. As we mentioned above, screens and local actions both reintroduce the browser context, which breaks the transaction. Use a Screen to break a transaction if it makes sense to display a new form to the user prior to making the external callout. If it doesn’t, we recommend a no-op local action, like this one available to install from UnofficialSF. Not familiar with local actions? They’re Aura components with no markup; the flow executes the invoke() method in the component’s Javascript controller. They’re the only actions available in Flow that aren’t performed server-side. In addition to being useful for breaking transactions, local actions are great for performing browser-level actions like firing toasts or force-navigating the user. Consider a flow that updates a record and then uses a callout to request a list of Slack channels: it fails at runtime, because the callout occurs in the same transaction after a pending database operation (the record update). A flow that updates a record, executes a no-op local action, and then uses the callout succeeds at runtime, because the callout is performed in a separate transaction from the record update. The impact of callouts on the transaction is less complicated with LWC. Generally speaking, you’ll perform your data operations using the Lightning Data Service, and then use an Apex controller to make the external callout. This design protects you from dirty transactions, since the LDS call is isolated in its own transaction separate from the Apex callout. What are your requirements for reusability and modularity? Dynamic Forms doesn’t support reuse. Each Dynamic Form is tied to a specific Lightning record page for a specific object, though you can assign that Lightning record page to multiple apps, profiles, and so on. Much like you can write libraries, utilities, and components that are intended to be used across multiple other components, you can apply similar design patterns when creating flows with the power of subflows. 
Save your flows in smaller, more modular buckets, and then call them from other flows by using the Subflow element. If your design calls for it, you can build a flow that both stands on its own and is useful as a subflow of another one. Flow and LWCs can both be built for reuse, such that you can embed them in a variety of locations including external sites and Lightning Out applications. Validation What are your validation requirements? Dynamic Forms Screen Flow Screen Flow + LWC LWC Respect System-Level Validation Available Available Available Available All technologies that attempt to create or update a record adhere to system-level validation – whether those are classic validation rules or custom validation built into an Apex trigger. No matter what technology you use to perform a record change, every change goes through the save order. That means in addition to validation rules, the record change is processed by any number of before- or after-save flows, before or after triggers, escalation rules, assignment rules, and more. If you haven’t already, now’s a good time to bookmark and familiarize yourself with the Order of Execution. Inputs on a flow screen are by nature unbound, so the screen itself doesn’t natively adhere to system-level validation associated with a particular object. Whatever values you use to Create or Update records, however, are processed by the save order, which means they pass through the object’s system-level validation. Dynamic Forms Screen Flow Screen Flow + LWC LWC Custom Field-Level Validation Specific to this Form Available* Available Available Available Custom Form-Level Validation Not Available Not Available Available Available Just like page layouts, Dynamic Forms let you set requiredness and read-only state at the page level. However, you can’t override system-level settings. Flow provides flexibility for customizing validation on a form’s inputs. While some checks are performed in the client (like flagging missing required fields or incompatible values), none of the client-side validation blocks the user from trying to navigate. The real stuff happens on the server. When a user clicks Next, Flow sends the inputs to the server for validation. If any inputs are returned as invalid, navigation is blocked and the appropriate error is displayed. The server validates the inputs by checking: The input’s requiredness setting, or whether the entered value is compatible with the underlying data type. Custom validation on that input: Several standard components (Checkbox, Currency, Date, Date/Time, Long Text Area, Number, Password, and Text) support custom validation on a per-screen basis. Supply a Boolean formula expression and an error message to display when the formula expression isn’t met. Custom validation on the underlying component: If you’re building a custom LWC for a flow, add your own validation code to the validate() method. On the LWC side, most base components perform their own client-side validations. For example, lightning-record-form respects system-level requiredness, but not page-level requiredness. For your custom components, you can build your own validation mechanisms. Security What are your security requirements? Should the form check the user’s access before performing certain operations? (Especially important when building for guest users) Dynamic Forms Screen Flow Screen Flow + LWC LWC Elevate User Permissions Not Available Available* Available Requires Apex Do your users have field-level security to see this field? 
Do they have permission to create records for this object? What about access to this specific record, based on your org’s sharing rules? When something runs in user context, we enforce those access checks. Users can run a case update form only if they have the ability to update cases, the appropriate field-level security, and access to the record in question. But what if you don’t want to grant users a particular permission? What if you want users to be able to perform a particular operation when they're using your form, but not through any other form or interaction? That’s where system context comes in. System context is a way to elevate the running user’s permissions for the duration of the session, so that the user doesn’t need Update access to the Case object to successfully complete your case update form. This is especially useful for unauthenticated communities. Instead of granting guest users dangerous abilities, set your form to run in system context. Of course, system context is a double-edged sword and you should use it only when necessary. When a form runs in system context, every single CRUD operation bypasses object- and field-level security & sharing – not just the specific operation you care about. Note that system context has no bearing on who Salesforce considers the actor – the name you see in the Last Modified By field. For each operation that your form performs, such as the case update, the actor is the running user even if the form runs in a different context. Dynamic Forms always run in user context, and there’s no way to override this behavior. Screen flows run in user context by default, but you can set them to run in system context. It’s your choice whether the flow should grant access to all data or if it should still enforce record-level access like sharing. If you embed a Lightning component in a flow that runs in system context, the flow doesn’t override the component’s context. If you need to bypass user access checks, we recommend using the flow to perform those operations and pass the appropriate data into or out of the Lightning component. If your flow calls Apex actions, there are some more nuances to understand. If the Apex class is set to inherited sharing, it runs in system context with sharing no matter what the flow is set to. If the class has no explicit sharing declaration, it runs in system context without sharing no matter what the flow is set to. If the class is set to with sharing or without sharing, it does so and overrides the flow's context. Best practices: Leave the flow to run in its default context unless you need to elevate the running user’s access for a specific operation. If the flow performs a variety of operations and not all of them require elevated access, use a subflow to isolate the operations that should run in system context. LWCs run in user context by default, but you can override that in an Apex controller. Operations performed through the UI API are run in user context. Operations performed through an Apex controller depend on that class. To perform those operations in system mode, set the Apex class to with sharing or without sharing. Do you want to control who can access the form? Dynamic Forms Screen Flow Screen Flow + LWC LWC Restrict Who Can Access Available Available Available Available To address this requirement, often you can look to the container you’re embedding your form in. For example, you can assign Lightning pages to be available for particular apps, record types, or profiles. 
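On the pro-code end of the spectrum, a custom LWC can gate who sees what by checking a permission directly in JavaScript, using the scoped permission modules discussed below. Here is a minimal sketch; the custom permission API name is a placeholder, not something defined by this guide.

```
// Hypothetical sketch: gate a sensitive section of a custom form on a custom
// permission. "View_Sensitive_Fields" is a placeholder API name.
import { LightningElement } from 'lwc';
import hasViewSensitiveFields from '@salesforce/customPermission/View_Sensitive_Fields';

export default class SensitiveSection extends LightningElement {
    // The import is true when the running user has the permission, undefined otherwise.
    get showSensitiveInputs() {
        return hasViewSensitiveFields === true;
    }
}
```

The component's template can then wrap the sensitive inputs in an if:true={showSensitiveInputs} block.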
If particular inputs in your form are sensitive, use visibility rules to further control what is displayed to who – this feature applies to both Dynamic Forms and screen flows. You can restrict a flow to particular profiles or permission sets, much like you can an Apex class or Visualforce page. By default, flows are unrestricted, which means that any user with the Run Flows user permission can run it. Best Practices: If you’re exposing a flow to guest users, grant the guest user profile access to only the flows they need. You can add Run Flows to the guest user profile but we consider that a dangerous practice. Be especially careful with flows that operate in system context. We highly recommend you restrict those flows to a particular set of users, since they have fewer checks and balances in place to protect your data. For LWCs, you can check the running user’s permission assignments to confirm if they have a particular standard or custom permission. Directly from Javascript, you can import Salesforce permissions from the @salesforce/userPermission and @salesforce/customPermission scoped modules. Or you can use Apex to check the same. Do you want to control where the form can be embedded? Dynamic Forms Screen Flow Screen Flow + LWC LWC Restrict Allowed Locations Not Available Not Available Not Available Available Once a screen flow is activated, it’s available in all the locations that screen flows are supported, regardless of whether you intended it to be available everywhere or not. That said, Flow Builder supports multiple types of flows that have screens. The bread-and-butter type is Screen Flow, but there are a few other specialized types that are restricted to specific locations. For example, only Field Service Mobile Flows are supported in ... you guessed it, the Field Service mobile app. The same story goes for Contact Request Flows, which are supported only in communities. Regardless of the flow type, the individual making the flow has no control over where the flow can be embedded. The flow will be available in every location supported for that flow type. LWCs, on the other hand, are available in a given location only when it’s been added as a valid target. So you can make a component available on Record pages and not available as a utility bar item. Interaction Design Should your form react dynamically to interactions or conditions? Static forms are a thing of the past. Today, it’s all about dynamically updating the form with the right properties and values for this user, this time, this place. Let’s talk about what’s possible in this vein for Salesforce’s form-building tools. Dynamic Forms Screen Flow Screen Flow + LWC LWC Conditional Visibility Available Available Available Available Conditional Requiredness Roadmap Not Available Available Available Conditional Formatting Roadmap Not Available Available Available Conditional Read-only State Roadmap Not Available Available Available Visibility can be dynamically controlled in all three tools. Both Dynamic Forms and Flow Builder address this with features called Component Visibility. With this, you can declaratively show or hide fields based on other values on the form or whether the user is on a mobile device or not. With Dynamic Forms, you’re limited to fields on the associated object and there are some limitations on the supported field types and operators. 
With Flow, you can base a visibility rule on other inputs on the screen, as well as other resources populated earlier in the flow like formulas or values from other records. Device-based Rules: It’s not obvious from the get-go, but you can use a formula to show or hide a particular field when the user is on a mobile device. Write a flow formula that checks the value of the $User.UIThemeDisplayed global variable. If the value is Theme4t, the user is on the Salesforce mobile app. Evaluating Other Resources: Manual variable and formula references are evaluated only on the server. So whatever value that resource has when the screen first renders is the value it will have until you navigate to another screen. On navigation, the flow runtime submits a request to the flow engine (the server) and gets back the latest values of the manual variables and formulas. If you expect your visibility rule to update as the user passes through a single screen (aka onblur), make sure that you’re referencing only values from the other components on the screen. If you need to dynamically control any other properties, such as whether a field is required or read-only, your best bet in the short term is LWC, where you get full control. That’s especially true if you have bespoke requirements for what to do onblur or onclick. Roadmap! For Lightning Pages, which includes Dynamic Forms, we anticipate enabling conditional requiredness, conditional formatting, and conditionally setting an input to read-only in the next 12 months. Dynamic Forms Screen Flow Screen Flow + LWC LWC Standard Event Handling (such as `onblur`, `onfocus`) Not Available Not Available Available Available Custom Event Handling Not Available Not Available Available Available Now for custom events. If some of your inputs or the entire form need to communicate with something else in the page, LWC is your only option. For more details, check out Communicate with Events and Communicate Across the DOM in the Lightning Components Dev Guide. Styling How sophisticated is your desired styling and CSS? Dynamic Forms Screen Flow Screen Flow + LWC LWC Org and Community Themes Available Available Available Available Pixel-Perfect Styling Not Available Not Available Available Available Both Dynamic Forms and flows respect declarative theming features. If you need control beyond what Salesforce Themes or Community Branding Sets support, you need the wide open spaces of LWC. Reminder: You can embed Lightning components in flows. So if you need pixel-perfect control over the look-and-feel of your form but want to use the other benefits of flows, like the navigation model, you can have the best of both worlds. Layout What are the layout requirements for your form? Dynamic Forms Screen Flow Screen Flow + LWC LWC 2 Columns Available Available Available Available 4 Columns Roadmap Roadmap Available Available Beyond 4 Columns Roadmap Not Available Available Available Tab and Accordion Containers Available Not Available Available Available Dynamic Forms supports two-column layouts. Dynamic Forms can be broken up into individual sections with fields. These sections can be placed in components such as tabs and accordions to create easy to use and organized layouts. Flows can be rendered using a two-column layout. This feature is supported only when you add a flow to a Lightning or Community page, or when you use a direct flow URL, such as for a custom button. Due to our near-term roadmap, we recommend not using this feature. Roadmap! 
Salesforce is actively working on multi-column screens for Flow Builder. When this feature ships, you’ll be able to declaratively configure screens with up to 4 columns. With LWC, you can use lightning-record-[edit|view]-form and the supporting lightning-[input|output]-field to control layout. The only layout restrictions are those from HTML/CSS. lightning-record-form respects the section configuration in the associated page layout – if a section is two-column in the page layout, it’s two-column in this component. Translation Does your form need to be localized to other languages? Dynamic Forms Screen Flow Screen Flow + LWC LWC Labels Entered in the Builder Roadmap Available Available* Not Available Labels in the Code Not Available Not Available Available Available If you’ve localized your custom fields, those translated labels are respected on Dynamic Forms. However, localization isn’t supported for labels that you add to components in the same Lightning page. For example, the label for a tab in the Tabs component. Roadmap! Salesforce is actively working on filling this gap. Soon, you’ll be able to reference Custom Labels for any string or rich text property in Lightning App Builder, such as to localize a custom Tab or Accordion label or the title you added to a Report component. With the power of Translation Workbench, Flow supports translation of user-facing labels for some, but not all, of the available screen components. For the following screen components, you can localize the label, help text, and error message: Text, Long Text Area, Number, Currency, Checkbox, Radio Buttons, Picklist, Multi-Select Picklist, Checkbox Group, Password, Date, and Date/Time. The other components don’t yet support translation, because they’re Lightning components under the hood and we don’t have a way of identifying which Lightning component attribute should map to label vs. help text vs. error message. The same issue applies for our out-of-the-box actions, like Send Email or Post to Chatter. However, there is a workaround! If you define the translated labels with a Custom Label, you can reference that custom label in the action or component when you configure it in Flow Builder. Create a flow formula that references the custom label, and reference that formula in the appropriate places in your flow. Now for LWC. Certain base components, such as lightning-record-form, automatically inherit translations of the associated object’s fields, help text, and validation messages if they’ve been configured in Translation Workbench. If you need to introduce novel translatable labels in your code, Custom Labels are still the unsung hero. Declare the custom label you need, and then import it into your component from the @salesforce/label scoped module. UI Test Automation Do you need automated testing? Dynamic Forms Screen Flow Screen Flow + LWC LWC Unit Tests Not Available Not Available Requires Code Available End-to-End Automation Requires Code Requires Code Requires Code Available Consider your requirements for UI test automation. Unit tests enable more granular automation and validation that works with industry-standard CI/CD systems and tools, which can test the component’s business logic, its Javascript controller, and its outputs. Going exclusively with low-code, you will not be able to author tests yourself, but Salesforce rigorously tests our end-to-end offerings. If your component’s methods are complex enough that you want them to be tested individually, put the methods into dedicated JS files. 
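To make that concrete, here is a minimal Jest sketch for such a helper module, run with the standard sfdx-lwc-jest setup. The sort(records, fieldName) signature shown here is an assumption for illustration only.

```
// Hypothetical test for a pure helper exported from a shared module (c/utils).
// The exact sort(records, fieldName) signature is assumed for illustration.
import { sort } from 'c/utils';

describe('c/utils sort', () => {
    it('orders records by the given field', () => {
        const input = [{ Name: 'Beta' }, { Name: 'Alpha' }];
        expect(sort(input, 'Name')).toEqual([{ Name: 'Alpha' }, { Name: 'Beta' }]);
    });
});
```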
Putting the methods into their own module means you can import them into an LWC and into a Jest test with something like import { sort } from 'c/utils';, as the sketch above does. With end-to-end (Selenium) automation, you can simulate how the user interacts with your form. However, these tests can't verify the outputs of each method being performed. You can write these tests for any standard or custom UI – Lightning pages and screen flows inclusive. This recent blog post UI Test Automation on Salesforce compares the various options you have for building end-to-end automation on Salesforce. Included are considerations for when to use a no-code solution from an ISV, build your own custom test automation solution, or use an open source test framework like Selenium WebDriver or WebdriverIO. These solutions are valid for any UI interaction in Salesforce, whether that’s a Dynamic Form in a Lightning page, a screen flow in a utility bar, or an LWC in a flow in a quick action. Metrics Do you need to track usage of your form? Dynamic Forms Screen Flow Screen Flow + LWC LWC Page Views Available Available Available Available* Time Spent on Form Not Available Available Available Available Track Form Completion Not Available Available Available Available Track Success Rate Not Available Available Available Available If you need to track overall usage and adoption of your form, start with the low-code tools. Both Dynamic Forms and Screen Flows are trackable using out-of-the-box custom report types, though you’ll get more granularity from the Screen Flow tracking reports. If you need to track usage of an LWC, out-of-the-box availability depends on where you’re using that LWC. If it’s on a Lightning page, whatever is available for tracking Lightning page usage applies to your LWC. The same story goes for LWCs that are embedded in flows. Dynamic Forms themselves aren’t trackable out-of-the-box, though you can track the usage of the parent Lightning page through Lightning usage objects. To track the standard Lightning pages, use the Users with Lightning Usage by Page Metrics custom report type. For the same on custom Lightning pages, use the Users with Lightning Usage by FlexiPage Metrics custom report type. For tracking adoption of your specific form (not just the page it lives in), Flow’s got you covered. Use the “Sample Flow Report: Screen Flows” to answer questions like: What’s the completion rate for this form? Is it being well-adopted? How long does it take users to complete this form? Which screen do users spend the most time on? How often do users navigate backwards? How often do errors occur? If the standard report doesn’t meet your needs, clone it to make your own changes or build your own from scratch by using the Screen Flows report type. To track the same for an LWC that isn’t embedded in a screen flow or Lightning page, there’s no out-of-the-box option. You can build a DIY solution by using Apex. Closing Remarks Hello, and welcome to the end of this guide! 🏁 Kudos for making it through the equivalent of 9 double-sided pages. Have a good day and thanks for the read. Hope you learned something. Tell us what you think Help us make sure we're publishing what is most relevant to you: take our survey to provide feedback on this content and tell us what you’d like to see next.