Estimate effort of a Salesforce-to-Salesforce project
What does the end user see?
Sender: a 'Forward to Connections' button on list views, plus an 'External Sharing' related list on the record
Receiver: a yellow nag banner on the record detail page
How to enable S2S feature?
- Log into a sandbox first (S2S makes permanent schema changes)
- Create one new Account and one new Contact to represent the "remote" org's admin
- Switch to Classic > Setup > Customize > Salesforce to Salesforce Settings
- Enable Salesforce to Salesforce, then "invite" the remote admin's contact
- Remain in Classic and look for the new tab under All Tabs > Connections (not in Setup!)
- After the connection is set up, a 'Forward to Connections' button appears on object list views
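Once enabled, the connection is also queryable from Apex, which makes for a quick sanity check after setup. A minimal sketch, run from an anonymous Apex window (PartnerNetworkConnection is the standard object behind the Connections tab):

// List every S2S connection and its status
for (PartnerNetworkConnection c : [
        SELECT ConnectionName, ConnectionStatus
        FROM PartnerNetworkConnection]) {
    System.debug(c.ConnectionName + ' => ' + c.ConnectionStatus);
}

A healthy connection shows ConnectionStatus = 'Accepted'.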
How much does it cost?
Salesforce to Salesforce is included in Enterprise Edition at no cost.
Is S2S a supported feature and best of breed?
Propellerheads and purists will groan out loud when they hear "Salesforce to Salesforce", but within its well-defined bounds (pushing records across orgs) the underlying implementation of S2S is durable.
Pros:
- Uses a pub/sub model, easy to grok and explain to the customer
- All background S2S operations are event-driven and asynchronous
- Solution handover is assisted because you can point to Salesforce docs
- S2S runs under its own integration user which offers some auditability
- Automatically logs local/remote schema changes and record sync failures
- For every record that doesn't sync, the admin receives an email with details
Cons:
- Not quite real time, 1 minute typical
- Often inappropriate for heavily customized orgs
- Apparent simplicity invites unrealistic customer expectations and hides risk
- There is no metadata. No source control. No change sets. 100% mouse forever.
What makes S2S easy?
- Fast time to value, stand up working customer demo very quickly
- Powerful graph capability, eg sync an account AND its contacts/tasks in one go (see the Apex sketch after this list)
- If you spend 1 day playing with S2S you will find its constraints well defined
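To illustrate the graph capability, here is a minimal anonymous Apex sketch of forwarding an account and its contacts programmatically via the PartnerNetworkRecordConnection object (mentioned again further down this page). It assumes an 'Accepted' connection exists and that Account and Contact are published/subscribed on it:

// Pick an accepted connection (assumes exactly one is relevant)
PartnerNetworkConnection conn = [
    SELECT Id FROM PartnerNetworkConnection
    WHERE ConnectionStatus = 'Accepted' LIMIT 1];

Account acct = [SELECT Id FROM Account LIMIT 1]; // any record to share

PartnerNetworkRecordConnection share = new PartnerNetworkRecordConnection(
    ConnectionId    = conn.Id,
    LocalRecordId   = acct.Id,
    RelatedRecords  = 'Contact',  // child contacts ride along with the parent
    SendOpenTasks   = true,       // tasks are controlled by these flags
    SendClosedTasks = false,
    SendEmails      = false);
insert share;

Inserting a PartnerNetworkRecordConnection row is what the 'Forward to Connections' button does under the hood.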
What makes S2S hard?
- Bidirectional sync is fussy, each record must originate in one system
- Anything off the happy path gets messy, eg validation rules, picklist values, etc
- It is intended for syncing a limited subset of records, try not to fight that
Well known limitations
- Only certain objects (and custom objects)
- Any field change syncs the whole record
Not well known limitations
- Forget State and Country picklists
- Loops occur if the same field triggers updates on both sides of S2S.
- There are undocumented rate limits like this:
**REQUEST_LIMIT_EXCEEDED**
The Salesforce to Salesforce operation from [Master] to [Slave] failed because the Rate limit either in [Master] or [Slave] was exceeded. Error Message: CODE - REQUEST_LIMIT_EXCEEDED. Message - ConcurrentPerOrgLongTxn Limit exceeded. We'll retry the operation in an hour. If more attempts are needed, they'll take place 2, 4, 8, and 16 hours after the previous retry.
What are the coarse steps?
Proof of concept. Before going anywhere near an estimate or a chargeable service, just do it: smoke out a stupid-simple bidirectional sync between two sandboxes. S2S has peculiarities that must be experienced first hand, and it is implemented with this in mind: sandboxes can only sync to other sandboxes, leaving production orgs untouched.
Qualify the customer. Do they understand the analysis and overhead involved before AND after introducing S2S, or do they expect magical mind-reading glue between environments? The challenge is education: if the customer pays to build a custom bidirectional sync, you and they together share the necessary analysis and learning along that journey. If the customer lacks the requisite maturity, turn-key S2S and custom-built sync will both be 'tamagotchi' solutions requiring feeding and fixing all the time.
Qualify the orgs. Is there enough safe overlap between the source and target orgs that objects and fields can reasonably be expected to line up, or are the environments already brittle, rapidly changing, and riddled with nonsensical validations? Will upkeep of the S2S config be able to keep pace with changes in the orgs?
Qualify the data. Will the sync carry a big hierarchy of sales data or mostly flat records? Or is it a one-off setup of some reference data? Will new records keep being introduced on an ongoing basis?
Sign off the exact objects, and limit to standard fields if at all possible. Implement one object in a sandbox and see whether the customer rejects it straight away because of (eg) the manual workflow to initiate the sync. Don't rush in and automate the sync initiation with Batch Apex or such; try to ensure real users take responsibility for syncing/forwarding each record.
Add a checklist item, "S2S sanity check", before any other integration touches any of those objects; emphasize the need to train users and socialize the existence of S2S, and make sure it does not end up fighting against (eg) your duplicate management system.
Trivia
S2S can destroy data. Consider this scenario:
- Setup bidirectional sync for field 1
- Create a record in Master and sync it
- Modify bidirectional sync to sync field 2
- Update a record in Slave (sync will fire)
- Slave overwrites Master's field 2 value with blank (!)
Detect the S2S integration user:
- Name: Connection User
- UserType: Partner Network
- Email: [email protected]
For example, to bypass validation rules during sync (the rule should only fire when the editing user is not the Connection User):
/* Salesforce To Salesforce validation bypass */
NOT(ISPICKVAL($User.UserType, 'PartnerNetwork')) && ...
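The same check is available in Apex via UserInfo, handy for short-circuiting triggers when a change arrives over S2S. A minimal utility sketch (the class name is my own):

public class S2SContext {
    // True when the current transaction runs as the S2S Connection User
    public static Boolean isConnectionUser() {
        return UserInfo.getUserType() == 'PartnerNetwork';
    }
}

Triggers, flows, and other automation can call this to skip local-only logic for incoming S2S changes.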
Connections take on the Organization Name of the remote org at setup time (forever and ever amen), so ensure the synced orgs aren't all called "Salesforce.com, inc" to avoid permanent confusion.
Great answer by @bigassforce!
Disclaimer: I have not used S2S before, but I spent 10 minutes skimming the docs and came to the conclusion that it is not an enterprise-grade product; I wouldn't be surprised if Salesforce hides it in the attic soon.
Multi-master replication (also called master-master replication) is a hard engineering problem even at the RDBMS layer. Read this doc for a high-level view of the complexities and algorithms involved; there are also numerous Hacker News threads that go into finer detail.
Now, on top of this we have all the Salesforce-specific application-layer quirks and customer customisations to contend with. It would be a very hard engineering task to build a turn-key product general enough to work for most enterprise-scale customers (and their customisations).
I would recommend you do a thorough gap-fit analysis before jumping in.
Some considerations I would include as part of the Gap-fit analysis:
- Record Type (and their picklist map) mismatches between orgs
- Delete and Undelete complexities
- OWD mismatches between the orgs, Sharing (implicit or explicit) and Group Maintenance
- Territory Management and associated sharing intricacies
- Approval Processes (and the associated child ProcessInstance intricacies)
- High complexity Managed packages such as CPQ that have heavy integration with objects such as Opportunity
- Field Type changes and how it affects already published data
- Chatter Feeds
- Shield Encryption
- Partner Accounts, Portal Roles and intricacies around it
- Person Accounts and complexities around it
- Contact to multiple Accounts and its usual complexities
- Account Merging
- Deduplication Rules
- Account and Opp teams (and default team assignments)
- Collaborative Forecasting and its intricacies with Opportunity fields and team splits
My few cents of input from a project doing exactly this right now:
- Changes to formula fields don't propagate; their values only come across when the record is shared for the first time.
- Picklist value mismatches can be handled in the connection settings.
- Have a trigger on each subscribed object in the target org to adjust incoming records (record type, owner assignment, etc.); see the trigger sketch after this list.
- Every existing workflow, process, flow, validation rule and trigger should include logic deciding whether to run when the change comes from the Connection User (only needed for the objects you accept).
- Debug logging for the Connection User has to be set up via the Tooling API; once created, the trace flag can be updated any time from the Developer Console debug menu. Keep it to a few minutes or hours, because it generates a ton of logs for every incoming request and you lose debug capacity, and deleting the logs itself becomes a long process.
- Don't share records as part of a bulk update, it will end in failures. At most we could share 20-30 records at a time using code/process/flow.
- The connection settings only show record processing failures for incoming records.
- The Connections tab is only visible in the Salesforce Classic interface; it is the central hub for all connection settings and monitoring.
- When a record is created or modified via the connection, Created By / Last Modified By is set to Connection User. This is not visible in Lightning Experience.
- If you are enabling sharing on an object that already has a lot of triggers and is close to Salesforce limits, fix that first: once the object starts accepting incoming records, those triggers run many more times.
- Sharing of records can be done by code or process based on business need; PartnerNetworkRecordConnection is the object used for this purpose.
- A few common exception scenarios and a handful of other things about connections are covered in my blog: https://salesforce-tweaks.blogspot.com/2020/04/deep-dive-into-salesforce-to-salesforce.html
- When a record is shared, unshared, and shared again, a duplicate record gets created at the target. Handle this well: either don't allow resharing of the same record, or delete the old record in the target org.
- Salesforce support is really bad on this; you are on your own when anything goes wrong around it.
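As referenced in the list above, here is a minimal sketch of a target-org trigger that normalizes incoming records. The record type and username are hypothetical placeholders, and the 'PartnerNetwork' check follows the detection hints in the accepted answer:

trigger AccountS2SDefaults on Account (before insert) {
    // Only touch records written by the S2S Connection User
    if (UserInfo.getUserType() != 'PartnerNetwork') {
        return;
    }
    // Hypothetical defaults: a dedicated record type and owner
    Id rtId = Schema.SObjectType.Account
        .getRecordTypeInfosByDeveloperName()
        .get('S2S_Inbound')  // hypothetical record type
        .getRecordTypeId();
    User owner = [SELECT Id FROM User
        WHERE Username = 'integration@example.com' LIMIT 1]; // hypothetical
    for (Account acc : Trigger.new) {
        acc.RecordTypeId = rtId;
        acc.OwnerId = owner.Id;
    }
}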