I’m working on a project that imports over 1 million users from an Oracle database used with SharePoint/Forms Authentication into the SharePoint user profile store. This is done as a custom SharePoint timer job that pulls the users from the DB and creates/updates User Profiles through the SharePoint API.
When running a job on a recordset of this size, there are several things to strive for:
- Limit the time that the process needs to run (jobs can take days and overlap themselves)
- Reduce memory usage (the OWSTIMER.exe can already consume quite a bit with the regular timer jobs)
Two ways you can achieve this:
- Avoid UserExists() method
- Use a DataReader if possible
Avoid UserExists() method
Most code samples on the web that deal with programmatic creation of User Profiles will show code such as this:
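A typical sample of this pattern looks something like the sketch below (the `record`/`sourceRecords` names are placeholders for whatever your import source provides; only the `UserProfileManager` calls are the real API):

```csharp
UserProfileManager upm = new UserProfileManager(serverContext);

foreach (UserRecord record in sourceRecords)
{
    UserProfile profile;

    // This check is a round-trip to SQL Server for every record...
    if (upm.UserExists(record.AccountName))
    {
        profile = upm.GetUserProfile(record.AccountName);
    }
    else
    {
        // ...and CreateUserProfile() calls UserExists() again internally.
        profile = upm.CreateUserProfile(record.AccountName);
    }

    profile[PropertyConstants.FirstName].Value = record.FirstName;
    profile.Commit();
}
```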
On small recordsets this is fine, but on large recordsets the UserExists method becomes a bottleneck that can significantly increase your process's running time. Worse, the code above unknowingly calls it a second, redundant time, because the CreateUserProfile() method internally calls UserExists() as well.
There are two ways to avoid this method:
- Cache profile IDs in a Dictionary/Hashtable type object
- Use reflection to create user profiles
Cache Profile IDs (and MemberGroup IDs too)
The UserProfileManager object is an IEnumerable that you can iterate over to access all the profiles in SharePoint. Caching the IDs of these profiles up front lets you check a Dictionary to see whether a profile exists, rather than hitting SQL Server with UserExists(). The following code helped to reduce processing time significantly (you take a hit up front, but it’s far less than the delay imposed by UserExists over large recordsets):
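A sketch of the caching pass (the dictionary name and the string comparer choice are mine; the enumeration, `PropertyConstants.AccountName`, `UserProfile.ID`, and the Guid overload of `GetUserProfile()` are the real API):

```csharp
// One pass over all existing profiles to build the cache.
Dictionary<string, Guid> profileIds =
    new Dictionary<string, Guid>(StringComparer.OrdinalIgnoreCase);

foreach (UserProfile profile in upm)
{
    string accountName = (string)profile[PropertyConstants.AccountName].Value;
    profileIds[accountName] = profile.ID;
}

// Later, inside the import loop, replace UserExists() with a lookup:
Guid profileId;
if (profileIds.TryGetValue(record.AccountName, out profileId))
{
    // Guid overload -- slightly faster than the AccountName string overload.
    UserProfile profile = upm.GetUserProfile(profileId);
    // ...update and Commit()...
}
else
{
    // Profile does not exist yet; create it (see the reflection
    // approach later in this post to avoid UserExists() here too).
}
```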
In addition, caching the Guid of the UserProfile lets you later use the overloaded method of GetUserProfile() that takes a Guid as a parameter, which seems to perform slightly better than the alternative that takes a string for AccountName.
This approach also works very well when importing large numbers of MemberGroups:
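The same pattern applies to MemberGroups; a rough sketch, with the caveat that the exact MemberGroup member names (the key property and the ID type) vary by SharePoint version, so treat them as assumptions and adjust to your build:

```csharp
// Assumes MemberGroupManager is enumerable over MemberGroup objects and
// that SourceReference/ID identify a group -- verify against your version.
Dictionary<string, long> memberGroupIds =
    new Dictionary<string, long>(StringComparer.OrdinalIgnoreCase);

foreach (MemberGroup group in memberGroupManager)
{
    memberGroupIds[group.SourceReference] = group.ID;
}

// In the import loop, a dictionary lookup replaces an existence
// check against SQL Server, exactly as with the profile cache.
```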
NOTE: If you are wondering why not simply cache the entire UserProfile in the Dictionary (Dictionary<string, UserProfile>), the memory usage for this will be much higher, which will undo any gains by avoiding UserExists().
Use Reflection to Create User Profiles
The UserProfileManager’s CreateUserProfile() method internally calls the UserExists method, and then calls an internal constructor on the UserProfile object to actually create the profile. By using reflection, you can call this internal constructor yourself and avoid UserExists():
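Because the internal constructor's parameter list is version-dependent, it is safer to look it up at runtime than to hard-code a signature. A sketch of locating it (the heuristic for picking the constructor is an assumption; inspect the candidates in a debugger for your build):

```csharp
// Find the non-public instance constructors on UserProfile.
ConstructorInfo profileCtor = null;
foreach (ConstructorInfo ctor in typeof(UserProfile).GetConstructors(
    BindingFlags.Instance | BindingFlags.NonPublic))
{
    ParameterInfo[] parms = ctor.GetParameters();

    // Assumed heuristic: the creating constructor takes the manager
    // and the account name first, possibly followed by flags.
    if (parms.Length >= 2 &&
        parms[0].ParameterType == typeof(UserProfileManager) &&
        parms[1].ParameterType == typeof(string))
    {
        profileCtor = ctor;
        break;
    }
}
```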
Once you’ve got the reflected information, you can use the following code to create your UserProfile:
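Invoking it might look like this; any arguments beyond the manager and account name are assumptions you must match to the ConstructorInfo you actually found:

```csharp
// Invoke the internal constructor directly, skipping the UserExists()
// round-trip that CreateUserProfile() performs. The argument array must
// match the located constructor's parameter list exactly.
object[] args = new object[] { upm, record.AccountName /*, flags... */ };
UserProfile profile = (UserProfile)profileCtor.Invoke(args);

profile[PropertyConstants.FirstName].Value = record.FirstName;
profile.Commit();
```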
NOTE: I’ve tried creating a user profile in this manner that already existed to see what would happen. The existing profile was updated, and I did not get any duplicate records in the SharePoint db. It appears the SQL under the hood already takes care of avoiding duplicates. General cautions about reflection still apply here though (API may change, etc.).
Use a DataReader if Possible
Instead of pulling a huge recordset into a DataTable, DataSet, or a collection of custom objects, process your records one at a time with a data reader if your data source permits. This keeps memory usage down, because any objects you create inside the while(reader.Read()) loop go out of scope each iteration and can be reclaimed promptly by the garbage collector. A DataTable holding 1 million records consumes an enormous amount of memory on top of OWSTIMER.exe's already large footprint.
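The shape of the streaming loop, using the Oracle ADO.NET provider (the connection string, query, and column names are placeholders for your own source):

```csharp
// Stream rows one at a time instead of materializing a DataTable.
using (OracleConnection conn = new OracleConnection(connectionString))
using (OracleCommand cmd = conn.CreateCommand())
{
    cmd.CommandText =
        "SELECT account_name, first_name, last_name FROM app_users";
    conn.Open();

    using (OracleDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            string accountName = reader.GetString(0);
            string firstName   = reader.GetString(1);

            // Create/update the user profile here. The locals above go
            // out of scope every iteration, so memory stays flat even
            // over a million rows.
        }
    }
}
```

The using blocks also take care of disposing the connection and reader properly, which matters just as much for a job that runs for hours.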
Of course other best practices also apply, such as:
- Getting pages of records, rather than all at once
- Implementing incremental change queries rather than all records all the time
- Only getting what you need from the data source
- Disposing your objects and data connections properly