Have you ever wondered what inside your application causes high RAM usage in Task Manager? Take a moment and check your program's memory usage right now. Does it look healthy and expected? Do you have an idea of what the top three object types consuming the most RAM are? How many megabytes would you guess they conquer?

How is memory expected to be used?

Memory is consumed by objects that carry data required for an application to do its job.

Does it mean the more data is there, the more memory is inevitably needed?

Is there anything that can be done to break the loop?

Let’s start with finding top memory consumers and figure out if anything can be done there.

How to get per-type statistics?

A memory snapshot of the live process is a gold mine of information and can be collected in 4 clicks:

  1. Task Manager
  2. Details
  3. Right click on target process (w3wp in case of IIS-hosted app)
  4. Create dump file

Importing the dump file into dotMemory shows that the largest share of memory is taken by the immutable string and Sitecore.Data.ID (a reference-type wrapper around Guid) types:


Sitecore reads data from the database and caches it inside the process to avoid further expensive database calls. In other words, it seems expected that text data and its IDs would be in the top three.

How does that align with your prediction? Did you expect strings to consume half of the total heap size?

Reduce memory pressure

Although some Sitecore fields (like created by, typically sitecore\admin) have repeated values, each read from the database turns the text into a new string instance, which is cached by the application shortly after.

.NET already supports re-using immutable object instances: string interning is an implementation of the Object pool pattern.

Instead of caching the value straight away, Sitecore can intern it first (either grabbing a reference to an already-interned object, or adding the value to the intern pool), thereby lowering the number of objects carrying duplicate data.
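The building block here is the BCL's string.Intern. The following standalone sketch (plain .NET, not Sitecore's internal code) shows how two equal-by-value strings collapse into one shared instance:

```csharp
using System;

class InternDemo
{
    static void Main()
    {
        // Simulate two reads of the same field value from the database:
        // each read materializes a brand-new string instance on the heap.
        string first  = new string("created by".ToCharArray());
        string second = new string("created by".ToCharArray());

        // Equal by value, but two distinct objects.
        Console.WriteLine(ReferenceEquals(first, second));   // False

        // Interning routes both values to a single pooled instance,
        // so the duplicate copies become garbage and can be collected.
        string firstInterned  = string.Intern(first);
        string secondInterned = string.Intern(second);
        Console.WriteLine(ReferenceEquals(firstInterned, secondInterned)); // True
    }
}
```

Note the trade-off: interned strings live in the intern pool for the lifetime of the process, which is exactly what you want for a bounded set of repeated field values, but not for unbounded unique data.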

All that is left for a developer is to locate fields with repeated data, which can be done with this SQL script:

WITH DuplicatedFieldValues AS (
	SELECT
		v.FieldId,
		FieldDefinitionRow.[Name],
		CONVERT(NVARCHAR(250), v.[Value]) AS [Field Value],
		COUNT(1) AS [Hits]
	FROM [VersionedFields] v
	INNER JOIN [Items] FieldDefinitionRow ON FieldDefinitionRow.ID = v.FieldId
	WHERE v.FieldId NOT IN
	(
		/* Fields already interned OOB by Sitecore.Interning.config */
		'BADD9CF9-53E0-4D0C-BCC0-2D784C282F6A' /* updated by */,
		'5DD74568-4D4B-44C1-B513-0AF5F4CDA34F' /* created by */,
		'52807595-0F8F-4B20-8D2A-CB71D28C6103' /* owner */,
		'3E431DE1-525E-47A3-B6B0-1CCBEC3A8C98' /* workflow state */
	)
	GROUP BY v.FieldId, FieldDefinitionRow.[Name], CONVERT(NVARCHAR(250), v.[Value])
	HAVING COUNT(*) > 500 /* How many same field values must be met to be shown */
)
SELECT
	REPLACE(CONCAT('<', [Name], '>'), ' ', '_') AS [BeginTag],
	CONCAT('{', FieldId, '}') AS [FieldId],
	REPLACE(CONCAT('</', [Name], '>'), ' ', '_') AS [EndTag],
	SUM(Hits) AS [Duplicates],
	COUNT(1) AS [Unique Values]
FROM DuplicatedFieldValues
GROUP BY FieldId, [Name]
ORDER BY [Duplicates] DESC

The located field IDs are to be added to the fieldIdsToIntern section of Sitecore.Interning.config so that Sitecore applies the interning logic to them.
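For orientation, a patch file could look roughly like the sketch below. The element names and layout are illustrative, modeled on the stock Sitecore.Interning.config; verify them against the file shipped with your Sitecore version, and replace the all-zero placeholder GUID with a field ID found by the script:

```xml
<!-- Website\App_Config\Include\Sitecore.Interning.config (shape is illustrative) -->
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <settings>
      <!-- master switch referenced later in this article -->
      <setting name="Interning.Enabled" value="true" />
    </settings>
    <fieldIdsToIntern>
      <!-- placeholder GUID: substitute a field ID located by the SQL script -->
      <fieldId>{00000000-0000-0000-0000-000000000000}</fieldId>
    </fieldIdsToIntern>
  </sitecore>
</configuration>
```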

Verify that it works

The real gain is to be measured with interning switched ON and OFF:

  1. Kill the existing w3wp.exe process (applications do not like to give away conquered memory)
  2. Start Sitecore application with unlimited cache sizes
  3. Capture a full memory snapshot after a big pile of content has been loaded and full garbage collection has occurred. Some sample code:
// Load a large subtree with security checks disabled, then force
// full blocking, compacting garbage collections before snapshotting.
var db = Sitecore.Configuration.Factory.GetDatabase("master");
using (new SecurityDisabler())
{
	var root = db.GetItem(new ID("{110D559F-DEA5-42EA-9C1C-8A5DF7E70EF9}"));
	var descendants = root.Axes.GetDescendants();
}
GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced, blocking: true, compacting: true);
GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced, blocking: true, compacting: true);

Repeat the steps with interning enabled and disabled (setting name="Interning.Enabled" in Website\App_Config\Include\Sitecore.Interning.config) to get the baselines.

Let’s make one more prediction before getting dotMemory into the game - what is the impact of configuring interning?


The local results show that 30% fewer objects stay in the heap and a quarter of the RAM usage is gone.

In simple words - the application now consumes less than 6 GB instead of the previous 8+ GB:


Did you predict that big a win?

Lesson learnt

The Object pool and Immutable object design patterns noticeably reduce application memory pressure without altering any functionality!

The cost (an extra operation to pool each object) is much lower than the gain: not only tons of saved memory, but also a huge boost when comparing pooled objects - fast by-reference equality in certain cases instead of by-value comparison.
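To sketch the same pattern beyond strings, here is a hypothetical generic intern pool for an immutable ID-like type. The names (Id, InternPool) are illustrative, not Sitecore's actual implementation:

```csharp
using System;
using System.Collections.Concurrent;

// Illustrative stand-in for an immutable reference wrapper around a
// Guid (the role Sitecore.Data.ID plays). Not Sitecore's actual type.
sealed class Id
{
    public Guid Value { get; }
    public Id(Guid value) => Value = value;
    public override bool Equals(object obj) => obj is Id other && other.Value == Value;
    public override int GetHashCode() => Value.GetHashCode();
}

// A minimal intern pool: the idea behind string.Intern, generalized
// to any immutable reference type.
static class InternPool<T> where T : class
{
    private static readonly ConcurrentDictionary<T, T> Pool =
        new ConcurrentDictionary<T, T>();

    // Returns the pooled instance equal to 'value', adding it on first sight.
    public static T Intern(T value) => Pool.GetOrAdd(value, value);
}

class Demo
{
    static void Main()
    {
        var guid = Guid.NewGuid();
        var a = InternPool<Id>.Intern(new Id(guid));
        var b = InternPool<Id>.Intern(new Id(guid));

        // Both lookups resolve to one shared instance: the second
        // allocation becomes garbage immediately, and equality checks
        // can stop at a single reference comparison.
        Console.WriteLine(ReferenceEquals(a, b)); // True
    }
}
```

As with string interning, pooled instances are held for the pool's lifetime, so this pays off for a bounded set of repeated values.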


Nikolay Mitikov

Performance engineer

Chief memes architect
