Recently I've been trying to figure out why updating items in the content tree (for example, renaming a parent item that contains sub-items) did not trigger an index update in the Master and/or Web index when I published the items in Sitecore.
In this blog post I'll go into the details of what caused this behaviour, and how you can resolve similar issues.
The Solr indexes that refused to update
It all started a couple of days back, when I discovered that the Solr indexes were not being updated correctly. The issue showed its ugly face when I changed the name of a parent item from parentNameA to parentNameB: when I then performed a search against the Solr index, I got no results back.
When the name of the parent item changes, the path of its sub-items changes as well, from /sitecore/content/siteRoot/parentNameA/subItem to /sitecore/content/siteRoot/parentNameB/subItem. Accordingly, when querying for the sub-items under the parent item, I changed my query to search under the new path, which yielded no results. However, when I queried using the old parent item name, I did get results back from the Solr index.
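To illustrate, here is a rough sketch of what the two queries looked like against Solr. The _fullpath field name is an assumption based on the default Sitecore Solr schema (where item paths are indexed in lowercase); adjust it to match your own index configuration:

```text
# Query by the new path – returned no results:
q=_fullpath:"/sitecore/content/siteroot/parentnameb/subitem"

# Query by the old path – still returned the stale documents:
q=_fullpath:"/sitecore/content/siteroot/parentnamea/subitem"
```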
I tried a full re-publish of the parent item (including its sub-items), and even a full site publish, but either way the index refused to update.
Analyzing the problem
After going through a lot of trial and error, I noticed that the crawling.log file was filled with log entries indicating that a great number of items (on the order of thousands) were failing. For each failing item, the crawler would either keep logging errors like these:
11596 13:21:07 ERROR Error while resolving field boosting. Field={5DD74568-4D4B-44C1-B513-0AF5F4CDA34F} Item=sitecore://web/{5359A362-D396-41A1-A257-0072E1390A24}?lang=en&ver=1
Exception: System.NullReferenceException
Message: Object reference not set to an instance of an object.
Source: Sitecore.ContentSearch
at Sitecore.ContentSearch.Pipelines.ResolveBoost.ResolveFieldBoost.FieldDefinitionItemResolver.Process(ResolveFieldBoostArgs args)
at (Object , Object[] )
at Sitecore.Pipelines.CorePipeline.Run(PipelineArgs args)
at Sitecore.ContentSearch.Boosting.PipelineBasedBoostingProvider.ResolveFieldBoosting(IIndexableDataField field)
at Sitecore.ContentSearch.Boosting.BoostingManager.ResolveFieldBoosting(IIndexableDataField field)
or warnings like these:
11596 13:21:07 WARN Could not compute value for ComputedIndexField: _templates for indexable: sitecore://web/{5359A362-D396-41A1-A257-0072E1390A24}?lang=en&ver=1
Exception: System.InvalidOperationException
Message: Item template not found.
Source: Sitecore.ContentSearch
at Sitecore.ContentSearch.IndexOperationsHelper.GetAllTemplates(Item item)
at Sitecore.ContentSearch.SolrProvider.SolrDocumentBuilder.AddComputedIndexFields()
Finding a solution to the problem
Based on several leads from Mark Cassidy and Thomas Stern, both pointing in the direction of something suspicious going on in the Event Queue, I decided to take a closer look at the Event Queue table in the Core database.
To my surprise, the Event Queue table was very large, containing thousands of rows. Worse still, it kept growing every time the indexing crawler ran, which meant the failing items kept piling up on top of each other as the Event Queue increased in size.
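A quick way to confirm such a backlog is to query the Event Queue table directly. The statements below are a sketch to run against the Core database; the table and column names follow the standard Sitecore schema, but verify them against your own version first:

```sql
-- Total number of pending events in the queue
SELECT COUNT(*) AS PendingEvents FROM [EventQueue];

-- Break the backlog down by event type to see what is piling up
SELECT [EventType], COUNT(*) AS [Count]
FROM [EventQueue]
GROUP BY [EventType]
ORDER BY [Count] DESC;
```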
I cleared the Event Queue database table and removed the items that were failing, as indicated in the crawling.log (luckily they all had the same parent item). Afterwards I rebuilt the Master and Web indexes, and indexing was working again: I could now change the parent item name, do a republish, and see my Solr indexes reflect those changes.
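For reference, clearing the table can be done with a statement like the one below. This is an illustrative sketch only: take a backup first, and be aware that on a multi-server setup, wiping the Event Queue can cause other instances to miss events they have not yet processed:

```sql
-- Remove all pending events from the queue
TRUNCATE TABLE [EventQueue];

-- Alternatively, only remove entries older than a given date:
-- DELETE FROM [EventQueue] WHERE [Created] < '2015-01-01';
```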
So what exactly happened?
My guess is that, since the Event Queue kept growing in size, it eventually got flooded.
To back up this claim, there is a setting in the ContentSearch configuration file which says that if the number of items in the history engine exceeds the value specified in ContentSearch.FullRebuildItemCountThreshold, a full rebuild of the index should be triggered.
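The setting lives in the Sitecore.ContentSearch.config include file and can be adjusted with an ordinary config patch. The value below is purely illustrative, not a recommendation:

```xml
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <settings>
      <!-- If more items than this are queued for indexing,
           a full index rebuild is triggered instead of an
           incremental update -->
      <setting name="ContentSearch.FullRebuildItemCountThreshold">
        <patch:attribute name="value">100000</patch:attribute>
      </setting>
    </settings>
  </sitecore>
</configuration>
```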
Due to the unusual number of rows to be processed in the Event Queue, such a full index rebuild attempt must have been triggered. When it ran, I think the failing items would fail once again, and as a result they would be left to be processed the next time something triggered an update via the Event Queue. So every time a publish occurred, the failing items would be re-processed, and once more they would fail. As a consequence, the Event Queue grew with the failed items from the first run, then with those from the second run (added to those from previous failed runs), and so on. Ultimately, the indexing that should have occurred when publishing was triggered ended up getting lost in the flood of failing items that could not be indexed, and the index was left in an invalid state.
But what caused the items to fail?
After doing some more testing, it came down to the fact that the items that failed were missing one or more of their templates. The items had been imported from a legacy Sitecore solution, and not everything had been imported correctly. I discovered this when I tried to look up the failing item(s) in the content tree using the Content Editor, where Sitecore responded by throwing a null-reference exception. I've only seen Sitecore behave like this when an item is missing one of its templates. This was also confirmed by Kamruz Jaman, who explained that he had seen a similar issue before, caused by one or more items with missing templates.
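To hunt down items with missing templates, a query along these lines can help. It is a sketch against the Master (or Web) database, and it assumes the standard Items table layout, where every item's TemplateID should point at another row in the same table:

```sql
-- Items whose template no longer exists in the database
SELECT i.[ID], i.[Name], i.[TemplateID]
FROM [Items] i
LEFT JOIN [Items] t ON t.[ID] = i.[TemplateID]
WHERE t.[ID] IS NULL;
```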
If you experience a similar issue, you should verify that all the items you are indexing are indeed in a valid state, since otherwise they can make your crawler shut down abruptly, and your indexes will not be updated correctly.