One of the great things about having Continuous Crawl running is that it’s supposed to just work: you enable it, and ‘mini-crawls’ are kicked off periodically, keeping your index fresh. However, it’s not easy to see what’s happening if something goes wrong, or if you simply need to find out why a specific document isn’t searchable yet, since the crawl logs are aggregated over 24 hours per Content Source. This means you can’t get information about each individual mini-crawl from the crawl logs themselves.
One way to see what’s happening is to take a look at the underlying SQL table. The table is called MSSMiniCrawls and it resides in your SSA database. At a minimum, this tells you when a specific mini-crawl was kicked off, or when the next one will be kicked off based on the configured interval, which can come in handy when someone mentions that they’ve just uploaded a new document and wants to know when it will get crawled and indexed.
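For example, a quick peek at the most recent mini-crawls might look something like this. This is just a sketch: the server instance and database name below are placeholders for your own environment, and Invoke-Sqlcmd requires the SqlServer (or SQLPS) module. Only the table name and the MiniCrawlID column come from the discussion here, so I avoid naming any other columns and just pull everything back:

```powershell
# Placeholder server/database names -- substitute your own SQL instance
# and SSA search admin database. Requires the SqlServer module.
Invoke-Sqlcmd -ServerInstance "SQLSERVER01" `
              -Database "SP_Search_AdminDB" `
              -Query "SELECT TOP 20 * FROM dbo.MSSMiniCrawls ORDER BY MiniCrawlID DESC"
```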
Each mini-crawl is recorded in this table and given a MiniCrawlID. If you have many Content Sources, it’s also easy to find out which Content Source a given ContentSourceID corresponds to via PowerShell.
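For instance, something along these lines lists each Content Source with its numeric Id, which you can then match against the ContentSourceID column in MSSMiniCrawls. This sketch assumes a single SSA and is meant to be run from the SharePoint Management Shell (or any elevated PowerShell session with the snap-in loaded):

```powershell
# Load the SharePoint cmdlets if not already present.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Grab the Search Service Application and list its content sources.
# The Id column here is what shows up as ContentSourceID in MSSMiniCrawls.
$ssa = Get-SPEnterpriseSearchServiceApplication
Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa |
    Select-Object Id, Name
```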