
Common questions about crawling

The initial crawl duration for any datasource varies based on two key factors:
  1. Datasource Size: The total volume of content, including the number of documents/messages and their individual sizes, directly impacts crawl time.
  2. API Rate Limits: The datasource’s API rate limit affects how quickly Glean can retrieve items. Lower rate limits result in longer crawl times.
For typical enterprise datasources, initial crawls generally take between 3 and 10 days, with larger datasources or those with low API rate limits trending toward the longer end of this range.
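As a rough illustration of how these two factors combine, the sketch below estimates crawl duration from a document count and an hourly API rate limit. The per-document call count and the example figures are assumptions made for illustration; Glean’s own projections are based on historical crawl data rather than this formula.

```python
# Illustrative back-of-envelope estimate only; Glean's projections are based
# on historical crawl data and are not computed this way.

def estimate_crawl_days(document_count: int, api_calls_per_hour: int,
                        calls_per_document: int = 3) -> float:
    """Rough crawl duration in days, assuming a fixed number of API calls per
    document (e.g., content, metadata, permissions) and a steady rate limit."""
    total_calls = document_count * calls_per_document
    hours = total_calls / api_calls_per_hour
    return hours / 24

# Example: 500,000 documents behind a 10,000 requests/hour rate limit
print(f"{estimate_crawl_days(500_000, 10_000):.1f} days")  # about 6 days
```

A lower rate limit or a larger corpus pushes the result toward the longer end of the 3 to 10 day range described above.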

Estimating crawl completion time

If your datasource supports initial crawl estimates, you’ll have the option to enter an estimated document count during setup. Based on this input and historical data, you’ll see a projected time range for when the initial crawl is expected to finish. This feature is available for select data sources and is designed to give you a data-driven estimate, so you can better anticipate when your content will be ready to use in Glean.
Initial crawl estimates are historical averages computed from past datasource crawls. Please note that actual crawl time can vary due to factors such as data volume, change frequency, and structure.
Crawl duration is primarily determined by your datasource’s size. Larger datasets or applications with low API rate limits naturally require more time to process. If your crawl duration exceeds expectations, we recommend contacting Glean support for assistance.
While some connectors (like GitHub) offer restriction configuration during setup through the UI, most datasources require Glean support assistance for implementing crawl restrictions. Available restriction methods include:

  • Time-based: Limit crawling to content created or accessed within a specific timeframe (e.g., last 6 months)
  • User-based: Restrict crawling to content from specified users
  • Group-based: Limit crawling to content from specific AD groups
  • Site/Channel-based: Restrict crawling to specific sites or channels
Available restrictions depend on the datasource’s API capabilities. Most applications support both greenlisting (explicit inclusion) and redlisting (explicit exclusion).
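To make these restriction types concrete, here is a hypothetical sketch of how such rules might be expressed. The field names and values are invented for illustration; actual restrictions are applied per connector, usually with Glean support, and depend on what the datasource’s API allows.

```python
# Hypothetical sketch of the restriction types described above. All field
# names are invented for illustration; real restrictions are configured per
# connector (often by Glean support) and depend on the datasource's API.

crawl_restrictions = {
    "time_based": {"created_or_accessed_within_days": 180},   # e.g., last 6 months
    "user_based": {"include_users": ["alice@example.com", "bob@example.com"]},
    "group_based": {"include_ad_groups": ["engineering", "sales"]},
    "site_channel_based": {"include_sites": ["engineering-wiki"],
                           "include_channels": ["#support"]},
    # Most datasource APIs support both explicit inclusion and exclusion:
    "greenlist": ["/handbooks/", "/runbooks/"],   # crawl only these paths
    "redlist": ["/archive/", "/scratch/"],        # never crawl these paths
}
```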
If you encounter errors in your crawl status, this may indicate connectivity issues with your datasource or problems with the data itself. We recommend:
  1. Verifying your datasource configuration
  2. Contacting Glean support if issues persist
Yes, Glean fully supports concurrent crawling of multiple datasources.
Crawl status can be monitored under the Content crawling heading in the apps table:
  • Job in progress: Indicates an active crawl
  • Synced: Indicates a completed crawl
Detailed progress indicators are not currently available.
If your data source supports initial crawl estimates, you can enter an estimated document count to receive a projected time range for crawl completion during setup.
[Image: Initial crawl estimate setup interface]
This feature is available for select data sources and uses historical averages to estimate timing, though actual crawl time may vary depending on data volume, change frequency, and content structure.
The Job in progress status indicates an active crawl of your datasource. Since full crawls typically take several days to complete, this status will persist throughout the crawling process.
To delete a datasource after initial setup, navigate to the Data Sources page on the admin console and click the datasource instance you wish to delete. On the “Overview” tab, open the “Extreme Measures” section and click the “Delete instance” button. Once you have confirmed the deletion, it may take up to 5 minutes for documents from this datasource instance to no longer appear in Glean search results. All associated data will be removed in the background.
Currently, some datasources do not support deletion. Datasources that you cannot create multiple instances of cannot be deleted. In addition, you will not be able to delete your active People Data source. If you wish to delete this datasource, first set a new People Data source on the People Data page.
Crawl management operations must be performed by Glean support. Please contact them for assistance with stopping or restarting crawls.
Crawl Rate is the hourly rate of crawling tasks across document parts (for example, content, metadata, and permissions) during the initial crawl. It serves as a heartbeat to confirm active progress.
Change Rate is the number of user‑driven changes (for example, edits, additions, and deletions) synced in the past 24 hours after the initial crawl completes. It indicates ongoing freshness.
You will see Crawl Rate only during the initial crawl. After the initial crawl is complete, Change Rate appears for that data source.
Approximately every 5–10 minutes.
This can mean the data source is being initialized or no crawling tasks were performed in the last hour, for example, due to health checks. If 0 persists longer than expected, investigate configuration and permissions.
Either no user‑driven changes occurred in the last 24 hours or updates are not being synced. If you expect activity, investigate potential sync or permission issues.
Certain connector types are excluded, for example, federated‑fetch only, customer‑managed, and web connectors.
Not necessarily. These metrics reflect activity, not a complete health assessment. Use them alongside status indicators and error surfacing.
  • Yes—conceptually, it’s the count of document change events Glean processed in the last 24 hours for an ongoing crawl. These events include creates (adds), updates (content/metadata/permissions), moves/renames, and deletes. Think of it as an activity “heartbeat” showing that new or changed content is actively being processed.
  • In the admin table, “added in the past day” is surfaced as the simplest, user-friendly roll-up for ongoing crawls; internally, it is backed by the change-event stream described above.
  • Multiple changes to the same item: If one document is edited many times in a day, each edit is a separate change event. “Items synced” is a cumulative count of distinct items indexed, so it won’t rise with repeated edits (see the sketch after this list).
  • Updates and deletes don’t increase “items synced”: Edits and permission-only changes are counted in “Change rate” but do not add to the total items. Deletes can even decrease “items synced” while still incrementing the change count.
  • Timing and pipeline lag: “Items synced” is a lagging, cumulative indicator that updates after indexing completes; “Change rate” reflects event processing activity within the last 24 hours and can surface earlier in the pipeline. Over short windows, you may see a high change rate without a corresponding immediate increase in the items total.
  • Permission/metadata churn: Some connectors generate events for permission or metadata changes (e.g., access list updates), which raise “Change rate” even when no new items are added to the index.
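A simplified simulation of these counting rules, assuming (for illustration only) that every create, edit, permission change, and delete is a separate change event while “items synced” tracks only the set of distinct indexed documents:

```python
# Simplified illustration of the counting rules described above, not Glean's
# actual pipeline. Every event raises the change count, but only creates and
# deletes move the cumulative "items synced" total.

events = [
    ("doc-1", "create"),
    ("doc-1", "edit"),         # same document edited twice...
    ("doc-1", "edit"),         # ...each edit is a separate change event
    ("doc-2", "create"),
    ("doc-2", "permissions"),  # permission-only churn still counts as a change
    ("doc-3", "create"),
    ("doc-3", "delete"),       # a delete adds a change but removes the item
]

change_rate_24h = len(events)
indexed = set()
for doc_id, kind in events:
    if kind == "create":
        indexed.add(doc_id)
    elif kind == "delete":
        indexed.discard(doc_id)

print(change_rate_24h)  # 7 change events in the window
print(len(indexed))     # 2 items synced (doc-3 was created and then deleted)
```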
  • A live “heartbeat” of connector activity: It shows whether the data source is actively processing new or changed content in the last 24 hours (creates, edits, permission/metadata changes, deletes), so you can quickly confirm the crawl isn’t idle.
  • Early stall detection: If Change rate flatlines while you expect activity, it’s a signal to check connector health (auth scopes, webhook subscriptions, errors) even before the total items count moves; a simple check along these lines is sketched after this list.
  • Interpreting gaps vs. Items synced: Because Items synced is a cumulative, lagging indicator, a high Change rate with little movement in Items synced can indicate many edits/deletes or permission-only changes (which don’t add to the total).
  • Validating configuration changes propagate: After updating inclusion/exclusion rules or visibility settings, a non-zero Change rate is a quick way to verify those changes are being picked up and applied by the pipeline.
  • Spotting surges or operational events: Spikes can reflect bulk content uploads/migrations or large permission sweeps—useful operational context that can explain search result shifts or indexing load.
  • Where it fits with Crawl rate: Use Change rate to monitor ongoing crawls; Crawl rate is the companion metric during initial syncs.
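One way to act on the stall-detection point above is a simple alerting heuristic. The function below is a conceptual sketch, not Glean functionality; the three-day quiet period is an arbitrary placeholder you would tune to your source’s normal level of activity.

```python
# Conceptual stall check, not a Glean feature or API: flag a source whose
# change rate has stayed at zero for longer than its usual quiet periods.

from datetime import datetime, timedelta, timezone

def looks_stalled(change_rate_24h: int, last_nonzero_at: datetime,
                  max_quiet_period: timedelta = timedelta(days=3)) -> bool:
    """Return True when a normally active source has reported no changes
    for longer than the allowed quiet period."""
    if change_rate_24h > 0:
        return False
    return datetime.now(timezone.utc) - last_nonzero_at > max_quiet_period

# Example: a source whose last nonzero Change rate was four days ago
print(looks_stalled(0, datetime.now(timezone.utc) - timedelta(days=4)))  # True
```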
  • Not exactly. For initial crawls, Crawl rate is the number of document parts discovered in the past hour. It’s a throughput metric of the initial sync, reported in “parts,” not a count of internal processing tasks, and it’s not limited to permissions/metadata-only operations.
  • In the admin UI this shows up as a parts/processing rate during initial sync. Once a data source moves to ongoing crawls, this column switches to show Change rate instead.
  • A live initial‑sync heartbeat: If Crawl rate is non‑zero and changing, the initial crawl is progressing and discovering new parts; if it flatlines during initial sync, it suggests a stall that merits a health check.
  • Speed and time‑to‑completion context: Because Crawl rate is “parts discovered per hour,” monitoring it alongside the initial crawl time estimate helps you gauge how quickly the initial sync will finish. The setup flow includes a “Get sync time estimate” option, and a rough manual calculation is sketched after this list.
  • Interpreting vs. Items synced: Items synced is a cumulative, lagging count that updates after indexing completes. During initial sync, you may see Crawl rate activity without an immediate Items synced increase, especially when pipelines are still processing discovered parts.
  • When the metric changes: Remember that Crawl rate is shown for initial syncs; once the source transitions to ongoing syncs, the table shows Change rate as the ongoing “heartbeat.”
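Because Crawl rate is reported as parts discovered per hour, you can also sketch a rough time-to-completion yourself. The formula and figures below are illustrative only; prefer the built-in sync time estimate where it is available.

```python
# Illustrative only: rough remaining time for an initial sync, given the
# observed crawl rate and an estimate of the total parts to be discovered.

def hours_remaining(total_parts_expected: int, parts_discovered: int,
                    crawl_rate_parts_per_hour: int) -> float:
    remaining = max(total_parts_expected - parts_discovered, 0)
    return remaining / crawl_rate_parts_per_hour

# Example: 900,000 of an expected 1,500,000 parts discovered at 12,000 parts/hour
print(f"{hours_remaining(1_500_000, 900_000, 12_000):.0f} hours")  # 50 hours
```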
For any questions not addressed here or for specific assistance with your crawls, don’t hesitate to contact Glean support.