Enable access to models in Vertex AI
Go to the Vertex AI Model Garden and make sure you have enabled access to the following foundation models from the GCP project that Glean is running in:

| Model name | How Glean uses the model |
|---|---|
| Claude Sonnet 4.5 (preferred model) `claude-sonnet-4-5-20250929` | Agentic reasoning model used for Glean Assistant and autonomous agents |
| Claude 3.7 Sonnet | Large model used for other, more complex tasks in Glean Assistant |
| Claude 3.5 Haiku | Small model used for simpler tasks such as follow-up question generation |
Request additional quota from Vertex AI
You will need to file a standard GCP quota request, which is expressed in Requests Per Minute (RPM) and Tokens Per Minute (TPM). Filter for `base_model:` on the model names in the table above and `region:` for the region that your GCP project is running in.
Please note that the quota is not a guarantee of capacity, but is intended by Google to ensure fair use of the shared capacity, and your requests may not be served during peak periods. To obtain guaranteed capacity, please speak with your Google account team about purchasing Provisioned Throughput.
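Current quota values can also be inspected from the command line. The sketch below assumes the `gcloud alpha services quota` surface and a placeholder project ID; the alpha command group may change between gcloud releases, so verify against your installed version:

```shell
# List current Vertex AI quotas for your project (replace YOUR_PROJECT_ID).
# Alpha commands are subject to change; check `gcloud alpha services quota list --help`.
gcloud alpha services quota list \
  --service=aiplatform.googleapis.com \
  --consumer=projects/YOUR_PROJECT_ID
```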

Capacity Requirements
On Claude Sonnet 4.5, Glean Assistant consumes on average 64.4k full input tokens, 10.3k cached input tokens, and 1.2k output tokens per query, equivalent to approximately $0.08 per query based on current [Claude API pricing](https://cloud.google.com/vertex-ai/generative-ai/pricing#claude-models). These averages were derived from running a large representative sample of real customer queries through Claude Sonnet 4.5.

To estimate weekly Glean Assistant LLM costs, multiply your weekly query volume by $0.08 per query. Actual token usage will vary by customer depending on query complexity and document size.

To estimate throughput requirements (TPM), identify your deployment’s query-per-minute (QPM) rate at the desired percentile (typically p90), then multiply by the average tokens per query. The table below illustrates example TPM conversions assuming 0.004 QPM per DAU, based on historical customer data.

| Users (DAU) | TPM |
|---|---|
| 500 | 125,000 |
| 1000 | 245,000 |
| 2500 | 615,000 |
| 5000 | 1,225,000 |
| 10000 | 2,450,000 |
| 20000 | 4,895,000 |
Because QPM per DAU varies widely across customers, Glean strongly recommends basing capacity estimates on your deployment’s actual QPM.
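The arithmetic above can be sketched as a small calculator. The constants below come from the figures quoted in this section; the published table may apply additional weighting (for example, to cached input tokens), so treat this naive sum as a rough planning figure rather than an exact reproduction of the table:

```python
# Rough capacity/cost estimator using the averages quoted above.
# All constants are planning figures from this document, not guarantees.

QPM_PER_DAU = 0.004        # historical average queries per minute per daily active user
COST_PER_QUERY_USD = 0.08  # approximate blended Claude Sonnet 4.5 cost per query
TOKENS_PER_QUERY = 64_400 + 10_300 + 1_200  # full input + cached input + output


def estimated_tpm(daily_active_users: int) -> int:
    """Tokens-per-minute throughput needed at the assumed QPM-per-DAU rate."""
    qpm = daily_active_users * QPM_PER_DAU
    return round(qpm * TOKENS_PER_QUERY)


def weekly_cost_usd(weekly_queries: int) -> float:
    """Rough weekly Glean Assistant LLM spend."""
    return weekly_queries * COST_PER_QUERY_USD


print(estimated_tpm(500))       # 151800
print(weekly_cost_usd(10_000))  # 800.0
```

Note that `estimated_tpm(500)` returns a higher figure than the 125,000 TPM shown in the table for 500 users, since the table's blended figures likely discount cached input tokens; when in doubt, size quota from your deployment's measured QPM as recommended above.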
Select the model in Glean Workspace
- Go to Admin Console > Platform > LLM.
- Click on Add LLM.
- Select Vertex AI.
- Select Claude Sonnet 4.5 as the agentic model.
- Click Validate to ensure Glean can leverage the model.
- Once validated, click Save.

- To use Claude Sonnet 4.5 with Glean Assistant, agentic engine features must be turned on. See details here. Until these features are turned on, Glean Assistant will continue to use the large and small models you previously configured. You do not need to change your large and small models at this time.
- We will use Application Default Credentials to call the models, so no additional authentication is required.
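For reference, Application Default Credentials are resolved in a well-defined order (explicit key file, then gcloud user credentials, then the GCP metadata server). The sketch below is a simplified illustration of that order, not the real implementation, which lives in the google-auth library:

```python
import os


def adc_credential_source() -> str:
    """Simplified sketch of how Application Default Credentials are resolved."""
    # 1. An explicit service-account key file, if GOOGLE_APPLICATION_CREDENTIALS is set.
    explicit = os.environ.get("GOOGLE_APPLICATION_CREDENTIALS")
    if explicit:
        return f"service-account key file: {explicit}"

    # 2. User credentials written by `gcloud auth application-default login`.
    well_known = os.path.expanduser(
        "~/.config/gcloud/application_default_credentials.json"
    )
    if os.path.exists(well_known):
        return f"gcloud user credentials: {well_known}"

    # 3. Otherwise, the metadata server supplies the attached service account
    #    when running on GCP (as Glean does), with no extra configuration.
    return "metadata server (when running on GCP)"
```

Because Glean runs inside your GCP project, case 3 applies and no key files or extra authentication setup are needed.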
FAQ
How do you ensure data security?
All data is encrypted in transit between your Glean instance and the Vertex AI service, which runs in the same GCP region as your Glean instance. Please review the Vertex AI Generative AI and Data Governance guide. We have highlighted some relevant excerpts below (as of June 4, 2024):
- Foundation Model Training: By default, Google Cloud doesn’t use Customer Data to train its Foundation Models. Customers can use Google Cloud’s Foundation Models knowing that their prompts, responses, and any Adapter Model training data aren’t used for the training of Foundation Models.
- Prediction: Inputs and outputs processed by Foundation Models, Adapter Models, and Safety Classifiers during Prediction are Customer Data. Customer Data is never logged by Google without explicit permission from the customer by opting in to allow it to cache inputs and outputs.
Architecture Diagram
