Who can use image generation
Image generation is available to:

- Glean Key customers on GCP with access to Glean Assistant and Agents
- End users in tenants where the feature is enabled
Admins can enable or disable the feature in Admin Console → Assistant → Settings → Image generation.

From the end user's perspective, there's nothing to install. If your admin has enabled the feature, you'll see image responses directly in Glean Assistant.
Where you can generate images
You can use image generation anywhere you use Glean Assistant:

- Assistant chat in the Glean web application
- Plan & Execute mode in Agents when creating or refining multi-step plans

It is available in both assistant response modes:

- Fast mode – optimized for speed
- Thinking mode – optimized for deeper reasoning and richer use of your enterprise context
How to generate an image
You generate images using natural language prompts, just like any other question to the assistant.

1. Write your prompt
   Type a prompt that clearly asks for an image. Specify style, level of realism, audience, or any other constraints.
2. Submit and review
   Submit your prompt. The assistant will generate the image using Nano Banana Pro and return it inline in the chat, typically with a link to view or download.
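If you find yourself writing many similar prompts, it can help to think of them as a few reusable parts (subject, style, audience, extra constraints). The sketch below is purely illustrative of that structure; the helper name and fields are our own convention, not part of any Glean API.

```python
# Illustrative only: compose a clear image-generation prompt from parts.
# build_image_prompt and its fields are a hypothetical convention,
# not a Glean API -- you paste the resulting text into Assistant chat.

def build_image_prompt(subject, style=None, audience=None, constraints=()):
    """Combine subject, style, audience, and constraints into one prompt."""
    parts = [f"Generate an image of {subject}."]
    if style:
        parts.append(f"Style: {style}.")
    if audience:
        parts.append(f"Audience: {audience}.")
    for constraint in constraints:
        parts.append(f"{constraint}.")
    return " ".join(parts)

prompt = build_image_prompt(
    "the enterprise agent development lifecycle",
    style="simple black-and-white line drawing",
    audience="an executive audience",
    constraints=["Keep on-image text to short labels"],
)
print(prompt)
```

Keeping the parts separate makes it easy to iterate: change only the style or audience and resubmit, rather than rewriting the whole prompt.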
Example prompts
Using your work context:

- “Search for my scope and what I work on. Then generate an image that describes it visually.”

Other examples:

- “Generate a photorealistic image of a hamster flying a plane.”
- “Create a simple diagram that explains the enterprise agent development lifecycle, with clear stages from value to launch and monitoring.”
- “Generate a clean, line-art style diagram that shows ‘My Work’ at the center with Security, Agents, Connectors, and Customer Outcomes around it, suitable for a product overview slide.”
Prompting tips and best practices
To get better results from image generation:

Be explicit about subject and style
Specify visual style, detail level, and format to guide the model. Examples: “flat illustration, minimal text, pastel colors” | “photorealistic” | “simple black-and-white line drawing”

Specify the audience
Tell the assistant who will view the image to adjust complexity and tone. Examples: “for an executive audience” | “for a new engineer onboarding doc”

Limit on-image text
Short labels or headings work best. Long paragraphs should live in your document, not on the image.

Iterate with feedback
Ask the assistant to refine instead of starting from scratch. Examples: “regenerate with fewer details” | “simplify the diagram” | “change the color palette”

Use your enterprise context when it helps
For work-related visuals, reference your internal data so the assistant can ground the image in what you actually do. Examples: “my recent docs” | “my roadmap” | “my team’s projects”

Credits, pricing, and admin controls
- Credits: Each generated image consumes Flex Credits, similar to other LLM-powered features.
- Visibility: Image generation appears as a distinct line item in Admin Console usage and billing interfaces, so admins can track usage and adjust budgets or policies.
- Controls: Admins can enable or disable image generation globally in Assistant settings and combine this with existing Assistant and Agents permissions for finer-grained control.
Known limitations and quirks
Most of the time, images render directly in your chat. However, there are a few quirks to be aware of:

Occasional link-only responses
In some cases, you may see a clickable image link instead of an inline thumbnail. Clicking the link should still open the image in a new tab. If you open “Show work”, you may see a valid image link there even if the main response only shows the URL.

Activity-based prompts and broken links
Prompts that ask the assistant to analyze your recent activity or personal graph and then generate an image (for example, “Use my personal graph over the last 3 months, then generate an image”) can sometimes produce an image link that doesn’t load correctly or returns a browser error. If this happens:

- Try regenerating with a simpler prompt that doesn’t rely on “my last N months of work”
- If the problem persists, report it via the feedback controls in the chat (thumbs down with a brief description) so we can track it
Image safety and blocked requests
Requests that violate safety policies (explicit content, hateful or harassing imagery, real-person impersonation, or sensitive personal data) may be refused or heavily modified. For best results, avoid prompts that include:

- Real coworkers’ faces or full names
- Sensitive customer data, secrets, or production credentials
- Harmful, illegal, or discriminatory scenarios
Frequently asked questions
Can I upload an image and ask Glean to edit it?
Yes, you can attach an image and ask the assistant to adjust style, colors, or add/remove elements, subject to the same safety policies.
Which image model is used?
Image generation currently uses Nano Banana Pro by default. Additional models may be introduced over time for more advanced creative or marketing use cases; when available, they will be surfaced via the normal Assistant experience.
Where are images stored?
Generated images are stored and served through Glean’s image infrastructure and follow the same enterprise-grade security and access controls as other Glean content.
Do I need to switch modes to generate images?
No. Image generation is available in both Fast and Thinking modes, so you can choose speed or depth without losing access to image generation.