An AI model record in Maverick does more than point to a model name. It configures how the AI behaves, how much it outputs, how creative or precise it is, and which credentials it uses. Getting these eight settings right is what separates AI that occasionally helps from AI that consistently delivers accurate project updates.

1. Model Name

The model name is the identifier the AI provider uses to route your request — for example, "gpt-4o," "claude-3-5-sonnet-20241022," or "gemini-1.5-pro." Use the exact string from the provider's model list. An incorrect name returns a model-not-found error that is indistinguishable from the error a misconfigured provider produces, which makes it one of the most common sources of confusion during initial setup.
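
If your provider's SDK exposes a model listing, you can catch a bad name before it reaches production. Below is a minimal sketch using the OpenAI Python client; the validation helper is our own illustration, not a Maverick feature:

```python
# Sketch of pre-flight validation, assuming the OpenAI Python SDK.
# The helper function is illustrative; Maverick does not ship it.
from openai import OpenAI

def model_name_is_valid(api_key: str, model_name: str) -> bool:
    """Check a configured name against the provider's live model list."""
    client = OpenAI(api_key=api_key)
    available = {m.id for m in client.models.list()}
    return model_name in available

# "gpt-4o" passes; a typo like "gpt-40" fails here instead of surfacing
# later as a confusing model-not-found error.
```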

2. Provider

Each model record is linked to an AI provider record that holds the API credentials. Selecting the correct provider ensures that requests use the right API key and base URL. If you have multiple provider records for the same vendor — for example, separate keys for different departments — select the one intended for this model's users. A model pointed at the wrong provider will authenticate with the wrong key and fail.
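
The indirection looks roughly like this. The record shapes below are illustrative, not Maverick's actual schema; the point is that credentials live on the provider record and each model points at exactly one of them:

```python
# Illustrative record shapes (not Maverick's actual schema). Keys shown
# as "sk-..." are placeholders.
providers = {
    "openai-engineering": {"api_key": "sk-eng-...", "base_url": "https://api.openai.com/v1"},
    "openai-finance":     {"api_key": "sk-fin-...", "base_url": "https://api.openai.com/v1"},
}

model_record = {"name": "gpt-4o", "provider": "openai-engineering"}

# Linking to the wrong provider means authenticating with the wrong key,
# which fails even though the model name itself is correct.
creds = providers[model_record["provider"]]
```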

3. System Prompt

The system prompt is an instruction that runs silently at the start of every AI conversation. Use it to give the AI standing context: your project methodology, your organization's terminology, formatting preferences, or scope limits on what changes it's allowed to make. A well-written system prompt can dramatically improve consistency across all users without requiring each person to re-explain context in every session.
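
Under the chat-completions convention most providers use, the system prompt travels as the first message of every request. A minimal sketch, with prompt text that is our example rather than Maverick's:

```python
# The system message rides along with every request; users never see it.
SYSTEM_PROMPT = (
    "You are a project assistant for Acme Corp. We run two-week sprints, "
    "dates are ISO 8601, and you may only modify tasks in the active project."
)

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},  # injected automatically
    {"role": "user", "content": "Push the design review out one week."},
]
```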

4. Temperature

Temperature controls how deterministic or creative the AI's responses are. Values closer to 0 produce focused, predictable answers — best for project scheduling where precision matters. Higher values (above 0.7) introduce variation — better for brainstorming than for date calculations. For project management use cases in Maverick, a temperature of 0.1 to 0.3 produces the most reliable schedule updates.
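
In API terms, temperature is a single request parameter. Here's a sketch using the OpenAI Python client; other providers accept an equivalent parameter:

```python
# Low temperature for schedule math; raise it only for open-ended work.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    temperature=0.2,  # 0.1-0.3: focused, repeatable date calculations
    messages=[{"role": "user", "content": "Recalculate the critical path."}],
)
```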

5. Top P

Top P (nucleus sampling) is an alternative to temperature for controlling response variability. It restricts the AI to the smallest set of candidate tokens whose combined probability reaches the value you set: at 0.9, the model samples only from tokens that together account for 90 percent of the probability mass. Most Maverick users can leave this at the default; providers generally recommend adjusting temperature or Top P, not both. It's a fine-tuning control for advanced use cases where temperature alone isn't providing the right balance.
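
To see what the setting actually does, here's a toy re-implementation of nucleus filtering. It's conceptual only; providers implement this internally, and the token probabilities below are invented:

```python
# Conceptual sketch of nucleus (Top P) sampling, not a provider's actual
# implementation: keep the smallest set of candidate tokens whose
# cumulative probability reaches top_p, then renormalize.
def nucleus_filter(token_probs: dict[str, float], top_p: float) -> dict[str, float]:
    kept, total = {}, 0.0
    for token, p in sorted(token_probs.items(), key=lambda kv: -kv[1]):
        kept[token] = p
        total += p
        if total >= top_p:
            break
    return {t: p / total for t, p in kept.items()}

# With top_p=0.9, the long-tail candidate is never sampled:
print(nucleus_filter({"Monday": 0.6, "Tuesday": 0.3, "Friday": 0.1}, top_p=0.9))
# {'Monday': 0.666..., 'Tuesday': 0.333...}
```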

6. Max Output Tokens

Max output tokens caps how long the AI's response can be. Setting this too low truncates responses mid-sentence; setting it too high increases cost and latency. For routine project-update prompts — a few modified tasks — 1,000 to 2,000 tokens is ample. For complex multi-task restructuring or long project summaries, allow 4,000 or more. Match the limit to your most common prompt type for that model.
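
A sketch of matching the cap to the prompt type, again using the OpenAI client. The parameter is max_tokens there, though some newer APIs call it max_output_tokens; check your provider's documentation:

```python
# Cap output per prompt type: too low truncates, too high costs more.
from openai import OpenAI

ROUTINE_UPDATE_CAP = 2000  # a few modified tasks
RESTRUCTURE_CAP = 4000     # multi-task restructuring, long summaries

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    max_tokens=ROUTINE_UPDATE_CAP,
    messages=[{"role": "user", "content": "Mark tasks 12 and 14 complete."}],
)
```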

7. Deployment Name

Azure OpenAI users must provide a deployment name — the custom name given to the model when it was deployed in the Azure portal. This field is separate from the model name. Other providers don't use it. If you're on Azure and leave this blank, Maverick won't know which deployment to call and will return a routing error that may look like an authentication problem.
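
The distinction is visible in Azure's request URL, which routes by deployment name rather than model name. The resource and deployment names below are placeholders:

```python
# Azure OpenAI's documented URL pattern routes by deployment name.
resource = "acme-openai"          # your Azure OpenAI resource (placeholder)
deployment = "gpt4o-project-ai"   # the Deployment Name field (placeholder)
api_version = "2024-02-01"

url = (
    f"https://{resource}.openai.azure.com/openai/deployments/"
    f"{deployment}/chat/completions?api-version={api_version}"
)
# With no deployment name there is nothing to put in the path, which is
# why the resulting error can resemble an authentication failure.
```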

8. Per-Model Credential Overrides

Maverick lets you override the provider-level API key and base URL at the model level. This is useful when a single AI provider account has multiple deployments with different endpoints, or when you want a specific model billed to a different API key than the other models under the same provider. Use this sparingly — credential sprawl at the model level is harder to audit and rotate than a clean provider-level hierarchy.
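
The article doesn't spell out Maverick's resolution order, but the described behavior implies model-level values win over provider-level defaults. A sketch of that assumed logic:

```python
# Assumed resolution order (model overrides beat provider defaults);
# Maverick's internal implementation isn't published. Keys are placeholders.
def resolve_credentials(model: dict, provider: dict) -> dict:
    return {
        "api_key": model.get("api_key") or provider["api_key"],
        "base_url": model.get("base_url") or provider["base_url"],
    }

provider = {"api_key": "sk-team-default", "base_url": "https://api.openai.com/v1"}
model = {"name": "gpt-4o", "api_key": "sk-billing-override"}  # key only; base URL inherited

print(resolve_credentials(model, provider))
# {'api_key': 'sk-billing-override', 'base_url': 'https://api.openai.com/v1'}
```

Every override you add is one more secret to rotate, which is exactly the audit cost the paragraph above warns about.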