# Google Gemini
No GPU required. Get a free API key at aistudio.google.com/apikey.
## AI Studio (simplest)
```shell
export GEMINI_API_KEY=your-key
contenox backend add gemini --type gemini --api-key-env GEMINI_API_KEY
contenox config set default-model gemini-2.5-flash
contenox config set default-provider gemini
```
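To confirm the key works independently of contenox, you can hit the public Gemini REST endpoint directly. A minimal sketch, assuming `GEMINI_API_KEY` is exported as above:

```shell
# Sends a one-word prompt to gemini-2.5-flash via the Generative Language API.
# A valid key returns a JSON response with a "candidates" array; a bad key
# returns an error object instead.
curl -s "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent" \
  -H "x-goog-api-key: $GEMINI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"contents":[{"parts":[{"text":"ping"}]}]}'
```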
## Vertex AI
Vertex AI hosts Gemini (and Anthropic, Meta, Mistral) models in your GCP project.
| Type | Models |
|---|---|
| `vertex-google` | Gemini models via Vertex |
| `vertex-anthropic` | Claude models via Vertex |
| `vertex-meta` | Llama models via Vertex |
| `vertex-mistralai` | Mistral models via Vertex |
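All four backend types point at the same regional Vertex AI endpoint, so the `--url` value can be assembled from just a project ID and a region. A sketch, using a hypothetical project ID:

```shell
# Substitute your own values; us-central1 matches the examples below.
PROJECT_ID=my-project
REGION=us-central1
VERTEX_URL="https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/${REGION}"
echo "$VERTEX_URL"
```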
### With a service account JSON
```shell
export VERTEX_SA_JSON=$(cat /path/to/service-account.json)
contenox backend add vertex --type vertex-google \
  --url "https://us-central1-aiplatform.googleapis.com/v1/projects/$GOOGLE_CLOUD_PROJECT/locations/us-central1" \
  --api-key-env VERTEX_SA_JSON
contenox config set default-model gemini-2.5-flash
contenox config set default-provider vertex-google
```
### With Application Default Credentials (CLI only)
```shell
gcloud config set project YOUR_PROJECT_ID
gcloud services enable aiplatform.googleapis.com
gcloud auth application-default login
gcloud auth application-default set-quota-project YOUR_PROJECT_ID
contenox backend add vertex --type vertex-google \
  --url "https://us-central1-aiplatform.googleapis.com/v1/projects/YOUR_PROJECT_ID/locations/us-central1"
contenox config set default-model gemini-2.5-flash
contenox config set default-provider vertex-google
```
> **Important:** `set-quota-project` is required. Without it, every Vertex AI call returns `403 SERVICE_DISABLED`, even if you already ran `gcloud config set project`.
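If Vertex calls still fail with 403, one quick check is whether `set-quota-project` actually recorded the quota project. It writes a `quota_project_id` field into the ADC credentials file; the path below assumes the default ADC location on Linux/macOS:

```shell
# Should print a line like: "quota_project_id": "YOUR_PROJECT_ID"
# No output means the quota project was never set; rerun set-quota-project.
grep quota_project_id ~/.config/gcloud/application_default_credentials.json
```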