# Configure Resources
Takk automatically provisions cloud resources based on the type hints in your settings classes. When you declare a field with a recognised type (like PostgresDsn or RedisDsn), Takk detects the required infrastructure and provisions it automatically, unless the value already exists in your .env or as an environment variable.
## Automatic Provisioning
Simply declaring a field with a recognised DSN or resource type is enough to trigger provisioning:
```python
from pydantic import PostgresDsn
from pydantic_settings import BaseSettings


class AppSettings(BaseSettings):
    psql_uri: PostgresDsn  # Automatically provisions a serverless PostgreSQL cluster
```
Takk will provision a `ServerlessPostgresInstance` with sensible defaults; no extra configuration is needed.
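The detection step can be pictured as a scan over the settings class's type hints: each field with a recognised resource type needs provisioning unless a value for it is already in the environment. A minimal stdlib-only sketch of that idea (an illustration, not Takk's actual implementation; the `RECOGNISED` mapping and the stand-in DSN classes are invented here):

```python
import os
import typing


# Stand-ins for the recognised DSN types; in real code these come from pydantic.
class PostgresDsn(str): ...
class RedisDsn(str): ...


RECOGNISED = {PostgresDsn: "serverless-postgres", RedisDsn: "redis"}


def plan_provisioning(settings_cls: type) -> dict:
    """Return {field_name: resource_kind} for fields that still need provisioning."""
    plan = {}
    for field, hint in typing.get_type_hints(settings_cls).items():
        kind = RECOGNISED.get(hint)
        # A value already present in the environment short-circuits provisioning.
        if kind is not None and field.upper() not in os.environ:
            plan[field] = kind
    return plan


class AppSettings:
    psql_uri: PostgresDsn
    redis_uri: RedisDsn
    debug: bool  # not a resource type, so it is ignored
```

With neither `PSQL_URI` nor `REDIS_URI` set in the environment, `plan_provisioning(AppSettings)` reports both fields as needing resources; exporting `PSQL_URI` removes `psql_uri` from the plan.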
## Overriding Resource Configuration
To customise how a resource is provisioned, for example, to enable the pgvector extension, declare a resource field directly in your Project:
```python
from takk import Project, Job
from takk.resources import ServerlessPostgresInstance

from my_app.jobs import update_consumption, UpdateConsumptionArgs
from my_app.settings import AppSettings  # import path assumed; use wherever your settings class lives

project = Project(
    name="my-api",
    shared_secrets=[AppSettings],
    load_consumption=Job(
        main_function=update_consumption,
        arguments=UpdateConsumptionArgs(),
        # cron_schedule="0 * * * *",  # Every hour
    ),
    # Override the default serverless PostgreSQL to add extensions.
    # Without this line, Takk provisions ServerlessPostgresInstance with default settings.
    default=ServerlessPostgresInstance(extensions=["pgvector"]),
)
```
If only one resource of a given type is defined, Takk resolves all references to that type against it, regardless of the field name. You can call it anything:
```python
project = Project(
    name="my-api",
    ...
    my_db=ServerlessPostgresInstance(extensions=["pgvector"]),  # Name doesn't matter
)
```
Names only become significant when you define multiple resources of the same type.
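That resolution rule can be sketched in a few lines of plain Python (a simplified illustration only; `resolve` and the bare `ServerlessPostgresInstance` stand-in are invented for this example and are not part of Takk's API):

```python
class ServerlessPostgresInstance:
    """Stand-in for the real resource class."""


def resolve(resources, wanted_type, name=None):
    """Pick the resource a settings field should bind to."""
    matches = {k: v for k, v in resources.items() if isinstance(v, wanted_type)}
    if len(matches) == 1:
        # A lone resource of the type wins, whatever it was named.
        return next(iter(matches.values()))
    if name is not None and name in matches:
        # With several candidates, the reference's name disambiguates.
        return matches[name]
    raise LookupError("multiple resources of this type: a name is required")
```

`resolve({"my_db": ServerlessPostgresInstance()}, ServerlessPostgresInstance)` succeeds whatever the key is called, while two PostgreSQL resources force the caller to pass a name.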
## Multiple Resources of the Same Type
Define any number of independent resources by using distinct names:
```python
from takk import Project
from takk.resources import ServerlessPostgresInstance

project = Project(
    name="my-api",
    shared_secrets=[AppSettings],
    # Primary database with vector search support
    default=ServerlessPostgresInstance(extensions=["pgvector"]),
    # Secondary database with scheduled-jobs support
    other_psql=ServerlessPostgresInstance(extensions=["pg_cron"]),
)
```
When multiple resources exist, use `ResourceRef` to point each settings field at the right one by name:
```python
from typing import Annotated

from pydantic import PostgresDsn
from pydantic_settings import BaseSettings

from takk.secrets import ResourceRef, ResourceTags


class AppSettings(BaseSettings):
    # Resolves to the "default" PostgreSQL resource
    psql_uri: PostgresDsn

    # Resolves to the "other_psql" PostgreSQL resource
    analytics_uri: Annotated[str, ResourceRef(ResourceTags.psql_dsn, name="other_psql")]
```
## Supported Resources
### Serverless PostgreSQL
Provisioned automatically when `PostgresDsn` or any Postgres shorthand type is detected. Scales to zero when idle and up to `max_cpus` under load.
```python
from takk.resources import ServerlessPostgresInstance

default=ServerlessPostgresInstance(
    version=16,                          # PostgreSQL version (only 16 supported)
    min_cpus=0,                          # Minimum CPU units (scales to zero by default)
    max_cpus=4,                          # Maximum CPU units
    extensions=["pgvector", "pg_cron"],  # PostgreSQL extensions to enable
)
```
Trigger types (any of these triggers automatic provisioning):
| Type | Equivalent |
| --- | --- |
| `PostgresDsn` | Full DSN, the most common trigger |
| `PostgresHost` | `Annotated[str, ResourceTags.psql_host]` |
| `PostgresName` | `Annotated[str, ResourceTags.psql_name]` |
| `PostgresUsername` | `Annotated[str, ResourceTags.psql_username]` |
| `PostgresPassword` | `Annotated[SecretStr, ResourceTags.psql_password]` |
| `PostgresSsl` | `Annotated[str, ResourceTags.psql_ssl]` |
| `DatabaseHost` | `Annotated[str, ResourceTags.database_host]` |
| `DatabaseName` | `Annotated[str, ResourceTags.database_name]` |
| `DatabaseUsername` | `Annotated[str, ResourceTags.database_username]` |
| `DatabasePassword` | `Annotated[str, ResourceTags.database_password]` |
| `DatabaseSsl` | `Annotated[str, ResourceTags.database_ssl]` |
Supported extensions include: `pgvector`, `pg_cron`, `postgis`, `timescaledb`, `pgcrypto`, `hstore`, `uuid-ossp`, `pg_trgm`, `btree_gin`, `btree_gist`, `citext`, `fuzzystrmatch`, `intarray`, `ltree`, `unaccent`, and more.
### Dedicated PostgreSQL
For workloads that require a dedicated, always-on cluster with predictable performance and optional high availability.
```python
from takk.resources import PostgresInstance

primary_db=PostgresInstance(
    version=17,                # PostgreSQL version: 14, 15, 16, or 17
    min_vcpus=2,               # Minimum vCPUs
    min_gb_ram=4,              # Minimum RAM in GB
    number_of_nodes=2,         # 1 (standalone) or 2 (high availability)
    k_iops=15,                 # IOPS tier: 5 or 15
    is_backup_disabled=False,  # Keep automated backups enabled
)
```
Trigger types:
| Type | Equivalent |
| --- | --- |
| `PostgresDsn` | Full DSN |
| `PostgresHost` | `Annotated[str, ResourceTags.psql_host]` |
### Redis
Provisioned automatically when `RedisDsn` is detected. Used for caching, message queues, and pub/sub.
```python
from takk.resources import RedisInstance

cache=RedisInstance(
    version="7.2.11",   # Redis version
    number_of_nodes=1,  # Number of nodes
    min_vcpus=0,
    min_gb_ram=1,
)
```
Trigger types:
| Type | Equivalent |
| --- | --- |
| `RedisDsn` | Full DSN, triggers automatic Redis provisioning |
### MongoDB
Provisioned automatically when `MongoDsn` is detected.
```python
from takk.resources import MongoDBInstance

documents=MongoDBInstance(
    version="7.0",      # MongoDB version
    number_of_nodes=1,  # 1 (standalone) or 3 (replica set)
    min_vcpus=0,
    min_gb_ram=16,
)
```
Trigger types:
| Type | Equivalent |
| --- | --- |
| `MongoDsn` | Full DSN, triggers automatic MongoDB provisioning |
### AI / LLM Resources
AI model connections are provisioned as managed inference endpoints rather than self-hosted infrastructure.
Use `AiToken`, `AiBaseAPI`, or `AiBaseUrl` in your settings to declare what your application needs.
The `Ai` prefix is intentionally generic: a single endpoint can serve Chat, Vision, Embedding, and AudioTranscriber models. The model type is chosen at call time, not at provisioning time.
- `AiBaseAPI`: for OpenAI-compatible endpoints. The base URL includes `/v1`.
- `AiBaseUrl`: for Anthropic-compatible endpoints. The base URL omits `/v1`.
- `AiToken`: the API key or bearer token for the provider.
```python
from pydantic_settings import BaseSettings

from takk.secrets import AiToken, AiBaseAPI


class AISettings(BaseSettings):
    # One connection serves all model types: Chat, Vision, Embedding, AudioTranscriber
    ai_api: AiBaseAPI  # OpenAI-compatible base URL (includes /v1)
    ai_token: AiToken
```
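To see why one connection can serve several model types, consider how request URLs are built at call time. A short sketch (the endpoint paths follow the usual OpenAI- and Anthropic-style API conventions; the helper functions here are invented for illustration, not part of Takk):

```python
def openai_endpoint(base_api: str, kind: str) -> str:
    """Build a request URL from an AiBaseAPI value (already ends in /v1)."""
    paths = {
        "chat": "chat/completions",
        "embedding": "embeddings",
        "transcription": "audio/transcriptions",
    }
    return f"{base_api.rstrip('/')}/{paths[kind]}"


def anthropic_endpoint(base_url: str) -> str:
    """Build a request URL from an AiBaseUrl value (no /v1 suffix)."""
    return f"{base_url.rstrip('/')}/v1/messages"
```

The same `base_api` yields `.../v1/chat/completions` for chat and `.../v1/embeddings` for embeddings; nothing about the provisioned resource fixes the model type.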
If your project uses multiple LLM providers, use `ResourceRef` with distinct names. The same single-resource shortcut applies here too:
```python
from typing import Annotated

from pydantic import AnyUrl, SecretStr
from pydantic_settings import BaseSettings

from takk.secrets import AiToken, AiBaseAPI, ResourceRef, ResourceTags


class AISettings(BaseSettings):
    # Default provider
    ai_api: AiBaseAPI
    ai_token: AiToken

    # Second provider (only needed when connecting to multiple LLM services)
    embed_api: Annotated[AnyUrl, ResourceRef(ResourceTags.llm_base_api, name="embed")]
    embed_token: Annotated[SecretStr, ResourceRef(ResourceTags.llm_token, name="embed")]
```
AI shorthand types:
| Shorthand | Equivalent | Use for |
| --- | --- | --- |
| `AiBaseAPI` | `Annotated[AnyUrl, ResourceTags.llm_base_api]` | OpenAI-compatible endpoints (includes `/v1`) |
| `AiBaseUrl` | `Annotated[AnyUrl, ResourceTags.llm_base_url]` | Anthropic-compatible endpoints (base URL only) |
| `AiToken` | `Annotated[SecretStr, ResourceTags.llm_token]` | API key / token for any LLM provider |
| `LLMBaseAPI` | Same as `AiBaseAPI` | Alias |
| `LLMBaseUrl` | Same as `AiBaseUrl` | Alias |
| `LLMToken` | Same as `AiToken` | Alias |

