# Secrets Management
Every real application needs secrets—database passwords, API keys, OAuth tokens, and service credentials. Hardcoding these values into your code is dangerous and inflexible. Takk solves this by letting you declare what secrets your application needs using type-safe Python code, then managing those secrets securely across all your environments.
## How Secrets Work
Secrets in Takk are defined using the `BaseSettings` class from pydantic-settings. Instead of scattered environment variables or configuration files, you write a Python class that describes exactly what your application expects:
```python
from pydantic import SecretStr
from pydantic_settings import BaseSettings

class SharedSecrets(BaseSettings):
    openai_api_key: SecretStr
    stripe_secret_key: SecretStr
```
This isn't just documentation—it's executable configuration. When Takk builds your project, it reads this class definition and knows your application requires two secret values. It creates secure storage for them in each environment (test, production, preview branches) and prompts you to fill in the actual values through the dashboard.
At runtime, these secrets are injected into your application as environment variables. Your code reads them using Pydantic's settings mechanism, which automatically validates types and fails fast if something is missing. The secrets themselves never touch your source code or version control.
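The fail-fast behavior can be illustrated without the platform itself. The sketch below is a simplified stand-in (plain dictionaries instead of pydantic) showing the idea: every declared secret must be present in the environment before the application starts.

```python
# Simplified sketch of the fail-fast idea, not the actual Takk runtime:
# every declared secret must be present before the application starts.
REQUIRED_SECRETS = ["OPENAI_API_KEY", "STRIPE_SECRET_KEY"]

def load_secrets(env: dict) -> dict:
    """Return the declared secrets, raising immediately if any is missing."""
    missing = [name for name in REQUIRED_SECRETS if name not in env]
    if missing:
        raise RuntimeError(f"missing secrets: {', '.join(missing)}")
    return {name: env[name] for name in REQUIRED_SECRETS}
```

In the real setup, instantiating `SharedSecrets()` performs this check for you: pydantic reads the injected environment variables, validates the declared types, and raises a validation error on startup if a value is absent.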
Because your configuration is Python instead of YAML or JSON, Takk gains deep insight into your infrastructure needs. It can detect which database drivers you're using, which ML frameworks you've imported, and which cloud services you're connecting to. This enables intelligent provisioning and version matching that would be impossible with static config files.
## Declaring Secrets in Your Project

To begin, create a Pydantic settings class that represents the required configuration for your service:

```python
from pydantic import PostgresDsn, SecretStr
from pydantic_settings import BaseSettings

class SharedSecrets(BaseSettings):
    openai_api_key: SecretStr
    psql_url: PostgresDsn
```
You then register this class in your project definition:
```python
project = Project(
    name="my-project",
    shared_secrets=[ObjectStorageConfig, SharedSecrets],
    server=FastAPIApp(app),
)
```
With that in place, no manual secret handling is needed. The platform will detect the settings and guide you through assigning values for each environment.
## Resource Tags and Dedicated Secret Types
Under the hood, Takk uses a ResourceTags enum to identify all kinds of infrastructure resources. Each tag represents a specific resource type that Takk knows how to provision and manage. You can annotate any field with a resource tag to tell Takk what infrastructure it represents:
```python
from typing import Annotated
from takk.models import ResourceTags
from pydantic_settings import BaseSettings

class MyConfig(BaseSettings):
    s3_key: Annotated[str, ResourceTags.s3_secret_key]
    nats_creds: Annotated[str, ResourceTags.nats_creds_file]
```
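Because resource tags are ordinary `Annotated` metadata, they can be read back from the class at build time. The following self-contained sketch uses stand-in types (a dummy `ResourceTags` enum rather than the real one from `takk.models`) to show the mechanism this kind of discovery can rely on:

```python
from enum import Enum, auto
from typing import Annotated, get_type_hints

class ResourceTags(Enum):  # stand-in for takk.models.ResourceTags
    s3_secret_key = auto()
    nats_creds_file = auto()

class MyConfig:  # stand-in for a BaseSettings subclass
    s3_key: Annotated[str, ResourceTags.s3_secret_key]
    nats_creds: Annotated[str, ResourceTags.nats_creds_file]

def discover_tags(cls):
    """Collect every ResourceTags value attached to a field annotation."""
    hints = get_type_hints(cls, include_extras=True)
    return {
        field: meta
        for field, hint in hints.items()
        for meta in getattr(hint, "__metadata__", ())
        if isinstance(meta, ResourceTags)
    }
```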
### Shorthand Types

For common resource tags, Takk provides dedicated shorthand types in `takk.secrets` and `takk.models`. These are `Annotated` aliases that make settings classes more concise:
```python
from takk.secrets import S3SecretKey, LokiToken, NatsCredsFile
from pydantic import PostgresDsn, NatsDsn
from pydantic_settings import BaseSettings

class InfraConfig(BaseSettings):
    psql_url: PostgresDsn
    nats_url: NatsDsn
    nats_creds: NatsCredsFile
    s3_key: S3SecretKey
    loki_token: LokiToken
```
**PostgreSQL:**

| Shorthand | Equivalent |
|---|---|
| `PostgresHost` | `Annotated[str, ResourceTags.psql_host]` |
| `PostgresName` | `Annotated[str, ResourceTags.psql_name]` |
| `PostgresUsername` | `Annotated[str, ResourceTags.psql_username]` |
| `PostgresPassword` | `Annotated[SecretStr, ResourceTags.psql_password]` |
| `PostgresSsl` | `Annotated[str, ResourceTags.psql_ssl]` |

**Database (generic / serverless):**

Used with serverless PostgreSQL and generic database connections. Takk maps these fields to a managed PostgreSQL instance automatically.

| Shorthand | Equivalent |
|---|---|
| `DatabaseHost` | `Annotated[str, ResourceTags.database_host]` |
| `DatabaseName` | `Annotated[str, ResourceTags.database_name]` |
| `DatabaseUsername` | `Annotated[str, ResourceTags.database_username]` |
| `DatabasePassword` | `Annotated[str, ResourceTags.database_password]` |
| `DatabaseSsl` | `Annotated[str, ResourceTags.database_ssl]` |

**S3 / Object Storage:**

| Shorthand | Equivalent |
|---|---|
| `S3Endpoint` | `Annotated[AnyUrl, ResourceTags.s3_endpoint]` |
| `S3AccessKey` | `Annotated[str, ResourceTags.s3_access_key]` |
| `S3SecretKey` | `Annotated[SecretStr, ResourceTags.s3_secret_key]` |
| `S3RegionName` | `Annotated[str, ResourceTags.s3_region_name]` |
| `S3BucketName` | `Annotated[str, ResourceTags.s3_bucket_name]` |

**SQS / Message Queue:**

| Shorthand | Equivalent |
|---|---|
| `SqsEndpoint` | `Annotated[AnyUrl, ResourceTags.sqs_endpoint]` |
| `SqsAccessKey` | `Annotated[str, ResourceTags.sqs_access_key]` |
| `SqsSecretKey` | `Annotated[SecretStr, ResourceTags.sqs_secret_key]` |
| `SqsRegionName` | `Annotated[str, ResourceTags.sqs_region_name]` |

**NATS:**

| Shorthand | Equivalent |
|---|---|
| `NatsCredsFile` | `Annotated[str, ResourceTags.nats_creds_file]` |

**Kafka:**

| Shorthand | Equivalent |
|---|---|
| `KafkaBootstrapServers` | `Annotated[str, ResourceTags.kafka_bootstrap_servers]` |
| `KafkaHost` | `Annotated[str, ResourceTags.kafka_host]` |
| `KafkaPort` | `Annotated[int, ResourceTags.kafka_port]` |

**MLflow:**

| Shorthand | Equivalent |
|---|---|
| `MlflowTrackingUri` | `Annotated[AnyUrl, ResourceTags.mlflow_tracking_uri]` |
| `MlflowRegistryUri` | `Annotated[AnyUrl, ResourceTags.mlflow_registry_uri]` |

**ClickHouse:**

| Shorthand | Equivalent |
|---|---|
| `ClickhouseHost` | `Annotated[str, ResourceTags.clickhouse_host]` |
| `ClickhousePort` | `Annotated[int, ResourceTags.clickhouse_port]` |
| `ClickhouseUsername` | `Annotated[str, ResourceTags.clickhouse_username]` |
| `ClickhousePassword` | `Annotated[SecretStr, ResourceTags.clickhouse_password]` |
| `ClickhouseDatabase` | `Annotated[str, ResourceTags.clickhouse_database]` |

**OpenSearch:**

| Shorthand | Equivalent |
|---|---|
| `OpenSearchUrl` | `Annotated[AnyUrl, ResourceTags.opensearch_url]` |
| `OpenSearchHost` | `Annotated[str, ResourceTags.opensearch_host]` |
| `OpenSearchPort` | `Annotated[int, ResourceTags.opensearch_port]` |
| `OpenSearchUser` | `Annotated[str, ResourceTags.opensearch_user]` |
| `OpenSearchPassword` | `Annotated[SecretStr, ResourceTags.opensearch_password]` |

**Spark:**

| Shorthand | Equivalent |
|---|---|
| `SparkConnectUrl` | `Annotated[str, ResourceTags.spark_connect_url]` |
| `SparkMasterUrl` | `Annotated[str, ResourceTags.spark_master_url]` |
| `SparkUiUrl` | `Annotated[str, ResourceTags.spark_ui_url]` |

**LLM:**

| Shorthand | Equivalent |
|---|---|
| `LLMBaseAPI` | `Annotated[AnyUrl, ResourceTags.llm_base_api]` |
| `LLMBaseUrl` | `Annotated[AnyUrl, ResourceTags.llm_base_url]` |
| `LLMToken` | `Annotated[SecretStr, ResourceTags.llm_token]` |

**Observability:**

| Shorthand | Equivalent |
|---|---|
| `LokiPushEndpoint` | `Annotated[str, ResourceTags.loki_push_endpoint]` |
| `LokiUser` | `Annotated[str, ResourceTags.loki_user]` |
| `LokiToken` | `Annotated[SecretStr, ResourceTags.loki_token]` |
| `MimirBaseUrl` | `Annotated[str, ResourceTags.mimir_base_url]` |
| `MimirToken` | `Annotated[SecretStr, ResourceTags.mimir_token]` |

**Notifications:**

| Shorthand | Equivalent |
|---|---|
| `SlackWebhookUrl` | `Annotated[str, ResourceTags.slack_webhook_url]` |
| `DiscordWebhookUrl` | `Annotated[str, ResourceTags.discord_webhook_url]` |
| `TeamsWebhookUrl` | `Annotated[str, ResourceTags.teams_webhook_url]` |
## Multiple Resources
By default, each resource tag corresponds to a single resource named "default". If your application needs multiple instances of the same resource type—for example, two PostgreSQL databases or two NATS clusters—you can use ResourceRef to give each one a distinct name:
```python
from typing import Annotated
from takk.models import ResourceTags
from takk.secrets import ResourceRef, NatsCredsFile
from pydantic import NatsDsn
from pydantic_settings import BaseSettings

class MultiResourceConfig(BaseSettings):
    # Default NATS cluster (equivalent to ResourceRef(ResourceTags.nats_dsn, "default"))
    nats_url: NatsDsn
    nats_creds: NatsCredsFile

    # A second NATS cluster named "other_nats"
    second_nats_url: Annotated[str, ResourceRef(ResourceTags.nats_dsn, "other_nats")]
    second_nats_creds: Annotated[str, ResourceRef(ResourceTags.nats_creds_file, "other_nats")]
```
When Takk sees a plain NatsDsn or S3SecretKey, it treats it as shorthand for Annotated[str, ResourceRef(ResourceTags.<tag>, "default")]. The explicit ResourceRef form lets you provision and reference any number of independent resources of the same type, and Takk will manage each one separately.
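The normalization rule can be pictured with a small stand-in (a hypothetical `ResourceRef` dataclass, not the takk implementation): a bare tag simply expands to a reference named "default".

```python
from dataclasses import dataclass
from enum import Enum, auto

class ResourceTags(Enum):  # stand-in for takk.models.ResourceTags
    nats_dsn = auto()

@dataclass(frozen=True)
class ResourceRef:  # stand-in for takk.secrets.ResourceRef
    tag: ResourceTags
    name: str = "default"

def normalize(meta):
    """Expand a bare tag to a reference to the "default" resource."""
    return meta if isinstance(meta, ResourceRef) else ResourceRef(meta)
```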
## Service URLs
You can reference the URLs of other services in your project using the ServiceUrl annotation. This is useful when one service needs to call another, or when you need to expose a public-facing URL in configuration:
```python
from typing import Annotated
from pydantic import AnyUrl
from pydantic_settings import BaseSettings
from takk.secrets import ServiceUrl

class ServiceUrls(BaseSettings):
    # The public URL for "my_app"
    app_url: Annotated[AnyUrl, ServiceUrl("my_app", "external")]

    # The internal (cluster-local) URL for "my_app"
    internal_app_url: Annotated[AnyUrl, ServiceUrl("my_app", "internal")]
```
The first argument to `ServiceUrl` is the service name as defined in your project, and the second specifies the URL type: "external" for the public-facing URL (with your custom domain) or "internal" for the cluster-internal URL. Takk resolves these automatically at deploy time based on your project's service configuration.
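Conceptually, resolution is a lookup keyed on service name and URL kind. The sketch below is a guess at the shape of that step, not Takk's actual resolver; the internal hostname format in particular is an assumption.

```python
def resolve_service_url(service: str, kind: str, external_domains: dict) -> str:
    """Resolve a ServiceUrl(service, kind) annotation to a concrete URL (illustrative)."""
    if kind == "external":
        # Public URL on the project's custom domain.
        return f"https://{external_domains[service]}"
    if kind == "internal":
        # Cluster-local hostname; the exact format here is hypothetical.
        return f"http://{service}.internal"
    raise ValueError(f"unknown URL kind: {kind!r}")
```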
## Putting It All Together
Here is a more complete example combining dedicated types, named resources, and service URLs:
```python
from typing import Annotated
from pydantic import AnyUrl, NatsDsn
from pydantic_settings import BaseSettings
from takk.models import ResourceTags
from takk.secrets import ResourceRef, ServiceUrl, NatsCredsFile

class SharedSettings(BaseSettings):
    # Service URLs
    app_url: Annotated[AnyUrl, ServiceUrl("my_app", "external")]

    # Default NATS resource
    nats_url: NatsDsn
    nats_creds: NatsCredsFile

    # A second NATS cluster
    second_nats_url: Annotated[str, ResourceRef(ResourceTags.nats_dsn, "other_nats")]
    second_nats_creds: Annotated[str, ResourceRef(ResourceTags.nats_creds_file, "other_nats")]
```
## Intelligent Version Matching
Because configuration and imports are written in Python, the platform can determine not only which resources are needed, but also which versions to deploy. For example, if the project depends on a specific version of mlflow, the platform can automatically provision an MLflow server that matches that version, avoiding compatibility issues between client and server.
The same applies to systems such as Spark, S3/MinIO object storage SDKs, and any service where the runtime infrastructure must align with the Python package version. If your project imports a Spark dependency or references a particular version in pyproject.toml, the deployed Spark cluster will be version-compatible. This drastically reduces configuration divergence and runtime errors.
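As a rough illustration of the matching step, the helper below extracts a pinned version from a requirement string so that a server of the same version could be provisioned. This is a simplification of whatever Takk actually does, shown only to make the idea concrete.

```python
import re

def pinned_version(requirement: str):
    """Return the exact version from a `pkg==X.Y.Z` requirement string, else None."""
    match = re.fullmatch(r"[A-Za-z0-9_.\-]+==([0-9][\w.]*)", requirement.strip())
    return match.group(1) if match else None
```

A pinned `mlflow==2.9.2` dependency would yield `"2.9.2"`, while an open range like `mlflow>=2.0` yields `None` and would need a different resolution strategy.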
## Resource Discovery
The platform inspects the project and looks for configuration types that are known to represent infrastructure resources. For example, if a settings class includes types such as RedisDsn, PostgresDsn, MySQLDsn, NatsDsn, KafkaDsn, ClickHouseDsn, or MongoDsn, the system recognizes that these services may be required. It will then ask whether it should provision those services for you and automatically generate the corresponding connection credentials.
This means you do not need to pre-configure databases, message brokers, or analytics storage. The intent is captured by your Pydantic models, and the platform handles the resource creation and secret generation that follow from that intent.
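The discovery step can be sketched as a scan over annotation type names. The stand-in classes below replace pydantic's DSN types so the example is self-contained, and the mapping table is illustrative rather than Takk's actual catalogue:

```python
from typing import get_type_hints

class PostgresDsn(str): ...  # stand-ins for the pydantic DSN types
class NatsDsn(str): ...

# Illustrative subset of a tag catalogue, not the real one.
KNOWN_RESOURCES = {"PostgresDsn": "postgresql", "NatsDsn": "nats", "RedisDsn": "redis"}

class AppSettings:  # stand-in for a BaseSettings subclass
    database_url: PostgresDsn
    queue_url: NatsDsn
    debug: bool  # unrelated fields are ignored

def discover_resources(cls):
    """Infer required services from the DSN types used in the annotations."""
    hints = get_type_hints(cls)
    return {
        KNOWN_RESOURCES[t.__name__]
        for t in hints.values()
        if getattr(t, "__name__", None) in KNOWN_RESOURCES
    }
```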
## ML and AI Discovery

There is similar intelligence for applications that rely on machine learning, AI, and large-scale data features. If your repository contains references to MLflow, Spark, S3 or other object storage tooling, or API-driven platforms such as OpenAI or Anthropic, the platform may suggest provisioning compatible compute environments and creating any needed access tokens.
For example, if an import or configuration parameter implies the use of the OpenAI API, you may be prompted to allow the platform to generate and store an API key for all deployment environments. If Spark jobs are detected, it may propose setting up a cluster with matching versions automatically. Object storage needs can result in new buckets, access policies, and secret assignments without any additional coding.
## Secrets Across Environments
Each deployment environment receives its own credentials. Development and preview environments can use temporary or low-privilege credentials, while production receives fully secured values. When changes move through the release pipeline, you may keep, regenerate, or update the secret values depending on the required security posture.
## Summary
The secrets system is centered on Pydantic settings models, which serve as the single source of truth for required configuration. From these models, the platform:
- secures values automatically,
- maintains isolation between environments,
- discovers required infrastructure and offers to provision it, and
- creates or manages tokens for AI, ML, object storage, and similar external services.
All of these capabilities allow you to write Python configuration classes while the platform handles the difficult work of managing secrets at scale.

