Secrets Management

Many applications need access to sensitive values such as database passwords, API keys, object storage credentials, or ML/AI service tokens. The platform handles these securely and automatically using Pydantic Settings, so that developers can focus on writing code rather than managing environment secrets.

How It Works

Secrets are defined with the pydantic-settings package, imported like this:

python
from pydantic_settings import BaseSettings

A settings class that inherits from BaseSettings becomes a secret container. The platform reads these declarations and creates the secure storage needed for each deployment environment, managing values separately for production, test, development, and temporary preview environments such as pull requests. When your application runs, the values are injected as environment variables, so application code never hard-codes sensitive values.
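As a minimal sketch of that injection (standard pydantic-settings behavior, with an assumed field name), note that field names map to environment variables case-insensitively:

python
import os

from pydantic import SecretStr
from pydantic_settings import BaseSettings

class AppSecrets(BaseSettings):
    openai_api_key: SecretStr

# Stand-in for the value the platform injects at runtime.
os.environ["OPENAI_API_KEY"] = "sk-example"

secrets = AppSecrets()
print(secrets.openai_api_key)  # prints ********** — SecretStr masks the value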

Because configuration lives in Python rather than in YAML files or a dashboard, the platform gains a deeper understanding of your application's dependencies. This lets it make accurate decisions about which infrastructure, and which versions, to deploy.

Declaring Secrets in Your Project

To begin, create a Pydantic settings class that represents the required configuration for your service:

python
from pydantic import PostgresDsn, SecretStr
from pydantic_settings import BaseSettings

class SharedSecrets(BaseSettings):
    openai_api_key: SecretStr
    psql_url: PostgresDsn

You then register this class in your project definition:

python
project = Project(
    name="my-project",
    # Register every settings class whose values the platform should manage.
    shared_secrets=[ObjectStorageConfig, SharedSecrets],
    server=FastAPIApp(app),
)

With that in place, no manual secret handling is needed. The platform will detect the settings and guide you through assigning values for each environment.
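Once values are assigned, application code simply instantiates the class. A small sketch, continuing from the SharedSecrets class above (get_secret_value() is standard Pydantic, not platform-specific):

python
# The platform has injected the values as environment variables,
# so construction needs no arguments.
secrets = SharedSecrets()

# SecretStr masks the key in logs and reprs; unwrap it only where needed.
api_key = secrets.openai_api_key.get_secret_value()

# DSN fields are validated on load; cast to str for client libraries.
database_url = str(secrets.psql_url)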

Intelligent Version Matching

Because configuration and imports are written in Python, the platform can determine not only which resources are needed, but also which versions to deploy. For example, if the project depends on a specific version of mlflow, the platform can automatically provision an MLflow server that matches that version, avoiding compatibility issues between client and server.
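Roughly, the pinned client version drives the server version. A hypothetical illustration (not the platform's actual mechanism; assumes mlflow is installed):

python
from importlib.metadata import version

# The client version pinned in pyproject.toml determines which
# MLflow tracking server version gets provisioned.
mlflow_client_version = version("mlflow")
print(f"Provision an MLflow server matching client {mlflow_client_version}")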

The same applies to systems such as Spark, S3/MinIO object storage SDKs, and any service where the runtime infrastructure must align with the Python package version. If your project imports a Spark dependency or references a particular version in pyproject.toml, the deployed Spark cluster will be version-compatible. This drastically reduces configuration divergence and runtime errors.

Resource Discovery

The platform inspects the project and looks for configuration types that are known to represent infrastructure resources. For example, if a settings class includes types such as RedisDsn, PostgresDsn, MySQLDsn, NatsDsn, ClickHouseDsn, or MongoDsn, the system recognizes that these services may be required. It will then ask whether it should provision them for you and automatically generate the corresponding connection credentials.

This means you do not need to pre-configure databases, message brokers, or analytics storage. The intent is captured by your Pydantic models, and the platform handles the resource creation and secret generation that follow from that intent.
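Concretely, a settings class like this one (a sketch using Pydantic's standard DSN types) is enough to signal that Redis and PostgreSQL are needed:

python
from pydantic import PostgresDsn, RedisDsn
from pydantic_settings import BaseSettings

class InfraSecrets(BaseSettings):
    cache_url: RedisDsn        # signals that a Redis instance is needed
    database_url: PostgresDsn  # signals that a PostgreSQL database is needed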

ML and AI Discovery

There is similar intelligence for applications that rely on machine learning, AI, or large-scale data processing. If your repository contains references to MLflow, Spark, S3 or other object storage tooling, or API-driven platforms such as OpenAI or Anthropic, the platform may suggest provisioning compatible compute environments and creating any needed access tokens.

For example, if an import or configuration parameter implies the use of the OpenAI API, you may be prompted to allow the platform to generate and store an API key for all deployment environments. If Spark jobs are detected, it may propose setting up a cluster with matching versions automatically. Object storage needs can result in new buckets, access policies, and secret assignments without any additional coding.
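As an illustration (assuming the official openai package), an import like this is the kind of signal that could trigger that offer:

python
from openai import OpenAI

# The client reads OPENAI_API_KEY from the environment by default,
# so the platform-injected secret is picked up without extra wiring.
client = OpenAI()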

Secrets Across Environments

Each deployment environment receives its own credentials. Development and preview environments can use temporary or low-privilege credentials, while production receives fully secured values. When changes move through the release pipeline, you may keep, regenerate, or update the secret values depending on the required security posture.
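For local development, one common pattern (a sketch, not necessarily the platform's mechanism) is to point pydantic-settings at an environment-specific file, while deployed environments receive their values by direct injection:

python
from pydantic import SecretStr
from pydantic_settings import BaseSettings, SettingsConfigDict

class DevSecrets(BaseSettings):
    # Read low-privilege values from a local file in development;
    # deployed environments get real values as environment variables.
    model_config = SettingsConfigDict(env_file=".env.development")

    openai_api_key: SecretStr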

Summary

The secrets system is centered on Pydantic settings models, which serve as the single source of truth for required configuration. From these models, the platform:

  • secures values automatically,
  • maintains isolation between environments,
  • discovers required infrastructure and offers to provision it, and
  • creates or manages tokens for AI, ML, object storage, and similar external services.

All of these capabilities allow you to write Python configuration classes while the platform handles the difficult work of managing secrets at scale.
