Somewhere in the dusty archives of database history, someone decided that column type declarations were the perfect place to enforce business rules. It made a kind of intuitive sense at the time: if a Social Security Number is always 11 characters, just slap VARCHAR(11) on there and let the database handle it.

Except that's not how it works in practice. All it takes is one entry that needs a few more characters to reveal the monstrosities lurking within this approach. Like most databases, Postgres tracks object dependencies. Column type information propagates to numerous downstream database objects, and even a single such dependency can block column modifications. The longer the dependency chain, the worse the situation becomes.

Fortunately there's a better way, and it's been sitting right there in the SQL standard since the beginning: CHECK constraints. They do the same job without the collateral repercussions.

Let's see how.

Catalogs Cast in Stone

When declaring a column as VARCHAR(10), Postgres dutifully records that type modifier in the attribute catalog, specifically in the pg_attribute.atttypmod column. That value becomes metadata about the column itself, baked directly into the type definition. It's a property of the column that Postgres enforces every time a value is stored in or cast to that type.
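As a quick illustration (a minimal sketch; typmod_demo is a throwaway table used only for this query), the modifier is right there in the catalog:

CREATE TABLE typmod_demo (junk VARCHAR(10));

-- atttypmod stores the declared length plus a 4-byte header,
-- so VARCHAR(10) shows up as 14.
SELECT attname, atttypmod, format_type(atttypid, atttypmod) AS declared_type
FROM pg_attribute
WHERE attrelid = 'typmod_demo'::regclass
  AND attnum > 0;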

Why does this matter? Because anything that references that column inherits the type definition wholesale. Views, materialized views, functions with typed parameters, composite types, you name it. They all pick up VARCHAR(10) as if it were gospel, because from the catalog's perspective, it is.

Consider a straightforward scenario:

CREATE TABLE my_table (
  junk VARCHAR(10)
);

CREATE VIEW v_junk AS
SELECT * FROM my_table;

Everything looks fine, and the length limit works as expected:

INSERT INTO my_table (junk) VALUES ('long string');

ERROR:  value too long for type character varying(10)

Great! The system did its job. But now business requirements shift (as they always do) and we need to allow slightly longer strings. Should be easy, right?

ALTER TABLE my_table ALTER junk TYPE VARCHAR(15);

ERROR:  cannot alter type of a column used by a view or rule
DETAIL:  rule _RETURN on view v_junk depends on column "junk"

Oof. The view uses the original type definition as part of its own structure, and Postgres refuses to let us change the underlying column as a result. This is actually correct behavior on the part of Postgres; silently altering the type of a column that other objects depend on would be far worse than refusing. It would be nice if Postgres had an option to cascade the change through all dependent objects, but that isn't available, so we are stuck with the original definition.

The Dependency Cascade

In a toy example with one table and one view, the fix is obvious: drop the view, alter the column, recreate the view. Annoying, but manageable.
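For the toy example, that workaround looks something like this (a sketch; wrapping it in a transaction keeps the view from vanishing mid-operation):

BEGIN;
DROP VIEW v_junk;
ALTER TABLE my_table ALTER junk TYPE VARCHAR(15);
CREATE VIEW v_junk AS
SELECT * FROM my_table;
COMMIT;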

Now imagine a production database with 200 tables, many of which have multiple views layered on top of them. Some of those views feed other views. Some are referenced by functions. Others are exposed through an API layer that has its own set of assumptions about column types. Suddenly "just drop and recreate" becomes a multi-hour migration involving careful dependency ordering, and $Deity forbid something goes wrong halfway through.

The pg_depend catalog tracks all of these relationships, and it can be genuinely alarming to see how deep the dependency tree goes for a single column type change. On a sufficiently mature system, altering VARCHAR(10) to VARCHAR(15) might require dropping and recreating a dozen objects in exactly the right order, all within a transaction that you're silently praying doesn't fail at step eleven.
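A rough sketch of how to peek at those dependency rows for our example column (output varies by schema; note that view dependencies show up indirectly, as entries pointing at the view's internal rule in pg_rewrite):

SELECT classid::regclass AS dependent_catalog,
       objid,
       deptype
FROM pg_depend
WHERE refclassid = 'pg_class'::regclass
  AND refobjid = 'my_table'::regclass
  AND refobjsubid = (
        SELECT attnum
        FROM pg_attribute
        WHERE attrelid = 'my_table'::regclass
          AND attname = 'junk'
      );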

This isn't a Postgres limitation so much as a consequence of how SQL type systems work. The type modifier is part of the column's identity. Changing it is a structural alteration with many downstream effects.

So what can we do instead?

Constraints to the Rescue

Postgres CHECK constraints take an entirely different approach. Instead of embedding validation into the type system, they sit alongside the column as independent database objects. They're evaluated at runtime during INSERT and UPDATE operations, and they have their own identity in the catalog. Most importantly, they can be dropped and recreated independently of the column definition.

Here's the same table, rebuilt with a CHECK constraint instead of a type modifier:

CREATE TABLE my_table (
  junk VARCHAR,
  CONSTRAINT junk_limit CHECK (length(junk) < 11)
);

CREATE VIEW v_junk AS
SELECT * FROM my_table;

Notice that the column is now plain VARCHAR with no length specifier. The length enforcement comes entirely from the named constraint. Let's verify it works:

INSERT INTO my_table (junk) VALUES ('long string');

ERROR:  new row for relation "my_table" violates check constraint "junk_limit"
DETAIL:  Failing row contains (long string).

The string "long string" is 11 characters, and our constraint requires length(junk) < 11, so it fails as expected. The error message is different (it names the constraint rather than citing the type), but the effect is identical: invalid data is rejected.

Now for the moment of truth. Can we change the limit without disturbing the view?

BEGIN;
ALTER TABLE my_table DROP CONSTRAINT junk_limit;
ALTER TABLE my_table ADD CONSTRAINT junk_limit CHECK (length(junk) < 16);
COMMIT;

INSERT INTO my_table (junk) VALUES ('long string');

No error on the ALTER, no complaints about dependent views, and the insert succeeds because "long string" is well within our new 15-character limit. The view never knew anything changed because, from its perspective, nothing did. The column is still VARCHAR, same as before. Only the constraint was swapped, and nothing in the dependency graph references the constraint itself, so there's nothing to block the change.
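The constraint's independent identity is easy to see with a quick illustrative query against pg_constraint:

-- Lists junk_limit by name along with its current (length < 16) definition.
SELECT conname, pg_get_constraintdef(oid) AS definition
FROM pg_constraint
WHERE conrelid = 'my_table'::regclass;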

Unlimited Cosmic Power

Once you start thinking of validation as a constraint rather than a type attribute, the possibilities open up considerably. A VARCHAR(10) only stipulates that "this string must be 10 characters or fewer." A CHECK constraint can enforce anything that evaluates as a boolean.

Want to enforce a minimum length as well?

ALTER TABLE system_login
ADD CONSTRAINT username_length
CHECK (length(username) BETWEEN 3 AND 50);

Need to ensure a code follows a specific pattern like AAA-NNNN?

ALTER TABLE product
ADD CONSTRAINT product_code_format
CHECK (product_code ~ '^[A-Z]{3}-[0-9]{4}$');

How about validating that a column only contains printable ASCII?

ALTER TABLE message
ADD CONSTRAINT clean_description_text
CHECK (description ~ '^[\x20-\x7E]*$');

None of these are possible with type modifiers alone. Each rule can be dropped, modified, and recreated without any impact on dependent objects, because the constraint is decoupled from the column's identity, and therefore from the entire dependency graph. We can add or remove as many arbitrary rules as we want; dropping one is a quick catalog update, and adding one only has to validate existing rows, a scan that can even be deferred.
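Deferring that scan looks like this (a sketch using a hypothetical product_code_not_blank constraint on the product table above):

-- Enforce the rule for new rows immediately; skip scanning existing rows.
ALTER TABLE product
  ADD CONSTRAINT product_code_not_blank
  CHECK (length(product_code) > 0) NOT VALID;

-- Later, validate existing rows under a weaker (SHARE UPDATE EXCLUSIVE) lock.
ALTER TABLE product VALIDATE CONSTRAINT product_code_not_blank;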

There's also a subtlety worth noting here regarding NULL handling. CHECK constraints evaluate successfully when the expression returns true or NULL. This means a CHECK constraint on string length won't prevent NULL values on its own; it’s still necessary to specify NOT NULL if that's the intent.

Alternatively, it's also possible to do this:

ALTER TABLE my_table
ADD CONSTRAINT junk_limit
CHECK (length(junk) < 16 AND junk IS NOT NULL);

Remember, any boolean evaluation works here. Yes, it's better to use NOT NULL in the column definition itself, but what if the table already has billions of rows and we want to prevent more NULL values from being inserted? Normally that requires a trigger until the column definition can be changed, and the definition can't change until every existing row contains a non-NULL value. The CHECK constraint bridges the gap with much less overhead than a trigger.
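Here's one way that bridge can play out end to end (a sketch; the junk_not_null constraint name is hypothetical, and the final step assumes Postgres 12 or later, where a validated CHECK constraint lets SET NOT NULL skip the full-table scan):

-- 1. Reject new NULLs right away without scanning existing rows.
ALTER TABLE my_table
  ADD CONSTRAINT junk_not_null CHECK (junk IS NOT NULL) NOT VALID;

-- 2. Backfill the existing NULLs, then validate the constraint.
ALTER TABLE my_table VALIDATE CONSTRAINT junk_not_null;

-- 3. Promote to a real NOT NULL and drop the now-redundant constraint.
ALTER TABLE my_table ALTER COLUMN junk SET NOT NULL;
ALTER TABLE my_table DROP CONSTRAINT junk_not_null;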

The One True Type

Many experienced Postgres users will simply recommend using TEXT for all string columns. Postgres does, after all, treat VARCHAR, VARCHAR(n), and TEXT identically in terms of storage and performance. The (n) modifier merely adds attribute metadata; it doesn't optimize storage. A VARCHAR(255) column doesn't use less space than a TEXT column storing the same string. The type modifier is purely a validation mechanism, and as we've established, a rather draconian one.
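A tiny demonstration of the storage point:

-- Both expressions report the same on-disk size for the same value;
-- the length modifier changes nothing about storage.
SELECT pg_column_size('hello'::text)         AS text_bytes,
       pg_column_size('hello'::varchar(255)) AS varchar_bytes;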

Using TEXT everywhere with CHECK constraints for validation achieves the best of both worlds: the same enforcement as a column length specification, plus the flexibility to change the rule later without a multi-step migration.

Constraints also simplify migrations from legacy schemas. An existing database chock-full of VARCHAR(n) columns now has a modernization path that looks something like this:

BEGIN;

-- Add the equivalent CHECK constraint
ALTER TABLE my_table
  ADD CONSTRAINT junk_limit CHECK (length(junk) < 11);

-- Drop dependent views (one last time!)
DROP VIEW v_junk;

-- Remove the type modifier
ALTER TABLE my_table ALTER junk TYPE TEXT;

-- Recreate views
CREATE VIEW v_junk AS
SELECT * FROM my_table;

COMMIT;

Yes, the migration itself still requires dealing with view dependencies; that's unavoidable for the initial type change. But it's a one-time cost. After that, every future length change is a simple constraint swap with zero dependency headaches.

The Performance Question

What about overhead? Is a CHECK constraint slower to enforce than a type length?

The type modifier check happens at a very low level in the Postgres input function for the type. Postgres CHECK constraints are evaluated as an expression during the standard constraint-checking phase of an INSERT or UPDATE. As a natural consequence, the CHECK constraint involves marginally more overhead because it's running an expression evaluator rather than a simple integer comparison in the type input function.

This is something we can actually quantify. Let's check a 10-million row insert using the standard approach:

\timing on

CREATE TABLE my_table (
  junk VARCHAR(10)
);

INSERT INTO my_table (junk)
SELECT generate_series(1, 10_000_000)::VARCHAR;

INSERT 0 10000000
Time: 7608.862 ms (00:07.609)

And now with a CHECK constraint:

\timing on

CREATE TABLE my_table (
  junk VARCHAR,
  CONSTRAINT junk_limit CHECK (length(junk) < 11)
);

INSERT INTO my_table (junk)
SELECT generate_series(1, 10_000_000)::VARCHAR;

INSERT 0 10000000
Time: 8272.950 ms (00:08.273)

It's not exactly statistical noise, but a difference of ~66 nanoseconds per row is hard to really complain about. We may as well quibble about the weight of the paint on a freight train.

Additionally, CHECK constraints convey better diagnostic information when they fire. Rather than the generic "value too long for type character varying(10)", Postgres reports the offending constraint name, which is presumably a more meaningful designation. On a table with many string columns, knowing that a value violated customer_email_length tends to be more helpful than knowing one of many VARCHAR column values was too long.

All in a Name

Speaking of constraint names, this is one area where discipline pays dividends. Always name CHECK constraints explicitly. Postgres will auto-generate names otherwise, and while those algorithmic names are functional, they leave a lot to be desired.

Compare:

CREATE TABLE customer (email TEXT);
ALTER TABLE customer ADD CHECK (length(email) < 255);

\d customer

            Table "public.customer"
 Column | Type | Collation | Nullable | Default 
--------+------+-----------+----------+---------
 email  | text |           |          | 
Check constraints:
    "customer_email_check" CHECK (length(email) < 255)

Postgres produced a constraint named “customer_email_check” in this case. What was the check? Formatting? Validation of "name@tld" contents? Something else? Let's be more specific:

CREATE TABLE customer (email TEXT);
ALTER TABLE customer
  ADD CONSTRAINT customer_email_max_length
CHECK (length(email) < 255);

The explicit name communicates exactly what the constraint enforces. When a constraint violation shows up in application logs at 3 AM, a good naming scheme makes it clear what went wrong without having to examine the table schema. It also makes the drop-and-recreate pattern cleaner: rather than looking up an auto-generated name or guessing what Postgres will produce, every future operation references a predefined object name without any ambiguity.
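And for schemas already littered with auto-generated names, constraints can simply be renamed in place rather than dropped and recreated:

ALTER TABLE customer
  RENAME CONSTRAINT customer_email_check TO customer_email_max_length;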

Learning to Let Go

The core insight here is really about where validation logic belongs in a database schema. Type modifiers embed validation into the column's identity, which ties it into the dependency graph and makes it resistant to change. CHECK constraints keep validation as a separate, independently manageable object. The same business rules get enforced, but through a dependency graph that's far more flexible.

For legacy systems that already have VARCHAR(n) everywhere, the migration to CHECK constraints (or TEXT with CHECK constraints) is a worthwhile investment. Yes, there's a one-time cost to untangle the existing dependencies, but every subsequent schema change becomes simpler. For new schemas, there's really no reason to use VARCHAR(n) at all unless an external system absolutely demands it.

Postgres gives us the tools to build schemas that are both strict and adaptable. CHECK constraints are one of those tools that have always been there, patiently waiting for us to stop hard-coding our validation into the type system. It's time we started using them.

Give it a try on your next project, and you'll wonder why you ever put a number in parentheses after VARCHAR. True enlightenment comes with the second epiphany that TEXT takes this a step further, forever relegating legacy SQL types to the dustbin of history. If you’re not ready for that particular conversation just yet, the Postgres community is nothing but patient. Consider CHECK constraints your introduction to contemporary SQL the Postgres Way™.