Updating your schema
Introduction
Triplit provides tools that help you update your schema in a way that will:
- allow clients with the previous version of the schema to continue syncing and
- maintain the integrity of data in their caches and on the server
With these goals in mind, Triplit divides schema changes into two categories:
Backwards compatible changes
These are changes that you can make to your schema that will never corrupt existing data in the database or cause issues with clients that have a local cache. These changes are:
- Adding optional attributes
- Adding new collections
Triplit will allow you to make these changes to your schema while the server is running, and you can push them to the server using the triplit schema push command.
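For example, here's a minimal sketch of both kinds of backwards compatible change, assuming a hypothetical todos collection (the dueDate attribute and tags collection are illustrative names):
import { Schema as S } from '@triplit/client';

export const schema = S.Collections({
  todos: {
    schema: S.Schema({
      id: S.Id(),
      text: S.String(),
      // Backwards compatible: a new attribute, added as optional
      dueDate: S.Optional(S.String()),
    }),
  },
  // Backwards compatible: an entirely new collection
  tags: {
    schema: S.Schema({
      id: S.Id(),
      name: S.String(),
    }),
  },
});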
Backwards incompatible changes
Any other change to the schema is considered backwards incompatible. These include:
- removing an attribute
- adding a required attribute
- changing the type of an attribute
These are "backwards incompatible" because even though they may be changed safely on the server, your app may have online client with cached data or offline clients with durable caches that violate the new schema.
This does not mean that you can't make these changes, but it does mean that you will need to account for the fact that clients may have data in their local cache that doesn't match the schema. This may necessitate updating your client code to:
- loosen the assumptions about the schema in the client code, e.g. handling attributes that have been removed or changed (a minimal sketch follows this list).
- run a script on app load that migrates the database to match the new schema.
- clear the local cache on app load.
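As a minimal sketch of the first option, suppose the schema now requires an attribute (here a hypothetical tagId) that entities written by older clients may still be missing; client code can read it defensively rather than assuming the new shape:
// Older cached entities may predate a newly added attribute,
// so treat it as possibly missing instead of assuming it exists.
function tagLabel(todo: { tagId?: string }) {
  return todo.tagId ?? 'untagged';
}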
None of these are ideal solutions, which is why making backwards incompatible changes to the schema is discouraged. However, Triplit will allow you to make backwards incompatible changes to the schema if they do not corrupt existing data in the database, e.g. you may remove an attribute from the schema, but only if all of the existing entities in the collection have that attribute set to undefined.
In production, it is recommended that you do not make backwards incompatible changes if your client applications have a durable cache, e.g. one using IndexedDB. You can ensure that all backwards incompatible changes are rejected by the server by passing the --enforceBackwardsCompatibility flag to triplit schema push.
triplit schema push
This command will look at the schema defined at ./triplit/schema.ts and attempt to apply it to the server while it's still running. If the schema is backwards compatible, it will be applied immediately. If the schema has potentially dangerous changes that do not violate any data integrity constraints, it will also be applied. This behavior is useful for development, but in production, you should always use the --enforceBackwardsCompatibility flag to ensure that the schema is backwards compatible.
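For example, to have the server reject any backwards incompatible change:
triplit schema push --enforceBackwardsCompatibility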
Read more about the schema push command and its options.
Client compatibility
When a client connects to a Triplit server, it compares the schema it has stored locally with the schema on the server. If the schemas are incompatible, the client will refuse to connect to the server. This is a safety feature to prevent data corruption. That does not mean that you can't update your schema on the server, but you must do so in a way that is backwards compatible. This page describes the tools Triplit provides to make this process as smooth as possible.
Guided example
In this section, we'll walk through a simple example of how to update a schema in production. We'll start with a simple schema, add an attribute to it, and then push the schema to the server. We'll also cover how to handle backwards incompatible changes.
Getting set up
Let's start with a simple schema, defined at ./triplit/schema.ts in your project directory.
import { Schema as S } from '@triplit/client';
export const schema = S.Collections({
todos: {
schema: S.Schema({
id: S.Id(),
text: S.String(),
completed: S.Boolean({ default: false }),
}),
},
});
You can start the development server with the schema pre-loaded.
triplit dev
By default, the server will use in-memory storage, meaning that if you shut down the server, all of your data will be lost. This can be useful when you're early in development and frequently iterating on your schema. If you like this quick feedback loop but don't want to repeatedly re-insert test data by hand, you can use Triplit's seed commands. You can use the seed command on its own:
triplit seed run my-seed
Or use the --seed flag with triplit dev:
triplit dev --seed=my-seed
If you want a development environment that's more constrained and closer to production, consider using the SQLite persistent storage option for the development server:
triplit dev --storage=sqlite
Your database will be saved to triplit/.data. You can delete this folder to clear your database.
Updating your schema
Let's assume you've run some version of triplit dev shown above and have a server up and running with a schema. You've also properly configured your .env file so that Triplit CLI commands will point at it. Let's also assume you've added some initial todos:
import { TriplitClient } from '@triplit/client';
import { schema } from '../triplit';
const client = new TriplitClient({
schema,
serverUrl: import.meta.env.VITE_TRIPLIT_SERVER_URL,
token: import.meta.env.VITE_TRIPLIT_TOKEN,
});
client.insert('todos', { text: 'Get groceries' });
client.insert('todos', { text: 'Do laundry' });
client.insert('todos', { text: 'Work on project' });
Adding an attribute
Now let's edit our schema by adding a new tagId attribute to todos, in anticipation of letting users group their todos by tag.
export const schema = S.Collections({
todos: {
schema: S.Schema({
id: S.Id(),
text: S.String(),
completed: S.Boolean({ default: false }),
tagId: S.String(),
}),
},
});
Pushing the schema
We're trying to mimic production patterns as much as possible, so we're not going to restart the server to apply this change (and in fact, that would cause problems, as we'll soon see). Instead, let's use a new command:
triplit schema push
This will look at the schema defined at ./triplit/schema.ts and attempt to apply it to the server while it's still running. In our case, it fails, and we get an error like this:
✖ Failed to push schema to server
Found 1 backwards incompatible schema changes.
Schema update failed. Please resolve the following issues:
Collection: 'todos'
'tagId'
Issue: added an attribute where optional is not set
Fix: make 'tagId' optional or delete all entities in 'todos' to allow this edit
What's at issue here is that we tried to change the shape/schema of a todo to one that no longer matches those in the database. All attributes in Triplit are required by default, and by adding a new attribute without updating the existing todos, we would be violating the contract between the schema and the data.
Thankfully, the error gives us some instructions. We can either:
1. Make tagId optional, e.g. tagId: S.Optional(S.String()), and permit existing todos to have a tagId that's undefined.
2. Delete all of the todos in the collection so that there isn't any conflicting data.
While option 2 might be acceptable in development, option 1 is the obvious choice in production. In production, we would first add the attribute as optional, backfill it for existing entities with calls to client.update, and upgrade any clients to start creating new todos with tagId defined. Only when you're confident that all clients have been updated to handle the new schema and all existing data has been updated to reflect the target schema should you proceed with a backwards incompatible change.
Whenever you run triplit schema push, the receiving database will run a diff between the current schema and the one being applied and surface issues like these. All of the possible conflicts that may arise are covered below in "Curing backwards incompatible changes".
Fixing the issues
Let's make tagId optional:
export const schema = S.Collections({
todos: {
schema: S.Schema({
id: S.Id(),
text: S.String(),
completed: S.Boolean({ default: false }),
tagId: S.Optional(S.String()),
}),
},
});
Now we can run triplit schema push again, and it should succeed. For completeness, let's also backfill the tagId for existing todos:
await client.transact(async (tx) => {
const allTodos = await tx.fetch(client.query('todos'));
for (const [_id, todo] of allTodos) {
await tx.update('todos', todo.id, (entity) => {
entity.tagId = 'chores';
});
}
});
We've now successfully updated our schema in a backwards compatible way. If you're confident that all clients have been updated to handle the new schema and all existing data has been updated to reflect the target schema, you can then choose to make tagId required.
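If you do make that change, it's just a matter of dropping the S.Optional wrapper, arriving back at the schema we originally tried to push:
export const schema = S.Collections({
  todos: {
    schema: S.Schema({
      id: S.Id(),
      text: S.String(),
      completed: S.Boolean({ default: false }),
      // Only safe once all entities have tagId set and all clients write it
      tagId: S.String(),
    }),
  },
});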
Curing backwards incompatible changes
Adding an attribute where optional is not set
Like in the example above, these changes will be backwards incompatible if you have existing entities in that collection. In production, only add optional attributes, and backfill that attribute for existing entities.
Removing a non-optional attribute
This is a backwards incompatible change, as it would leave existing entities in the collection with a missing attribute. In production, deprecate the attribute by making it optional, delete the attribute from all existing entities (set it to undefined), and then you will be allowed to remove it from the schema.
Removing an optional attribute
While not technically a backwards incompatible change, it would lead to data loss. In production, delete the attribute from all existing entities first (set it to undefined) and then it will be possible to remove it from the schema.
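As a sketch of that clean-up step, assuming a deprecated optional attribute named legacyNote (a hypothetical name), you can clear it with the same transact/update pattern used in the guided example above:
await client.transact(async (tx) => {
  const allTodos = await tx.fetch(client.query('todos'));
  for (const [_id, todo] of allTodos) {
    await tx.update('todos', todo.id, (entity) => {
      // Clear the deprecated attribute before removing it from the schema
      entity.legacyNote = undefined;
    });
  }
});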
Changing an attribute from optional to required
This is a backwards incompatible change, as existing entities with this attribute set to undefined will violate the schema. In production, update all existing entities to have a non-null value for the attribute, and then you will be able to make it required.
Changing the type of an attribute
Triplit will prevent you from changing the type of an attribute if there are existing entities in the collection. In production, create a new optional attribute with the desired type, backfill it for existing entities, and then remove the old attribute following the procedure described above ("Removing an optional attribute").
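For instance, here's a rough sketch of moving from the boolean completed attribute to a richer string status (status and its values are hypothetical names). Rather than changing completed in place, add status as an optional attribute and backfill it:
export const schema = S.Collections({
  todos: {
    schema: S.Schema({
      id: S.Id(),
      text: S.String(),
      // Keep the old attribute in place for now
      completed: S.Boolean({ default: false }),
      // New optional attribute with the desired type
      status: S.Optional(S.String()),
    }),
  },
});

// Backfill the new attribute from the old one
await client.transact(async (tx) => {
  const allTodos = await tx.fetch(client.query('todos'));
  for (const [_id, todo] of allTodos) {
    await tx.update('todos', todo.id, (entity) => {
      entity.status = todo.completed ? 'done' : 'open';
    });
  }
});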
Changing the type of a set's items
This is similar to changing the type of an attribute, but for sets. In production, create a new optional set attribute with the desired type, backfill it for existing entities, and then remove the old set following the procedure described above ("Removing an optional attribute").
Changing an attribute from nullable to non-nullable
Triplit will prevent you from changing an attribute from nullable to non-nullable if there are existing entities in the collection for which the attribute is null. In production, update all of the existing entities to have a non-null value for the attribute and take care that no clients will continue writing null values to the attribute. Then you will be able to make it non-nullable.
Changing a string to an enum string or updating an enum
Triplit will prevent you from changing a string to an enum string or updating an enum if there are existing entities in the collection with values that are not in the new enum. In production, update all of the existing entities to have a value that is in the new enum and then you will be able to make the change.
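For illustration, and assuming enum is passed as an option to S.String, a constrained attribute might look like the following (the status attribute and its values are hypothetical):
export const schema = S.Collections({
  todos: {
    schema: S.Schema({
      id: S.Id(),
      text: S.String(),
      // Every stored value must already be in this list before the change can be pushed
      status: S.String({ enum: ['open', 'in_progress', 'done'] as const }),
    }),
  },
});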
Removing a relation
Because relations in Triplit are just views on data in other collections, removing a relation will not corrupt data, but it can still lead to backwards incompatible behavior between client and server. For instance, if the server's schema is updated to remove a relation, but an out-of-date client continues to issue queries with clauses that reference that relation, such as include, a relational where filter, or an exists filter, the server will reject those queries. In production, you may need to deprecate clients that are still using the old relation and force them to update the app with the new schema bundled in.
Handling schema changes in the client
onDatabaseInit hook
Triplit provides a hook, onDatabaseInit, that runs after your client-side database has initialized and before any client operations are run and syncing begins. It reports any issues related to updating the schema on database initialization through the event parameter. In the case of a successful migration, the database in the hook will have the latest schema applied. In the case of a failure, the database will have the currently saved schema applied.
An event may have one of the following type values:
- SUCCESS: The database was initialized successfully and is ready to be used.
- SCHEMA_UPDATE_FAILED: The database was unable to update the schema. This may be due to a backwards incompatible change or a failure to migrate existing data. Alongside this type, the event will also have a change object that contains information about the schema change that failed. This object has the following properties:
  - code: The code of the error that occurred. This will be one of the following:
    - EXISTING_DATA_MISMATCH: The schema change failed because there was existing data in the database that did not match the new schema.
    - SCHEMA_UPDATE_FAILED: The schema change failed for some other reason.
  - newSchema: The new schema that was attempted to be applied.
  - oldSchema: The old schema that was in place before the update.
- ERROR: An error occurred while initializing the database. The event will have an error property that contains the error that occurred.
Below is an example of how to use the onDatabaseInit hook to clear data from your local database if the schema cannot be migrated:
import { TriplitClient, Schema as S } from '@triplit/client';
const client = new TriplitClient({
schema: S.Collections({
// Your schema here
}),
experimental: {
onDatabaseInit: async (db, event) => {
if (event.type === 'SUCCESS') return;
if (event.type === 'SCHEMA_UPDATE_FAILED') {
if (event.change.code === 'EXISTING_DATA_MISMATCH') {
// clear local database
await db.clear();
// retry schema update
const nextChange = await db.overrideSchema(event.change.newSchema);
// Schema update succeeded!
if (nextChange.successful) return;
}
}
// handle other cases...
// Handle fatal states as you see fit
telemetry.reportError('database init failed', {
event,
});
throw new Error('Database init failed');
},
},
});