A Distributed SQL Database
Insect Collector
Transactions
Transactions are an essential part of any database system, providing guarantees around the consistency and atomicity of operations. CockroachDB supports distributed transactions with strong consistency guarantees, ensuring that your data remains consistent across a distributed environment. In this section, I discuss how to use transactions in CockroachDB, their isolation levels, and how they're implemented in a distributed setting, along with code examples.
For transactions in CockroachDB, you can use the standard SQL syntax (i.e., the BEGIN, COMMIT, and ROLLBACK statements). For example, this code transfers $100 from one bank account to another:
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;
This transaction comprises two UPDATE statements enclosed within a BEGIN and COMMIT block. CockroachDB ensures that either both UPDATE statements are executed or neither of them is, maintaining the consistency and integrity of the data.
CockroachDB supports two isolation levels for transactions: serializable and snapshot. By default, transactions in CockroachDB use the serializable isolation level, which provides the strongest consistency guarantees. Serializable isolation ensures that transactions appear to have executed sequentially, even if they were actually executed concurrently.
If you require lower latency reads at the cost of reduced consistency, you can use the snapshot isolation level. Snapshot isolation allows for more concurrency but can result in non-repeatable reads and other anomalies. To use snapshot isolation, you can specify it when beginning a transaction:
BEGIN TRANSACTION ISOLATION LEVEL SNAPSHOT;
SELECT * FROM accounts WHERE id = 1;
[...] <Other operations within the transaction> [...]
COMMIT;
In a distributed environment, transactions can span multiple nodes, and maintaining consistency becomes more challenging. CockroachDB addresses this by combining the Raft consensus algorithm with the two-phase commit (2PC) protocol.
Raft ensures strong consistency for individual operations by replicating them across a majority of nodes. The 2PC protocol extends this consistency guarantee to transactions that involve multiple operations by ensuring that either all of the operations are executed atomically or none of them are. This combination allows CockroachDB to maintain strong consistency even in a distributed setting.
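To make the two phases concrete, here is a minimal sketch of a 2PC coordinator. This is an illustration of the protocol's shape only, not CockroachDB's actual implementation; the Participant class and its method names are invented for this example:

```python
# Minimal two-phase commit sketch (illustrative only; the class and
# method names are hypothetical, not CockroachDB internals).

class Participant:
    def __init__(self, name, will_succeed=True):
        self.name = name
        self.will_succeed = will_succeed
        self.state = "idle"

    def prepare(self):
        # Phase 1: vote on whether the local operation can commit.
        self.state = "prepared" if self.will_succeed else "aborted"
        return self.will_succeed

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "aborted"


def two_phase_commit(participants):
    # Phase 1: ask every participant to prepare (vote).
    if all(p.prepare() for p in participants):
        # Phase 2: unanimous yes, so commit everywhere.
        for p in participants:
            p.commit()
        return True
    # At least one "no" vote: roll back everywhere.
    for p in participants:
        p.rollback()
    return False
```

If any participant fails to prepare, no participant ends up committed, which is exactly the all-or-nothing guarantee described above.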
CockroachDB also provides mechanisms for handling transaction conflicts and retries. If two transactions conflict (e.g., both try to update the same row), one of them will need to retry. CockroachDB's client libraries (e.g., the cockroach-go package for Go or the pg package for Node.js) provide support for handling transaction retries.
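A minimal, driver-independent sketch of such a retry loop follows. CockroachDB signals retryable conflicts with SQLSTATE 40001 (serialization failure); the RetryableError class below is a stand-in for whatever exception your driver raises in that case, and the backoff parameters are arbitrary:

```python
# Generic client-side retry loop for serializable transactions.
# RetryableError stands in for your driver's SQLSTATE 40001 exception.

import random
import time


class RetryableError(Exception):
    sqlstate = "40001"


def run_transaction(txn_body, max_retries=5):
    """Run txn_body(), retrying on serialization conflicts with
    exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return txn_body()
        except RetryableError:
            # Back off before retrying; jitter keeps competing
            # clients from retrying in lock step.
            time.sleep((2 ** attempt) * 0.01 + random.random() * 0.01)
    raise RuntimeError("transaction failed after retries")
```

In real code, txn_body would open the transaction, run its statements, and commit, so that each retry replays the whole transaction from the beginning.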
Indexes
Indexes play a crucial role in optimizing database performance by allowing faster query execution. CockroachDB supports various types of indexes that help you optimize your queries on the basis of specific access patterns. It automatically creates a primary index for each table, which is based on the primary key columns. The primary index is used for efficiently looking up rows.
For example, consider the table definition presented in Listing 3. In this case, CockroachDB creates a primary index on the id column, which allows for efficient lookups of users by their ID.
Listing 3
Sample Table with Index
CREATE TABLE users (
  id UUID PRIMARY KEY,
  username VARCHAR(255) UNIQUE NOT NULL,
  email VARCHAR(255) UNIQUE NOT NULL,
  created_at TIMESTAMPTZ NOT NULL
);
If you need to optimize queries by other columns, you can create secondary indexes on one or more columns, and they can have different storage orders and unique constraints. For example, if you want to optimize queries that look up users by their email addresses, you can create a secondary index on the email column:
CREATE INDEX users_email_idx ON users(email);
Now queries that filter users by email addresses will be able to use this secondary index for faster execution:
SELECT * FROM users WHERE email = 'mary.poppins@example.com';
You can also create composite indexes on multiple columns, which can help optimize queries that filter on multiple conditions. For example, to create an index on the username and created_at columns, you can use the statement
CREATE INDEX users_username_created_at_idx ON users(username, created_at);
This composite index can be used to optimize queries like
SELECT * FROM users WHERE username = 'marypoppins' AND created_at > '2023-07-01';
CockroachDB also supports unique secondary indexes that enforce uniqueness constraints on the indexed columns. To create a unique index, simply add the UNIQUE keyword to the CREATE INDEX statement:
CREATE UNIQUE INDEX users_username_uq_idx ON users(username);
In this example, the unique index on the username column ensures that no two users can have the same username.
CockroachDB provides various index management features, such as renaming, dropping, and modifying indexes. For example, to rename an index, you can use
ALTER INDEX users_email_idx RENAME TO users_email_new_idx;
Likewise, use
DROP INDEX users_email_new_idx;
to drop an index.
Conclusion
Having read thus far, you might ask yourself, "Why on Earth am I not using CockroachDB yet?!" Although the features sound impressive, CockroachDB still has some shortcomings. To be fair, though, they are being addressed constantly and might become less of an issue with time.
The first and obvious problem is complexity: It is much easier to administer a single large instance than a cluster. However, in a more apples-to-apples comparison, you could argue that administering a PostgreSQL cluster is no less complex than managing one based on CockroachDB, while the benefits of the latter are more obvious.
Second, the distributed nature is inherently linked to performance issues. In an ideal world, all networks work perfectly, but in practice you will always have bottlenecks and other occasional issues. Again, this is a trade-off related to the distributed architecture and not something inherent to CockroachDB itself.
Because it is a relatively new solution, it is far less mature than its older, non-distributed siblings, such as MySQL. As a result, the ecosystem is still underdeveloped, with everything that entails: It is more difficult to find talent and support if you want to use CockroachDB in more complex projects. Also, the SQL subset is still limited, although the situation improves year by year. Ultimately, you need to test it yourself and see whether it fits your requirements.
Infos
- CockroachDB: https://www.cockroachlabs.com/product/
- Securing the cluster: https://www.cockroachlabs.com/docs/v22.2/secure-a-cluster
- Releases page: https://www.cockroachlabs.com/docs/releases/index.html
- Installing CockroachDB on Linux: https://www.cockroachlabs.com/docs/v22.2/install-cockroachdb-linux