# DynamoDB vs RDS: NoSQL vs SQL on AWS
DynamoDB and RDS solve different problems. Choosing the wrong one means either paying for SQL flexibility you do not need, or inheriting NoSQL constraints that block your reporting requirements.
**Quick Answer:** DynamoDB wins for high-throughput key-value/document access patterns. RDS wins when you need SQL, complex queries, or ad-hoc reporting.
DynamoDB and RDS are both AWS database services, but choosing between them is not a matter of preference — it is a matter of matching the database model to your data access patterns, query requirements, and scale characteristics. Using DynamoDB where you need SQL flexibility creates architectural dead ends. Using RDS where DynamoDB’s key-value model suffices means paying for relational complexity and vertical scaling limits you do not need.
This comparison is written to help architects avoid the two most common mistakes: choosing DynamoDB because it is “modern and scalable,” and dismissing DynamoDB because SQL is familiar.
## Data Model Comparison
The fundamental difference is how data is stored and retrieved.
**RDS (PostgreSQL/MySQL):** Data is stored in tables with rows and columns. Relationships between tables are expressed as foreign keys. You query data using SQL — a declarative language that lets you express complex joins, aggregations, and filters against any column with an appropriate index. The schema is defined upfront but can evolve with `ALTER TABLE` migrations.
**DynamoDB:** Data is stored in tables with items (similar to rows) containing attributes (similar to columns). Each item must have a partition key (and optionally a sort key) that uniquely identifies it. Retrieval is by partition key, sort key ranges, or Global Secondary Indexes (GSIs). GSIs can be added to an existing table, but backfilling one on a large table is slow, so access patterns should be designed before the data arrives. There is no SQL, no joins, and no native aggregations.
| Characteristic | DynamoDB | RDS (PostgreSQL/MySQL) |
|---|---|---|
| Data model | Key-value / document | Relational (tables, rows, columns) |
| Query language | PartiQL (subset), DynamoDB API | SQL (full ANSI) |
| Joins | Not supported | Full JOIN support |
| Aggregations | Not supported natively | SUM, AVG, COUNT, GROUP BY |
| Schema | Flexible (per-item attributes) | Fixed schema (DDL) |
| Transactions | Single-table ACID; limited multi-table | Full ACID, multi-table |
| Index flexibility | GSIs addable later (with slow backfill) | Add indexes at any time |
| Max item/row size | 400 KB per item | No practical row size limit |
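The contrast above can be made concrete with a minimal sketch: Python's stdlib `sqlite3` stands in for an RDS engine, and the DynamoDB side is shown as the raw `GetItem` request payload (the shape boto3's `dynamodb.get_item(**request)` accepts) rather than a live call. Table and attribute names here are invented for illustration.

```python
import sqlite3

# --- Relational model (sqlite3 standing in for RDS Postgres/MySQL) ---
# Schema is declared up front; any indexed column can be filtered with SQL.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (user_id TEXT PRIMARY KEY, email TEXT, tier TEXT)")
db.execute("INSERT INTO users VALUES ('u-123', 'ada@example.com', 'pro')")
row = db.execute(
    "SELECT email, tier FROM users WHERE user_id = ?", ("u-123",)
).fetchone()
print(row)  # ('ada@example.com', 'pro')

# --- Key-value model: retrieval is a GetItem request against the key ---
# Raw payload for boto3's dynamodb.get_item(**get_item_request);
# names are illustrative, and attribute values carry explicit types.
get_item_request = {
    "TableName": "users",
    "Key": {"user_id": {"S": "u-123"}},      # partition key with typed value
    "ProjectionExpression": "email, tier",   # fetch only these attributes
}
```

Note what is missing on the DynamoDB side: there is no way to filter on `email` or `tier` here without a GSI or a scan, because retrieval is anchored to the key.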
## Performance Characteristics
| Scenario | DynamoDB | RDS |
|---|---|---|
| Single-item key lookup | ~1-5ms at any scale | 1-10ms (cached), higher under load |
| Complex multi-table join | Not supported | Milliseconds to seconds (query-dependent) |
| Bulk scan / analytical query | Expensive, slow (full table scan) | Optimized with proper indexing |
| Write throughput ceiling | Virtually unlimited (horizontal) | Limited by instance size, vertical |
| Connection model | Stateless HTTP API | TCP connections (connection pool) |
| Scaling model | Automatic horizontal | Vertical (instance) + read replicas |
## Cost Model Comparison
DynamoDB and RDS have completely different pricing structures that make direct comparison non-trivial.
**DynamoDB On-Demand:**
- Reads: $0.25 per million read request units
- Writes: $1.25 per million write request units
- Storage: $0.25/GB-month
- Best for: unpredictable or spiky traffic
**DynamoDB Provisioned:**
- Reserved RCUs/WCUs per second, billed hourly
- ~70% cheaper than on-demand at steady traffic
- Auto-scaling available
- Best for: predictable, sustained traffic
**RDS (PostgreSQL, db.r6g.large, Multi-AZ, us-east-1):**
- Instance: ~$370/month (Multi-AZ)
- Storage: $0.115/GB-month (gp3)
- No per-query charge
- Best for: sustained workloads with complex query patterns
**Cost comparison at 10 million writes per day:**
| Configuration | Monthly Cost |
|---|---|
| DynamoDB On-Demand (10M writes/day, 100 GB) | ~$400-500/month |
| DynamoDB Provisioned (steady traffic, ~116 WCU/s) | ~$150-200/month |
| RDS db.r6g.large Multi-AZ + 100 GB | ~$395/month (no per-write charge) |
At high, predictable write volumes, DynamoDB Provisioned and RDS are cost-competitive. DynamoDB On-Demand becomes expensive at sustained scale — it is priced for the convenience of variable traffic, not for cost efficiency at sustained high throughput.
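The table's numbers can be sanity-checked with back-of-envelope arithmetic, using the prices quoted in this article (verify against current AWS pricing before relying on them; DynamoDB rates change over time, and the figures below exclude read traffic, I/O, and backups):

```python
# Reproduce the cost table's inputs: 10M writes/day, 100 GB stored.
writes_per_day = 10_000_000
storage_gb = 100

# Sustained write rate -> provisioned capacity needed
# (1 standard write of <= 1 KB consumes 1 WCU).
wcu_per_second = writes_per_day / 86_400
print(round(wcu_per_second))  # 116

# On-demand: $1.25 per million write request units + $0.25/GB-month storage.
on_demand_monthly = writes_per_day * 30 / 1_000_000 * 1.25 + storage_gb * 0.25
print(round(on_demand_monthly))  # 400  (writes + storage only)

# RDS: flat instance price (~$370 Multi-AZ) + gp3 storage, no per-write charge.
rds_monthly = 370 + storage_gb * 0.115
print(round(rds_monthly))  # 382
```

The arithmetic shows why on-demand lands at the bottom of its quoted range before any reads are counted, while the RDS figure is flat regardless of write volume.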
## Query Flexibility Comparison
This is the starkest practical difference between the two databases.
**Queries DynamoDB handles naturally:**
- Get user by user_id
- Get all orders for a user, sorted by date (GSI on user_id, sort key on created_at)
- Update a specific item’s attribute
- Put/Delete a specific item
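The "orders for a user, sorted by date" pattern above looks like this as a DynamoDB `Query` request, shown as the raw payload for boto3's `dynamodb.query(**query_request)`. The table, GSI, and attribute names are illustrative, not standard:

```python
# "All orders for user u-123 since Jan 1, newest first" against a GSI
# whose partition key is user_id and sort key is created_at.
query_request = {
    "TableName": "orders",
    "IndexName": "user_id-created_at-index",  # hypothetical GSI name
    "KeyConditionExpression": "user_id = :uid AND created_at >= :since",
    "ExpressionAttributeValues": {
        ":uid": {"S": "u-123"},
        ":since": {"S": "2024-01-01T00:00:00Z"},
    },
    "ScanIndexForward": False,  # descending sort-key order -> newest first
    "Limit": 20,
}
```

Everything in this request operates on the key schema; there is no clause for filtering on an arbitrary non-key attribute without paying for a scan or defining another index.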
**Queries that require a full DynamoDB table scan or are not possible:**
- “Get all users who signed up in the last 30 days” (requires a GSI or scan)
- “Total revenue across all orders this month” (not possible natively — requires exporting to S3/Athena)
- “Find all orders above $500 placed in California” (requires scan + filter)
- “Which products have been ordered together most frequently?” (multi-table join pattern)
For ad-hoc queries, reporting, or any analytics on DynamoDB data, teams typically export to S3 and query via Athena — adding operational complexity and query latency.
## DynamoDB Pitfalls
**Hot partitions:** DynamoDB distributes data across partitions based on the partition key. If your partition key is poorly chosen (e.g., a status field with few distinct values, or a date where all today's traffic hits the same partition), all traffic concentrates on a small number of partitions — causing throttling even if your overall provisioned capacity is sufficient. Adaptive capacity helps but does not fully solve hot partition issues from bad key design.
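One common mitigation is write sharding: append a deterministic shard suffix to the hot key so writes spread across N partitions, at the cost of fanning reads out across all N shards and merging. A minimal sketch (key format and shard count are illustrative choices, not a DynamoDB feature):

```python
import zlib

NUM_SHARDS = 10

def sharded_key(base_key: str, item_id: str, num_shards: int = NUM_SHARDS) -> str:
    # crc32 is deterministic across processes (unlike Python's salted hash()),
    # so the same item always lands on the same shard.
    shard = zlib.crc32(item_id.encode()) % num_shards
    return f"{base_key}#{shard}"

# All of today's orders no longer share one partition:
print(sharded_key("orders-2024-06-01", "order-1001"))
print(sharded_key("orders-2024-06-01", "order-1002"))

# Reading "all of today's orders" now means querying every shard and merging:
all_shard_keys = [f"orders-2024-06-01#{s}" for s in range(NUM_SHARDS)]
```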
**Item size limits:** DynamoDB items are limited to 400 KB. Large JSON documents, embedded lists, or binary data that approach this limit require compression or external storage (S3 for large payloads). RDS has no practical equivalent constraint.
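A guard against the 400 KB limit might look like the sketch below: compress large payloads, and fall back to storing the body in S3 with only a pointer in the item when even the compressed form is too big. The margin, function, and bucket path are all hypothetical choices for illustration:

```python
import json
import zlib

DYNAMO_ITEM_LIMIT = 400 * 1024  # 400 KB; the real limit also counts attribute names

def prepare_payload(doc: dict) -> dict:
    raw = json.dumps(doc).encode()
    if len(raw) <= DYNAMO_ITEM_LIMIT // 2:       # comfortable safety margin
        return {"inline": raw}
    packed = zlib.compress(raw)
    if len(packed) <= DYNAMO_ITEM_LIMIT // 2:
        return {"inline_compressed": packed}
    # Too big even compressed: store body in S3, keep only a pointer attribute.
    return {"s3_pointer": "s3://example-bucket/payloads/..."}  # hypothetical bucket
```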
**GSI costs:** Global Secondary Indexes have their own read/write capacity and storage, effectively doubling your WCU costs for every write that is reflected in a GSI. A table with 3 GSIs can cost 4x what the base table capacity suggests.
**Single-table design cognitive cost:** Advanced DynamoDB usage involves single-table design — storing heterogeneous entity types in one table with composite sort keys and GSI overloading. This is powerful and efficient but has a steep learning curve and produces schemas that are difficult to understand without documentation.
## RDS Pitfalls
**Vertical scaling ceiling:** RDS scales vertically. The largest RDS instances (db.r8g.48xlarge) are powerful, but scaling up requires instance resizing with a brief maintenance window. DynamoDB scales horizontally without limits or downtime.
**Connection pool limits:** RDS supports a fixed number of database connections based on instance memory. A Lambda function that spawns 1,000 concurrent invocations can exhaust an RDS connection pool, causing errors. RDS Proxy helps but adds cost and latency. DynamoDB uses stateless HTTP calls — there are no connection pool limits.
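The standard code-level mitigation is to open the connection once per Lambda execution environment (module scope) rather than once per invocation, so warm invocations reuse it. A runnable sketch, where `connect_to_rds` is a stand-in for a real driver call such as `psycopg2.connect(...)`:

```python
_connection = None
CONNECT_CALLS = 0

def connect_to_rds():
    # Placeholder for a real driver call like psycopg2.connect(...);
    # counts invocations so the reuse behavior is observable.
    global CONNECT_CALLS
    CONNECT_CALLS += 1
    return object()

def get_connection():
    global _connection
    if _connection is None:        # only runs on a cold start
        _connection = connect_to_rds()
    return _connection

def handler(event, context):
    conn = get_connection()        # warm invocations reuse the connection
    return {"statusCode": 200}

for _ in range(1000):              # 1,000 warm invocations in one environment...
    handler({}, None)
print(CONNECT_CALLS)               # 1  ...share a single connection
```

This reduces churn per environment but does not cap total connections across many concurrent environments; that is the gap RDS Proxy (or capped Lambda concurrency) closes.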
**Schema migrations in production:** `ALTER TABLE` on large tables in MySQL can lock the table or trigger a long rebuild; PostgreSQL performs many changes (such as adding a nullable column) as fast metadata-only operations, but changes that force a full table rewrite still require planning and maintenance windows.
## Decision Framework by Use Case
| Use Case | Recommended | Reason |
|---|---|---|
| Session storage | DynamoDB | Key-value access, TTL support, high throughput |
| User profile data | DynamoDB | Key-value access pattern, single-item reads |
| E-commerce cart | DynamoDB | Item-level reads/writes, predictable access |
| Order history (per user) | DynamoDB | GSI on user_id, sort by date |
| Financial reporting | RDS | Complex aggregations, ad-hoc queries |
| Multi-tenant SaaS | Either | Depends on query patterns |
| Real-time leaderboard | DynamoDB | Atomic counters, sorted sets via sort key |
| CMS / content | RDS | Ad-hoc filtering, full-text search |
| IoT event ingestion | DynamoDB | High write throughput, time-series access |
| ERP / accounting | RDS | Complex relational integrity, multi-table joins |
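The leaderboard row above relies on DynamoDB atomic counters: an `UpdateItem` with an `ADD` action increments the value server-side without a read-modify-write race. Shown as the raw payload for boto3's `dynamodb.update_item(**update_request)`; table and attribute names are illustrative:

```python
# Atomically add 25 points to player p-42's score in game g-1.
# ADD creates the attribute at 0 if it does not exist, then increments it.
update_request = {
    "TableName": "leaderboard",
    "Key": {"game_id": {"S": "g-1"}, "player_id": {"S": "p-42"}},
    "UpdateExpression": "ADD score :points",
    "ExpressionAttributeValues": {":points": {"N": "25"}},
    "ReturnValues": "UPDATED_NEW",   # response includes the new score
}
```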
## Related Comparisons
Explore other technical comparisons:
- [AWS RDS vs Aurora](/compare/aws-rds-vs-aurora)
- [Aurora Serverless vs Provisioned](/compare/aws-aurora-serverless-vs-aurora-provisioned)
## Why Work With FactualMinds
FactualMinds is an **AWS Select Tier Consulting Partner** — a verified AWS designation earned through demonstrated technical expertise and customer success. Our architects have run production workloads for companies from seed-stage startups to enterprises.
- **AWS Select Tier Partner** — verified by AWS Partner Network
- **Architecture-first approach** — we evaluate your specific workload before recommending a solution
- **No lock-in consulting** — we document everything so your team can operate independently
- [AWS Marketplace Seller](https://aws.amazon.com/marketplace/seller-profile?id=seller-m753gfqftla7y)
## Not Sure Which AWS Service Is Right?
Our AWS-certified architects help engineering teams choose the right architecture for their workload, scale, and budget — before they build the wrong thing.
