Challenges • Asked 2 months ago by Sarah Chen
The “Enhancing your Document-Oriented Database” lesson was a solid refresher on indexing and validation. Curious how folks here handle live schema evolution in production without outages.
What’s worked well for me:
- Add schemaVersion to every doc; keep reads backward-compatible.
- Turn on validation with `validationAction: "warn"` (and `validationLevel: "moderate"`) first; tighten to `"error"`/`"strict"` after the backfill.
- Background/online index builds, then flip queries to use the new index.
- Dual-write new+old fields for a window; migrate with idempotent batches.
- Measure read/write amplification and cache hit rate before/after.
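To make the first two bullets concrete, here's a minimal in-memory sketch of versioned docs with a backward-compatible (read-repair-style) read path. The doc shapes and the name-splitting transform are hypothetical, and the plain dict stands in for a real collection:

```python
# Hypothetical migration: v1 stored "name" as one string; v2 splits it
# into "first" and "last". Readers accept both and upgrade lazily.
SCHEMA_VERSION = 2

def upgrade(doc):
    """Idempotent: returns the doc at the current schemaVersion."""
    if doc.get("schemaVersion", 1) >= SCHEMA_VERSION:
        return doc
    doc = dict(doc)  # don't mutate the caller's copy
    first, _, last = doc.pop("name").partition(" ")
    return {**doc, "first": first, "last": last,
            "schemaVersion": SCHEMA_VERSION}

def read_user(collection, user_id):
    """Backward-compatible read: old docs are upgraded in memory.
    A real system might also write the upgraded doc back (read-repair)."""
    return upgrade(collection[user_id])

users = {
    1: {"schemaVersion": 1, "name": "Ada Lovelace"},
    2: {"schemaVersion": 2, "first": "Alan", "last": "Turing"},
}
```

Because `upgrade` is idempotent and keyed off `schemaVersion`, the same function can back the lazy read path and a bulk backfill job.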
Pitfalls I’ve seen:
- Hot shards from monotonically increasing keys.
- Overly wide compound indexes inflating memory/IO.
- TTL indexes nuking data before downstream consumers have caught up.
- Partial indexes that silently exclude edge cases.
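The hot-shard pitfall is easy to demonstrate without a database. This sketch (shard counts and boundaries are made up) contrasts range-based placement of monotonically increasing keys, where every new key lands on the last chunk, with hashed placement, which spreads the same keys out:

```python
import hashlib

def range_shard(key, boundaries):
    """Range-based placement: first chunk whose upper bound exceeds the key."""
    for shard, upper in enumerate(boundaries):
        if key < upper:
            return shard
    return len(boundaries)  # keys past the last boundary pile up here

def hashed_shard(key, n_shards):
    """Hashed placement: a stable hash of the key picks the shard."""
    digest = hashlib.md5(str(key).encode()).hexdigest()
    return int(digest, 16) % n_shards

# Monotonically increasing keys (think timestamps or counter-based ids)
keys = range(1000, 1100)

# Boundaries chosen from historical data: every NEW key maps to the
# same trailing shard, i.e. a hot shard taking 100% of inserts.
range_targets = {range_shard(k, [250, 500, 750]) for k in keys}

# Hashing the same keys distributes inserts across all shards.
hashed_targets = {hashed_shard(k, 4) for k in keys}
```

The trade-off is the usual one: hashed keys fix insert hotspots but give up efficient range scans on that key.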
Questions:
- How do you phase out old fields safely—feature flags, read-repair, or hard cutovers?
- Any favorite patterns for backfilling large collections (chunking, change streams, queue-based workers)?
- Have you switched shard keys or redesigned partitioning in place? What tripped you up?
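On the backfill question, here's the chunked, resumable pattern I'd start from, as an in-memory sketch; the dollars-to-cents transform and field names are hypothetical, and a real worker would page with an indexed `_id` query instead of sorting a dict:

```python
def backfill(collection, batch_size=2):
    """Chunked, idempotent backfill: walk docs in _id order, resumable
    from the last processed id. Re-running any batch is a no-op because
    the transform only touches docs still at the old version."""
    last_id = None
    while True:
        batch = [
            (doc_id, doc) for doc_id, doc in sorted(collection.items())
            if last_id is None or doc_id > last_id
        ][:batch_size]
        if not batch:
            break
        for doc_id, doc in batch:
            if doc.get("schemaVersion", 1) < 2:  # idempotence guard
                doc["total_cents"] = round(doc.pop("total") * 100)
                doc["schemaVersion"] = 2
            last_id = doc_id  # checkpoint; persist this in a real worker

orders = {
    1: {"schemaVersion": 1, "total": 12.5},
    2: {"schemaVersion": 2, "total_cents": 999},  # already migrated
    3: {"schemaVersion": 1, "total": 0.1},
}
backfill(orders)
backfill(orders)  # safe to re-run after a crash: guard skips migrated docs
```

Pair this with dual-writes for new traffic and the backfill only has to chase a shrinking set of old docs; a change-stream tailer can cover docs modified mid-backfill.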
Would love concrete war stories and checklists.