## Problem Statement

I build and operate real-time data/ML platforms, and one recurring pain I see inside any data org is this: "Why does this attribute have this value?" When a company name, industry, or revenue figure looks wrong, investigations often stall without engineering help. I wanted a way for analysts, support, and product folks to self-serve the "why," with history and evidence, without waking up an engineer.

This is the blueprint I shipped: a serverless, low-maintenance traceability service that queries terabytes in seconds and costs peanuts.

## What this tool needed to do (for non-engineers)

- **Explain a value:** Why does attribute X for company Y equal Z right now?
- **Show the history:** When did it change? What were the past versions?
- **Show evidence:** Which sources and rules produced that value?
- **Be self-serve:** An API plus a simple UI so teams can investigate without engineers.

## The architecture (serverless on purpose)

- **Amazon API Gateway:** a secure front door for the API
- **AWS Lambda:** stateless request handlers (no idle compute to pay for)
- **Amazon S3 + Apache Hudi:** cheap storage with time travel and upserts
- **AWS Glue Data Catalog:** schema and partition metadata
- **Amazon Athena:** SQL over S3/Hudi, pay per data scanned, zero infrastructure to manage

**Why these choices?**

- **Cost:** storage on S3 is cheap, Athena charges only for bytes scanned, and Lambda is pay-per-invocation.
- **Scale:** S3/Hudi handles TB→PB without drama, and Athena scales horizontally.
- **Maintenance:** no fleet to patch; the infra footprint stays tiny as usage grows.

:::info
**Data layout: performance is a data problem (not a compute problem).** Athena is fast when it reads almost nothing, and slow when it plans or scans too much. The entire project hinged on getting partitions and projection right.
:::

## Partitioning strategy (based on query patterns)

- **created_date** (date): most queries are time-bounded
- **attribute_name** (enum): employees, revenue, linkedin_url, founded_year, industry, etc.
- **entity_id_mod** (integer bucket): mod(entity_id, N) to spread hot keys evenly

This limits the data scanned and, more importantly, narrows the partition metadata Athena needs to consider.

## The three things that made it fast

1. **Partitioning**
   - Put only frequently filtered columns in the partition spec.
   - Use integer bucketing (mod) for high-cardinality keys like entity_id.
2. **Partition indexing (first win, partial)**
   - We enabled partition indexing so Athena could prune partition metadata faster during planning.
   - This helped until the partition count grew large; planning was still the dominant cost.
3. **Partition projection (the actual game-changer)**
   - Instead of asking Glue to store millions of partitions, we taught Athena how the partitions are shaped.
   - Result: planning time close to zero; queries went from "slow-ish with growth" to a consistent 1–2 seconds for typical workloads.

### Athena TBLPROPERTIES (minimal example)

```sql
TBLPROPERTIES (
  'projection.enabled'='true',
  'projection.attribute_name.type'='enum',
  'projection.attribute_name.values'='employees,revenue,linkedin_url,founded_year,industry',
  'projection.entity_id_mod.type'='integer',
  'projection.entity_id_mod.interval'='1',
  'projection.entity_id_mod.range'='0,9',
  'projection.created_date.type'='date',
  'projection.created_date.format'='yyyy-MM-dd',
  'projection.created_date.interval'='1',
  'projection.created_date.interval.unit'='days',
  'projection.created_date.range'='2022-01-01,NOW'
)
```

### Why this works

- Athena no longer fetches a huge partition list from Glue; it calculates partitions on the fly from the rules above.
- Scanning drops to only the files that match the constraints.
- Planning time stays negligible, even as data and partitions grow.
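To make that concrete, here is what a typical traceability lookup can look like against this layout. It is a minimal sketch: the table name (`attribute_trace`) and the non-partition columns (`attribute_value`, `source_system`, `rule_id`) are illustrative placeholders, not the production schema. The point is that every partition predicate lines up with a projected key, so Athena derives the matching partitions from the projection rules instead of listing them from Glue.

```sql
-- Sketch of a typical "why does revenue for entity 12345 look like this?" query.
-- Table and non-partition column names are hypothetical; the partition columns
-- match the projection config above (created_date stored as a 'yyyy-MM-dd' string).
SELECT entity_id,
       attribute_name,
       attribute_value,   -- assumed column: the value being traced
       source_system,     -- assumed column: evidence of where the value came from
       rule_id,           -- assumed column: evidence of which rule produced it
       created_date
FROM attribute_trace
WHERE attribute_name = 'revenue'                              -- enum projection
  AND entity_id_mod  = 5                                      -- integer projection; mod(12345, 10) computed by the caller
  AND entity_id      = 12345
  AND created_date BETWEEN '2024-01-01' AND '2024-03-31'      -- date projection
ORDER BY created_date DESC;
```

Because the partition predicates are literal values, Athena can resolve the matching partitions during planning and only read the Parquet files under those prefixes; the `entity_id` filter then prunes rows inside them.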
## What surprised me (and what was hard)

- **The "gotcha" was query planning, not compute.** We often optimize engines, but here the slowest part was enumerating partitions. Partition projection solved the right problem.
- **Picking partition keys is half art, half science.** Over-partition and you drown in metadata; under-partition and you pay per scan. Start from your top three query predicates and work backwards.
- **Enum partitions are underrated.** For low-cardinality domains (attribute_name), enum projection is both simple and fast.
- **Bucketing (via mod) is pragmatic.** True bucketing support is limited in Athena, but a mod-based integer partition gets you most of the benefit.

## Cost & latency (real numbers)

- Typical queries: 1–2 seconds end-to-end (Lambda cold starts excluded)
- Data size: multiple TB in S3/Hudi
- Cost: pennies per few hundred requests (Athena scans + Lambda invocations)
- Ops: near zero; no servers to patch, no manual compaction beyond the regular Hudi maintenance cadence

## Common pitfalls (so you can skip them)

- Don't partition by high-cardinality fields directly (e.g., raw entity_id); you'll explode the partition count.
- Don't skip projection if you expect partitions to grow; indexing alone won't save you.
- Don't store partition metadata for every key if a rule can generate it (projection exists for exactly that reason).
- Don't let Glue schemas drift; version them and validate them in CI.

## Try this at home (a minimal checklist)

- Model your top three queries; pick partitions that match those predicates.
- Use enum projection for low-cardinality fields, date projection for time, and integer ranges for buckets (a minimal DDL sketch appears at the end of this post).
- Store data in columnar formats (Parquet/ORC) via Hudi to keep scans small and enable time travel.
- Add a thin API (API Gateway + Lambda) to turn traceability SQL into JSON for your UI.
- Measure planning time vs. scan time; optimize the former first.

## What this unlocked for my users

- Analysts and support can answer "why" without engineers.
- Product can audit attribute changes by time and cause.
- Engineering spends more time on fixes and less time on forensics.
- The org trusts the data more because the evidence is one click away.

## Closing thought

Great performance is usually a data layout story. Before you scale compute, fix how you store and find bytes. In serverless analytics, the fastest query is the one that plans instantly and reads almost nothing, and partition projection is the lever that gets you there.
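## Appendix: a minimal table DDL (sketch)

To tie the checklist together, here is one way the table definition can look with the partition spec and the projection properties in one place. Treat it as a sketch under stated assumptions, not the production DDL: the table name, non-partition columns, and S3 location are placeholders; it assumes Hive-style `key=value` partition paths under the table location (otherwise a `storage.location.template` property is needed); and it reads the data files as plain Parquet, whereas registering a real Hudi table (for example via Hudi's metastore sync) changes the storage and serde details.

```sql
-- Sketch only: assumes partition paths like
-- s3://example-bucket/attribute_trace/attribute_name=revenue/entity_id_mod=5/created_date=2024-01-01/
CREATE EXTERNAL TABLE attribute_trace (
  entity_id        bigint,
  attribute_value  string,   -- assumed column
  source_system    string,   -- assumed column
  rule_id          string    -- assumed column
)
PARTITIONED BY (
  attribute_name   string,
  entity_id_mod    int,
  created_date     string
)
STORED AS PARQUET
LOCATION 's3://example-bucket/attribute_trace/'
TBLPROPERTIES (
  'projection.enabled'='true',
  'projection.attribute_name.type'='enum',
  'projection.attribute_name.values'='employees,revenue,linkedin_url,founded_year,industry',
  'projection.entity_id_mod.type'='integer',
  'projection.entity_id_mod.interval'='1',
  'projection.entity_id_mod.range'='0,9',
  'projection.created_date.type'='date',
  'projection.created_date.format'='yyyy-MM-dd',
  'projection.created_date.interval'='1',
  'projection.created_date.interval.unit'='days',
  'projection.created_date.range'='2022-01-01,NOW'
);
```

The projection properties mirror the minimal example earlier in the post; the only new pieces are the placeholder columns, the partition spec, and the storage clause.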