Dramatic growth in Internet connectivity poses a challenge for the resource-constrained data collection efforts that support scientific and operational analysis of interdomain routing. Inspired by tradeoffs made in other disciplines, we explore a fundamental reconceptualization of how we design public BGP data collection architectures: an overshoot-and-discard approach that can accommodate an order-of-magnitude increase in vantage points by discarding redundant data shortly after collection. Since the definition of redundancy depends on context, we design algorithms that filter redundant updates without optimizing for a single objective, and evaluate our approach on two representative uses of BGP data: AS-topology mapping and hijack detection. Our approach can generalize to other types of Internet data (e.g., traceroute, traffic). We offer this study as a first step toward a potentially new area of Internet measurement research.