Hi HN — founder and engineer Jason here.
Hypervector is an API for writing test fixtures for data-driven features and components. It serves synthetic data (generated from distributions you define programmatically) over dedicated endpoints, and lets you benchmark the output of any function or model to detect changes and regressions as part of your wider test suites.
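To illustrate the general pattern (this is a hypothetical sketch, not Hypervector's actual API): define a distribution, generate a reproducible synthetic fixture from it, and assert that a data-driven function's output stays stable across changes. The `make_fixture` and `feature_under_test` names below are my own placeholders.

```python
import random
import statistics

# Hypothetical sketch of the fixture-and-assert pattern described above.
# None of these names come from the Hypervector API.

def make_fixture(seed=42, n=1000, mu=0.0, sigma=1.0):
    """Generate a reproducible synthetic dataset from a defined distribution."""
    rng = random.Random(seed)
    return [rng.gauss(mu, sigma) for _ in range(n)]

def feature_under_test(values):
    """Stand-in for a production data feature, e.g. a normalisation step."""
    mean = statistics.fmean(values)
    std = statistics.pstdev(values)
    return [(v - mean) / std for v in values]

# Regression check: same seed -> same fixture -> output properties are stable,
# so this can run alongside ordinary integration tests in CI.
fixture = make_fixture()
result = feature_under_test(fixture)
assert abs(statistics.fmean(result)) < 1e-9        # normalised mean ~ 0
assert abs(statistics.pstdev(result) - 1.0) < 1e-9  # unit variance
print("fixture regression check passed")
```

The key property is determinism: because the fixture is derived from a seeded, explicitly defined distribution, any drift in the function's output shows up as a test failure rather than a silent production change.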
It began as a side project (now my main focus), based on my experience first as a data scientist and then as a software engineer on data-heavy teams. Production-facing data features often became difficult to keep testing once they moved past the experimentation phase of the machine learning and data science workflow, and it was hard to give every engineer confidence that their changes around these components weren't introducing bugs or regressions. Hypervector tries to help by giving you the tools to easily build test fixtures for empirically derived software, which you can assert on alongside more traditional integration tests and as part of your build and CI automation.
Keen to hear early impressions, and to get folks using the API if they're interested. This preview version is available now, and I'll be rolling out a more comprehensive version as a service in the coming months.