We've recently decided to move these workloads to Snowflake because we want to protect our transactional workloads.
The Snowflake devex has been pretty bad, because we'd need a Snowflake "instance" for each dev's localhost Postgres, and we like that localhost Postgres to be ephemeral. Additionally, it'd be nice to have this all work locally.
One interesting piece of software I came across is DuckDB. It's lightweight, and there's no additional storage to manage. It's an interesting direction to test, but I don't know whether it'll satisfy our latency requirements.
How have you separated and scaled out your analytics workloads from postgres?
One of the side effects that comes with heavier feature flag (FF) use is the FF cleanup burden. I was curious whether anyone has developed tooling that makes this cleanup easier and automated.
I know there's the obvious candidate of creating a cleanup ticket as part of a story's (or whatever equivalent's) conclusion, but I'd like to see if cleanup is solvable via automation as opposed to process.
An idea I have: on every merge to main, check whether any feature flags have been added. For every added flag, create a "cleanup PR" that removes the feature flag and its check, removes the old functionality, and keeps the new functionality.
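The detection half of that idea can be sketched simply. This is a toy illustration, assuming a (hypothetical) convention where flags are referenced as `is_enabled("flag-name")`; a real tool would hook into your flag SDK and CI instead:

```python
import re

# Hypothetical convention: flags are checked via is_enabled("flag-name").
FLAG_PATTERN = re.compile(r'is_enabled\(["\']([\w-]+)["\']\)')

def added_flags(diff_text: str) -> set[str]:
    """Return flag names introduced on added lines of a unified diff."""
    flags = set()
    for line in diff_text.splitlines():
        # Added lines start with '+' (skip the '+++' file header).
        if line.startswith("+") and not line.startswith("+++"):
            flags.update(FLAG_PATTERN.findall(line))
    return flags

diff = """\
+++ b/app/checkout.py
+    if is_enabled("new-checkout"):
+        run_new_checkout()
"""
print(added_flags(diff))  # {'new-checkout'}
```

Each flag found this way would then seed one cleanup PR; the harder part, actually deleting the old code path, likely needs AST-level rewriting rather than regexes.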
I recognize that this solution design could be pretty specific to how my team uses feature-flags.
Any feedback on a tool that works like I described, or do you think there's an alternative approach I should consider?
I’ve explored text-to-speech (TTS) for articles using tools like Speechify. However, I noticed a gap: TTS lacked the engagement I found in podcasts. I think there are two reasons why:
1. Articles are visually oriented, while podcasts are audio-centric.
2. The personal connection we develop with our favorite podcasters.
To address this, a-to-p leverages AI to bridge the gap. The idea is to morph articles into a format resembling a podcast transcript before applying TTS. In my experience, this approach significantly enhances engagement compared to standard TTS.
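As a rough illustration of the "morph into a transcript" step, here's a naive deterministic stand-in for what the LLM does: recast article sentences as a two-host dialogue, which then feeds into TTS. The host names and splitting logic are invented for this sketch:

```python
import re

def to_transcript(article: str, hosts=("Host A", "Host B")) -> str:
    """Naive stand-in for the LLM step: alternate article sentences
    between two hosts so the output reads like a dialogue."""
    sentences = [s.strip()
                 for s in re.split(r"(?<=[.!?])\s+", article.strip())
                 if s.strip()]
    lines = [f"{hosts[i % 2]}: {s}" for i, s in enumerate(sentences)]
    return "\n".join(lines)

article = ("TTS reads articles verbatim. Podcasts feel conversational. "
           "a-to-p tries to close that gap.")
print(to_transcript(article))
```

The real transformation is far richer (rephrasing, back-and-forth, intros), but the shape is the same: article text in, speaker-labeled transcript out, TTS per speaker line.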
The roadmap for a-to-p includes:
- Tackling the content and length alterations in the transformation process.
- Implementing open-source LLMs for generating transcripts.
- Integrating high-quality, affordable open-source TTS models.
- Adding intro and outro segments for a complete podcast experience.
I’m eager for any feedback or contributions to the project. Your insights can greatly shape a-to-p’s future!