
holy f this audio assistant demo is crazily good

for a while since Llama3, I've felt a "what now?" vibe. obviously it's no longer enough to just release the model; the interface has to be more ambitious than ever for the juice to be worth the squeeze. another way to frame this: once you figure out operating system hotkeys, they become impossible to give up. another data point is Tesla's FSD v12 update: a brand new architecture, mostly because the diminishing returns revolve around edge cases where intervention is rare yet life-or-death.

so what's my impression of the OpenAI app? we're seeing what ML studios choose when they have no more worlds to conquer. speech-to-text-to-speech is great word-of-mouth advertisement, literally. a global, public storefront means more fluid, esoteric feedback. a cheaper gateway means more demand for the destination.

how do OAI partners see this: more users with a lower barrier to modifying their UX? how do OAI competitors see this moat (and has it substantially widened)? do FOSS developers see this as an informative windfall or as a black box with a runaway growth of humans in the loop? much to ponder until GPT-5 does our laundry for us.

I want a better programming buddy than GPT-4; it just needs to be smarter and pay more attention to what I'm saying. I haven't been subscribed to the OpenAI app in a while because I don't consider its output super trustworthy.