"The basic logic: GPT-1 cost approximately nothing to train. GPT-2 cost $40,000. GPT-3 cost $4 million. GPT-4 cost $100 million. Details about GPT-5 are still secret, but one extremely unreliable estimate says $2.5 billion, and this seems the right order of magnitude given the $8 billion that Microsoft gave OpenAI. So each GPT costs between 25x and 100x the last one. Let’s say 30x on average. That means we can expect GPT-6 to cost $75 billion, and GPT-7 to cost $2 trillion." (...) "GPT-6 will probably cost $75 billion or more. OpenAI can’t afford this. Microsoft or Google could afford it, but it would take a significant fraction (maybe half?) of company resources. If GPT-5 fails, or is only an incremental improvement, nobody will want to spend $75 billion making GPT-6, and all of this will be moot. On the other hand, if GPT-5 is close to human-level, and revolutionizes entire industries, and seems poised to start an Industrial-Revolution-level change in human affairs, then $75 billion for the next one will seem like a bargain." Not sure how accurate these calculations are, but if it takes $75B to design GPT-6, then OpenAI would have to start printing a lot of money. Microsoft's annual revenue for 2023 is $218B, and they probably won't spend 1/3 of it on OpenAI :)

> More promising is synthetic data, where the AI generates data for itself. This sounds like a perpetual motion machine that won’t work, but there are tricks to get around this. For example, you can train a chess AI on synthetic data by making it play against itself a million times. You can train a math AI by having it randomly generate steps to a proof, eventually stumbling across a correct one by chance, automatically detecting the correct proof, and then training on that one. You can train a video game playing AI by having it make random motions, then see which one gets the highest score. In general you can use synthetic data when you don’t know how to create good data, but you do know how to recognize it once it exists (eg the chess AI won the game against itself, the math AI got a correct proof, the video game AI gets a good score). But nobody knows how to do this well for written text yet.

This argument is misleading to me. In games like chess, the game-theoretic optimum, the rational strategy for winning, is well defined, so you can synthesize more data from existing data. In chess you can assign values to every piece, and ultimately every game resolves to a win, a loss, or a draw. I know Tesla also recombines tricky traffic situations into new patterns, because traffic has objective rules (traffic laws) that cars must follow, so the computers have an objective basis of judgment among themselves there, too. Now, where is this objective basis of judgment for writing good text? "Nobody knows how to do this well for written text yet" is misleading to me because it assumes there is actually an answer here at all, an objective measure of what winning at language looks like in a specific context. Maybe I'm pessimistic here, but I find that hard to believe.
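To make the quote's "recognize it once it exists" point concrete, here is a minimal sketch of the generate-then-verify loop it describes. `generate_candidate` and `verify` are hypothetical stand-ins for the domain-specific parts (self-play and the game result in chess, a random proof generator and a proof checker in math, random actions and the score in video games):

```python
import random

# Minimal sketch of the generate-then-verify pattern described in the quote.
# Generation is cheap and mostly produces junk; a cheap automatic check keeps
# only the "winning" attempts, and those become the synthetic training set.

def generate_candidate():
    """Produce a random attempt (here: a random sum that may or may not be right)."""
    a, b = random.randint(0, 9), random.randint(0, 9)
    claimed = random.randint(0, 18)
    return (a, b, claimed)

def verify(candidate):
    """Objective check that tells us whether the attempt 'won'."""
    a, b, claimed = candidate
    return a + b == claimed

# Keep only the verified attempts; these become the synthetic training data.
synthetic_data = [c for c in (generate_candidate() for _ in range(10_000)) if verify(c)]
print(f"kept {len(synthetic_data)} verified examples out of 10,000 attempts")
```

The objection above boils down to this: for free-form text, nobody has a good `verify` function.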

One of the limitations, acquiring enough data for training, could be removed if models could learn from video. Video feeds from the real world could provide an enormous amount of data. This paper attempts to make it possible to learn about the world from video: https://largeworldmodel.github.io/ Found via this cast: https://warpcast.com/polluterofminds/0x9afe15ac

@timdaub.eth - I agree re: chess. You can even give scores to positions, e.g., if your bishop hasn't moved, its score is lower than if it sits on a well-protected square in the center of the board. With writing, there are some high-level rules, but there's also a lot of space for individual expression: see Mike Solana's essays vs. Paul Graham's essays. Some people prefer the former, some the latter. I am wondering, though, whether AI could nail the 20, 30, or 50 main styles of writing - it already mimics style pretty well via ChatGPT. If it does, then the main question is how it produces interesting thoughts, as that is what writing (for me) is about. @ruycer.eth - thanks for sharing. I think this might be one of the reasons why Tesla bets so heavily on video data.
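To illustrate the position-scoring idea, a toy sketch: the piece values are the usual rough heuristics, while the center bonus and everything else are made up for illustration and nowhere near a real engine's evaluation.

```python
# Toy illustration of scoring a chess position, as discussed above.
# Standard rough piece values plus a small bonus for centralized pieces.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}
CENTER_SQUARES = {"d4", "d5", "e4", "e5"}

def score(position):
    """`position` maps squares like 'e4' to a (color, piece) pair, e.g. ('W', 'B')."""
    total = 0
    for square, (color, piece) in position.items():
        value = PIECE_VALUES.get(piece, 0)
        if square in CENTER_SQUARES:
            value += 0.5  # e.g. a centralized bishop scores higher than one still at home
        total += value if color == "W" else -value
    return total

# A white bishop on e4 outscores the same bishop still sitting on c1:
print(score({"e4": ("W", "B")}))  # 3.5
print(score({"c1": ("W", "B")}))  # 3.0
```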