10 days ago
Want more PhD level intelligence?
https://chatgpt.com/share/6898598d-fb18-800c-ab20-681788add81a
And my favorite: 🤣 https://chatgpt.com/share/68985829-03c4-800c-b261-7182f39d54c5
To give the (anthropomorphized) model credit, I think the biggest problem is that the way it's trained gives it practically no concept of the relation between the tokens it spits out and the HTML that the ChatGPT UI wrapper will render them into.
Because of that, inserting regular newlines often doesn't work: a single newline in markdown doesn't translate to a line break in HTML.
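The soft-break behavior can be shown with a toy renderer. This is an assumption-laden sketch, not ChatGPT's actual pipeline: it just mimics the CommonMark rule that consecutive lines of a paragraph are joined, and only a blank line starts a new paragraph.

```python
# Toy CommonMark-style paragraph renderer (illustrative only).
# Paragraphs are split on blank lines; single newlines inside a
# paragraph are collapsed into spaces, so they produce no <br>.

def render_paragraphs(md: str) -> str:
    paragraphs = [" ".join(p.split("\n")) for p in md.split("\n\n") if p.strip()]
    return "".join(f"<p>{p}</p>" for p in paragraphs)

single = "line one\nline two"    # single newline: merged into one <p>
double = "line one\n\nline two"  # blank line: two separate paragraphs

print(render_paragraphs(single))  # <p>line one line two</p>
print(render_paragraphs(double))  # <p>line one</p><p>line two</p>
```

So a model that emits a lone `\n` hoping for a visual line break gets silently flattened by the renderer.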
To force a particular structure, I often ask it to use a well-known format like YAML. Because there is so much YAML training data, it is practically impossible for the LLM not to add newlines in the correct places (and the output is also typically rendered correctly, because the model places it in a markdown code block).
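The YAML trick pairs well with a check on the consumer side. A minimal sketch, assuming a flat `key: value` reply (PyYAML would normally be used for real parsing; this stdlib-only toy parser and the example reply are assumptions for self-containment):

```python
# Tiny stdlib-only parser for flat "key: value" YAML lines, to
# illustrate consuming a structured model reply (illustrative only).

def parse_flat_yaml(text: str) -> dict:
    result = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(":")
        result[key.strip()] = value.strip()
    return result

# Hypothetical model reply requested in YAML form:
reply = """\
title: Quarterly report
status: draft
pages: 12
"""
print(parse_flat_yaml(reply))
# {'title': 'Quarterly report', 'status': 'draft', 'pages': '12'}
```

Since the format forces one entry per line, the line breaks you wanted come for free.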