So I'd love to get your thoughts on the following. There's a tweet Sam Altman wrote a few years ago that has, as they say, aged quite well. He was announcing the release of ChatGPT, or maybe an early iteration of it, perhaps 3 or 3.5, or maybe even an earlier one. And the tweet went something like, it's our conversational model, or it's a conversational model, something along those lines.

And what's interesting to me about this is that I discovered AI, or got excited about it, through the ChatGPT interface. From there I worked back to more instructional workloads: first I used it as a chat interface, then began using LLMs through their API endpoints, and then began using them programmatically, scripting with them on my local computer. And now I'm doing much more of that than I am using them as chatbots.

I know a lot of people, even people who are pretty technically literate, aren't really aware that AI can be used in this way. But what's interesting about the tweet I mentioned is that it implied that instructional models actually predate conversational models. In other words, I think what he was saying was that OpenAI had developed GPT first for instruction following, and then refined it for conversation.

And what I'm curious to know is: is that accurate, that instructional models predate conversational models, and if so, by roughly how long?