A community member is encountering an error when using a hosted AI model: the maximum context length has been exceeded. Commenters suggest that choosing a smaller instance or reducing the length of the messages may resolve the issue. One community member reports that the problem was solved by selecting the correct element to modify while editing. Community members also note that the AI is currently in an alpha stage and that the experience will be improved in the future.
When prompted, it shows the following error: "API Internal Error: 400 This model's maximum context length is 16385 tokens. However, your messages resulted in 98779 tokens. Please reduce the length of the messages." It also shows the same error with preset prompts, and even with smaller one-line prompts. I am using the hosted one. Am I missing something?
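Since the error reports roughly 98k tokens even for one-line prompts, the bulk of the request is likely accumulated conversation history or injected context rather than the prompt itself, so counting and trimming what is actually sent is a reasonable first check. Below is a minimal sketch of client-side token trimming, assuming an OpenAI-style messages list and the tiktoken library; the model name, per-message overhead, and response budget are illustrative assumptions, not details from this thread:

```python
import tiktoken

MAX_CONTEXT = 16385      # limit reported by the error message
RESPONSE_BUDGET = 1024   # tokens reserved for the reply (assumed value)

def count_tokens(messages, model="gpt-3.5-turbo"):
    """Approximate token count; exact accounting varies by model."""
    enc = tiktoken.encoding_for_model(model)
    total = 0
    for m in messages:
        total += 4  # rough per-message overhead for role/separators (assumption)
        total += len(enc.encode(m["content"]))
    return total

def trim_history(messages, limit=MAX_CONTEXT - RESPONSE_BUDGET):
    """Drop the oldest non-system messages until the conversation fits."""
    trimmed = list(messages)
    while count_tokens(trimmed) > limit and len(trimmed) > 1:
        trimmed.pop(1)  # keep the system prompt at index 0, drop oldest turn
    return trimmed
```

If trimming like this has no effect, the oversized payload is probably being assembled on the hosted side (e.g., retrieved documents or prior session state), which would match the commenters' advice to pick a smaller instance or adjust what is included in the request.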