
Problem working with long text

#4
by Kosh69 - opened

When asked to retell a chapter from a book (~120k tokens), the model produces a kind of gibberish: it uses plenty of words and phrases from across the text and writes clearly and consistently, but it misses the point entirely; the output is essentially unrelated to the content of the text.
When I ask a question about the text, it presents its own fanciful version of the events asked about.
I used the unsloth Q4_K_XL quant, and I don't think it's a quantization issue...
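Before blaming quantization, one quick sanity check is whether the chapter plus the instruction actually fits in the model's context window. A minimal sketch, assuming a roughly 128k-token window and a crude characters-per-token heuristic (real counts require the model's own tokenizer, and Russian text typically tokenizes into more tokens per character than English):

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Crude token estimate; accurate counts need the model's tokenizer."""
    return int(len(text) / chars_per_token)


def fits_in_context(text: str, context_window: int = 131072) -> bool:
    """Check whether the text fits, leaving headroom for the
    instruction and the generated summary (hypothetical reserve)."""
    headroom = 4096
    return estimate_tokens(text) + headroom <= context_window


# Stand-in for the actual chapter text:
chapter = "a" * 400_000  # ~100k estimated tokens
print(fits_in_context(chapter))
```

If the estimate lands near the window limit, silent truncation of the prompt can produce exactly this failure mode: fluent text built from fragments of whatever portion survived.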
Regarding the bandwidth and memory consumption - thank you! 5 stars!

Hi, thanks for your interest in the model!
Is the chapter you are working with publicly available so that we can test?
Thanks!

This book is publicly available. It's an old science fiction novel by the Strugatsky brothers, "Hard to Be a God." It's in Russian. The vast majority of models now work perfectly in Russian :)

Here's an example of a correct summary of Chapter 5, with an English translation: https://chat.z.ai/s/fb9d8fde-f568-45c1-be21-c6a5b96031c7

This is my standard test; the smallest LLM that has passed it was Qwen3 14B.

Thanks!
To confirm, did you test the model with Russian text as input?

Yes, both the instruction and the text were in Russian.

I apologize, I'm starting to get confused myself with all these language models :) Of course, I meant Russian.
