- What if you used chain-of-thought reasoning to increase the model's ability to keep track of how many empty cells it has generated so far? (See the sketch after these questions.)
- Why are LSTMs so good at generating levels when trained on so few example levels?
- Is there any work on using LLMs to generate a symbolic language to solve a problem?
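
A minimal sketch of the idea in the first question, not taken from the paper: the prompt asks the model to emit a running count of empty tiles after every row as an explicit chain of thought, and a small checker verifies those self-reported counts against the generated tiles. The prompt wording, the `count:` marker, and the use of `-` as the empty-tile symbol are all assumptions for illustration.

```python
# Hypothetical prompt: the model generates a level row by row and, after each
# row, states the running total of empty ('-') tiles it has produced so far.
COT_PROMPT = (
    "Generate a Sokoban level row by row. After each row, write a line "
    "'count: N', where N is the total number of '-' (empty) tiles generated "
    "so far. Stop when the level is complete."
)

def verify_running_counts(model_output: str) -> bool:
    """Check the model's self-reported running empty-cell counts against the tiles."""
    running = 0
    for line in model_output.splitlines():
        if line.startswith("count:"):
            # The model's claimed running total must match the actual tile count.
            if int(line.split(":", 1)[1]) != running:
                return False
        else:
            running += line.count("-")
    return True

if __name__ == "__main__":
    # Hand-written example of the output format the prompt asks for.
    sample = "#####\ncount: 0\n#---#\ncount: 3\n#####\ncount: 3"
    assert verify_running_counts(sample)
```

The checker could be used to reject or re-sample generations whose intermediate counts drift from the true total, which is the failure mode the question is probing.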
Bibliography
Todd, G., Earle, S., Nasir, M. U., Green, M. C., & Togelius, J. (2023). Level Generation Through Large Language Models (arXiv:2302.05817). arXiv. https://doi.org/10.48550/arXiv.2302.05817