arxiv:2408.10635

Strategist: Learning Strategic Skills by LLMs via Bi-Level Tree Search

Published on Aug 20
· Submitted by HenryCai1129 on Aug 23
Abstract

In this paper, we propose a new method, Strategist, that uses LLMs to acquire new skills for playing multi-agent games through a self-improvement process. Our method gathers high-quality feedback through self-play simulations with Monte Carlo tree search and LLM-based reflection, which is then used to learn high-level strategic skills, such as how to evaluate states, that guide low-level execution. We showcase how our method can be used for both action planning and dialogue generation in the context of games, achieving strong performance on both tasks. Specifically, we demonstrate that our method can train agents that outperform both traditional reinforcement-learning-based approaches and other LLM-based skill-learning approaches in games including the Game of Pure Strategy (GOPS) and The Resistance: Avalon.
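To make the bi-level idea concrete, here is a minimal, illustrative sketch (not the paper's implementation): the high level supplies a state-evaluation heuristic (in Strategist this would be generated and refined by an LLM; here a hand-written function stands in for it), and the low level is a standard MCTS that uses that heuristic to score leaves instead of running random rollouts. The game (a Nim variant), the function names, and all parameters are assumptions for illustration only.

```python
import math
import random

# Illustrative two-player game (a Nim variant): n counters, players alternate
# taking 1-3; whoever takes the last counter wins.
def legal_moves(n):
    return [m for m in (1, 2, 3) if m <= n]

# Stand-in for the LLM-generated value heuristic: estimated win probability
# for the player to move when n counters remain. (Positions where n is a
# multiple of 4 are losing for the player to move.)
def value_fn(n):
    return 0.0 if n % 4 == 0 else 1.0

class Node:
    def __init__(self, n, parent=None, move=None):
        self.n, self.parent, self.move = n, parent, move
        self.children, self.visits, self.value = [], 0, 0.0

def uct(node, c=1.4):
    # Upper-confidence bound balancing exploitation and exploration.
    return node.value / node.visits + c * math.sqrt(
        math.log(node.parent.visits) / node.visits)

def mcts(root_n, iters=500, seed=0):
    random.seed(seed)
    root = Node(root_n)
    for _ in range(iters):
        node = root
        # Selection: descend through fully expanded nodes by UCT.
        while node.children and len(node.children) == len(legal_moves(node.n)):
            node = max(node.children, key=uct)
        # Expansion: add one untried child, if any moves remain.
        moves = legal_moves(node.n)
        if moves and len(node.children) < len(moves):
            tried = {c.move for c in node.children}
            move = random.choice([m for m in moves if m not in tried])
            node.children.append(Node(node.n - move, parent=node, move=move))
            node = node.children[-1]
        # Evaluation: score the leaf with the value heuristic instead of a
        # random rollout, from the perspective of the player who just moved.
        reward = 1.0 if node.n == 0 else 1.0 - value_fn(node.n)
        # Backpropagation, flipping perspective at each level.
        while node is not None:
            node.visits += 1
            node.value += reward
            reward = 1.0 - reward
            node = node.parent
    return max(root.children, key=lambda c: c.visits).move
```

With an accurate heuristic, the search quickly concentrates visits on the winning move (the one leaving a multiple of 4 counters); in Strategist the analogous heuristic is proposed and iteratively improved by the LLM from self-play feedback rather than hand-written.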

Community


We are excited to release our new paper on LLMs playing social deduction games (The Resistance: Avalon)! Our agent makes better decisions and generates more sophisticated dialogue, and it's more than just an Avalon agent! Find more details in our paper.

