arxiv:2408.16667

Iterative Graph Alignment

Published on Aug 29
· Submitted by Ksgk-fy on Sep 2
Authors:

Abstract

By compressing diverse narratives, LLMs go beyond memorization, achieving intelligence by capturing generalizable causal relationships. However, they suffer from local 'representation gaps' due to insufficient training data diversity, limiting their real-world utility, especially in tasks requiring strict alignment to rules. Traditional alignment methods relying on heavy human annotations are inefficient and unscalable. Recent self-alignment techniques also fall short, as they often depend on self-selection-based prompting and memorization-based learning. To address these issues, we introduce Iterative Graph Alignment (IGA), an annotation-free rule-based alignment algorithm. A teacher model (VLM) employs Iterative Graph Prompting (IGP) to create logical graphs and reference answers. The student model (LLM) identifies local knowledge gaps by attempting to align its responses with these references, collaborating with helper models to generate diverse answers. These aligned responses are then used for iterative supervised fine-tuning (SFT). Our evaluations across five rule-based scenarios demonstrate IGP's effectiveness, with a 73.12% alignment improvement in Claude Sonnet 3.5, and Llama3-8B-Instruct achieving an 86.20% improvement, outperforming Claude Sonnet 3.5 in rule-based alignment.

Community

Paper author · Paper submitter

Large Language Models (LLMs) such as GPT-4 and Claude Sonnet 3.5 often struggle with role-play scenarios, frequently breaking character over simple queries. Our research tackles this challenge by focusing on the general rule-based alignment problem, aiming to closely mimic human reasoning and learning processes.

Reasoning and language are processed separately in the human brain. Coincidentally, we found that by reasoning visually over a logical graph, the rule-based alignment performance of a SOTA VLM increases by ~73%.
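To make the IGP step concrete, here is a minimal sketch of what reasoning over a logical graph could look like, assuming a generic `generate(prompt) -> str` callable for the teacher VLM; the prompts, the edge-list graph format, and the number of refinement rounds are illustrative assumptions, not the paper's exact implementation.

```python
from typing import Callable

# Hypothetical teacher interface: any callable that maps a prompt to text.
Generate = Callable[[str], str]

def iterative_graph_prompt(generate: Generate, rule: str, query: str,
                           max_rounds: int = 3) -> tuple[str, str]:
    """Ask the teacher to lay the problem out as a logical graph, then answer
    by reasoning over that graph, refining the graph for a few rounds."""
    graph = generate(
        f"Rule: {rule}\nQuery: {query}\n"
        "Draw a logical graph (nodes = propositions, edges = implications) "
        "connecting the rule to the query. Return it as an edge list."
    )
    for _ in range(max_rounds):
        critique = generate(
            f"Graph:\n{graph}\n"
            "Point out any missing node or edge needed to answer the query, "
            "or reply 'OK' if the graph is complete."
        )
        if critique.strip().upper() == "OK":
            break
        graph = generate(
            f"Graph:\n{graph}\nCritique:\n{critique}\nReturn the revised edge list."
        )
    answer = generate(
        f"Rule: {rule}\nQuery: {query}\nLogical graph:\n{graph}\n"
        "Answer the query by following the graph, staying consistent with the rule."
    )
    return graph, answer
```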

Human learning is adaptive: we focus on questions we don't know the answer to, and seek to understand how to answer them rather than memorizing the answer. In contrast, the standard SFT learning paradigm places equal weight on easy and hard cases, while its exact-matching loss effectively rejects all but one aligned answer. We view this as a significant drawback and propose 'self-adaptive incremental learning', which curates a model-specific curriculum training dataset and encourages any answer that aligns with the reference answer. We observe an improvement of ~86% from this learning approach.
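One way to read this 'self-adaptive incremental learning' step is as an alignment-checked filter over the student's own attempts: questions the student already answers in line with the reference are skipped, and hard cases are paired with every aligned answer the helper models can propose. The sketch below assumes hypothetical `student`, `helpers`, and `judge` callables and is not the paper's exact procedure.

```python
from typing import Callable, Iterable

Generate = Callable[[str], str]
Judge = Callable[[str, str], bool]  # (candidate, reference) -> aligned?

def curate_sft_pairs(questions: Iterable[str], references: dict[str, str],
                     student: Generate, helpers: list[Generate],
                     judge: Judge) -> list[tuple[str, str]]:
    """Build a model-specific SFT set: keep only questions the student gets
    wrong, paired with every aligned answer we can collect for them."""
    pairs: list[tuple[str, str]] = []
    for q in questions:
        ref = references[q]
        if judge(student(q), ref):
            continue  # the student already answers this one in line with the rule
        # Hard case: gather diverse answers and keep any that align with the reference.
        candidates = [ref] + [h(q) for h in helpers]
        for cand in candidates:
            if judge(cand, ref):
                pairs.append((q, cand))
    return pairs
```

Collecting several aligned answers per hard question is what lets the fine-tuning stage reward any acceptable phrasing instead of a single exact match.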

By integrating these insights, we present Iterative Graph Alignment (IGA) 🔥, a technique that iteratively refines a small language model (~8B) with guidance from a teacher VLM. IGA excels at rule-based alignment without human annotation, while preserving a transparent logical graph that enhances the explainability of the model's decision-making process.
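Reusing the two helper sketches above, the outer IGA loop might look roughly like this; the `finetune` callable and the number of rounds are placeholders rather than the paper's settings.

```python
def iterative_graph_alignment(rule, questions, teacher, student, helpers,
                              judge, finetune, rounds=3):
    """Outer IGA loop (sketch): IGP references from the teacher, then
    curate-and-SFT rounds on the student until it aligns with the rule."""
    references = {
        q: iterative_graph_prompt(teacher, rule, q)[1] for q in questions
    }
    for _ in range(rounds):
        pairs = curate_sft_pairs(questions, references, student, helpers, judge)
        if not pairs:
            break  # every question is already answered in line with the rule
        student = finetune(student, pairs)  # returns the updated student callable
    return student
```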


