Ahead of AI
Tackling Hallucinations, Boosting Reasoning Abilities, and New Insights into the Transformer Architecture

 


This month, I want to focus on three papers that address three distinct problem categories of Large Language Models (LLMs):

    Reducing hallucinations.

    Enhancing the reasoning capabilities of small, openly available models.

    Deepening our understanding of, and potentially simplifying, the transformer architecture.

Reducing hallucinations is important because, while LLMs like GPT-4 are widely used for knowledge generation, they can still produce plausible yet inaccurate information.

Improving the reasoning capabilities of smaller models is also important. Right now, proprietary models like ChatGPT and GPT-4 (rather than private or personal LLMs) are still our go-to for many tasks. Enhancing the reasoning abilities of these smaller models is one way to close the gap between open-source models and current-gen proprietary LLMs.

Lastly, enhancing our understanding of the transformer architecture is fundamental for grasping the training dynamics of LLMs. This knowledge could lead to simpler, more efficient models, boosting the performance of open-source models and potentially paving the way for new architectural innovations.


1) Fine-tuning Language Models for Factuality

Large Language Models (LLMs) often suffer from hallucinations, meaning they generate convincing yet factually inaccurate information. This is particularly problematic when using LLMs for knowledge-based question answering, as it necessitates manual fact-checking of the responses. This process can be time-consuming and may defeat the purpose of using an LLM for some queries.

In the paper Fine-tuning Language Models for Factuality, the authors suggest finetuning methods using Direct Preference Optimization (DPO) to reduce the rate of hallucinations. By fine-tuning a 7B Llama 2 model using this approach, they achieved a 58% reduction in the factual error rate compared to the original Llama-2-chat model.

Annotated figure excerpt from https://arxiv.org/abs/2311.08401 showing that the proposed method outperforms other alternatives in terms of factual error rates.


About Hallucinations

Hallucinations are not explicitly discussed in the paper itself. However, before delving into the paper, it is worth considering the training and inference mechanisms of LLMs, which present two challenges. First, the pretraining data may include low-quality documents that lead the LLM to absorb factually incorrect information.

Second, during inference, we use a temperature setting and a sampling method that let the LLM vary its sentence structure so that it doesn't always generate identical text. This prevents the model from simply acting as a database lookup. However, there is no mechanism to guarantee which tokens can be sampled freely and which should be reproduced faithfully from the training data.
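To make the sampling mechanism concrete, here is a minimal sketch of temperature-scaled sampling from next-token logits; the tiny vocabulary and logit values are made up purely for illustration.

```python
import torch

torch.manual_seed(123)

# Toy next-token logits over a tiny made-up vocabulary
vocab = ["Apple", "Orange", "Samsung", "Sony"]
logits = torch.tensor([4.0, 1.5, 2.0, 0.5])

def sample_next_token(logits, temperature=1.0):
    # Lower temperatures sharpen the distribution, higher ones flatten it
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1).item()

for temp in (0.1, 1.0, 2.0):
    counts = {token: 0 for token in vocab}
    for _ in range(1000):
        counts[vocab[sample_next_token(logits, temp)]] += 1
    print(f"temperature={temp}: {counts}")
```

With a low temperature, the highest-scoring token ("Apple" in this toy example) is chosen almost every time; at higher temperatures, less likely tokens such as "Samsung" get sampled more often, which is exactly how factual details can drift.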

For instance, in response to a query like "Who invented the iPhone and when was it first released?", possible responses include the following:

Different LLM responses with key facts highlighted in blue.

The LLM may vary the sentence structure and phrasing, but the key facts, specifically that the iPhone was developed by a team at Apple under the leadership of Steve Jobs and was released on June 29, 2007, should not be altered.

More concretely, changing "invented" to "created" would be acceptable. However, changing "Apple" to "Orange" or "Samsung" (or changing the date) would obviously be a significant error.

Direct Preference Optimization

The notable method employed by the researchers here is Direct Preference Optimization (DPO), which is becoming a popular alternative to Reinforcement Learning with Human Feedback (RLHF).

RLHF is the method behind ChatGPT, Llama 2 Chat, and others. For more information about RLHF, please refer to my earlier article on the topic.

DPO is conceptually simpler than RLHF, as it directly trains the LLM on response-preference rankings instead of creating a reward model. Essentially, DPO optimizes a classification loss computed directly on preference data, making it much easier to implement and use than RLHF. Moreover, DPO has recently been successfully used (e.g., the Zephyr 7B model by Lewis Tunstall and colleagues, which seems to outperform the bigger Llama-2 70b Chat model trained via RLHF) and is on its way to becoming one of the most popular finetuning approaches.
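For readers who prefer code over prose, below is a minimal sketch of the DPO loss (my own simplified formulation, not the exact training code from either paper). It assumes we already have the summed log-probabilities that the policy and the frozen reference model assign to the preferred ("chosen") and dispreferred ("rejected") responses.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # All inputs: summed log-probabilities of full responses, shape (batch_size,).
    # beta controls how far the policy may drift from the frozen reference model.
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    # Binary classification-style loss: -log sigmoid(beta * margin)
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

# Toy example with made-up log-probabilities
policy_chosen = torch.tensor([-12.0, -15.0])
policy_rejected = torch.tensor([-14.0, -15.5])
ref_chosen = torch.tensor([-13.0, -15.2])
ref_rejected = torch.tensor([-13.5, -15.0])
print(dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected))
```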

Annotated figure from the DPO paper, https://arxiv.org/abs/2305.18290

Side note: Three of the authors of this Fine-tuning Language Models for Factuality paper were also part of the original DPO paper.

Removing Human Labeling Effort

Compared to RLHF, DPO simplifies the finetuning process since it doesn't require creating a reward model. However, DPO still requires generating preference data, typically involving sampling responses from the model for human fact-checking. A human labeler then ranks these responses by preference.

The authors note that fact-checking an LLM response, such as a biography of a well-known person, takes an average of 9 minutes per response for a human. Checking 505 biographies, the dataset used in this study, would cost about $2000.
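(For a quick sanity check on that estimate: 505 responses at 9 minutes each comes to roughly 76 hours of labeling, so $2,000 corresponds to an hourly rate of about $26.)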

As an alternative to human fact-checking, the authors suggest a variant of DPO that completely removes the human from the loop. This approach is somewhat analogous to RLAIF, reinforcement learning with AI feedback, which, unlike RLHF, does not require human input.

Annotated figure from the RLAIF paper, https://arxiv.org/abs/2309.00267

In the "Fine-tuning Language Models for Factuality" paper, the authors experiment with two methods for creating fully automated "truthfulness" scores:

    A reference-based truthfulness score, abbreviated as FactTune-FS.

    A reference-free truthfulness score, abbreviated as FactTune-MC.

Annotated figure from "Fine-tuning Language Models for Factuality" paper, https://arxiv.org/abs/2311.08401

The first step in each of the two methods is to obtain a dataset of responses, such as biographies of well-known persons, and extract atomic claims using a GPT-3.5 model. In Method 1 (FactTune-FS), the authors utilize the existing FactScore approach. They consider Wikipedia as a source of truth and use a Llama 1 7B model to verify whether each atomic claim is supported by the article. I assume that the reason a small Llama 1 model is used here is that it was the model employed in the original FactScore paper.

For Method 2 (FactTune-MC), the authors employ GPT-3.5 to first transform atomic claims into questions. These questions are then used as queries for a Llama 1 model with a high-temperature setting, enabling it to generate multiple different responses. The frequency of the most common response is treated as the truthfulness score. The advantage of FactTune-MC over Method 1 is that it doesn't require a reference article from an external source.
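Here is a heavily simplified sketch of the reference-free scoring idea (my own mock-up, not the authors' implementation): sample several answers to the claim-derived question at a high temperature and use the relative frequency of the most common answer as the truthfulness score. The generate_answer callable is a hypothetical stand-in for an actual LLM call.

```python
import random
from collections import Counter

def truthfulness_score_mc(question, generate_answer, num_samples=20, temperature=1.0):
    # generate_answer(question, temperature) is a hypothetical wrapper around
    # an LLM call that returns a short, normalized answer string.
    answers = [generate_answer(question, temperature) for _ in range(num_samples)]
    _, count = Counter(answers).most_common(1)[0]
    return count / num_samples  # high score = the model answers consistently

# Toy usage with a fake "model" that answers "2007" about 80% of the time
random.seed(0)
fake_model = lambda question, temperature: random.choices(["2007", "2008"], weights=[0.8, 0.2])[0]
print(truthfulness_score_mc("When was the iPhone first released?", fake_model))
```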

The truthfulness scores are then used to create preference ratings for their datasets, and the models are finetuned using the DPO pipeline. It turns out that FactTune-FS performs exceptionally well, surpassing all other tested methods, including RLHF and regular supervised finetuning, in reducing factual error rates.
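To connect these scores back to DPO, the sketch below shows one plausible way to turn per-response truthfulness scores into (prompt, chosen, rejected) preference pairs; the exact pairing scheme used in the paper may differ, so treat this as an illustration only.

```python
from itertools import combinations

def build_preference_pairs(prompt, scored_responses, min_margin=0.05):
    # scored_responses: list of (response_text, truthfulness_score) tuples.
    # Returns (prompt, chosen, rejected) triples for DPO-style training.
    pairs = []
    for (resp_a, score_a), (resp_b, score_b) in combinations(scored_responses, 2):
        if abs(score_a - score_b) < min_margin:
            continue  # skip ties and near-ties
        chosen, rejected = (resp_a, resp_b) if score_a > score_b else (resp_b, resp_a)
        pairs.append((prompt, chosen, rejected))
    return pairs

responses = [("biography draft 1", 0.90), ("biography draft 2", 0.60), ("biography draft 3", 0.58)]
print(build_preference_pairs("Write a biography of a well-known person.", responses))
```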

Annotated figure from "Fine-tuning Language Models for Factuality" paper (https://arxiv.org/abs/2311.08401) comparing the different approaches.

Note that the authors experimented with finetuning Llama 1 and Llama 2. However, they didn't specify which model was finetuned in the figure above. Based on matching the numbers from Table 2 in the paper, I assume it is the Llama 2 model. Here, RLHF refers to the Llama 2 Chat model provided by Meta. The SFT model is a Llama 2 model they finetuned with regular supervised learning (instead of DPO on preference data), using the same biographies and medical QA datasets as for the FactTune-FS model.


Datasets

As hinted at in the figure above, the authors worked with two datasets: biographies of well-known people and medical QA. The datasets are relatively small, yet each response contains several facts. Hence, the total number of facts used here may well be in the four-digit range for each of the two datasets.

The dataset created and used in the "Fine-tuning Language Models for Factuality" paper (https://arxiv.org/abs/2311.08401)

Robustness

Based on the description above, we can see that there are many moving parts relying on LLMs: atomic claims are generated with GPT-3.5, a 7B Llama 1 model checks whether a claim is supported, and so on. This means we are placing considerable trust in LLMs to essentially fact-check another LLM.

Overall, it's quite surprising how well this works. However, the elephant in the room is whether the models, given that the correctness labels are produced by the automated FactScore procedure, simply overfit to FactScore artifacts rather than actually generating more factually accurate responses.

To address concerns regarding the FactScore evaluation, the authors employ human labelers to evaluate the models. As shown in the figure below, the FactScore scores largely agree with the scores from human labelers. Assuming we trust the accuracy of the humans verifying the responses, the FactScore is only slightly inflated.

Annotated table from the "Fine-tuning Language Models for Factuality" paper (https://arxiv.org/abs/2311.08401) showing that the automated FactScore matches the performance of human labelers.


Tradeoffs

Llama 2 Chat was trained using Reinforcement Learning with Human Feedback (RLHF) to optimize helpfulness and harmlessness. And the authors found that applying FactTune-FS on top of it further improves the factual correctness of Llama 2 Chat. However, I wonder if this improvement comes at the cost of decreased helpfulness and harmlessness.

In theory, there should be a high correlation between helpfulness and factual correctness. For instance, a response that is factually correct should be more helpful than one that is factually incorrect. However, this aspect may not have been fully captured in the way the Llama 2 reward model data was labeled. That is, the humans who worked on the helpfulness rankings for Llama 2 training might not have fact-checked all responses carefully. Consequently, a response that was convincing, even if potentially factually incorrect, could have been regarded as helpful.

Llama 2 Chat has been optimized for helpfulness and harmlessness; annotated figure from original Llama 2 paper, https://arxiv.org/abs/2307.09288

Of course, making an already helpful response even more factually correct is preferable. However, I wonder if this has a negative impact on conversational skills or other tasks that do not necessarily rely on facts, such as translation, grammar correction, fiction writing, and so on.

While we don't have any data on whether DPO fine-tuning for factual correctness decreases the helpfulness and harmlessness scores that Llama 2 Chat has been optimized for, the authors included some interesting observations on this in the paper.

Limitations and Conclusions

Overall, I think this is a very well-written paper demonstrating the usefulness of DPO fine-tuning. And while some LLMs are explicitly finetuned to reduce harmfulness, this paper also shows that we can successfully finetune models using other objectives.

One minor critique is that the datasets used seem surprisingly small. On the other hand, the fact that the approach still works so well is even more impressive. The good news is that their approach is fully automated and, therefore, could easily be scaled to larger datasets.


2) Orca 2: Teaching Small Language Models How to Reason

The paper Orca 2: Teaching Small Language Models How to Reason presents an effective method to significantly boost the reasoning abilities of smaller language models by training them on specialized synthetic data. The key idea is to use various reasoning techniques to teach the LLM to recognize the most effective solution strategy for each task.

The resulting Orca-2-13B model outperforms similar-sized models in zero-shot reasoning tasks, showing an impressive 47.54% improvement over Llama-2-Chat-13B and 28.15% over WizardLM-13B. Note that all three models (Orca, Llama-2-Chat, and WizardLM) used the same Llama-2 base model for finetuning.

Moreover, Orca-2-13B can also compete with models that are 5-10x larger, like LLaMA-2-Chat-70B, WizardLM-70B, and ChatGPT. 

Annotated figure of the Orca 2 paper (https://arxiv.org/abs/2311.11045) showing that Orca 2 outperforms LLMs of similar and larger size on reasoning benchmarks. Note that Orca 2 and WizardLM are based on Llama 2, making this an interesting comparison.

(The "small cautious system message prompt" in the figure above is a small implementation detail where they added the following text to the instruction: "You are Orca, an AI language model created by Microsoft. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior.")

Imitation Learning

In the past few months, imitation learning has become all the rage. In the context of Large Language Models (LLMs), imitation learning involves training a smaller target LLM (referred to as the "student") on the outputs of a larger source LLM (referred to as the "teacher"), such as GPT-4.

Illustration of imitation learning in an LLM context where a small LLM learns to mimic a larger LLM by learning from its responses.

The paper titled The False Promise of Imitating Proprietary LLMs points out a key issue with smaller language models trying to copy larger ones. While these smaller models can mimic the style of the larger ones and create content that seems impressive at first, a closer look often reveals that their output is not accurate. This means that, despite appearing to work well, they actually make mistakes when we examine what they produce more closely.

In the Orca 2 paper, the authors explain that small language models (which, by today's standards, includes those with 7 to 13 billion parameters) can't just rely on copying what bigger models do to get better. They suggest a different approach: teaching these smaller models ways of solving problems and reasoning that differ from those of the big models. This is because the smaller models, due to their reduced capacity (fewer parameters), may not be able to employ the same solution strategies as the larger ones.

Teaching Different Reasoning Strategies

The authors call their approach explanation tuning, which is mainly about creating a synthetic dataset using specific queries to extract answers with high-quality explanations from the source LLM (here GPT-4).

If you have used ChatGPT before (especially the earlier versions), you probably know that the answer quality varies greatly based on how you prompt the model. Some tasks require different prompting strategies, such as step-by-step, recall-then-generate, recall-reason-generate, direct answer, etc.

The authors argue that small models should not be trained on or use a specific solution strategy but that the model's solution (or reasoning) strategy should depend on the task itself.

They propose the following training procedure for Orca 2, where "Orca" refers to the predecessor model.

Note that during its training, the smaller model only learns from the task and the responses, without seeing the original questions or commands that were used to extract these responses from the larger teacher LLM.

In other words, there is nothing special about the base LLM (here, Llama 2) and how it is finetuned. The innovation here, which leads to astoundingly good results, is entirely in how the synthetic training data is created.
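To illustrate this data-centric idea, the sketch below is a minimal mock-up under my own assumptions (not Microsoft's actual pipeline): a strategy-specific system message is used to query the teacher and is then erased from the student's training example, so the student has to infer the appropriate strategy from the task alone.

```python
# Hypothetical strategy-specific system messages used to query the teacher model
STRATEGY_PROMPTS = {
    "step_by_step": "Solve the task by reasoning step by step before answering.",
    "recall_then_generate": "First recall the relevant facts, then generate the answer.",
    "direct_answer": "Answer the question directly and concisely.",
}

def make_student_example(task, strategy, query_teacher):
    # query_teacher(system_message, task) is a hypothetical call to the larger
    # teacher LLM (e.g., GPT-4) that returns a detailed, high-quality response.
    teacher_response = query_teacher(STRATEGY_PROMPTS[strategy], task)
    # "Prompt erasure": the strategy-specific system message is *not* included,
    # so the student must learn which strategy fits the task on its own.
    return {"instruction": task, "response": teacher_response}

fake_teacher = lambda system, task: f"[detailed answer produced under: {system[:25]}...]"
print(make_student_example("If a train travels 60 km in 45 minutes, what is its speed?",
                           "step_by_step", fake_teacher))
```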

An excerpt of the Flan-CoT collection dataset showing different answers for the same prompt. This is an annotated graphic based on a figure from the Orca 2 paper, https://arxiv.org/abs/2311.11045.


Benchmarks & Results

The resulting Orca-2 models perform exceptionally well on reasoning and knowledge benchmarks when compared to other models of similar size, as shown in the figure below.

Annotated figure covering a subset of the knowledge and reasoning benchmarks in the Orca 2 paper, https://arxiv.org/abs/2311.11045.

Note that the figure above is only a small excerpt. The benchmarks in the Orca 2 paper are fairly comprehensive, covering reasoning capabilities, knowledge and language understanding, text completion, multi-turn open-ended conversations, grounding and abstractive summarization, safety, and truthfulness. 

The 13B model performs astoundingly well, outperforming other models of the same size by a significant margin. The multi-turn conversational benchmarks are the only ones where Orca-2-13B performs slightly worse than Llama-2-Chat. The authors' explanation, namely that the training data didn't include such conversations, makes this entirely plausible.

In my view, the strength of a language model does not lie in its universality but in its specialization. It's not essential for every task to be tackled by a conversational AI. I much prefer a suite of highly specialized, exceptionally strong tools over a jack-of-all-trades approach that offers only mediocrity across the board.

Conclusion and Final Thoughts

It's very impressive what has been achieved using a 13B Llama 2 model as a starting point. I'm curious and excited to see if these significant improvements in reasoning and other benchmarks will also apply when tailored synthetic data strategies are utilized with 70B Llama 2 models or larger.

The paper presents a comprehensive list of benchmarks, and the excellent results speak for themselves. However, a minor critique is that the authors emphasize the importance of using smaller models to select the most effective solution strategy based on the task. Yet, they did not conduct any experiments or ablation studies to investigate this aspect for their Orca 2 models. The sole piece of evidence is that Orca 2 models, which are Llama 2 models trained on a carefully curated synthetic data mix, perform better than Llama 2 models fine-tuned by other means, including larger models.

On the positive side, it's noteworthy that the results were achieved entirely through supervised fine-tuning on curated synthetic data. The Orca 2 models have not undergone any RLHF or DPO fine-tuning. It's plausible, and even likely, that RLHF or DPO finetuning could further improve these models, making it an intriguing topic for future research.

PS: The weights for Orca 2 are publicly available here (https://www.microsoft.com/en-us/research/project/orca/).

3) Simplifying Transformer Blocks

In "Simplifying Transformer Blocks" the authors look into how the standard transformer block, essential to LLMs, can be simplified without compromising convergence properties and downstream task performance. 

Based on signal propagation theory and empirical evidence, they find that many parts can be removed to simplify GPT-like decoder architectures as well as encoder-style BERT models:

    skip connections 

    projection and value parameters

    sequential attention and MLP sub-blocks (in favor of a parallel layout)

    normalization layers (LayerNorm)

Annotated figures of the original transformer block (left) compared to the simplified transformer block (right) (from https://arxiv.org/abs/2311.01906)

Removing Skip Connections

As networks become deeper, gradients can diminish as they backpropagate through layers, making it difficult to train the model. Skip connections, also known as residual connections, are a common architectural feature in deep neural nets to alleviate the vanishing gradient problem.

Naively removing the skip connections would lead to gradient problems. To avoid that, the authors propose specific weight initialization schemes motivated by signal propagation theory. For those interested in the details, I highly recommend reading this excellent paper, as it contains tons of interesting insights.

When removing skip connections, changing the weight initialization and weighting the residuals with additional hyperparameters β leads to evaluation losses almost the same as the original transformer block. (Annotated plots from https://arxiv.org/abs/2311.01906)

As a side note, the "Pre-LN" in the upper corner of the original transformer block diagram refers to the placement of the LayerNorm ("Norm") layer. The original transformer proposed in Attention Is All You Need actually used a Post-LN setup. The shift from Post-LN to Pre-LN has become popular in recent transformer models because it simplifies the training process.

One of the key benefits of Pre-LN is that it is less sensitive to the initial learning rate and does not require a careful warm-up of the learning rate, which can be a critical factor in training stability and effectiveness. For more details on the advantages and disadvantages of Pre- and Post-LN, I recommend this video by my friend and collaborator Vahid.

Visual comparison of an implementation detail: Pre- and Post-LayerNorm
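For reference, here is a minimal PyTorch sketch contrasting the two LayerNorm placements in a single transformer block; the dimensions and sub-block choices are arbitrary and purely illustrative.

```python
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    def __init__(self, dim=64, num_heads=4, pre_ln=True):
        super().__init__()
        self.pre_ln = pre_ln
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):
        if self.pre_ln:
            # Pre-LN: normalize the input of each sub-block, add the residual afterward
            h = self.norm1(x)
            x = x + self.attn(h, h, h)[0]
            x = x + self.mlp(self.norm2(x))
        else:
            # Post-LN (original "Attention Is All You Need" layout):
            # normalize after the residual addition
            x = self.norm1(x + self.attn(x, x, x)[0])
            x = self.norm2(x + self.mlp(x))
        return x

x = torch.randn(2, 10, 64)
print(TransformerBlock(pre_ln=True)(x).shape, TransformerBlock(pre_ln=False)(x).shape)
```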

Removing other parts

The insights from the experiments in the previous section (setting the weighting hyperparameters β to 0) led the authors to successfully remove projections and value parameters as well. The authors argue that the value and projection matrices are simply linear projections of the inputs (as opposed to the linear layers in the MLP sub-block with non-linear activation functions between them) and thus might be redundant.

The authors also tried to remove the normalization layers (aka LayerNorm). The rationale is that Pre-LayerNorm implicitly downweights the residual branches. And this downweighting can also be achieved by other mechanisms that the authors already account for via their previous modifications, making LayerNorm theoretically redundant in this setup. 

However, the authors observed a slight degradation in terms of the training convergence, which currently cannot be explained by signal propagation theory. In conclusion, they recommend keeping LayerNorm in the transformer block.
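Putting the pieces together, the sketch below shows roughly what the simplified block looks like conceptually, based on my reading of the paper: attention and MLP run in parallel without skip connections, the value and output projections are replaced by identities, and LayerNorm is kept. The paper's proposed initialization and attention modifications, as well as causal masking, are omitted here for brevity.

```python
import torch
import torch.nn as nn

class SimplifiedBlock(nn.Module):
    # Conceptual sketch in the spirit of arXiv:2311.01906, not a faithful reimplementation.
    def __init__(self, dim=64, num_heads=4, alpha=1.0, beta=0.1):
        super().__init__()
        self.num_heads = num_heads
        self.norm = nn.LayerNorm(dim)       # the authors recommend keeping LayerNorm
        self.qk = nn.Linear(dim, 2 * dim)   # only query/key projections remain
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.alpha, self.beta = alpha, beta  # fixed weights for the two parallel branches

    def forward(self, x):
        b, t, d = x.shape
        h = self.norm(x)
        q, k = self.qk(h).chunk(2, dim=-1)
        q = q.view(b, t, self.num_heads, -1).transpose(1, 2)
        k = k.view(b, t, self.num_heads, -1).transpose(1, 2)
        v = h.view(b, t, self.num_heads, -1).transpose(1, 2)  # identity "value projection"
        attn = torch.softmax(q @ k.transpose(-2, -1) / (d // self.num_heads) ** 0.5, dim=-1)
        attn_out = (attn @ v).transpose(1, 2).reshape(b, t, d)  # no output projection
        # Parallel attention and MLP branches, no residual/skip connection
        return self.alpha * attn_out + self.beta * self.mlp(h)

x = torch.randn(2, 10, 64)
print(SimplifiedBlock()(x).shape)
```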

Limitations and Conclusion

The authors experimented with relatively small models, and while there is no reason to believe these findings wouldn't generalize to larger language models, there is yet no empirical evidence for this. I don't fault the authors for not including such experiments. On the contrary, I am very impressed with this work from a two-author team, and it ranks among my favorite papers I've read this year. They also excelled in referencing tons of related works, motivating their experiments. I definitely recommend reading this paper for the references alone.

In my opinion, one of the main benefits of the proposed modifications is a better understanding of the transformer architecture and a simpler design. On top of that, the authors also report a 15% increase in training throughput and a requirement for 15% fewer parameters.

I hope larger research labs with more compute resources also find this work interesting and apply these insights to new architectures (ideally sharing their insights and outcomes, of course). I am definitely interested to see if these modifications are viable for larger LLMs as well. 

Other Interesting Research Papers

ChatGPT's One-year Anniversary: Are Open-Source Large Language Models Catching Up? by Chen, Jiao, Li, and Qin (29 Nov), https://arxiv.org/abs/2311.16989

Language Model Inversion by Morris, Zhao, Chiu, Shmatikov, and Rush (22 Nov), https://arxiv.org/abs/2311.13647

Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model by Yang, Tao, Lyu, Ge, et al. (22 Nov), https://arxiv.org/abs/2311.13231

PaSS: Parallel Speculative Sampling by Monea, Joulin, and Grave (22 Nov), https://arxiv.org/abs/2311.13581

ZipLoRA: Any Subject in Any Style by Effectively Merging LoRAs by Shah, Ruiz, Cole, Lu, et al. (22 Nov), https://arxiv.org/abs/2311.13600 

GAIA: A Benchmark for General AI Assistants by Mialon, Fourrier, Swift, Wolf, et al. (21 Nov), https://arxiv.org/abs/2311.12983

System 2 Attention (is something you might need too) by Weston and Sukhbaatar (20 Nov), https://arxiv.org/abs/2311.11829

Orca 2: Teaching Small Language Models How to Reason by Mitra, Del Corro, Mahajan, Codas, et al. (18 Nov), https://arxiv.org/abs/2311.11045

Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2 by Ivison, Wang, Pyatkin, Lambert, et al. (17 Nov), https://arxiv.org/abs/2311.10702

Video-LLaVA: Learning United Visual Representation by Alignment Before Projection by Lin, Zhu, Ye, Ning, et al. (16 Nov), https://arxiv.org/abs/2311.10122

Emu Edit: Precise Image Editing via Recognition and Generation Tasks by Sheynin, Polyak, Singer,  Kirstain, et al. (16 Nov), https://arxiv.org/abs/2311.10089

Tied-Lora: Enhancing Parameter Efficiency of LoRA with Weight Tying by Renduchintala, Konuk, and Kuchaiev (16 Nov), https://arxiv.org/abs/2311.09578

Exponentially Faster Language Modelling by Belcak and Wattenhofer (15 Nov), https://arxiv.org/abs/2311.10770

Llamas Know What GPTs Don't Show: Surrogate Models for Confidence Estimation by Shrivastava, Liang, and Kumar (15 Nov), https://arxiv.org/abs/2311.08877

Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models by Yu, Zhang, Pan, Ma, et al. (15 Nov), https://arxiv.org/abs/2311.09210

Fusion-Eval: Integrating Evaluators with LLMs by Shu, Wichers, Luo, Zhu, et al. (15 Nov), https://arxiv.org/abs/2311.09204

Routing to the Expert: Efficient Reward-guided Ensemble of Large Language Models by Lu, Yuan, Lin, Lin, et al. (15 Nov), https://arxiv.org/abs/2311.08692

Comparing Humans, GPT-4, and GPT-4V On Abstraction and Reasoning Tasks by Mitchell, Palmarini, and Moskvichev (14 Nov), https://arxiv.org/abs/2311.09247

DiLoCo: Distributed Low-Communication Training of Language Models by Douillard, Feng, Rusu, Chhaparia, et al. (14 Nov), https://arxiv.org/abs/2311.08105

A Survey on Language Models for Code by Zhang, Chen, Liu, Liao, et al. (14 Nov), https://arxiv.org/abs/2311.07989v1

Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models by Li, Yang, Liu, Ma, et al. (12 Nov), https://arxiv.org/abs/2311.06607

LCM-LoRA: A Universal Stable-Diffusion Acceleration Module by Luo, Tan, Patil, Gu, et al. (9 Nov), https://arxiv.org/abs/2311.05556

Removing RLHF Protections in GPT-4 via Fine-Tuning by Zhan, Fang, Bindu, Gupta, et al. (9 Nov), https://arxiv.org/abs/2311.05553

A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions by Huang, Yu, Ma, Zhong, et al. (9 Nov), https://arxiv.org/abs/2311.05232

Holistic Evaluation of Text-To-Image Models by Lee, Yasunaga, Meng, Mai, et al. (7 Nov), https://arxiv.org/abs/2311.04287

OtterHD: A High-Resolution Multi-modality Model by Li, Zhang, Yang, Zhang, et al. (7 Nov), https://arxiv.org/abs/2311.04219

S-LoRA: Serving Thousands of Concurrent LoRA Adapters by Sheng, Cao, Li, Hooper, et al. (6 Nov), https://arxiv.org/abs/2311.03285

GPT4All: An Ecosystem of Open Source Compressed Language Models by Anand, Nussbaum, Treat, Miller, et al. (Nov 6), https://arxiv.org/abs/2311.04931

CogVLM: Visual Expert for Pretrained Language Models by Wang, Lv, Yu, Hong, et al. (6 Nov), https://arxiv.org/abs/2311.03079

FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation by Vu, Iyyer, Wang, Constant, et al. (5 Nov), https://arxiv.org/abs/2310.03214

Levels of AGI: Operationalizing Progress on the Path to AGI by Morris, Sohl-dickstein, Fiedel, Warkentin, et al. (4 Nov), https://arxiv.org/abs/2311.02462

FlashDecoding++: Faster Large Language Model Inference on GPUs by Hong, Dai, Xu, Mao, et al. (2 Nov), https://arxiv.org/abs/2311.01282

Pretraining Data Mixtures Enable Narrow Model Selection Capabilities in Transformer Models by Yadlowsky, Doshi, and Tripuraneni (1 Nov), https://arxiv.org/abs/2311.0087


This magazine is a personal passion project that does not offer direct compensation. However, for those who wish to support me, please consider purchasing a copy of one of my books. If you find them insightful and beneficial, please feel free to recommend them to your friends and colleagues.

Your support means a great deal! Thank you!
