Paper: [FuseChat: Knowledge Fusion of Chat Models](https://arxiv.org/abs/2408.07990)
This is a merge of pre-trained language models created using mergekit.
This model was merged using the SCE merge method (introduced in the FuseChat paper linked above), with schonsense/IPOplectic as the base.
The following models were included in the merge:
- WhiteRabbitNeo/Llama-3.1-WhiteRabbitNeo-2-70B
- watt-ai/watt-tool-70B
- xTRam1/plan-and-act-planner-70b
- xTRam1/plan-and-act-actor-70b
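SCE (Select, Calculate, Erase) fuses the source models' task vectors (their weight deltas from the base): it selects only the highest-variance fraction of each tensor's elements (controlled by `select_topk` in the config below), calculates per-model fusion weights from the retained elements, and erases contributions whose sign conflicts with the element-wise majority before adding the weighted sum back onto the base. The following is a minimal single-tensor PyTorch sketch of that idea, not mergekit's actual implementation; `sce_merge` and its exact thresholding details are illustrative.

```python
import torch

def sce_merge(base: torch.Tensor, deltas: list[torch.Tensor], topk: float = 0.25) -> torch.Tensor:
    """Illustrative SCE fusion for a single weight tensor.

    base:   the base model's weight tensor
    deltas: task vectors (each source model's weights minus the base)
    topk:   fraction of highest-variance elements to keep (cf. select_topk)
    """
    stacked = torch.stack(deltas)  # [n_models, *tensor_shape]

    # Select: keep only the top-k fraction of elements by cross-model variance.
    var = stacked.var(dim=0, unbiased=False)
    k = max(1, int(topk * var.numel()))
    threshold = var.flatten().topk(k).values.min()
    stacked = stacked * (var >= threshold).to(stacked.dtype)

    # Calculate: per-model fusion weights from the energy of the kept elements.
    energy = (stacked ** 2).sum(dim=tuple(range(1, stacked.dim())))
    weights = energy / energy.sum().clamp(min=1e-12)

    # Erase: drop contributions whose sign disagrees with the majority sign.
    majority = torch.sign(stacked.sum(dim=0))
    stacked = stacked * (torch.sign(stacked) == majority).to(stacked.dtype)

    # Fuse: apply the weighted sum of the surviving deltas to the base weights.
    view = (-1,) + (1,) * (stacked.dim() - 1)
    return base + (weights.view(view) * stacked).sum(dim=0)
```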
The following YAML configuration was used to produce this model:
```yaml
merge_method: sce
models:
  - model: WhiteRabbitNeo/Llama-3.1-WhiteRabbitNeo-2-70B
  - model: schonsense/IPOplectic
  - model: watt-ai/watt-tool-70B
  - model: xTRam1/plan-and-act-planner-70b
  - model: xTRam1/plan-and-act-actor-70b
base_model: schonsense/IPOplectic
parameters:
  select_topk: 0.25
  normalize: false
  int8_mask: true
dtype: float32
out_dtype: bfloat16
tokenizer:
  source: base
  pad_to_multiple_of: 8
```
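To reproduce a merge like this, the config can be run with mergekit's `mergekit-yaml` CLI or its Python API. Below is a minimal sketch using the Python entry point, assuming the YAML above is saved as `config.yaml`; the output path `./merged` and the chosen options are illustrative:

```python
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML configuration above (assumed saved as config.yaml).
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Execute the merge and write the result to ./merged.
run_merge(
    merge_config,
    "./merged",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when one is available
        copy_tokenizer=True,             # write tokenizer files to the output
        lazy_unpickle=True,              # stream tensors to reduce RAM usage
        low_cpu_memory=True,
    ),
)
```

The equivalent CLI invocation is `mergekit-yaml config.yaml ./merged --cuda`.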