---
base_model:
- meta-llama/Llama-3.1-70B
library_name: transformers
tags:
- mergekit
- merge
datasets:
- Severian/Internal-Knowledge-Map-StoryWriter-RolePlaying
---

# IKM_31_stock
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method
This model was merged using the Model Stock merge method, with meta-llama/Llama-3.1-70B as the base.
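For intuition, here is a minimal per-tensor sketch of the Model Stock idea: interpolate between the base weights and the average of the fine-tuned weights, with a ratio derived from the angle between the fine-tuned deltas. This is an illustrative assumption-laden sketch, not mergekit's implementation (which actually produced this model); the helper name and the use of an averaged pairwise cosine are my own simplifications.

```python
import torch
import torch.nn.functional as F

def model_stock_merge(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Rough per-tensor sketch of Model Stock (hypothetical helper).

    Interpolates between the base weights and the mean of the fine-tuned
    weights using t = k*cos / ((k-1)*cos + 1), where cos approximates the
    angle between the fine-tuned deltas.
    """
    k = len(finetuned)
    deltas = [w - base for w in finetuned]
    avg = torch.stack(finetuned).mean(dim=0)

    # Approximate cos(theta) as the mean pairwise cosine similarity of deltas.
    cos_vals = []
    for i in range(k):
        for j in range(i + 1, k):
            cos_vals.append(F.cosine_similarity(deltas[i].flatten(),
                                                deltas[j].flatten(), dim=0))
    cos_theta = torch.stack(cos_vals).mean()

    # Interpolation ratio, then blend the averaged fine-tuned weights with the base.
    t = k * cos_theta / ((k - 1) * cos_theta + 1)
    return t * avg + (1 - t) * base
```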
### Models Merged
The following models were included in the merge:
- D:\mergekit\LORAs\applied\IKM_c1
- D:\mergekit\LORAs\applied\IKM_final
- D:\mergekit\LORAs\applied\IKM_c2
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: "D:\\mergekit\\LORAs\\applied\\IKM_c1"
- model: "D:\\mergekit\\LORAs\\applied\\IKM_c2"
- model: "D:\\mergekit\\LORAs\\applied\\IKM_final"
- model: meta-llama/Llama-3.1-70B
base_model: meta-llama/Llama-3.1-70B
merge_method: model_stock
dtype: float32
out_dtype: bfloat16
chat_template: llama3
tokenizer:
  source: union
  pad_to_multiple_of: 8
```
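The merge can be reproduced by saving the YAML above to a file and running it through mergekit's `mergekit-yaml` entry point. The resulting model loads like any other Llama checkpoint with transformers; the path below is a placeholder for wherever the merged weights live.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: point this at the merged output directory (or a Hub repo id).
model_path = "./IKM_31_stock"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches out_dtype in the config above
    device_map="auto",
)

prompt = "Tell me a short story about a lighthouse keeper."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```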