Fill-Mask
Transformers
PyTorch
German
bert
scherrmann committed · Commit f868c50 · 1 Parent(s): f163ad9

Update README.md

Files changed (1)
  1. README.md +37 -1
README.md CHANGED
@@ -3,4 +3,40 @@ license: apache-2.0
  language:
  - de
  ---
- "# German FinBERT (Further Pre-trained Version)\n\nThis model card details the further pre-trained version of German FinBERT, a language model focusing on the financial domain within the German language.\n\n## Overview\n**Author:** Moritz Scherrmann\n**License:** *Information not provided in the document*\n**Framework:** BERT-base\n**Language:** German\n**Specialization:** Financial textual data\n**Original Model:** gbert of deepset\n\n## Pre-training Corpus\nThe pre-training corpus consists of German financial textual data. It comprises a comprehensive collection that includes financial reports, ad-hoc announcements, and news related to German companies. The corpus size is on par with those used for standard BERT models, indicating substantial coverage and depth.\n\n## Performance\n### Fine-tune Datasets\nGerman FinBERT has been evaluated on three finance-specific tasks against generic German language models, showing improved performance in:\n- Sentiment prediction\n- Topic recognition\n- Question answering\n\nThe model effectively captures domain-specific nuances, outperforming standard models on finance-related texts.\n\n### Benchmark Results\n- *The precise benchmark results and comparisons are not provided in the accessed part of the document.*\n\n## Authors\n**Moritz Scherrmann**\nInstitute for Finance & Banking\nLudwig-Maximilians-Universität München\nLudwigstr. 28, RB 80539 Munich, Germany\nEmail: scherrmann@lmu.de\n\nFor additional details regarding the performance on fine-tune datasets and benchmark results, please refer to the full documentation provided in the study. German FinBERT represents an innovative development in the field of financial NLP, offering enhanced capabilities for analyzing German financial texts."
+ # German FinBERT (Further Pre-trained Version)
+
+ This model card details the further pre-trained version of German FinBERT, a language model focused on the financial domain within the German language.
+
+ ## Overview
+ **Author:** Moritz Scherrmann
+ **Framework:** BERT-base
+ **Language:** German
+ **Specialization:** Financial textual data
+ **Original Model:** gbert by deepset
+
+ ## Pre-training Corpus
+ The pre-training corpus consists of German financial textual data: a comprehensive collection of financial reports, ad-hoc announcements, and news related to German companies. The corpus size is on par with those used for standard BERT models, indicating substantial coverage and depth.
+
+ ## Performance
+ ### Fine-tuning Datasets
+ German FinBERT has been evaluated on three finance-specific tasks against generic German language models, showing improved performance in:
+ - Sentiment prediction
+ - Topic recognition
+ - Question answering
+
+ The model effectively captures domain-specific nuances, outperforming standard models on finance-related texts.
+
+ ### Benchmark Results
+ - *Precise benchmark results and comparisons are not provided in this card.*
+
+ ## Authors
+ **Moritz Scherrmann**
+ Institute for Finance & Banking
+ Ludwig-Maximilians-Universität München
+ Ludwigstr. 28, RB 80539 Munich, Germany
+ Email: scherrmann@lmu.de
+
+ For additional details on fine-tuning performance and benchmark results, please refer to the full study. German FinBERT represents an innovative development in financial NLP, offering enhanced capabilities for analyzing German financial texts.
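
The repository is tagged Fill-Mask / Transformers / PyTorch, but the card above includes no usage snippet. Below is a minimal sketch using the standard `transformers` fill-mask pipeline; the repository ID `scherrmann/GermanFinBERT` is a placeholder assumption, so substitute the actual Hub ID of this model.

```python
from transformers import pipeline

# Placeholder repo ID (assumption) -- replace with the actual Hub ID.
MODEL_ID = "scherrmann/GermanFinBERT"

# German FinBERT is a BERT-base masked language model, so the generic
# fill-mask pipeline applies; [MASK] is BERT's mask token.
fill_mask = pipeline("fill-mask", model=MODEL_ID)

# A German financial-domain sentence with one masked token.
predictions = fill_mask("Die Aktie der Deutschen Bank ist heute stark [MASK].")

# Each prediction carries the proposed token and its probability score.
for pred in predictions:
    print(f"{pred['token_str']:>15}  score={pred['score']:.3f}")
```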
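
The Performance section reports fine-tuning on sentiment prediction, topic recognition, and question answering, but gives no recipe. The sketch below shows a generic sequence-classification fine-tuning setup with the `Trainer` API; the repo ID, the three-label sentiment scheme, the CSV file names, and all hyperparameters are assumptions for illustration, not details taken from the card or the study.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_ID = "scherrmann/GermanFinBERT"  # placeholder repo ID (assumption)

# Load the further pre-trained checkpoint with a fresh classification head;
# num_labels=3 assumes a positive/neutral/negative sentiment scheme.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID, num_labels=3)

# Placeholder data: any German financial sentiment corpus with
# "text" and "label" columns would work here.
ds = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})
ds = ds.map(lambda batch: tokenizer(batch["text"], truncation=True,
                                    max_length=256), batched=True)

args = TrainingArguments(
    output_dir="german-finbert-sentiment",
    per_device_train_batch_size=16,  # assumed hyperparameters,
    num_train_epochs=3,              # not from the card
    learning_rate=2e-5,
)

# Passing the tokenizer lets Trainer pad batches dynamically.
Trainer(model=model, args=args, train_dataset=ds["train"],
        eval_dataset=ds["test"], tokenizer=tokenizer).train()
```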