Dataset Viewer
Auto-converted to Parquet Duplicate
paper
stringlengths
10
10
relevant_tables
listlengths
1
3
tables
listlengths
3
55
fulltext
stringlengths
27.1k
359k
question
stringlengths
36
601
answer
stringlengths
1
126
plan
stringlengths
76
1.47k
2401.06769
[ [ "\\begin{table*}[h!]\n", "\\centering\n", "\\begin{tabularx}{\\textwidth}{@{}Xrrrrrrrrr@{}}\n", "\\toprule\n", "& \\multicolumn{3}{c}{M2M-100-418M} & \\multicolumn{3}{c}{SMaLL-100} & \\multicolumn{3}{c}{NLLB-200-1.3B} \\\\\n", "\\cmidrule(lr){2-4} \\cmidrule(lr){5-7} \\cmidrule(lr){8-10}\n...
[ [ "\\begin{figure}\n", " \\centering\n", " \\includegraphics[width=\\columnwidth, trim=0 0.15cm 0 0, clip]{images/figure1}\n", " \\caption{\n", " NMT models can be used for inferring the likely original translation direction of parallel text.\n", " In this example, the NMT...
% This must be in the first 5 lines to tell arXiv to use pdfLaTeX, which is strongly recommended. \pdfoutput=1 % In particular, the hyperref package requires pdfLaTeX in order to break URLs across lines. \documentclass[11pt]{article} % Remove the "review" option to generate the final version. %\usepackage[review]{acl} \usepackage{acl} % Standard package includes \usepackage{times} \usepackage{latexsym} % For proper rendering and hyphenation of words containing Latin characters (including in bib files) \usepackage[T1]{fontenc} % For Vietnamese characters % \usepackage[T5]{fontenc} % See https://www.latex-project.org/help/documentation/encguide.pdf for other character sets % This assumes your files are encoded as UTF8 \usepackage[utf8]{inputenc} % This is not strictly necessary, and may be commented out, % but it will improve the layout of the manuscript, % and will typically save some space. \usepackage{microtype} % This is also not strictly necessary, and may be commented out. % However, it will improve the aesthetics of text in % the typewriter font. \usepackage{inconsolata} % Package for figures \usepackage{graphicx} % Package for equations \usepackage{amsmath} % Package for size adjustment \usepackage{relsize} % Package for scientific units \usepackage{siunitx} % Package for table formatting \usepackage{booktabs} \usepackage{tabularx} \usepackage{enumitem} % Package for writing comments on side \usepackage{todonotes} \newcommand\rico[1]{\todo{\textcolor{blue!60!black}{#1}}} \newcommand\jannis[1]{\todo{\textcolor{green!40!black}{#1}}} % New command for in-text arrow in both directions \newcommand{\biarrow}{$\leftrightarrow$} \title{Machine Translation Models are\\ Zero-Shot Detectors of Translation Direction} \author{Michelle Wastl \quad Jannis Vamvas \quad Rico Sennrich \vspace{0.1cm}\\ Department of Computational Linguistics, University of Zurich\\ \texttt{michelle.wastl@uzh.ch}, \texttt{\{vamvas,sennrich\}@cl.uzh.ch} } \begin{document} \maketitle \begin{abstract} Detecting the translation direction of parallel text has applications for machine translation training and evaluation, but also has forensic applications such as resolving plagiarism or forgery allegations. In this work, we explore an unsupervised approach to translation direction detection based on the simple hypothesis that $p(\text{translation}|\text{original})>p(\text{original}|\text{translation})$, motivated by the well-known simplification effect in translationese or machine-translationese. In experiments with massively multilingual machine translation models across 20 translation directions, we confirm the effectiveness of the approach for high-resource language pairs, achieving document-level accuracies of 82–96\% for NMT-produced translations, and 60–81\% for human translations, depending on the model used.\footnote{Code and demo are available at \url{https://github.com/ZurichNLP/translation-direction-detection}} \end{abstract} \section{Introduction} \label{sec:intro} While the original translation direction of parallel text is often ignored or unknown in the machine translation community, research has shown that it can be relevant for training~\cite{kurokawa-etal-2009-automatic,ni-etal-2022-original} and evaluation~\cite{graham-etal-2020-statistical}.\footnote{As of today, training data is not typically filtered by translation direction, but we find evidence of a need for better detection in recent work. For example, \citet{post2023escaping} show that back-translated data is more suited than crawled parallel data for document-level training, presumably because of translations in the crawled data that lack document-level consistency.} Beyond machine translation, translation direction detection has practical applications in areas such as forensic linguistics, where determining the original of a document pair may help resolve plagiarism or forgery accusations. %This study is partially inspired by a highly publicized plagiarism case in 2022, where one party has been accused of plagiarizing their (German) PhD thesis from an ostensibly older English book, but where the English book is suspected to be a forgery and translation of the thesis, possibly created with the explicit purpose of slandering the author of the thesis Previous work has addressed translation direction detection with feature-based approaches, using features such as n-gram frequency statistics and POS tags for classification \cite{kurokawa-etal-2009-automatic,10.1093/llc/fqt031,Sominsky2019} or unsupervised clustering \cite{Nisioi2015, Rabinovich2015}. However, these methods require a substantial amount of text data, and cross-domain differences in the statistics used can overshadow differences between original and translationese text. \begin{figure} \centering % Copy and edit the figure here: https://docs.google.com/presentation/d/1rVIO0ceciQjGFqFqK1acb_h1tAdbSNVD0-XhPG33RWQ/edit?usp=sharing \includegraphics[width=\columnwidth, trim=0 0.15cm 0 0, clip]{images/figure1} \caption{ NMT models can be used for inferring the likely original translation direction of parallel text. In this example, the NMT model assigns a much higher probability to the German sentence given the English sentence than to the English sentence given the German sentence, indicating that the more likely original translation direction is English\(\rightarrow\)German. } \label{fig:figure1} \end{figure} \begin{figure*} \begin{center} % Copy and edit the figure here: https://docs.google.com/presentation/d/1jQgZstKuKskE7ekyij4mMYU42Jnoammf-G5X4iiL9X0/edit?usp=sharing \includegraphics[width=\textwidth]{images/figure2} \end{center} \caption{A recent forensic case in Germany underscores the relevance of translation direction detection~\cite{ebbinghaus2022b, zenthoefer2022b, dewiki:238411824}. In 2022, two experts raised concerns about the originality of a German PhD thesis and suspected it to be plagiarized from a proceedings volume in English (\textit{plagiarism hypothesis}). Other experts observed that the alleged source could not be found in any library or database, and raised the possibility of a deliberate attempt to discredit the thesis author by fabricating the English book (\textit{forgery hypothesis}). Initially, the debate focused on the dating of the typefaces and the paper used to print the proceedings, in addition to some textual inconsistencies. However, a computational analysis of translation direction could provide additional evidence in this or similar cases. The illustration depicts one of many parallel passages identified by \citet{Weber2022}. } \label{fig:figure2} \end{figure*} In this work, we explore the unsupervised detection of translation directions purely on the basis of a neural machine translation (NMT) system's translation probabilities in both directions. As illustrated in Figure~\ref{fig:figure1}, we hypothesize that $p(\text{translation}|\text{original})>p(\text{original}|\text{translation})$, which, if it generally holds, would allow us to infer the original translation direction. If the translation has been automatically generated, this hypothesis can be motivated by the fact that machine translation systems typically generate text with mode-seeking search algorithms, and consequently tend to over-produce high-frequency outputs and reduce lexical diversity \cite{vanmassenhove-etal-2019-lost}. However, even human translations are known for so-called translationese properties such as interference, normalization, and simplification, and a (relative) lack of lexical diversity \cite{Teich+2003, 10.1093/llc/fqt031,toral-2019-post}. %We hypothesize that our detection method can be applied to both human and automatic translations. We test the approach on 20 translation directions, experimenting with 3 multilingual NMT models to predict the translation probabilities of human translations, NMT-produced translations, and pre-neural translations. We find that the approach detects the translation direction of human translations with an accuracy of 66\% on average on the sentence level, and 80\% for documents with $\geq$ 10 sentences. For the output of neural MT systems, detection accuracy is even higher, but our hypothesis that $p(\text{translation}|\text{original})>p(\text{original}|\text{translation})$ does not hold for the output of pre-neural systems. %In a qualitative analysis, we show that the detection performance can be connected to the translationese characteristics found in the sentences. Finally, we apply our method to a recent forensic case~(Figure~\ref{fig:figure2}), where the translation direction of a German PhD thesis and an English book has been under dispute, finding additional evidence for the hypothesis that the English book is a forgery created to make the thesis appear plagiarized. \medskip \noindent{}Our main contributions are the following: \begin{itemize}[itemsep=0pt] \item We propose a simple, unsupervised approach to translation direction detection based on the translation probabilities of NMT models. \item We demonstrate that the approach is effective for detecting the original translation direction of neural translations, and to a lesser extent, human translations in a variety of high-resource language pairs. \item We provide a qualitative analysis of detection performance and apply the method to a real-world forensic case. \end{itemize} \section{Related Work} \label{sec:rel_work} \subsection{Translation (Direction) Detection} \label{subsec:td} In an ideal scenario where large-scale annotated in-domain data is available, high accuracy can be achieved in translation direction detection at phrase and sentence level by training supervised systems based on various features such as word frequency statistics and POS n-grams \cite{Sominsky2019}. To reduce reliance on in-domain supervision, unsupervised methods that rely on clustering and consequent cluster labelling have also been explored for the related task of translationese detection~\cite{Rabinovich2015, Nisioi2015}. One could conceivably perform translation direction detection using similar methods, but this has the practical problem of requiring an expert for cluster labelling, and poor open-domain performance. In a multi-domain scenario, \citet{Rabinovich2015} observe that clustering based on features proposed by \citet{10.1093/llc/fqt031} results in clusters separated by domain rather than translation status. They address this by producing $2k$ clusters, $k$ being the number of domains in their dataset, and labelling each. Clearly, labelling becomes more costly as the number of domains increases, which might limit applicability to an open-domain scenario. In contrast, we hypothesize that comparing translation probabilities remains a valid strategy across domains, and requires no resources other than NMT models that are competent for the respective language pair and domain. \subsection{Translation Probabilities} \label{subsec:tb} So far, translation probabilities have been used in tasks such as noisy parallel corpus filtering \cite{Junczys-Dowmunt2018}, machine translation evaluation \cite{Thompson2020}, and paraphrase identification \cite{mallinson-etal-2017-paraphrasing,vamvas-sennrich-2022-nmtscore}. Those tasks all involve the comparison of parallel text and require a high level of attention to stylistic detail, for which the translation probabilities have proven useful in a highly data-efficient way by using existing machine translation models to generate the probabilities. \section{Methods} \label{sec:methods} Given a parallel sentence pair $(x,y)$, the main task in this work is to identify the translation direction between a language X and a language Y, and, consequently, establish which side is the original and which is the translation. This is achieved by comparing the conditional translation probability $P(y|x)$ by an NMT model $M_{X\rightarrow Y}$ with the conditional translation probability $P(x|y)$ by a model $M_{Y\rightarrow X}$ operating in the inverse direction. Our core assumption is that translations are assigned higher conditional probabilities than the original by NMT models, so if $P(y|x) > P(x|y)$, we predict that $y$ is the translation, and $x$ the original, and the original translation direction is $X \to Y$. \subsection{Detection on the Sentence Level} With a probabilistic autoregressive NMT model, we can obtain $p(y|x)$ as a product of the individual token probabilities: \begin{equation} \label{eq:avg} P(y|x) = \prod_{j=1}^{|y|} p(y_j|y_{<j},x) \end{equation} We follow earlier work by \citet{Junczys-Dowmunt2018,Thompson2020}, and average token-level (log-)probabilities.\footnote{ % Note that $P_{\text{tok}}(y|x)=\frac{1}{PPL(y|x)}=e^{-H(y|x)}$. The models we use have been trained with label smoothing~\cite{szegedy2016rethinking}, which has a cumulative effect on sequence-level probabilities~\cite{yan2023dcmbr}. Averaging token-level probabilities can help mitigate this shortcoming. } \begin{equation} \label{eq:avg2} P_{\text{tok}}(y|x) = P(y|x)^{\frac{1}{|y|}} \end{equation} To detect the original translation direction (OTD), $P_{\text{tok}}(y|x)$ and $P_{\text{tok}}(x|y)$ are compared: \[ \begin{aligned} \text{OTD} = \begin{cases} X \to Y, & \text{if } P_{\text{tok}}(y|x) > P_{\text{tok}}(x|y) \\ Y \to X, & \text{otherwise} \end{cases} \end{aligned} \] \subsection{Detection on the Document Level}\label{subsec:doc_level} We also study translation direction detection on the level of documents, as opposed to individual sentences. We assume that the sentences in the document are aligned 1:1, so that we can apply an NMT model trained on the sentence level to all $n$~sentence pairs $(x_i, y_i)$ in the document, and then aggregate the result. Our approach is equivalent to the sentence-level approach in that we calculate the average token-level probability across the document, conditioned on the respective sentence in the other language: \begin{equation} \label{eq:avg_doc} P_{\text{tok}}(y|x) = [\prod_{i=1}^{n} \prod_{j=1}^{|y_i|} p(y_{i,j}|y_{i,<j},x_i)]^{\frac{1}{\scriptstyle{|y_1| + \dots + |y_n|}}} \end{equation} The criterion for the original translation direction is then again whether $P_{\text{tok}}(y|x) > P_{\text{tok}}(x|y)$. \subsection{On Directional Bias} \label{subsec:bn} A multilingual translation model (or a pair of bilingual models) may consistently assign higher probabilities in one translation direction than the other, thus biasing our prediction. This could be the result of training data imbalance, tokenization choices, or typological differences between the languages~\cite{cotterell-etal-2018-languages,bugliarello-etal-2020-easier}.\footnote{We note that \citet{bugliarello-etal-2020-easier} do not control for the original translation direction of their data. Re-examining their findings in view of our core hypothesis could be fruitful future work.} Ideally, we would want $P(\text{OTD}=X \to Y)$ to match true value, i.e.\ equal 0.5 in a data set where gold directions are balanced. To allow for a cross-lingual comparison of bias despite varying data balance, we measure bias via the difference in accuracy between the two gold directions. An unbiased model should have similar accuracy in both directions. An extremely biased model that always predicts $\text{OTD}=X \to Y$ would achieve perfect accuracy on the gold direction $X \to Y$, and zero accuracy on the reverse gold direction $Y \to X$. We will report the bias $B$ as follows: \begin{equation} \label{eq:avg_doc} B=|acc(X \rightarrow Y)-acc(Y \rightarrow X)| \end{equation} This yields a score that ranges from 0 (unbiased) to~1 (fully biased). \section{Experiments: Models and Data} \label{subsec:models} We experiment with three massively multilingual machine translation models: M2M-100-418M \citep{Fan2021}, SMaLL-100 \citep{Mohammadshahi2022}, and NLLB-200-1.3B \citep{Akula}. The models are architecturally similar, all being based on the Transformer architecture \cite{DBLP:journals/corr/VaswaniSPUJGKP17}, but they differ in the training data used, number of languages covered, and model size, and consequently in translation quality. The comparison allows conclusions about how sensitive our method is to translation quality -- NLLB-200-1.3B yields the highest quality of the three~\citep{tiedemann-de-gibert-2023-opus}, but we also highlight differences in data balance. English has traditionally been dominant in the amount of training data, and all three models aim to reduce this dominance in different ways, for example via large-scale back-translation \cite{sennrich-etal-2016-improving} in M2M-100-418M and NLLB-200-1.3B. SMaLL-100 is a distilled version of the M2M-100-12B model, and samples training data uniformly across language pairs. We test the approach on datasets from the WMT news/general translation tasks from WMT16~\cite{bojar-etal-2016-findings}, WMT22~\cite{kocmi-etal-2022-findings}, and WMT23~\cite{kocmi-etal-2023-findings}, which come annotated with document boundaries and the original language of each document. We also experiment with a part of the FLORES-101 dataset~\cite{goyal-etal-2022-flores} to test the approach on indirect translations, where English was the original language of both sides of the parallel text. We divide the data into subsets based on several categorisations: \begin{itemize} \item \textbf{Translation direction}: the WMT data span 14 translation directions and 3 scripts (Latin, Cyrillic, Chinese). \item \textbf{Type of translation}: we distinguish between human translations (\textbf{HT}), which consist of (possibly multiple) reference translations, neural translation systems (\textbf{NMT}) (WMT 2016; 2022–2023), and phrase-based or rule-based pre-neural systems from WMT 2016 (\textbf{pre-NMT}) as a third category. \item \textbf{Directness}: Given that the WMT data are \textit{direct} translations from on side of the parallel text to the other, we perform additional experiments on translations for~4 \mbox{FLORES} language pairs (Bengali\biarrow Hindi, Czech\biarrow Ukrainian, German\biarrow French, Xhosa\biarrow Zulu). This allows us to analyze the behavior of our approach on \textit{indirect} sentence pairs where both source and reference are translations from a source in a third language (in this case English). \end{itemize} We use HT and NMT translations from WMT16 as a validation set, and the remaining translations for testing our approach. Table~\ref{tab:hr_stats} shows test set statistics for our main experiments. { \setlength{\tabcolsep}{4pt} \begin{table}[h!] \centering \smaller[1] \begin{tabular}{lS[table-format=5.0]S[table-format=4.0]S[table-format=4.0]S[table-format=5.0]S[table-format=5.0]} \toprule & \multicolumn{2}{c}{source} & \multicolumn{3}{c}{target sentences}\\ direction & \multicolumn{1}{c}{sents} & \multicolumn{1}{c}{docs \footnotesize{$\geq10$}} & \multicolumn{1}{c}{HT} & \multicolumn{1}{c}{NMT} & \multicolumn{1}{c}{Pre-NMT} \\ \cmidrule(r){2-3} \cmidrule(l){4-6} cs\textrightarrow en & 1448 & 129 & 2896 & 15928 & 16489 \\ cs\textrightarrow uk & 3947 & 112 & 3947 & 49381 & \multicolumn{1}{c}{-} \\ de\textrightarrow en & 1984 & 121 & 3968 & 17856 & 13491 \\ de\textrightarrow fr & 1984 & 73 & 1984 & 11904 & \multicolumn{1}{c}{-} \\ en\textrightarrow cs & 4111 & 204 & 6148 & 51480 & 27000 \\ en\textrightarrow de & 2037 & 125 & 4074 & 18333 & 18000 \\ en\textrightarrow ru & 4111 & 174 & 4111 & 47295 & 15000 \\ en\textrightarrow uk & 4111 & 174 & 4111 & 41147 & \multicolumn{1}{c}{-} \\ en\textrightarrow zh & 4111 & 204 & 6148 & 57591 & \multicolumn{1}{c}{-} \\ fr\textrightarrow de & 2006 & 71 & 2006 & 14042 & \multicolumn{1}{c}{-} \\ ru\textrightarrow en & 3739 & 136 & 3739 & 40836 & 13482 \\ uk\textrightarrow cs & 2812 & 43 & 2812 & 33744 & \multicolumn{1}{c}{-} \\ uk\textrightarrow en & 3844 & 88 & 3844 & 40266 & \multicolumn{1}{c}{-} \\ zh\textrightarrow en & 3851 & 162 & 5726 & 52140 & \multicolumn{1}{c}{-} \\ \addlinespace Total & 44096 & 1816 & 55514 & 491943 & 103462 \\ \bottomrule \end{tabular} \caption{Statistics of the WMT data we use for our main experiments.} \label{tab:hr_stats} \end{table} } As an additional ``real-world'' dataset, we use 86 parallel sentences in German and English from a publicly documented plagiarism allegation case, in which translation-based plagiarism was the main focus \citep{ebbinghaus2022b, zenthoefer2022b}. The sentences were extracted from aligned excerpts of both the PhD thesis and the alleged source that were presented in a plagiarism analysis report \citep{Weber2022}. We extracted the sentences with OCR and manually checked for OCR errors. This dataset introduces an element of real-world complexity, as it involves translations that might not adhere strictly to professional standards nor does it have to fit into either of the aforementioned categories of translation strategy. It also provides a unique opportunity to test the robustness and adaptability of the translation direction detection system in a scenario that extends beyond controlled environments. \section{Results} \label{sec:results} \subsection{Sentence-level Classification} \label{subsec:hr} \begin{table*}[h!] \centering \begin{tabularx}{\textwidth}{@{}Xrrrrrrrrr@{}} \toprule & \multicolumn{3}{c}{M2M-100-418M} & \multicolumn{3}{c}{SMaLL-100} & \multicolumn{3}{c}{NLLB-200-1.3B} \\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} Language Pair & \(\rightarrow\) & \(\leftarrow\) & Avg. & \(\rightarrow\) & \(\leftarrow\) & Avg. & \(\rightarrow\) & \(\leftarrow\) & Avg. \\ \midrule HT~~en\biarrow cs & 68.85 & 65.19 & \textbf{67.02} & 63.08 & 69.37 & 66.22 & 54.05 & 68.78 & 61.42 \\ HT~~en\biarrow de & 56.38 & 67.44 & \textbf{61.91} & 58.62 & 63.10 & 60.86 & 59.70 & 47.76 & 53.73 \\ HT~~en\biarrow ru & 71.81 & 54.05 & 62.93 & 68.38 & 57.56 & \textbf{62.97} & 67.40 & 49.08 & 58.24 \\ HT~~en\biarrow uk & 71.95 & 69.56 & \textbf{70.76} & 70.49 & 68.83 & 69.66 & 47.21 & 64.00 & 55.61 \\ HT~~en\biarrow zh & 54.25 & 84.30 & \textbf{69.27} & 56.41 & 80.54 & 68.48 & 17.81 & 82.52 & 50.16 \\ HT~~cs\biarrow uk & 52.44 & 74.40 & 63.42 & 59.26 & 70.52 & \textbf{64.89} & 47.68 & 76.67 & 62.18 \\ HT~~de\biarrow fr & 89.72 & 50.50 & 70.11 & 85.48 & 57.68 & 71.58 & 86.29 & 62.16 & \textbf{74.23} \\ \addlinespace Macro-Avg. & 66.49 & 66.49 & \textbf{66.49} & 65.96 & 66.80 & 66.38 & 54.31 & 64.42 & 59.37 \\ \bottomrule \end{tabularx} \caption{Accuracy of three different models when detecting the translation direction of human-translated sentences. The first column per model reports accuracy for sentence pairs with left-to-right gold direction~(e.g., en\(\rightarrow\)cs), the second column for sentence pairs with the reverse gold direction~(e.g., en\(\leftarrow\)cs). The last column reports the macro-average across both directions. The best average result for each language pair is printed in bold. } \label{tab:full_results_ht} \end{table*} The sentence-level results are shown in Tables~\ref{tab:full_results_ht}~(HT),~\ref{tab:hr_results_nmt}~(NMT), and~\ref{tab:hr_results_pnmt}~(pre-NMT). Table~\ref{tab:full_results_ht} compares the results for human translations across all models. As a general result, we find that it is not NLLB, but M2M-100 that on average yields the best results for human translations on the sentence level, with SMaLL-100 a very close second. Hence, we report results for M2M-100 in further experiments, and report performance of the other models in the Appendix. A second result is that the translation detection works best for neural translations (75.0\% macro-average), second-best for human translations (66.5\% macro-average), and worst for pre-neural (41.5\% macro-average). The fact that performance for pre-neural systems is below chance level indicates that the NMT systems we use tend to assign low probabilities to the (often ungrammatical) outputs of pre-neural systems. In practice, one could pair our detection method with a monolingual model to identify such low-quality outputs. A third result is that accuracy varies by language pair. Among the language pairs tested, accuracy of M2M-100 ranges from 61.9\% (en$\leftrightarrow$de) to 70.8\% (en$\leftrightarrow$uk) for HT, and from 71.1\% (de$\leftrightarrow$fr) to 77.3\% (en$\leftrightarrow$zh) for NMT. \begin{table} \centering \begin{tabularx}{\columnwidth}{@{}Xrrr@{}} \toprule Language Pair & \(\rightarrow\) & \(\leftarrow\) & Avg. \\ \midrule NMT~~en\biarrow cs & 71.87 & 78.30 & 75.09 \\ NMT~~en\biarrow de & 62.69 & 85.27 & 73.98 \\ NMT~~en\biarrow ru & 76.91 & 71.98 & 74.44 \\ NMT~~en\biarrow uk & 75.01 & 79.31 & 77.16 \\ NMT~~en\biarrow zh & 64.29 & 90.29 & 77.29 \\ NMT~~cs\biarrow uk & 72.83 & 79.15 & 75.99 \\ NMT~~de\biarrow fr & 90.65 & 51.60 & 71.13 \\ \addlinespace Macro-Avg. & 73.46 & 76.56 & 75.01 \\ \bottomrule \end{tabularx} \caption{Accuracy of M2M-100 when detecting the translation direction of NMT-translated sentences.} \label{tab:hr_results_nmt} \end{table} \begin{table} \centering \begin{tabularx}{\columnwidth}{@{}Xrrr@{}} \toprule Language Pair & \(\rightarrow\) & \(\leftarrow\) & Avg. \\ \midrule Pre-NMT~~en\biarrow cs & 41.97 & 42.59 & 42.28 \\ Pre-NMT~~en\biarrow de & 33.33 & 54.30 & 43.81 \\ Pre-NMT~~en\biarrow ru & 37.98 & 39.01 & 38.49 \\ \addlinespace Macro-Avg. & 37.76 & 45.30 & 41.53 \\ \bottomrule \end{tabularx} \caption{Accuracy of M2M-100 when detecting the translation direction of sentences translated with \mbox{pre-NMT} systems.} \label{tab:hr_results_pnmt} \end{table} \begin{table} \centering \begin{tabularx}{\columnwidth}{@{}Xrr@{}} \toprule Language Pair & Ratio of \(\rightarrow\) & Ratio of \(\leftarrow\)\\ \midrule HT~~bn\biarrow hi & 66.80\,\% & 33.20\,\% \\ HT~~cs\biarrow uk & 42.69\,\% & 57.31\,\% \\ HT~~de\biarrow fr & 84.78\,\% & 15.22\,\% \\ HT~~xh\biarrow zu & 45.06\,\% & 54.94\,\% \\ \bottomrule \end{tabularx} \caption{Percentage of predictions by M2M-100 for each translation direction when neither is the true translation direction (English-original FLORES).} \label{tab:lr_results_ht} \end{table} \begin{table*} \centering \smaller \begin{tabularx}{\textwidth}{@{}rXrrr@{}} \toprule & Sentences & \(\rightarrow\) & \(\leftarrow\) & Rel. Difference \\ \midrule 1 & \textit{DE: Mit dem Programm "Guten Tag, liebes Glück" ist er seit 2020 auf Tour.} & & & \\ & EN: He has been on tour with the programme "Guten Tag, liebes Glück" since 2020. (HT) & 0.145 & 0.558 & \phantom{0}0.26 \\ & EN: He has been on tour since 2020. (NMT) & 0.272 & 0.092 & \textbf{\phantom{0}2.95} \\ \addlinespace 2 & \textit{EN: please try to perfprm thsi procedures"} & & & \\ & DE: bitte versuchen Sie es mit diesen Verfahren (HT) & 0.246 & 0.010 & \textbf{24.29} \\ & DE: Bitte versuchen Sie, diese Prozeduren durchzuführen" (NMT) & 0.586 & 0.025 & \textbf{23.59} \\ \addlinespace 3 & \textit{EN: If costs for your country are not listed, please contact us for a quote.} & & & \\ & DE: Wenn die Kosten für Ihr Land nicht aufgeführt sind, wenden Sie sich für einen Kostenvoranschlag an uns. (HT) & 0.405 & 0.525 & \phantom{0}0.77 \\ & DE: Wenn die Kosten für Ihr Land nicht aufgeführt sind, kontaktieren Sie uns bitte für ein Angebot. (NMT) & 0.697 & 0.585 & \textbf{\phantom{0}1.19}\\ \addlinespace 4 & \textit{EN: Needless to say, it was chaos.} & & & \\ & DE: Es war natürlich ein Chaos. (HT) & 0.119 & 0.372 & \phantom{0}0.32 \\ & DE: Unnötig zu sagen, es war Chaos. (NMT) & 0.755 & 0.591 & \textbf{\phantom{0}1.28} \\ \addlinespace 5 & \textit{DE: Mit freundlichen Grüßen} & & & \\ & FR: Cordialement (HT) & 0.026 & 0.107 & \phantom{0}0.24 \\ & FR: Sincèrement (NMT) & 0.015 & 0.083 & \phantom{0}0.18 \\ & FR: Sincères amitiés (NMT) & 0.062 & 0.160 & \phantom{0}0.39 \\ & FR: Avec mes meilleures salutations (NMT) & 0.215 & 0.353 & \phantom{0}0.61 \\ \bottomrule \end{tabularx} \caption{Qualitative comparison of sentence pairs. Source sentences are marked in \textit{italics}, and gold direction is always \(\rightarrow\). Relative probability difference $>1$ indicates that translation direction was successfully identified, and is highlighted in bold. The probabilities are generated by M2M-100.} \label{tab:quali_analysis} \end{table*} \subsection{Directional Bias} Analyzing Table~\ref{tab:full_results_ht} for directional bias, we observe that M2M-100 is especially biased in the directions~de$\to$fr ($B=0.39$) and zh$\to$en ($B=0.30$). While we expected a general bias towards x$\to$en due to the dominance of English in training data, we find that the direction and strength of the bias vary across language pairs and models. An extreme result is NLLB for en\biarrow zh, with $B=0.64$ towards zh$\to$en. We leave it to future work to explore whether bias can be reduced via different normalization, a language pair specific bias correction term, or different model training. At present, our recommendation is to be mindful in the choice of NMT model and to perform validation before trusting the results of a previously untested NMT model for translation direction detection. \subsection{Indirect Translations} With an experiment on the English-original FLORES data, we evaluate our approach on the special case that neither side is the original. As shown in Table~\ref{tab:lr_results_ht}, our approach yields relatively balanced predictions on human translations for Czech\biarrow Ukrainian and Xhosa\biarrow Zulu, predicting each direction a roughly equal number of times. For German\biarrow French, we again find that the model predicts the de\(\rightarrow\)fr direction much more frequently than the reverse direction, reflecting the high directional bias of the model for this language pair. \subsection{Qualitative Analysis} \label{subsec:ea} A qualitative comparison of sources and translations, as illustrated in Table \ref{tab:quali_analysis}, reveals that factors such as normalization, simplification, word order interference, and sentence length influence the detection of translation direction. In Example 1, an English HT translates the German source fully, while the NMT omits half of the content, showing a high degree of simplification. Our method recognizes the simplified NMT version as a translation but not the more literal HT. Example 2 demonstrates varying degrees of normalization: the first translation corrects typos, whereas the second shows stronger normalization with added capitalization and punctuation but also exhibits more interference due to closer adherence to the source's lexical choice and copying the trailing quotation mark. Both translations are detected, but the second presents a higher probability difference. The third example indicates that translations exhibiting normalization, simplification, and interference to a higher degree are more likely to be identified. In Example 4, source language interference in terms of word order and choice significantly impacts the detection; the more literal translation mirroring the source's word order is recognized, while the more liberal translation is not. Finally, Example 5 highlights challenges with short sentences: The German phrase \textit{Mit freundlichen Grüßen} is fairly standardized, while its French equivalents can vary in use and context, adding to the ambiguity and affecting the probability distribution in NMT. Hence, our approach fails to identify any of the French translations without additional context. Misclassified short sentences as in Example 5 are not a rarity in our experiments. Our findings show that reliable detection of translation direction, with an average accuracy exceeding 50\%, is consistently attained for all language pairs we tested starting at a sentence length between 50 and 60 characters.\footnote{We used SMaLL-100 for this analysis.} Additionally, we observed a trend where the accuracy of direction detection increases as the length of the sentences grows. This aligns with previous unsupervised approaches, which also documented higher accuracy the larger the text chunks that were used, although there, reliable results were reported on a more extreme scale starting from text chunks with a length of 250 tokens \cite{Rabinovich2015}. \subsection{Document-Level Classification} \label{subsec:dl} \noindent{}Accuracy scores for document-level results by M2M-100 (best-performing system at sentence level) are presented in Tables \ref{tab:doc_nmt} (NMT) and \ref{tab:doc_ht} (HT). We consider documents with at least 10 sentences, and language pairs with at least 100 such documents in both directions. The table shows that the sentence-level results are amplified at the document level. Translation direction detection accuracy for human translations reaches a macro-average of 80.5\%, while the document-level accuracy for translations generated by NMT systems reaches 95.5\% on average. \begin{table} \centering \begin{tabularx}{\columnwidth}{@{}Xrrr@{}} \toprule Language Pair & \(\rightarrow\) & \(\leftarrow\) & Avg. \\ \midrule HT~~en\biarrow cs & 88.24 & 80.62 & 84.43 \\ HT~~en\biarrow de & 70.40 & 88.43 & 79.41 \\ HT~~en\biarrow ru & 96.55 & 54.41 & 75.48 \\ HT~~en\biarrow zh & 67.65 & 97.53 & 82.59 \\ \addlinespace Macro-Avg. & 80.71 & 80.25 & 80.48 \\ \bottomrule \end{tabularx} \caption{Document-level classification: Accuracy of M2M-100 when detecting the translation direction of human translations at the document level~(documents with $\geq$ 10 sentences).} \label{tab:doc_ht} \end{table} \begin{table} \centering \begin{tabularx}{\columnwidth}{@{}Xrrr@{}} \toprule Language Pair & \(\rightarrow\) & \(\leftarrow\) & Avg. \\ \midrule NMT~~en\biarrow cs & 96.78 & 99.27 & 98.03 \\ NMT~~en\biarrow de & 91.06 & 99.18 & 95.12 \\ NMT~~en\biarrow ru & 98.39 & 94.60 & 96.50 \\ NMT~~en\biarrow zh & 86.33 & 98.62 & 92.47 \\ \addlinespace Macro-Avg. & 93.14 & 97.92 & 95.53 \\ \bottomrule \end{tabularx} \caption{Document-level classification: Accuracy of M2M-100 when detecting the translation direction of NMT translations at the document level~(documents with $\geq$ 10 sentences).} \label{tab:doc_nmt} \end{table} \subsection{Application to Real-World Forensic Case} \label{subsec:rw} Finally, we apply our approach to the 86 segment pairs of the plagiarism allegation case. We treat the segments as a single document and classify them with M2M-100 using the document-level approach defined in Section~\ref{subsec:doc_level}. We find that according to the model, it is more probable that the English segments are translations of the German segments than vice versa. We validate our analysis using a permutation test. The null hypothesis is that the model probabilities for both potential translation directions are drawn from the same distribution. In order to perform the permutation test, we swap the segment-level probabilities $P(y_i|x_i)$ and $P(x_i|y_i)$ for randomly selected segments $i$ before calculating the difference between the document-level probabilities $P(y|x)$ and $P(x|y)$. We repeat this process 10,000 times and calculate the $p$-value as twice the proportion of permutations that yield a difference at least as extreme as the observed difference. Obtaining a $p$-value of 0.0002, we reject the null hypothesis and conclude that our approach makes a statistically significant prediction that the English segments are translated from the German segments. % Predicted direction: de→en % 86 sentence pairs % de→en: 0.513 % en→de: 0.487 % p-value: 0.00019998000199980003 Overall, our analysis supports the hypothesis that German is indeed the language of origin in this real-world dataset (\textit{forgery hypothesis}; Figure~\ref{fig:figure2}). Nevertheless, we recommend that additional evidence be considered before drawing a final conclusion, given the error rate of 5–21\% that we observed in our earlier experiments on English--German WMT documents. \section{Conclusion} \label{sec:conclusion} We proposed a novel approach to detecting the translation direction of parallel sentences, using only an off-the-shelf multilingual NMT system. Experiments on WMT data showed that our approach, without any task-specific supervision, is able to detect the translation direction of NMT-translated sentences with relatively high accuracy. The accuracy increases to 96\% if the classifier is provided with at least 10 sentences per document. We also found a robust accuracy for translations by human translators. Finally, we applied our approach to a real-world forensic case and found that it supports the hypothesis that the English book is a forgery. Future work should explore whether our approach can be improved by mitigating directional bias of the NMT model used. Another open question is to what degree our approach will generalize to document-level translation and to translation with large language models. \section*{Limitations} While the proposed approach is simple and effective, there are some limitations that might make its application more difficult in practice: \paragraph{Sentence alignment:} We performed our experiments on sentence-aligned parallel data, where each sentence in one language has a corresponding sentence in the other language. In practice, parallel documents might have one-to-many or many-to-many alignments, which would require custom pre-processing or the use of models that can directly estimate document-level probabilities. \paragraph{Translation strategies:} Our main experiments used academic data from the WMT translation task, where care is taken to ensure that different translation methods are clearly separated: NMT translations did not undergo human post-editing, and human translators were instructed to work from scratch. In practice, parallel documents might have undergone a mixture of translation strategies, which makes it more difficult to predict the accuracy of our approach. Specifically, we found that our approach has less-than-chance accuracy on pre-NMT translations. Applying our approach to web-scale parallel corpus filtering might therefore require additional filtering steps to exclude translations of lower quality. \paragraph{Low-resource languages:} Our experiments required test data for both translation directions, which limited the set of languages we could test. While the community has created reference translations for many low-resource languages, the translation directions are usually not covered symmetrically. For example, the test set of FLORES~\cite{goyal-etal-2022-flores} has been translated from English into many languages, but not vice versa. Thus, apart from Table~\ref{tab:lr_results_ht}, we have not tested our approach on low-resource languages, and it is possible that the accuracy of our approach is lower for such languages, in parallel with the lower translation quality of NMT models for low-resource languages. \section*{Ethical Considerations} Translation direction detection has a potential application in forensic linguistics, where reliable accuracy is crucial. Our experiments show that accuracy can vary depending on the language pair, the NMT model used for detection, as well as the translation strategy and the length of the input text. Before our approach is applied in a forensic setting, we recommend that its accuracy be validated in the context of the specific use case. In Section~\ref{subsec:rw}, we tested our approach on a real-world instance of such a case, where one party has been accused of plagiarism, but the purported original is now suspected to be a forgery. This case is publicly documented and has been widely discussed in German-speaking media~(e.g.,~\citealt{ebbinghaus2022b, zenthoefer2022b, dewiki:238411824}). For this experiment, we used 86 sentence pairs from the two (publicly available) books that are the subject of this case. However, the case has not been definitively resolved, as legal proceedings are still ongoing. No author of this paper is involved in the legal proceedings. We therefore refrain from publicly releasing the dataset of sentence pairs we used for this experiment. \section*{Acknowledgements} JV and RS acknowledge funding by the Swiss National Science Foundation (project MUTAMUR; no.~213976). \bibliography{bibliography} \appendix \onecolumn \section{Comparison of Models (Sentence Level)} \begin{table}[h!] \centering \begin{tabularx}{\textwidth}{@{}Xrrrrrrrrr@{}} \toprule & \multicolumn{3}{c}{M2M-100-418M} & \multicolumn{3}{c}{SMaLL-100} & \multicolumn{3}{c}{NLLB-200-1.3B} \\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} Language Pair & \(\rightarrow\) & \(\leftarrow\) & Avg. & \(\rightarrow\) & \(\leftarrow\) & Avg. & \(\rightarrow\) & \(\leftarrow\) & Avg. \\ \midrule NMT~~en\biarrow cs & 71.87 & 78.30 & \textbf{75.09} & 67.96 & 79.11 & 73.53 & 62.42 & 77.89 & 70.16 \\ NMT~~en\biarrow de & 62.69 & 85.27 & 73.98 & 67.05 & 80.96 & \textbf{74.00} & 73.51 & 74.36 & 73.93 \\ NMT~~en\biarrow ru & 76.91 & 71.98 & \textbf{74.44} & 74.34 & 72.30 & 73.32 & 78.87 & 57.15 & 68.01 \\ NMT~~en\biarrow uk & 75.01 & 79.31 & \textbf{77.16} & 73.74 & 78.27 & 76.01 & 58.48 & 79.56 & 69.02 \\ NMT~~en\biarrow zh & 64.29 & 90.29 & \textbf{77.29} & 66.54 & 87.62 & 77.08 & 25.42 & 90.54 & 57.98 \\ NMT~~cs\biarrow uk & 72.83 & 79.15 & 75.99 & 77.33 & 76.04 & \textbf{76.68} & 70.55 & 76.59 & 73.57 \\ NMT~~de\biarrow fr & 90.65 & 51.60 & 71.13 & 86.83 & 59.19 & \textbf{73.01} & 79.44 & 57.58 & 68.51 \\ \addlinespace Macro-Avg. & 73.46 & 76.56 & \textbf{75.01} & 73.40 & 76.21 & 74.81 & 64.10 & 73.38 & 68.74 \\ \bottomrule \end{tabularx} \caption{Accuracy of three different models when detecting the translation direction of NMT-translated sentences. The first column reports accuracy for sentence pairs with left-to-right gold direction~(e.g., en\(\rightarrow\)cs), the second column for sentence pairs with the reverse gold direction~(e.g., en\(\leftarrow\)cs). The last column reports the macro-average across both directions. The best result for each language pair is printed in bold. } \label{tab:full_results_nmt} \end{table} \vspace{0.5cm} \begin{table}[h!] \centering \begin{tabularx}{\textwidth}{@{}Xrrrrrrrrr@{}} \toprule & \multicolumn{3}{c}{M2M-100-418M} & \multicolumn{3}{c}{SMaLL-100} & \multicolumn{3}{c}{NLLB-200-1.3B} \\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} Language Pair & \(\rightarrow\) & \(\leftarrow\) & Avg. & \(\rightarrow\) & \(\leftarrow\) & Avg. & \(\rightarrow\) & \(\leftarrow\) & Avg. \\ \midrule Pre-NMT~~en\biarrow cs & 41.97 & 42.59 & \textbf{42.28} & 37.88 & 45.42 & 41.65 & 16.36 & 35.34 & 25.85 \\ Pre-NMT~~en\biarrow de & 33.33 & 54.30 & \textbf{43.81} & 36.18 & 48.57 & 42.37 & 18.73 & 26.20 & 22.47 \\ Pre-NMT~~en\biarrow ru & 37.98 & 39.01 & \textbf{38.49} & 35.71 & 39.19 & 37.45 & 19.71 & 16.04 & 17.88 \\ \addlinespace Macro-Avg. & 37.76 & 45.30 & \textbf{41.53} & 36.59 & 44.39 & 40.49 & 18.27 & 25.86 & 22.07 \\ \bottomrule \end{tabularx} \caption{Accuracy of three different models when detecting the translation direction of sentences translated with \mbox{pre-NMT} systems. The best result for each language pair is printed in bold. } \label{tab:full_results_prenmt} \end{table} \clearpage \section{Comparison of Models (Document Level)} \begin{table}[h!] \centering \begin{tabularx}{\textwidth}{@{}Xrrrrrrrrr@{}} \toprule & \multicolumn{3}{c}{M2M-100-418M} & \multicolumn{3}{c}{SMaLL-100} & \multicolumn{3}{c}{NLLB-200-1.3B} \\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} Language Pair & \(\rightarrow\) & \(\leftarrow\) & Avg. & \(\rightarrow\) & \(\leftarrow\) & Avg. & \(\rightarrow\) & \(\leftarrow\) & Avg. \\ \midrule HT~~en\biarrow cs & 88.24 & 80.62 & \textbf{84.43} & 78.92 & 89.15 & 84.03 & 55.88 & 86.05 & 70.96 \\ HT~~en\biarrow de & 70.40 & 88.43 & \textbf{79.41} & 73.60 & 82.64 & 78.12 & 68.00 & 45.45 & 56.73 \\ HT~~en\biarrow ru & 96.55 & 54.41 & 75.48 & 95.40 & 61.03 & \textbf{78.22} & 82.18 & 39.71 & 60.94 \\ HT~~en\biarrow zh & 67.65 & 97.53 & 82.59 & 71.57 & 96.30 & \textbf{83.93} & 3.92 & 96.91 & 50.42 \\ \addlinespace Macro-Avg. & 80.71 & 80.25 & 80.48 & 79.87 & 82.28 & \textbf{81.08} & 52.50 & 67.03 & 59.76 \\ \bottomrule \end{tabularx} \caption{Accuracy of three different models when detecting the translation direction of human-translated documents. The best result for each language pair is printed in bold. } \label{tab:full_results_doc_ht} \end{table} \vspace{0.5cm} \begin{table}[h!] \centering \begin{tabularx}{\textwidth}{@{}Xrrrrrrrrr@{}} \toprule & \multicolumn{3}{c}{M2M-100-418M} & \multicolumn{3}{c}{SMaLL-100} & \multicolumn{3}{c}{NLLB-200-1.3B} \\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} Language Pair & \(\rightarrow\) & \(\leftarrow\) & Avg. & \(\rightarrow\) & \(\leftarrow\) & Avg. & \(\rightarrow\) & \(\leftarrow\) & Avg. \\ \midrule NMT~~en\biarrow cs & 96.78 & 99.27 & \textbf{98.03} & 94.64 & 97.81 & 96.22 & 86.06 & 95.62 & 90.84 \\ NMT~~en\biarrow de & 91.06 & 99.18 & 95.12 & 93.85 & 97.12 & 95.49 & 96.65 & 95.06 & \textbf{95.85} \\ NMT~~en\biarrow ru & 98.39 & 94.60 & \textbf{96.50} & 97.05 & 95.12 & 96.08 & 99.20 & 72.75 & 85.97 \\ NMT~~en\biarrow zh & 86.33 & 98.62 & 92.47 & 90.62 & 98.16 & \textbf{94.39} & 13.14 & 98.39 & 55.76 \\ \addlinespace Macro-Avg. & 93.14 & 97.92 & 95.53 & 94.04 & 97.05 & \textbf{95.55} & 73.76 & 90.46 & 82.11 \\ \bottomrule \end{tabularx} \caption{Accuracy of three different models when detecting the translation direction of documents translated with NMT systems. The best result for each language pair is printed in bold. } \label{tab:full_results_doc_nmt} \end{table} \vfill \section{Example for Forensic Dataset} \begin{table*}[h!] \centering \begin{tabularx}{\textwidth}{@{}Xrrr@{}} \toprule & DE\(\rightarrow\)EN & EN\(\rightarrow\)DE \\ \midrule \textit{DE: Nach 30 sec. wurde das trypsinhaltige PBS abgegossen, und die Zellen kamen für eine weitere Minute in den Brutschrank.} & & \\ \addlinespace EN: After 30 sec, the trypsin-containing PBS was poured off, and the cells were placed in the incubator for another minute. & 0.442 & 0.285 \\ \bottomrule \end{tabularx} \caption{Example of two segments from the forensic case~(Section~\ref{subsec:rw}). M2M-100 assigns a higher probability to the English sentence conditioned on the German sentence than vice versa, suggesting that the English sentence is more likely to be a translation of the German sentence.} \label{tab:examples-colchicine} \end{table*} \vfill \clearpage \section{Data Statistics} \label{sec:appendix_data} \begin{table}[h!] \centering \begin{tabular}{ccS[table-format=5.0]S[table-format=4.0]S[table-format=4.0]S[table-format=5.0]S[table-format=5.0]} \toprule & & \multicolumn{2}{c}{source} & \multicolumn{3}{c}{target sentences}\\ testset & direction & \multicolumn{1}{c}{sents} & \multicolumn{1}{c}{docs $\geq10$} & \multicolumn{1}{c}{HT} & \multicolumn{1}{c}{NMT} & \multicolumn{1}{c}{Pre-NMT} \\ \cmidrule(r){3-4} \cmidrule(l){5-7} WMT16 & cs\textrightarrow en & 1499 & 40 & \textit{1499} & \textit{1499} & 16489 \\ WMT16 & de\textrightarrow en & 1499 & 55 & \textit{1499} & \textit{1499} & 13491 \\ WMT16 & en\textrightarrow cs & 1500 & 54 & \textit{1500} & \textit{3000} & 27000 \\ WMT16 & en\textrightarrow de & 1500 & 54 & \textit{1500} & \textit{4500} & 18000 \\ WMT16 & en\textrightarrow ru & 1500 & 54 & \textit{1500} & \textit{3000} & 15000 \\ WMT16 & ru\textrightarrow en & 1498 & 52 & \textit{1498} & \textit{1498} & 13482 \\ \midrule WMT22 & cs\textrightarrow en & 1448 & 129 & 2896 & 15928 & \multicolumn{1}{c}{-} \\ WMT22 & cs\textrightarrow uk & 1930 & 13 & 1930 & 23160 & \multicolumn{1}{c}{-} \\ WMT22 & de\textrightarrow en & 1984 & 121 & 3968 & 17856 & \multicolumn{1}{c}{-} \\ WMT22 & de\textrightarrow fr & 1984 & 73 & 1984 & 11904 & \multicolumn{1}{c}{-} \\ WMT22 & en\textrightarrow cs & 2037 & 125 & 4074 & 20370 & \multicolumn{1}{c}{-} \\ WMT22 & en\textrightarrow de & 2037 & 125 & 4074 & 18333 & \multicolumn{1}{c}{-} \\ WMT22 & en\textrightarrow ru & 2037 & 95 & 2037 & 22407 & \multicolumn{1}{c}{-} \\ WMT22 & en\textrightarrow uk & 2037 & 95 & 2037 & 18333 & \multicolumn{1}{c}{-} \\ WMT22 & en\textrightarrow zh & 2037 & 125 & 4074 & 26481 & \multicolumn{1}{c}{-} \\ WMT22 & fr\textrightarrow de & 2006 & 71 & 2006 & 14042 & \multicolumn{1}{c}{-} \\ WMT22 & ru\textrightarrow en & 2016 & 73 & 2016 & 20160 & \multicolumn{1}{c}{-} \\ WMT22 & uk\textrightarrow cs & 2812 & 43 & 2812 & 33744 & \multicolumn{1}{c}{-} \\ WMT22 & uk\textrightarrow en & 2018 & 22 & 2018 & 20180 & \multicolumn{1}{c}{-} \\ WMT22 & zh\textrightarrow en & 1875 & 102 & 3750 & 22500 & \multicolumn{1}{c}{-} \\ \midrule WMT23 & cs\textrightarrow uk & 2017 & 99 & 2017 & 26221 & \multicolumn{1}{c}{-} \\ WMT23 & en\textrightarrow cs & 2074 & 79 & 2074 & 31110 & \multicolumn{1}{c}{-} \\ WMT23 & en\textrightarrow ru & 2074 & 79 & 2074 & 24888 & \multicolumn{1}{c}{-} \\ WMT23 & en\textrightarrow uk & 2074 & 79 & 2074 & 22814 & \multicolumn{1}{c}{-} \\ WMT23 & en\textrightarrow zh & 2074 & 79 & 2074 & 31110 & \multicolumn{1}{c}{-} \\ WMT23 & ru\textrightarrow en & 1723 & 63 & 1723 & 20676 & \multicolumn{1}{c}{-} \\ WMT23 & uk\textrightarrow en & 1826 & 66 & 1826 & 20086 & \multicolumn{1}{c}{-} \\ WMT23 & zh\textrightarrow en & 1976 & 60 & 1976 & 29640 & \multicolumn{1}{c}{-} \\ \bottomrule \end{tabular} \caption{Detailed data statistics for the main experiments. Cursive: data used for validation. } \label{tab:data_stats} \end{table} \vspace{0.5cm} { \begin{table}[h!] \centering \begin{tabular}{lS[table-format=5.0]S[table-format=4.0]S[table-format=4.0]S[table-format=5.0]} \toprule Direction & \multicolumn{1}{c}{Sentence pairs} \\ \midrule bn\biarrow hi & 1012 \\ cs\biarrow uk & 1012 \\ de\biarrow fr & 1012 \\ xh\biarrow zu & 1012 \\ \bottomrule \end{tabular} \caption{Statistics for the FLORES-101 (devtest) datasets, where both sides are human translations from English.} \label{tab:flores_stats} \end{table} } \end{document}
Which model has the biggest difference in translation quality when translating into English versus from English, and what is the value of that difference?
NLLB-200-1.3B. 64.71
SELECT all models LOOP for each mode SELECT all language pair containing en(English) LOOP for each language pair containing en (English) COMPUTE diff = abs(score translating into English − score translating from English) COMPUTE max diff for the model COMPUTE argmax max diff across all models RETURN model with largest max diff and the value
2401.06769
[ [ "\\begin{table}\n", "\\centering\n", "\\begin{tabularx}{\\columnwidth}{@{}Xrrr@{}}\n", "\\toprule\n", "Language Pair & \\(\\rightarrow\\) & \\(\\leftarrow\\) & Avg. \\\\\n", "\\midrule\n", "HT~~en\\biarrow cs & 88.24 & 80.62 & 84.43 \\\\\n", "HT~~en\\biarrow de & 70.40 & 88.43 & ...
[ [ "\\begin{figure}\n", " \\centering\n", " \\includegraphics[width=\\columnwidth, trim=0 0.15cm 0 0, clip]{images/figure1}\n", " \\caption{\n", " NMT models can be used for inferring the likely original translation direction of parallel text.\n", " In this example, the NMT...
% This must be in the first 5 lines to tell arXiv to use pdfLaTeX, which is strongly recommended. \pdfoutput=1 % In particular, the hyperref package requires pdfLaTeX in order to break URLs across lines. \documentclass[11pt]{article} % Remove the "review" option to generate the final version. %\usepackage[review]{acl} \usepackage{acl} % Standard package includes \usepackage{times} \usepackage{latexsym} % For proper rendering and hyphenation of words containing Latin characters (including in bib files) \usepackage[T1]{fontenc} % For Vietnamese characters % \usepackage[T5]{fontenc} % See https://www.latex-project.org/help/documentation/encguide.pdf for other character sets % This assumes your files are encoded as UTF8 \usepackage[utf8]{inputenc} % This is not strictly necessary, and may be commented out, % but it will improve the layout of the manuscript, % and will typically save some space. \usepackage{microtype} % This is also not strictly necessary, and may be commented out. % However, it will improve the aesthetics of text in % the typewriter font. \usepackage{inconsolata} % Package for figures \usepackage{graphicx} % Package for equations \usepackage{amsmath} % Package for size adjustment \usepackage{relsize} % Package for scientific units \usepackage{siunitx} % Package for table formatting \usepackage{booktabs} \usepackage{tabularx} \usepackage{enumitem} % Package for writing comments on side \usepackage{todonotes} \newcommand\rico[1]{\todo{\textcolor{blue!60!black}{#1}}} \newcommand\jannis[1]{\todo{\textcolor{green!40!black}{#1}}} % New command for in-text arrow in both directions \newcommand{\biarrow}{$\leftrightarrow$} \title{Machine Translation Models are\\ Zero-Shot Detectors of Translation Direction} \author{Michelle Wastl \quad Jannis Vamvas \quad Rico Sennrich \vspace{0.1cm}\\ Department of Computational Linguistics, University of Zurich\\ \texttt{michelle.wastl@uzh.ch}, \texttt{\{vamvas,sennrich\}@cl.uzh.ch} } \begin{document} \maketitle \begin{abstract} Detecting the translation direction of parallel text has applications for machine translation training and evaluation, but also has forensic applications such as resolving plagiarism or forgery allegations. In this work, we explore an unsupervised approach to translation direction detection based on the simple hypothesis that $p(\text{translation}|\text{original})>p(\text{original}|\text{translation})$, motivated by the well-known simplification effect in translationese or machine-translationese. In experiments with massively multilingual machine translation models across 20 translation directions, we confirm the effectiveness of the approach for high-resource language pairs, achieving document-level accuracies of 82–96\% for NMT-produced translations, and 60–81\% for human translations, depending on the model used.\footnote{Code and demo are available at \url{https://github.com/ZurichNLP/translation-direction-detection}} \end{abstract} \section{Introduction} \label{sec:intro} While the original translation direction of parallel text is often ignored or unknown in the machine translation community, research has shown that it can be relevant for training~\cite{kurokawa-etal-2009-automatic,ni-etal-2022-original} and evaluation~\cite{graham-etal-2020-statistical}.\footnote{As of today, training data is not typically filtered by translation direction, but we find evidence of a need for better detection in recent work. For example, \citet{post2023escaping} show that back-translated data is more suited than crawled parallel data for document-level training, presumably because of translations in the crawled data that lack document-level consistency.} Beyond machine translation, translation direction detection has practical applications in areas such as forensic linguistics, where determining the original of a document pair may help resolve plagiarism or forgery accusations. %This study is partially inspired by a highly publicized plagiarism case in 2022, where one party has been accused of plagiarizing their (German) PhD thesis from an ostensibly older English book, but where the English book is suspected to be a forgery and translation of the thesis, possibly created with the explicit purpose of slandering the author of the thesis Previous work has addressed translation direction detection with feature-based approaches, using features such as n-gram frequency statistics and POS tags for classification \cite{kurokawa-etal-2009-automatic,10.1093/llc/fqt031,Sominsky2019} or unsupervised clustering \cite{Nisioi2015, Rabinovich2015}. However, these methods require a substantial amount of text data, and cross-domain differences in the statistics used can overshadow differences between original and translationese text. \begin{figure} \centering % Copy and edit the figure here: https://docs.google.com/presentation/d/1rVIO0ceciQjGFqFqK1acb_h1tAdbSNVD0-XhPG33RWQ/edit?usp=sharing \includegraphics[width=\columnwidth, trim=0 0.15cm 0 0, clip]{images/figure1} \caption{ NMT models can be used for inferring the likely original translation direction of parallel text. In this example, the NMT model assigns a much higher probability to the German sentence given the English sentence than to the English sentence given the German sentence, indicating that the more likely original translation direction is English\(\rightarrow\)German. } \label{fig:figure1} \end{figure} \begin{figure*} \begin{center} % Copy and edit the figure here: https://docs.google.com/presentation/d/1jQgZstKuKskE7ekyij4mMYU42Jnoammf-G5X4iiL9X0/edit?usp=sharing \includegraphics[width=\textwidth]{images/figure2} \end{center} \caption{A recent forensic case in Germany underscores the relevance of translation direction detection~\cite{ebbinghaus2022b, zenthoefer2022b, dewiki:238411824}. In 2022, two experts raised concerns about the originality of a German PhD thesis and suspected it to be plagiarized from a proceedings volume in English (\textit{plagiarism hypothesis}). Other experts observed that the alleged source could not be found in any library or database, and raised the possibility of a deliberate attempt to discredit the thesis author by fabricating the English book (\textit{forgery hypothesis}). Initially, the debate focused on the dating of the typefaces and the paper used to print the proceedings, in addition to some textual inconsistencies. However, a computational analysis of translation direction could provide additional evidence in this or similar cases. The illustration depicts one of many parallel passages identified by \citet{Weber2022}. } \label{fig:figure2} \end{figure*} In this work, we explore the unsupervised detection of translation directions purely on the basis of a neural machine translation (NMT) system's translation probabilities in both directions. As illustrated in Figure~\ref{fig:figure1}, we hypothesize that $p(\text{translation}|\text{original})>p(\text{original}|\text{translation})$, which, if it generally holds, would allow us to infer the original translation direction. If the translation has been automatically generated, this hypothesis can be motivated by the fact that machine translation systems typically generate text with mode-seeking search algorithms, and consequently tend to over-produce high-frequency outputs and reduce lexical diversity \cite{vanmassenhove-etal-2019-lost}. However, even human translations are known for so-called translationese properties such as interference, normalization, and simplification, and a (relative) lack of lexical diversity \cite{Teich+2003, 10.1093/llc/fqt031,toral-2019-post}. %We hypothesize that our detection method can be applied to both human and automatic translations. We test the approach on 20 translation directions, experimenting with 3 multilingual NMT models to predict the translation probabilities of human translations, NMT-produced translations, and pre-neural translations. We find that the approach detects the translation direction of human translations with an accuracy of 66\% on average on the sentence level, and 80\% for documents with $\geq$ 10 sentences. For the output of neural MT systems, detection accuracy is even higher, but our hypothesis that $p(\text{translation}|\text{original})>p(\text{original}|\text{translation})$ does not hold for the output of pre-neural systems. %In a qualitative analysis, we show that the detection performance can be connected to the translationese characteristics found in the sentences. Finally, we apply our method to a recent forensic case~(Figure~\ref{fig:figure2}), where the translation direction of a German PhD thesis and an English book has been under dispute, finding additional evidence for the hypothesis that the English book is a forgery created to make the thesis appear plagiarized. \medskip \noindent{}Our main contributions are the following: \begin{itemize}[itemsep=0pt] \item We propose a simple, unsupervised approach to translation direction detection based on the translation probabilities of NMT models. \item We demonstrate that the approach is effective for detecting the original translation direction of neural translations, and to a lesser extent, human translations in a variety of high-resource language pairs. \item We provide a qualitative analysis of detection performance and apply the method to a real-world forensic case. \end{itemize} \section{Related Work} \label{sec:rel_work} \subsection{Translation (Direction) Detection} \label{subsec:td} In an ideal scenario where large-scale annotated in-domain data is available, high accuracy can be achieved in translation direction detection at phrase and sentence level by training supervised systems based on various features such as word frequency statistics and POS n-grams \cite{Sominsky2019}. To reduce reliance on in-domain supervision, unsupervised methods that rely on clustering and consequent cluster labelling have also been explored for the related task of translationese detection~\cite{Rabinovich2015, Nisioi2015}. One could conceivably perform translation direction detection using similar methods, but this has the practical problem of requiring an expert for cluster labelling, and poor open-domain performance. In a multi-domain scenario, \citet{Rabinovich2015} observe that clustering based on features proposed by \citet{10.1093/llc/fqt031} results in clusters separated by domain rather than translation status. They address this by producing $2k$ clusters, $k$ being the number of domains in their dataset, and labelling each. Clearly, labelling becomes more costly as the number of domains increases, which might limit applicability to an open-domain scenario. In contrast, we hypothesize that comparing translation probabilities remains a valid strategy across domains, and requires no resources other than NMT models that are competent for the respective language pair and domain. \subsection{Translation Probabilities} \label{subsec:tb} So far, translation probabilities have been used in tasks such as noisy parallel corpus filtering \cite{Junczys-Dowmunt2018}, machine translation evaluation \cite{Thompson2020}, and paraphrase identification \cite{mallinson-etal-2017-paraphrasing,vamvas-sennrich-2022-nmtscore}. Those tasks all involve the comparison of parallel text and require a high level of attention to stylistic detail, for which the translation probabilities have proven useful in a highly data-efficient way by using existing machine translation models to generate the probabilities. \section{Methods} \label{sec:methods} Given a parallel sentence pair $(x,y)$, the main task in this work is to identify the translation direction between a language X and a language Y, and, consequently, establish which side is the original and which is the translation. This is achieved by comparing the conditional translation probability $P(y|x)$ by an NMT model $M_{X\rightarrow Y}$ with the conditional translation probability $P(x|y)$ by a model $M_{Y\rightarrow X}$ operating in the inverse direction. Our core assumption is that translations are assigned higher conditional probabilities than the original by NMT models, so if $P(y|x) > P(x|y)$, we predict that $y$ is the translation, and $x$ the original, and the original translation direction is $X \to Y$. \subsection{Detection on the Sentence Level} With a probabilistic autoregressive NMT model, we can obtain $p(y|x)$ as a product of the individual token probabilities: \begin{equation} \label{eq:avg} P(y|x) = \prod_{j=1}^{|y|} p(y_j|y_{<j},x) \end{equation} We follow earlier work by \citet{Junczys-Dowmunt2018,Thompson2020}, and average token-level (log-)probabilities.\footnote{ % Note that $P_{\text{tok}}(y|x)=\frac{1}{PPL(y|x)}=e^{-H(y|x)}$. The models we use have been trained with label smoothing~\cite{szegedy2016rethinking}, which has a cumulative effect on sequence-level probabilities~\cite{yan2023dcmbr}. Averaging token-level probabilities can help mitigate this shortcoming. } \begin{equation} \label{eq:avg2} P_{\text{tok}}(y|x) = P(y|x)^{\frac{1}{|y|}} \end{equation} To detect the original translation direction (OTD), $P_{\text{tok}}(y|x)$ and $P_{\text{tok}}(x|y)$ are compared: \[ \begin{aligned} \text{OTD} = \begin{cases} X \to Y, & \text{if } P_{\text{tok}}(y|x) > P_{\text{tok}}(x|y) \\ Y \to X, & \text{otherwise} \end{cases} \end{aligned} \] \subsection{Detection on the Document Level}\label{subsec:doc_level} We also study translation direction detection on the level of documents, as opposed to individual sentences. We assume that the sentences in the document are aligned 1:1, so that we can apply an NMT model trained on the sentence level to all $n$~sentence pairs $(x_i, y_i)$ in the document, and then aggregate the result. Our approach is equivalent to the sentence-level approach in that we calculate the average token-level probability across the document, conditioned on the respective sentence in the other language: \begin{equation} \label{eq:avg_doc} P_{\text{tok}}(y|x) = [\prod_{i=1}^{n} \prod_{j=1}^{|y_i|} p(y_{i,j}|y_{i,<j},x_i)]^{\frac{1}{\scriptstyle{|y_1| + \dots + |y_n|}}} \end{equation} The criterion for the original translation direction is then again whether $P_{\text{tok}}(y|x) > P_{\text{tok}}(x|y)$. \subsection{On Directional Bias} \label{subsec:bn} A multilingual translation model (or a pair of bilingual models) may consistently assign higher probabilities in one translation direction than the other, thus biasing our prediction. This could be the result of training data imbalance, tokenization choices, or typological differences between the languages~\cite{cotterell-etal-2018-languages,bugliarello-etal-2020-easier}.\footnote{We note that \citet{bugliarello-etal-2020-easier} do not control for the original translation direction of their data. Re-examining their findings in view of our core hypothesis could be fruitful future work.} Ideally, we would want $P(\text{OTD}=X \to Y)$ to match true value, i.e.\ equal 0.5 in a data set where gold directions are balanced. To allow for a cross-lingual comparison of bias despite varying data balance, we measure bias via the difference in accuracy between the two gold directions. An unbiased model should have similar accuracy in both directions. An extremely biased model that always predicts $\text{OTD}=X \to Y$ would achieve perfect accuracy on the gold direction $X \to Y$, and zero accuracy on the reverse gold direction $Y \to X$. We will report the bias $B$ as follows: \begin{equation} \label{eq:avg_doc} B=|acc(X \rightarrow Y)-acc(Y \rightarrow X)| \end{equation} This yields a score that ranges from 0 (unbiased) to~1 (fully biased). \section{Experiments: Models and Data} \label{subsec:models} We experiment with three massively multilingual machine translation models: M2M-100-418M \citep{Fan2021}, SMaLL-100 \citep{Mohammadshahi2022}, and NLLB-200-1.3B \citep{Akula}. The models are architecturally similar, all being based on the Transformer architecture \cite{DBLP:journals/corr/VaswaniSPUJGKP17}, but they differ in the training data used, number of languages covered, and model size, and consequently in translation quality. The comparison allows conclusions about how sensitive our method is to translation quality -- NLLB-200-1.3B yields the highest quality of the three~\citep{tiedemann-de-gibert-2023-opus}, but we also highlight differences in data balance. English has traditionally been dominant in the amount of training data, and all three models aim to reduce this dominance in different ways, for example via large-scale back-translation \cite{sennrich-etal-2016-improving} in M2M-100-418M and NLLB-200-1.3B. SMaLL-100 is a distilled version of the M2M-100-12B model, and samples training data uniformly across language pairs. We test the approach on datasets from the WMT news/general translation tasks from WMT16~\cite{bojar-etal-2016-findings}, WMT22~\cite{kocmi-etal-2022-findings}, and WMT23~\cite{kocmi-etal-2023-findings}, which come annotated with document boundaries and the original language of each document. We also experiment with a part of the FLORES-101 dataset~\cite{goyal-etal-2022-flores} to test the approach on indirect translations, where English was the original language of both sides of the parallel text. We divide the data into subsets based on several categorisations: \begin{itemize} \item \textbf{Translation direction}: the WMT data span 14 translation directions and 3 scripts (Latin, Cyrillic, Chinese). \item \textbf{Type of translation}: we distinguish between human translations (\textbf{HT}), which consist of (possibly multiple) reference translations, neural translation systems (\textbf{NMT}) (WMT 2016; 2022–2023), and phrase-based or rule-based pre-neural systems from WMT 2016 (\textbf{pre-NMT}) as a third category. \item \textbf{Directness}: Given that the WMT data are \textit{direct} translations from on side of the parallel text to the other, we perform additional experiments on translations for~4 \mbox{FLORES} language pairs (Bengali\biarrow Hindi, Czech\biarrow Ukrainian, German\biarrow French, Xhosa\biarrow Zulu). This allows us to analyze the behavior of our approach on \textit{indirect} sentence pairs where both source and reference are translations from a source in a third language (in this case English). \end{itemize} We use HT and NMT translations from WMT16 as a validation set, and the remaining translations for testing our approach. Table~\ref{tab:hr_stats} shows test set statistics for our main experiments. { \setlength{\tabcolsep}{4pt} \begin{table}[h!] \centering \smaller[1] \begin{tabular}{lS[table-format=5.0]S[table-format=4.0]S[table-format=4.0]S[table-format=5.0]S[table-format=5.0]} \toprule & \multicolumn{2}{c}{source} & \multicolumn{3}{c}{target sentences}\\ direction & \multicolumn{1}{c}{sents} & \multicolumn{1}{c}{docs \footnotesize{$\geq10$}} & \multicolumn{1}{c}{HT} & \multicolumn{1}{c}{NMT} & \multicolumn{1}{c}{Pre-NMT} \\ \cmidrule(r){2-3} \cmidrule(l){4-6} cs\textrightarrow en & 1448 & 129 & 2896 & 15928 & 16489 \\ cs\textrightarrow uk & 3947 & 112 & 3947 & 49381 & \multicolumn{1}{c}{-} \\ de\textrightarrow en & 1984 & 121 & 3968 & 17856 & 13491 \\ de\textrightarrow fr & 1984 & 73 & 1984 & 11904 & \multicolumn{1}{c}{-} \\ en\textrightarrow cs & 4111 & 204 & 6148 & 51480 & 27000 \\ en\textrightarrow de & 2037 & 125 & 4074 & 18333 & 18000 \\ en\textrightarrow ru & 4111 & 174 & 4111 & 47295 & 15000 \\ en\textrightarrow uk & 4111 & 174 & 4111 & 41147 & \multicolumn{1}{c}{-} \\ en\textrightarrow zh & 4111 & 204 & 6148 & 57591 & \multicolumn{1}{c}{-} \\ fr\textrightarrow de & 2006 & 71 & 2006 & 14042 & \multicolumn{1}{c}{-} \\ ru\textrightarrow en & 3739 & 136 & 3739 & 40836 & 13482 \\ uk\textrightarrow cs & 2812 & 43 & 2812 & 33744 & \multicolumn{1}{c}{-} \\ uk\textrightarrow en & 3844 & 88 & 3844 & 40266 & \multicolumn{1}{c}{-} \\ zh\textrightarrow en & 3851 & 162 & 5726 & 52140 & \multicolumn{1}{c}{-} \\ \addlinespace Total & 44096 & 1816 & 55514 & 491943 & 103462 \\ \bottomrule \end{tabular} \caption{Statistics of the WMT data we use for our main experiments.} \label{tab:hr_stats} \end{table} } As an additional ``real-world'' dataset, we use 86 parallel sentences in German and English from a publicly documented plagiarism allegation case, in which translation-based plagiarism was the main focus \citep{ebbinghaus2022b, zenthoefer2022b}. The sentences were extracted from aligned excerpts of both the PhD thesis and the alleged source that were presented in a plagiarism analysis report \citep{Weber2022}. We extracted the sentences with OCR and manually checked for OCR errors. This dataset introduces an element of real-world complexity, as it involves translations that might not adhere strictly to professional standards nor does it have to fit into either of the aforementioned categories of translation strategy. It also provides a unique opportunity to test the robustness and adaptability of the translation direction detection system in a scenario that extends beyond controlled environments. \section{Results} \label{sec:results} \subsection{Sentence-level Classification} \label{subsec:hr} \begin{table*}[h!] \centering \begin{tabularx}{\textwidth}{@{}Xrrrrrrrrr@{}} \toprule & \multicolumn{3}{c}{M2M-100-418M} & \multicolumn{3}{c}{SMaLL-100} & \multicolumn{3}{c}{NLLB-200-1.3B} \\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} Language Pair & \(\rightarrow\) & \(\leftarrow\) & Avg. & \(\rightarrow\) & \(\leftarrow\) & Avg. & \(\rightarrow\) & \(\leftarrow\) & Avg. \\ \midrule HT~~en\biarrow cs & 68.85 & 65.19 & \textbf{67.02} & 63.08 & 69.37 & 66.22 & 54.05 & 68.78 & 61.42 \\ HT~~en\biarrow de & 56.38 & 67.44 & \textbf{61.91} & 58.62 & 63.10 & 60.86 & 59.70 & 47.76 & 53.73 \\ HT~~en\biarrow ru & 71.81 & 54.05 & 62.93 & 68.38 & 57.56 & \textbf{62.97} & 67.40 & 49.08 & 58.24 \\ HT~~en\biarrow uk & 71.95 & 69.56 & \textbf{70.76} & 70.49 & 68.83 & 69.66 & 47.21 & 64.00 & 55.61 \\ HT~~en\biarrow zh & 54.25 & 84.30 & \textbf{69.27} & 56.41 & 80.54 & 68.48 & 17.81 & 82.52 & 50.16 \\ HT~~cs\biarrow uk & 52.44 & 74.40 & 63.42 & 59.26 & 70.52 & \textbf{64.89} & 47.68 & 76.67 & 62.18 \\ HT~~de\biarrow fr & 89.72 & 50.50 & 70.11 & 85.48 & 57.68 & 71.58 & 86.29 & 62.16 & \textbf{74.23} \\ \addlinespace Macro-Avg. & 66.49 & 66.49 & \textbf{66.49} & 65.96 & 66.80 & 66.38 & 54.31 & 64.42 & 59.37 \\ \bottomrule \end{tabularx} \caption{Accuracy of three different models when detecting the translation direction of human-translated sentences. The first column per model reports accuracy for sentence pairs with left-to-right gold direction~(e.g., en\(\rightarrow\)cs), the second column for sentence pairs with the reverse gold direction~(e.g., en\(\leftarrow\)cs). The last column reports the macro-average across both directions. The best average result for each language pair is printed in bold. } \label{tab:full_results_ht} \end{table*} The sentence-level results are shown in Tables~\ref{tab:full_results_ht}~(HT),~\ref{tab:hr_results_nmt}~(NMT), and~\ref{tab:hr_results_pnmt}~(pre-NMT). Table~\ref{tab:full_results_ht} compares the results for human translations across all models. As a general result, we find that it is not NLLB, but M2M-100 that on average yields the best results for human translations on the sentence level, with SMaLL-100 a very close second. Hence, we report results for M2M-100 in further experiments, and report performance of the other models in the Appendix. A second result is that the translation detection works best for neural translations (75.0\% macro-average), second-best for human translations (66.5\% macro-average), and worst for pre-neural (41.5\% macro-average). The fact that performance for pre-neural systems is below chance level indicates that the NMT systems we use tend to assign low probabilities to the (often ungrammatical) outputs of pre-neural systems. In practice, one could pair our detection method with a monolingual model to identify such low-quality outputs. A third result is that accuracy varies by language pair. Among the language pairs tested, accuracy of M2M-100 ranges from 61.9\% (en$\leftrightarrow$de) to 70.8\% (en$\leftrightarrow$uk) for HT, and from 71.1\% (de$\leftrightarrow$fr) to 77.3\% (en$\leftrightarrow$zh) for NMT. \begin{table} \centering \begin{tabularx}{\columnwidth}{@{}Xrrr@{}} \toprule Language Pair & \(\rightarrow\) & \(\leftarrow\) & Avg. \\ \midrule NMT~~en\biarrow cs & 71.87 & 78.30 & 75.09 \\ NMT~~en\biarrow de & 62.69 & 85.27 & 73.98 \\ NMT~~en\biarrow ru & 76.91 & 71.98 & 74.44 \\ NMT~~en\biarrow uk & 75.01 & 79.31 & 77.16 \\ NMT~~en\biarrow zh & 64.29 & 90.29 & 77.29 \\ NMT~~cs\biarrow uk & 72.83 & 79.15 & 75.99 \\ NMT~~de\biarrow fr & 90.65 & 51.60 & 71.13 \\ \addlinespace Macro-Avg. & 73.46 & 76.56 & 75.01 \\ \bottomrule \end{tabularx} \caption{Accuracy of M2M-100 when detecting the translation direction of NMT-translated sentences.} \label{tab:hr_results_nmt} \end{table} \begin{table} \centering \begin{tabularx}{\columnwidth}{@{}Xrrr@{}} \toprule Language Pair & \(\rightarrow\) & \(\leftarrow\) & Avg. \\ \midrule Pre-NMT~~en\biarrow cs & 41.97 & 42.59 & 42.28 \\ Pre-NMT~~en\biarrow de & 33.33 & 54.30 & 43.81 \\ Pre-NMT~~en\biarrow ru & 37.98 & 39.01 & 38.49 \\ \addlinespace Macro-Avg. & 37.76 & 45.30 & 41.53 \\ \bottomrule \end{tabularx} \caption{Accuracy of M2M-100 when detecting the translation direction of sentences translated with \mbox{pre-NMT} systems.} \label{tab:hr_results_pnmt} \end{table} \begin{table} \centering \begin{tabularx}{\columnwidth}{@{}Xrr@{}} \toprule Language Pair & Ratio of \(\rightarrow\) & Ratio of \(\leftarrow\)\\ \midrule HT~~bn\biarrow hi & 66.80\,\% & 33.20\,\% \\ HT~~cs\biarrow uk & 42.69\,\% & 57.31\,\% \\ HT~~de\biarrow fr & 84.78\,\% & 15.22\,\% \\ HT~~xh\biarrow zu & 45.06\,\% & 54.94\,\% \\ \bottomrule \end{tabularx} \caption{Percentage of predictions by M2M-100 for each translation direction when neither is the true translation direction (English-original FLORES).} \label{tab:lr_results_ht} \end{table} \begin{table*} \centering \smaller \begin{tabularx}{\textwidth}{@{}rXrrr@{}} \toprule & Sentences & \(\rightarrow\) & \(\leftarrow\) & Rel. Difference \\ \midrule 1 & \textit{DE: Mit dem Programm "Guten Tag, liebes Glück" ist er seit 2020 auf Tour.} & & & \\ & EN: He has been on tour with the programme "Guten Tag, liebes Glück" since 2020. (HT) & 0.145 & 0.558 & \phantom{0}0.26 \\ & EN: He has been on tour since 2020. (NMT) & 0.272 & 0.092 & \textbf{\phantom{0}2.95} \\ \addlinespace 2 & \textit{EN: please try to perfprm thsi procedures"} & & & \\ & DE: bitte versuchen Sie es mit diesen Verfahren (HT) & 0.246 & 0.010 & \textbf{24.29} \\ & DE: Bitte versuchen Sie, diese Prozeduren durchzuführen" (NMT) & 0.586 & 0.025 & \textbf{23.59} \\ \addlinespace 3 & \textit{EN: If costs for your country are not listed, please contact us for a quote.} & & & \\ & DE: Wenn die Kosten für Ihr Land nicht aufgeführt sind, wenden Sie sich für einen Kostenvoranschlag an uns. (HT) & 0.405 & 0.525 & \phantom{0}0.77 \\ & DE: Wenn die Kosten für Ihr Land nicht aufgeführt sind, kontaktieren Sie uns bitte für ein Angebot. (NMT) & 0.697 & 0.585 & \textbf{\phantom{0}1.19}\\ \addlinespace 4 & \textit{EN: Needless to say, it was chaos.} & & & \\ & DE: Es war natürlich ein Chaos. (HT) & 0.119 & 0.372 & \phantom{0}0.32 \\ & DE: Unnötig zu sagen, es war Chaos. (NMT) & 0.755 & 0.591 & \textbf{\phantom{0}1.28} \\ \addlinespace 5 & \textit{DE: Mit freundlichen Grüßen} & & & \\ & FR: Cordialement (HT) & 0.026 & 0.107 & \phantom{0}0.24 \\ & FR: Sincèrement (NMT) & 0.015 & 0.083 & \phantom{0}0.18 \\ & FR: Sincères amitiés (NMT) & 0.062 & 0.160 & \phantom{0}0.39 \\ & FR: Avec mes meilleures salutations (NMT) & 0.215 & 0.353 & \phantom{0}0.61 \\ \bottomrule \end{tabularx} \caption{Qualitative comparison of sentence pairs. Source sentences are marked in \textit{italics}, and gold direction is always \(\rightarrow\). Relative probability difference $>1$ indicates that translation direction was successfully identified, and is highlighted in bold. The probabilities are generated by M2M-100.} \label{tab:quali_analysis} \end{table*} \subsection{Directional Bias} Analyzing Table~\ref{tab:full_results_ht} for directional bias, we observe that M2M-100 is especially biased in the directions~de$\to$fr ($B=0.39$) and zh$\to$en ($B=0.30$). While we expected a general bias towards x$\to$en due to the dominance of English in training data, we find that the direction and strength of the bias vary across language pairs and models. An extreme result is NLLB for en\biarrow zh, with $B=0.64$ towards zh$\to$en. We leave it to future work to explore whether bias can be reduced via different normalization, a language pair specific bias correction term, or different model training. At present, our recommendation is to be mindful in the choice of NMT model and to perform validation before trusting the results of a previously untested NMT model for translation direction detection. \subsection{Indirect Translations} With an experiment on the English-original FLORES data, we evaluate our approach on the special case that neither side is the original. As shown in Table~\ref{tab:lr_results_ht}, our approach yields relatively balanced predictions on human translations for Czech\biarrow Ukrainian and Xhosa\biarrow Zulu, predicting each direction a roughly equal number of times. For German\biarrow French, we again find that the model predicts the de\(\rightarrow\)fr direction much more frequently than the reverse direction, reflecting the high directional bias of the model for this language pair. \subsection{Qualitative Analysis} \label{subsec:ea} A qualitative comparison of sources and translations, as illustrated in Table \ref{tab:quali_analysis}, reveals that factors such as normalization, simplification, word order interference, and sentence length influence the detection of translation direction. In Example 1, an English HT translates the German source fully, while the NMT omits half of the content, showing a high degree of simplification. Our method recognizes the simplified NMT version as a translation but not the more literal HT. Example 2 demonstrates varying degrees of normalization: the first translation corrects typos, whereas the second shows stronger normalization with added capitalization and punctuation but also exhibits more interference due to closer adherence to the source's lexical choice and copying the trailing quotation mark. Both translations are detected, but the second presents a higher probability difference. The third example indicates that translations exhibiting normalization, simplification, and interference to a higher degree are more likely to be identified. In Example 4, source language interference in terms of word order and choice significantly impacts the detection; the more literal translation mirroring the source's word order is recognized, while the more liberal translation is not. Finally, Example 5 highlights challenges with short sentences: The German phrase \textit{Mit freundlichen Grüßen} is fairly standardized, while its French equivalents can vary in use and context, adding to the ambiguity and affecting the probability distribution in NMT. Hence, our approach fails to identify any of the French translations without additional context. Misclassified short sentences as in Example 5 are not a rarity in our experiments. Our findings show that reliable detection of translation direction, with an average accuracy exceeding 50\%, is consistently attained for all language pairs we tested starting at a sentence length between 50 and 60 characters.\footnote{We used SMaLL-100 for this analysis.} Additionally, we observed a trend where the accuracy of direction detection increases as the length of the sentences grows. This aligns with previous unsupervised approaches, which also documented higher accuracy the larger the text chunks that were used, although there, reliable results were reported on a more extreme scale starting from text chunks with a length of 250 tokens \cite{Rabinovich2015}. \subsection{Document-Level Classification} \label{subsec:dl} \noindent{}Accuracy scores for document-level results by M2M-100 (best-performing system at sentence level) are presented in Tables \ref{tab:doc_nmt} (NMT) and \ref{tab:doc_ht} (HT). We consider documents with at least 10 sentences, and language pairs with at least 100 such documents in both directions. The table shows that the sentence-level results are amplified at the document level. Translation direction detection accuracy for human translations reaches a macro-average of 80.5\%, while the document-level accuracy for translations generated by NMT systems reaches 95.5\% on average. \begin{table} \centering \begin{tabularx}{\columnwidth}{@{}Xrrr@{}} \toprule Language Pair & \(\rightarrow\) & \(\leftarrow\) & Avg. \\ \midrule HT~~en\biarrow cs & 88.24 & 80.62 & 84.43 \\ HT~~en\biarrow de & 70.40 & 88.43 & 79.41 \\ HT~~en\biarrow ru & 96.55 & 54.41 & 75.48 \\ HT~~en\biarrow zh & 67.65 & 97.53 & 82.59 \\ \addlinespace Macro-Avg. & 80.71 & 80.25 & 80.48 \\ \bottomrule \end{tabularx} \caption{Document-level classification: Accuracy of M2M-100 when detecting the translation direction of human translations at the document level~(documents with $\geq$ 10 sentences).} \label{tab:doc_ht} \end{table} \begin{table} \centering \begin{tabularx}{\columnwidth}{@{}Xrrr@{}} \toprule Language Pair & \(\rightarrow\) & \(\leftarrow\) & Avg. \\ \midrule NMT~~en\biarrow cs & 96.78 & 99.27 & 98.03 \\ NMT~~en\biarrow de & 91.06 & 99.18 & 95.12 \\ NMT~~en\biarrow ru & 98.39 & 94.60 & 96.50 \\ NMT~~en\biarrow zh & 86.33 & 98.62 & 92.47 \\ \addlinespace Macro-Avg. & 93.14 & 97.92 & 95.53 \\ \bottomrule \end{tabularx} \caption{Document-level classification: Accuracy of M2M-100 when detecting the translation direction of NMT translations at the document level~(documents with $\geq$ 10 sentences).} \label{tab:doc_nmt} \end{table} \subsection{Application to Real-World Forensic Case} \label{subsec:rw} Finally, we apply our approach to the 86 segment pairs of the plagiarism allegation case. We treat the segments as a single document and classify them with M2M-100 using the document-level approach defined in Section~\ref{subsec:doc_level}. We find that according to the model, it is more probable that the English segments are translations of the German segments than vice versa. We validate our analysis using a permutation test. The null hypothesis is that the model probabilities for both potential translation directions are drawn from the same distribution. In order to perform the permutation test, we swap the segment-level probabilities $P(y_i|x_i)$ and $P(x_i|y_i)$ for randomly selected segments $i$ before calculating the difference between the document-level probabilities $P(y|x)$ and $P(x|y)$. We repeat this process 10,000 times and calculate the $p$-value as twice the proportion of permutations that yield a difference at least as extreme as the observed difference. Obtaining a $p$-value of 0.0002, we reject the null hypothesis and conclude that our approach makes a statistically significant prediction that the English segments are translated from the German segments. % Predicted direction: de→en % 86 sentence pairs % de→en: 0.513 % en→de: 0.487 % p-value: 0.00019998000199980003 Overall, our analysis supports the hypothesis that German is indeed the language of origin in this real-world dataset (\textit{forgery hypothesis}; Figure~\ref{fig:figure2}). Nevertheless, we recommend that additional evidence be considered before drawing a final conclusion, given the error rate of 5–21\% that we observed in our earlier experiments on English--German WMT documents. \section{Conclusion} \label{sec:conclusion} We proposed a novel approach to detecting the translation direction of parallel sentences, using only an off-the-shelf multilingual NMT system. Experiments on WMT data showed that our approach, without any task-specific supervision, is able to detect the translation direction of NMT-translated sentences with relatively high accuracy. The accuracy increases to 96\% if the classifier is provided with at least 10 sentences per document. We also found a robust accuracy for translations by human translators. Finally, we applied our approach to a real-world forensic case and found that it supports the hypothesis that the English book is a forgery. Future work should explore whether our approach can be improved by mitigating directional bias of the NMT model used. Another open question is to what degree our approach will generalize to document-level translation and to translation with large language models. \section*{Limitations} While the proposed approach is simple and effective, there are some limitations that might make its application more difficult in practice: \paragraph{Sentence alignment:} We performed our experiments on sentence-aligned parallel data, where each sentence in one language has a corresponding sentence in the other language. In practice, parallel documents might have one-to-many or many-to-many alignments, which would require custom pre-processing or the use of models that can directly estimate document-level probabilities. \paragraph{Translation strategies:} Our main experiments used academic data from the WMT translation task, where care is taken to ensure that different translation methods are clearly separated: NMT translations did not undergo human post-editing, and human translators were instructed to work from scratch. In practice, parallel documents might have undergone a mixture of translation strategies, which makes it more difficult to predict the accuracy of our approach. Specifically, we found that our approach has less-than-chance accuracy on pre-NMT translations. Applying our approach to web-scale parallel corpus filtering might therefore require additional filtering steps to exclude translations of lower quality. \paragraph{Low-resource languages:} Our experiments required test data for both translation directions, which limited the set of languages we could test. While the community has created reference translations for many low-resource languages, the translation directions are usually not covered symmetrically. For example, the test set of FLORES~\cite{goyal-etal-2022-flores} has been translated from English into many languages, but not vice versa. Thus, apart from Table~\ref{tab:lr_results_ht}, we have not tested our approach on low-resource languages, and it is possible that the accuracy of our approach is lower for such languages, in parallel with the lower translation quality of NMT models for low-resource languages. \section*{Ethical Considerations} Translation direction detection has a potential application in forensic linguistics, where reliable accuracy is crucial. Our experiments show that accuracy can vary depending on the language pair, the NMT model used for detection, as well as the translation strategy and the length of the input text. Before our approach is applied in a forensic setting, we recommend that its accuracy be validated in the context of the specific use case. In Section~\ref{subsec:rw}, we tested our approach on a real-world instance of such a case, where one party has been accused of plagiarism, but the purported original is now suspected to be a forgery. This case is publicly documented and has been widely discussed in German-speaking media~(e.g.,~\citealt{ebbinghaus2022b, zenthoefer2022b, dewiki:238411824}). For this experiment, we used 86 sentence pairs from the two (publicly available) books that are the subject of this case. However, the case has not been definitively resolved, as legal proceedings are still ongoing. No author of this paper is involved in the legal proceedings. We therefore refrain from publicly releasing the dataset of sentence pairs we used for this experiment. \section*{Acknowledgements} JV and RS acknowledge funding by the Swiss National Science Foundation (project MUTAMUR; no.~213976). \bibliography{bibliography} \appendix \onecolumn \section{Comparison of Models (Sentence Level)} \begin{table}[h!] \centering \begin{tabularx}{\textwidth}{@{}Xrrrrrrrrr@{}} \toprule & \multicolumn{3}{c}{M2M-100-418M} & \multicolumn{3}{c}{SMaLL-100} & \multicolumn{3}{c}{NLLB-200-1.3B} \\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} Language Pair & \(\rightarrow\) & \(\leftarrow\) & Avg. & \(\rightarrow\) & \(\leftarrow\) & Avg. & \(\rightarrow\) & \(\leftarrow\) & Avg. \\ \midrule NMT~~en\biarrow cs & 71.87 & 78.30 & \textbf{75.09} & 67.96 & 79.11 & 73.53 & 62.42 & 77.89 & 70.16 \\ NMT~~en\biarrow de & 62.69 & 85.27 & 73.98 & 67.05 & 80.96 & \textbf{74.00} & 73.51 & 74.36 & 73.93 \\ NMT~~en\biarrow ru & 76.91 & 71.98 & \textbf{74.44} & 74.34 & 72.30 & 73.32 & 78.87 & 57.15 & 68.01 \\ NMT~~en\biarrow uk & 75.01 & 79.31 & \textbf{77.16} & 73.74 & 78.27 & 76.01 & 58.48 & 79.56 & 69.02 \\ NMT~~en\biarrow zh & 64.29 & 90.29 & \textbf{77.29} & 66.54 & 87.62 & 77.08 & 25.42 & 90.54 & 57.98 \\ NMT~~cs\biarrow uk & 72.83 & 79.15 & 75.99 & 77.33 & 76.04 & \textbf{76.68} & 70.55 & 76.59 & 73.57 \\ NMT~~de\biarrow fr & 90.65 & 51.60 & 71.13 & 86.83 & 59.19 & \textbf{73.01} & 79.44 & 57.58 & 68.51 \\ \addlinespace Macro-Avg. & 73.46 & 76.56 & \textbf{75.01} & 73.40 & 76.21 & 74.81 & 64.10 & 73.38 & 68.74 \\ \bottomrule \end{tabularx} \caption{Accuracy of three different models when detecting the translation direction of NMT-translated sentences. The first column reports accuracy for sentence pairs with left-to-right gold direction~(e.g., en\(\rightarrow\)cs), the second column for sentence pairs with the reverse gold direction~(e.g., en\(\leftarrow\)cs). The last column reports the macro-average across both directions. The best result for each language pair is printed in bold. } \label{tab:full_results_nmt} \end{table} \vspace{0.5cm} \begin{table}[h!] \centering \begin{tabularx}{\textwidth}{@{}Xrrrrrrrrr@{}} \toprule & \multicolumn{3}{c}{M2M-100-418M} & \multicolumn{3}{c}{SMaLL-100} & \multicolumn{3}{c}{NLLB-200-1.3B} \\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} Language Pair & \(\rightarrow\) & \(\leftarrow\) & Avg. & \(\rightarrow\) & \(\leftarrow\) & Avg. & \(\rightarrow\) & \(\leftarrow\) & Avg. \\ \midrule Pre-NMT~~en\biarrow cs & 41.97 & 42.59 & \textbf{42.28} & 37.88 & 45.42 & 41.65 & 16.36 & 35.34 & 25.85 \\ Pre-NMT~~en\biarrow de & 33.33 & 54.30 & \textbf{43.81} & 36.18 & 48.57 & 42.37 & 18.73 & 26.20 & 22.47 \\ Pre-NMT~~en\biarrow ru & 37.98 & 39.01 & \textbf{38.49} & 35.71 & 39.19 & 37.45 & 19.71 & 16.04 & 17.88 \\ \addlinespace Macro-Avg. & 37.76 & 45.30 & \textbf{41.53} & 36.59 & 44.39 & 40.49 & 18.27 & 25.86 & 22.07 \\ \bottomrule \end{tabularx} \caption{Accuracy of three different models when detecting the translation direction of sentences translated with \mbox{pre-NMT} systems. The best result for each language pair is printed in bold. } \label{tab:full_results_prenmt} \end{table} \clearpage \section{Comparison of Models (Document Level)} \begin{table}[h!] \centering \begin{tabularx}{\textwidth}{@{}Xrrrrrrrrr@{}} \toprule & \multicolumn{3}{c}{M2M-100-418M} & \multicolumn{3}{c}{SMaLL-100} & \multicolumn{3}{c}{NLLB-200-1.3B} \\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} Language Pair & \(\rightarrow\) & \(\leftarrow\) & Avg. & \(\rightarrow\) & \(\leftarrow\) & Avg. & \(\rightarrow\) & \(\leftarrow\) & Avg. \\ \midrule HT~~en\biarrow cs & 88.24 & 80.62 & \textbf{84.43} & 78.92 & 89.15 & 84.03 & 55.88 & 86.05 & 70.96 \\ HT~~en\biarrow de & 70.40 & 88.43 & \textbf{79.41} & 73.60 & 82.64 & 78.12 & 68.00 & 45.45 & 56.73 \\ HT~~en\biarrow ru & 96.55 & 54.41 & 75.48 & 95.40 & 61.03 & \textbf{78.22} & 82.18 & 39.71 & 60.94 \\ HT~~en\biarrow zh & 67.65 & 97.53 & 82.59 & 71.57 & 96.30 & \textbf{83.93} & 3.92 & 96.91 & 50.42 \\ \addlinespace Macro-Avg. & 80.71 & 80.25 & 80.48 & 79.87 & 82.28 & \textbf{81.08} & 52.50 & 67.03 & 59.76 \\ \bottomrule \end{tabularx} \caption{Accuracy of three different models when detecting the translation direction of human-translated documents. The best result for each language pair is printed in bold. } \label{tab:full_results_doc_ht} \end{table} \vspace{0.5cm} \begin{table}[h!] \centering \begin{tabularx}{\textwidth}{@{}Xrrrrrrrrr@{}} \toprule & \multicolumn{3}{c}{M2M-100-418M} & \multicolumn{3}{c}{SMaLL-100} & \multicolumn{3}{c}{NLLB-200-1.3B} \\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} Language Pair & \(\rightarrow\) & \(\leftarrow\) & Avg. & \(\rightarrow\) & \(\leftarrow\) & Avg. & \(\rightarrow\) & \(\leftarrow\) & Avg. \\ \midrule NMT~~en\biarrow cs & 96.78 & 99.27 & \textbf{98.03} & 94.64 & 97.81 & 96.22 & 86.06 & 95.62 & 90.84 \\ NMT~~en\biarrow de & 91.06 & 99.18 & 95.12 & 93.85 & 97.12 & 95.49 & 96.65 & 95.06 & \textbf{95.85} \\ NMT~~en\biarrow ru & 98.39 & 94.60 & \textbf{96.50} & 97.05 & 95.12 & 96.08 & 99.20 & 72.75 & 85.97 \\ NMT~~en\biarrow zh & 86.33 & 98.62 & 92.47 & 90.62 & 98.16 & \textbf{94.39} & 13.14 & 98.39 & 55.76 \\ \addlinespace Macro-Avg. & 93.14 & 97.92 & 95.53 & 94.04 & 97.05 & \textbf{95.55} & 73.76 & 90.46 & 82.11 \\ \bottomrule \end{tabularx} \caption{Accuracy of three different models when detecting the translation direction of documents translated with NMT systems. The best result for each language pair is printed in bold. } \label{tab:full_results_doc_nmt} \end{table} \vfill \section{Example for Forensic Dataset} \begin{table*}[h!] \centering \begin{tabularx}{\textwidth}{@{}Xrrr@{}} \toprule & DE\(\rightarrow\)EN & EN\(\rightarrow\)DE \\ \midrule \textit{DE: Nach 30 sec. wurde das trypsinhaltige PBS abgegossen, und die Zellen kamen für eine weitere Minute in den Brutschrank.} & & \\ \addlinespace EN: After 30 sec, the trypsin-containing PBS was poured off, and the cells were placed in the incubator for another minute. & 0.442 & 0.285 \\ \bottomrule \end{tabularx} \caption{Example of two segments from the forensic case~(Section~\ref{subsec:rw}). M2M-100 assigns a higher probability to the English sentence conditioned on the German sentence than vice versa, suggesting that the English sentence is more likely to be a translation of the German sentence.} \label{tab:examples-colchicine} \end{table*} \vfill \clearpage \section{Data Statistics} \label{sec:appendix_data} \begin{table}[h!] \centering \begin{tabular}{ccS[table-format=5.0]S[table-format=4.0]S[table-format=4.0]S[table-format=5.0]S[table-format=5.0]} \toprule & & \multicolumn{2}{c}{source} & \multicolumn{3}{c}{target sentences}\\ testset & direction & \multicolumn{1}{c}{sents} & \multicolumn{1}{c}{docs $\geq10$} & \multicolumn{1}{c}{HT} & \multicolumn{1}{c}{NMT} & \multicolumn{1}{c}{Pre-NMT} \\ \cmidrule(r){3-4} \cmidrule(l){5-7} WMT16 & cs\textrightarrow en & 1499 & 40 & \textit{1499} & \textit{1499} & 16489 \\ WMT16 & de\textrightarrow en & 1499 & 55 & \textit{1499} & \textit{1499} & 13491 \\ WMT16 & en\textrightarrow cs & 1500 & 54 & \textit{1500} & \textit{3000} & 27000 \\ WMT16 & en\textrightarrow de & 1500 & 54 & \textit{1500} & \textit{4500} & 18000 \\ WMT16 & en\textrightarrow ru & 1500 & 54 & \textit{1500} & \textit{3000} & 15000 \\ WMT16 & ru\textrightarrow en & 1498 & 52 & \textit{1498} & \textit{1498} & 13482 \\ \midrule WMT22 & cs\textrightarrow en & 1448 & 129 & 2896 & 15928 & \multicolumn{1}{c}{-} \\ WMT22 & cs\textrightarrow uk & 1930 & 13 & 1930 & 23160 & \multicolumn{1}{c}{-} \\ WMT22 & de\textrightarrow en & 1984 & 121 & 3968 & 17856 & \multicolumn{1}{c}{-} \\ WMT22 & de\textrightarrow fr & 1984 & 73 & 1984 & 11904 & \multicolumn{1}{c}{-} \\ WMT22 & en\textrightarrow cs & 2037 & 125 & 4074 & 20370 & \multicolumn{1}{c}{-} \\ WMT22 & en\textrightarrow de & 2037 & 125 & 4074 & 18333 & \multicolumn{1}{c}{-} \\ WMT22 & en\textrightarrow ru & 2037 & 95 & 2037 & 22407 & \multicolumn{1}{c}{-} \\ WMT22 & en\textrightarrow uk & 2037 & 95 & 2037 & 18333 & \multicolumn{1}{c}{-} \\ WMT22 & en\textrightarrow zh & 2037 & 125 & 4074 & 26481 & \multicolumn{1}{c}{-} \\ WMT22 & fr\textrightarrow de & 2006 & 71 & 2006 & 14042 & \multicolumn{1}{c}{-} \\ WMT22 & ru\textrightarrow en & 2016 & 73 & 2016 & 20160 & \multicolumn{1}{c}{-} \\ WMT22 & uk\textrightarrow cs & 2812 & 43 & 2812 & 33744 & \multicolumn{1}{c}{-} \\ WMT22 & uk\textrightarrow en & 2018 & 22 & 2018 & 20180 & \multicolumn{1}{c}{-} \\ WMT22 & zh\textrightarrow en & 1875 & 102 & 3750 & 22500 & \multicolumn{1}{c}{-} \\ \midrule WMT23 & cs\textrightarrow uk & 2017 & 99 & 2017 & 26221 & \multicolumn{1}{c}{-} \\ WMT23 & en\textrightarrow cs & 2074 & 79 & 2074 & 31110 & \multicolumn{1}{c}{-} \\ WMT23 & en\textrightarrow ru & 2074 & 79 & 2074 & 24888 & \multicolumn{1}{c}{-} \\ WMT23 & en\textrightarrow uk & 2074 & 79 & 2074 & 22814 & \multicolumn{1}{c}{-} \\ WMT23 & en\textrightarrow zh & 2074 & 79 & 2074 & 31110 & \multicolumn{1}{c}{-} \\ WMT23 & ru\textrightarrow en & 1723 & 63 & 1723 & 20676 & \multicolumn{1}{c}{-} \\ WMT23 & uk\textrightarrow en & 1826 & 66 & 1826 & 20086 & \multicolumn{1}{c}{-} \\ WMT23 & zh\textrightarrow en & 1976 & 60 & 1976 & 29640 & \multicolumn{1}{c}{-} \\ \bottomrule \end{tabular} \caption{Detailed data statistics for the main experiments. Cursive: data used for validation. } \label{tab:data_stats} \end{table} \vspace{0.5cm} { \begin{table}[h!] \centering \begin{tabular}{lS[table-format=5.0]S[table-format=4.0]S[table-format=4.0]S[table-format=5.0]} \toprule Direction & \multicolumn{1}{c}{Sentence pairs} \\ \midrule bn\biarrow hi & 1012 \\ cs\biarrow uk & 1012 \\ de\biarrow fr & 1012 \\ xh\biarrow zu & 1012 \\ \bottomrule \end{tabular} \caption{Statistics for the FLORES-101 (devtest) datasets, where both sides are human translations from English.} \label{tab:flores_stats} \end{table} } \end{document}
Can we detect the translation direction for Czech-English better for human translation or neural machine translation?
neural translation
SELECT avg detection scores for (en-cs) for human translations from Table 1 SELECT avg detection scores for (en-cs) for neural machine translations from Table 2 IF human detection score > NMT detection score RETURN human translation ELSE RETURN neural machine translation
2410.21272
[["\\begin{table}[h]\n"," \\centering\n"," \\caption{Accuracy of the analyzed models on arithm(...TRUNCATED)
[["\\begin{figure}[t]\n"," \\centering\n"," \\includegraphics[width=0.96\\textwidth]{figures/o(...TRUNCATED)
"\n\\documentclass{article} %\n\\usepackage{arxiv_preprint,times}\n\n\n\\usepackage{amsmath,amsfonts(...TRUNCATED)
Calculate the average accuracy for addition and division operations for each model.
Llama3-8B: 0.945, Llama3-70B: 0.85, Pythia-6.9B: 0.525, GPT-J: 0.435
"SELECT all models\nLOOP for each model\n COMPUTE average_accuracy = (accuracy for + operation + (...TRUNCATED)
2410.21272
[["\\begin{table}[h]\n"," \\centering\n"," \\caption{Accuracy of the analyzed models on arithm(...TRUNCATED)
[["\\begin{figure}[t]\n"," \\centering\n"," \\includegraphics[width=0.96\\textwidth]{figures/o(...TRUNCATED)
"\n\\documentclass{article} %\n\\usepackage{arxiv_preprint,times}\n\n\n\\usepackage{amsmath,amsfonts(...TRUNCATED)
Which operation reduced the average accuracy of Llama3-70B model?
divison
"SELECT Llama3-70B model\nCOMPUTE argmin accuracy across all operations\nRETURN operation with lowes(...TRUNCATED)
2410.21272
[["\\begin{table}[h]\n"," \\centering\n"," \\caption{Accuracy of the analyzed models on arithm(...TRUNCATED)
[["\\begin{figure}[t]\n"," \\centering\n"," \\includegraphics[width=0.96\\textwidth]{figures/o(...TRUNCATED)
"\n\\documentclass{article} %\n\\usepackage{arxiv_preprint,times}\n\n\n\\usepackage{amsmath,amsfonts(...TRUNCATED)
Which model has the highest average of the multiplication and division operations?
Llama3-8B
"SELECT all models\nLOOP for each model\n COMPUTE average_accuracy = (accuracy for × operation +(...TRUNCATED)
2205.15544
[["\\begin{table}[t]\n"," \\begin{center}\n"," \\caption{Comparison of BLEU scores for differe(...TRUNCATED)
[["\\begin{figure}\n"," \\begin{center}\n"," \\centerline{\\includegraphics[width=0.9\\textwid(...TRUNCATED)
"\\documentclass{article}\n\n\n% if you need to pass options to natbib, use, e.g.:\n\\PassOptionsToP(...TRUNCATED)
What is the average Nepali translation BLEU score for each method?
7.2, 9.05, 10.0, 13.6
"SELECT language pairs containing Ne (Nepali)\nLOOP for each method\n COMPUTE average_Nepali_BLEU(...TRUNCATED)
2205.15544
[["\\begin{table}[t]\n"," \\begin{center}\n"," \\caption{Comparison of BLEU scores for differe(...TRUNCATED)
[["\\begin{figure}\n"," \\begin{center}\n"," \\centerline{\\includegraphics[width=0.9\\textwid(...TRUNCATED)
"\\documentclass{article}\n\n\n% if you need to pass options to natbib, use, e.g.:\n\\PassOptionsToP(...TRUNCATED)
What are the languages mentioned in the table?
English, Nepali, Sinhala, Hindi, Gujarati, Finnish, Estonian, Latvian, Kazakh
SELECT all unique languages mentioned in the table RETURN list of languages
2205.15544
[["\\begin{table}[t]\n"," \\begin{center}\n"," \\caption{Comparison of BLEU scores for differe(...TRUNCATED)
[["\\begin{figure}\n"," \\begin{center}\n"," \\centerline{\\includegraphics[width=0.9\\textwid(...TRUNCATED)
"\\documentclass{article}\n\n\n% if you need to pass options to natbib, use, e.g.:\n\\PassOptionsToP(...TRUNCATED)
Which language family has the highest average BLEU score using our method?
Uralic
"SELECT Ours method\n\n\nLOOP for each language family\n COMPUTE average BLEU score across all la(...TRUNCATED)
1903.00089
[["\\begin{table*}[!ht]\n","\\begin{center}\n","\\setlength\\tabcolsep{4.9pt}\n","\\begin{tabular}{l(...TRUNCATED)
[["\\begin{table}[!ht]\n","\\begin{center}\n","\\begin{small}\n","\\setlength\\tabcolsep{3.8pt}\n","(...TRUNCATED)
" %\n% File naacl2019.tex\n%\n%% Based on the style files for ACL 2018 and NAACL 2018, which were\n%(...TRUNCATED)
Which translation direction has a higher BLEU score for Italian?
X→En
"SELECT X to EN BLEU scores for Italian from Table 1\nSELECT EN to X BLEU scores for Italian from Ta(...TRUNCATED)
1911.02782
[["\\begin{table*}[t!]\n"," \\centering\n"," \\scalebox{0.81}{\n"," \\begin{tabular}{p{(...TRUNCATED)
[["\\begin{figure}[t!]\n"," \\centering\n"," \\includegraphics[width=\\columnwidth]{gorc_links(...TRUNCATED)
"%\n% File acl2020.tex\n%\n%% Based on the style files for ACL 2020, which were\n%% Based on the sty(...TRUNCATED)
In which domain does S2ORC outperform SCIBERT in most of the task?
Biomed
"SELECT S2ORC and SCIBERT scores\n\n\nLOOP for each domain\n LOOP for each dataset in the domain\(...TRUNCATED)
End of preview. Expand in Data Studio

Dataset Card for SciTaRC

Dataset Summary

SciTaRC (Scientific Table Reasoning and Computation) is an expert-authored benchmark designed to evaluate Large Language Models (LLMs) on complex question-answering tasks over real-world scientific tables.

Unlike existing benchmarks that focus on simple table-text integration or single-step operations, SciTaRC focuses on composite reasoning—requiring models to execute interdependent operations such as descriptive analysis, complex arithmetic, and ranking across detailed scientific tables. To facilitate granular diagnosis of model failures, every instance includes an expert-annotated pseudo-code plan that explicitly outlines the algorithmic reasoning steps required to reach the correct answer.

Dataset Structure

The dataset is provided as a single test split containing 370 expert-annotated instances.

Data Instances

A typical instance contains the question, the ground truth answer, the expert-authored pseudo-code plan, the LaTeX representations of the relevant tables, and the full text of the source paper.

Data Fields

Each JSON object in the dataset contains the following fields:

  • paper (string): The arXiv ID of the source scientific paper (e.g., "2401.06769").
  • question (string): The complex, multi-step question asked about the tabular data.
  • answer (string): The ground-truth answer.
  • plan (string): The expert-authored pseudo-code blueprint. It explicitly structures the logical and mathematical operations required to solve the question (e.g., SELECT, LOOP, COMPUTE).
  • relevant_tables (list of lists of strings): The exact LaTeX source code for the specific table(s) required to answer the question.
  • tables (list of lists of strings): The LaTeX source code for all tables and figures extracted from the paper.
  • fulltext (string): The complete LaTeX source text of the original scientific paper, providing full context.

Citation

If you use this dataset, please cite the original paper:

@misc{scitarc2026,
  title={SciTaRC: Benchmarking QA on Scientific Tabular Data that Requires Language Reasoning and Complex Computation},
  author={Wang, Hexuan and Ren, Yaxuan and Bommireddypalli, Srikar and Chen, Shuxian and Prabhudesai, Adarsh and Baral, Elina and Zhou, Rongkun and Koehn, Philipp},
  year={2026},
  url={[Insert ArXiv URL here]}
}
Downloads last month
10