Our study uses n-gram statistics as features for classification. In particular, we compare support vector machines, Naïve Bayesian and difference-in-frequency classifiers on different amounts of input text and various values of n for different amounts of training data. For a fixed value of n the support vector machines generally outperform the other classifiers, but the simpler classifiers are able to handle larger values of n. The additional computational complexity of training the support vector machine classifier may therefore not be justified, given the importance of using a large value of n, except possibly for small input windows when limited training data is available.
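To illustrate the feature representation, the following is a minimal sketch of character n-gram profiling combined with a difference-in-frequency classifier, which assigns a test string to the language whose training profile has the smallest summed absolute frequency difference. All function names here are illustrative, not from the study itself.

```python
from collections import Counter

def char_ngrams(text, n):
    """Extract overlapping character n-grams from a string."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def ngram_profile(text, n):
    """Relative-frequency profile of character n-grams."""
    counts = Counter(char_ngrams(text, n))
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def freq_difference(profile_a, profile_b):
    """Sum of absolute frequency differences over the union of n-grams."""
    grams = set(profile_a) | set(profile_b)
    return sum(abs(profile_a.get(g, 0.0) - profile_b.get(g, 0.0)) for g in grams)

def classify(text, training_profiles, n=3):
    """Assign the language whose training profile is closest to the test profile."""
    test_profile = ngram_profile(text, n)
    return min(training_profiles,
               key=lambda lang: freq_difference(test_profile, training_profiles[lang]))
```

The same n-gram profiles can equally serve as feature vectors for a support vector machine or Naïve Bayesian classifier; the difference-in-frequency approach is shown here only because it is the simplest of the three to sketch.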
Our training and testing corpora consisted of text from the 11 official languages of South Africa, which span several distinct language families. We find that it is much more difficult to discriminate between languages within the same family than between languages in different families. The overall accuracy on short input strings is low for this reason, but for input strings of 100 characters or more there is only slight confusion within families and accuracies as high as 99.4% are achieved. For the smallest input strings studied here, which consist of 15 characters, the best accuracy achieved is only 83%, but when languages in the same family are grouped together, this corresponds to a usable 95.1% accuracy.
The relationship between the amount of training data and the accuracy achieved is found to depend on the window size: for the largest window (300 characters) about 400 000 characters are sufficient to achieve close-to-optimal accuracy, whereas improvements in accuracy are found even beyond 1.6 million characters of training data for smaller windows.
Our study concludes that the interaction between the factors studied significantly affects classification accuracy; therefore, to assure credible and comparable results, these factors need to be controlled in any text-based language identification task.