Purpose: Corpus analyses of spontaneous language fragments of varying length provide useful insights into the language changes caused by brain damage, such as that caused by some forms of dementia. Sample size is an important experimental parameter to consider when designing spontaneous language analysis studies, because sample length influences the confidence levels of the analyses. Machine learning approaches often favor using as much language as is available, whereas language evaluation in a clinical setting is often based on truncated samples to minimize annotation labor and to limit any discomfort for participants. This article investigates, using Bayesian estimation of machine-learned models, what the ideal text length should be to minimize model uncertainty.

Method: We use the Stanford parser to extract linguistic variables and train a statistical model to distinguish samples by speakers with no brain damage from samples by speakers with probable Alzheimer's disease. We compare the results to previously published models that used CLAN for the linguistic analysis.

Results: We report the uncertainty around six individual variables and its relation to sample length. The same model with linguistic variables, used in all three experiments, predicts group membership better than a model without them. One variable (concept density) is more informative when measured with the Stanford tools than when measured with CLAN.

Conclusion: For our corpus of German speech, the optimal sample length is around 700 words; longer samples do not provide more information.
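The intuition behind the relationship between sample length and uncertainty can be sketched with a small Monte Carlo simulation. This is a hypothetical illustration, not the paper's actual pipeline: it assumes a binary word-level linguistic feature with an invented true rate of 0.3 and shows that the standard error of its per-sample estimate shrinks roughly as 1/sqrt(n), so precision gains diminish as samples grow longer.

```python
import math
import random
import statistics

random.seed(0)

def simulate_se(sample_len, n_trials=2000, p=0.3):
    """Monte Carlo estimate of the standard error of the sample mean of a
    binary word-level feature (e.g. 'this word introduces a new concept'),
    with assumed true rate p, for samples of sample_len words."""
    means = [
        statistics.mean(random.random() < p for _ in range(sample_len))
        for _ in range(n_trials)
    ]
    return statistics.stdev(means)

# Uncertainty shrinks with sample length, but with diminishing returns:
# going from 700 to 1400 words buys far less precision than 100 to 700.
for n in (100, 300, 700, 1400):
    print(f"{n:5d} words  SE = {simulate_se(n):.4f}")
```

The simulated standard errors track the analytic value sqrt(p(1-p)/n), which halves only when the sample length quadruples; this is one way to see why, past some length, longer samples add little information.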