updated word_frequency docstring

This commit is contained in:
Joshua Chin 2015-07-17 14:52:06 -04:00
parent 131b916c57
commit afaed8757b


@@ -257,6 +257,9 @@ def word_frequency(word, lang, wordlist='combined', minimum=0.):
If a word decomposes into multiple tokens, we'll return a smoothed estimate
of the word frequency that is no greater than the frequency of any of its
individual tokens.
Note that the current tokenizer does not support multi-character
Chinese terms.
"""
args = (word, lang, wordlist, minimum)
try:
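The smoothing described in the docstring can be sketched in plain Python. This is an illustrative assumption, not wordfreq's actual internals: `combined_frequency` is a hypothetical helper, and the harmonic-style estimate `1 / sum(1/f_i)` is just one combination rule that satisfies the stated property, since `sum(1/f_i) >= 1/min(f_i)` guarantees the result is never greater than any individual token's frequency.

```python
def combined_frequency(token_freqs, minimum=0.0):
    """Combine per-token frequencies into one smoothed word frequency.

    Hypothetical helper for illustration only; wordfreq's real
    word_frequency() may use a different formula.
    """
    if not token_freqs or any(f <= 0 for f in token_freqs):
        return minimum  # an unseen token makes the whole word unseen

    # Harmonic-style combination: never exceeds the rarest token's frequency.
    estimate = 1.0 / sum(1.0 / f for f in token_freqs)
    return max(estimate, minimum)


# A two-token word can be at most as frequent as its rarer token:
print(combined_frequency([1e-4, 1e-6]))  # strictly below 1e-6
```

The `minimum` parameter mirrors the signature shown in the diff: it acts as a floor for words whose tokens are absent from the wordlist.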