Updated word_frequency docstring

Former-commit-id: afaed8757b
Joshua Chin 2015-07-17 14:52:06 -04:00
parent d0e0287d71
commit f4c875983e


@@ -257,6 +257,9 @@ def word_frequency(word, lang, wordlist='combined', minimum=0.):
If a word decomposes into multiple tokens, we'll return a smoothed estimate
of the word frequency that is no greater than the frequency of any of its
individual tokens.
Note that the current tokenizer does not support
multi-character Chinese terms.
"""
args = (word, lang, wordlist, minimum)
try:
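
A minimal sketch of the capping behavior the new docstring describes, assuming the public word_frequency signature shown in the hunk above; the example phrase and the English lookups are illustrative assumptions, not values from this commit:

from wordfreq import word_frequency

# 'New York' decomposes into two tokens. Per the docstring, the
# smoothed multi-token estimate is no greater than the frequency
# of any individual token.
phrase = word_frequency('New York', 'en', wordlist='combined')
new = word_frequency('New', 'en', wordlist='combined')
york = word_frequency('York', 'en', wordlist='combined')
assert phrase <= min(new, york)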