updated word_frequency docstring for Chinese

This commit is contained in:
Joshua Chin 2015-07-20 10:28:11 -04:00
parent 465afb854c
commit 01b286e801


@@ -266,7 +266,7 @@ def word_frequency(word, lang, wordlist='combined', minimum=0.):
     individual tokens.
     It should be noted that the current tokenizer does not support
-    multi-character Chinese terms.
+    multi-word Chinese phrases.
     """
     args = (word, lang, wordlist, minimum)
     try: