updated word_frequency docstring for Chinese

Former-commit-id: 01b286e801
This commit is contained in:
Joshua Chin 2015-07-20 10:28:11 -04:00
parent 8be9d15a80
commit b787d9104e


@@ -266,7 +266,7 @@ def word_frequency(word, lang, wordlist='combined', minimum=0.):
     individual tokens.
     It should be noted that the current tokenizer does not support
-    multi-character Chinese terms.
+    multi-word Chinese phrases.
     """
     args = (word, lang, wordlist, minimum)
     try: