updated word_frequency docstring for Chinese

Former-commit-id: 01b286e801
Joshua Chin 2015-07-20 10:28:11 -04:00
parent 360f66bbaf
commit 532b953839

@@ -266,7 +266,7 @@ def word_frequency(word, lang, wordlist='combined', minimum=0.):
     individual tokens.

     It should be noted that the current tokenizer does not support
-    multi-character Chinese terms.
+    multi-word Chinese phrases.
     """
     args = (word, lang, wordlist, minimum)
     try:
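
For context, a minimal sketch of the behavior the updated docstring describes, assuming the wordfreq package from this repository is installed and that Chinese ('zh') data is present in the combined wordlist; the comments describe expected behavior, not captured output:

from wordfreq import word_frequency

# A single Chinese term is looked up as one token.
single = word_frequency('中国', 'zh')

# A longer string is split into individual tokens, and the result is a
# smoothed combination of their frequencies; multi-word Chinese phrases
# are not looked up as single entries, which is what the revised
# docstring warns about.
phrase = word_frequency('中国人民', 'zh')

print(single, phrase)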