diff --git a/README.md b/README.md
index 4f31cd7..8f64f54 100644
--- a/README.md
+++ b/README.md
@@ -387,7 +387,7 @@ the 'cjk' feature:
     pip install wordfreq[cjk]
 
 Tokenizing Chinese depends on the `jieba` package, tokenizing Japanese depends
-on `mecab-python` and `ipadic`, and tokenizing Korean depends on `mecab-python`
+on `mecab-python3` and `ipadic`, and tokenizing Korean depends on `mecab-python3`
 and `mecab-ko-dic`. As of version 2.4.2, you no longer have to install
 dictionaries separately.
 
diff --git a/setup.py b/setup.py
index 098b07b..d5e0429 100755
--- a/setup.py
+++ b/setup.py
@@ -49,9 +49,8 @@ setup(
     install_requires=dependencies,
 
     # mecab-python3 is required for looking up Japanese or Korean word
-    # frequencies. In turn, it depends on libmecab-dev being installed on the
-    # system. It's not listed under 'install_requires' because wordfreq should
-    # be usable in other languages without it.
+    # frequencies. It's not listed under 'install_requires' because wordfreq
+    # should be usable in other languages without it.
     #
     # Similarly, jieba is required for Chinese word frequencies.
     extras_require={