mirror of https://github.com/rspeer/wordfreq.git

commit fc5c4cdda8 (parent 32093d9efc): small documentation fixes
@@ -387,7 +387,7 @@ the 'cjk' feature:
     pip install wordfreq[cjk]
 
 Tokenizing Chinese depends on the `jieba` package, tokenizing Japanese depends
-on `mecab-python` and `ipadic`, and tokenizing Korean depends on `mecab-python`
+on `mecab-python3` and `ipadic`, and tokenizing Korean depends on `mecab-python3`
 and `mecab-ko-dic`.
 
 As of version 2.4.2, you no longer have to install dictionaries separately.
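The corrected README lines describe how the optional CJK tokenizers are pulled in. As a rough usage sketch (assuming the package was installed with `pip install wordfreq[cjk]`), wordfreq's `tokenize` and `word_frequency` functions then handle Chinese, Japanese, and Korean text through jieba and mecab-python3:

    # Usage sketch, assuming wordfreq was installed with the 'cjk' extra.
    from wordfreq import tokenize, word_frequency

    # Japanese tokenization is delegated to mecab-python3 + ipadic.
    print(tokenize('おはようございます', 'ja'))

    # Chinese tokenization is delegated to jieba.
    print(tokenize('谢谢你', 'zh'))

    # Word frequencies work the same way once the tokenizers are available.
    print(word_frequency('谢谢', 'zh'))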
setup.py
@@ -49,9 +49,8 @@ setup(
     install_requires=dependencies,
 
     # mecab-python3 is required for looking up Japanese or Korean word
-    # frequencies. In turn, it depends on libmecab-dev being installed on the
-    # system. It's not listed under 'install_requires' because wordfreq should
-    # be usable in other languages without it.
+    # frequencies. It's not listed under 'install_requires' because wordfreq
+    # should be usable in other languages without it.
     #
     # Similarly, jieba is required for Chinese word frequencies.
     extras_require={
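The comment block above refers to the `extras_require` mapping that declares these optional dependencies. Below is a minimal, hypothetical sketch of how such an extra is declared with setuptools; the dependency list is inferred from the comments above, and the names and versions are placeholders rather than the contents of wordfreq's actual setup.py.

    # Hypothetical sketch of declaring an optional 'cjk' extra with setuptools.
    from setuptools import setup

    setup(
        name='example-package',          # placeholder name
        version='0.0.1',
        install_requires=['regex'],      # placeholder core dependency
        extras_require={
            # Installed with: pip install example-package[cjk]
            'cjk': ['jieba', 'mecab-python3', 'ipadic', 'mecab-ko-dic'],
        },
    )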