Mirror of https://github.com/rspeer/wordfreq.git, synced 2024-12-23 17:31:41 +00:00
describe optional dependencies better in the README
Former-commit-id: b460eef444
parent 960dc437a2
commit 8e963dc312

README.md: 19 lines changed
@@ -15,13 +15,26 @@ or by getting the repository and running its setup.py:

     python3 setup.py install

-To handle word frequency lookups in Japanese, you need to additionally install
-mecab-python3, which itself depends on libmecab-dev. These commands will
-install them on Ubuntu:
+Japanese and Chinese have additional external dependencies so that they can be
+tokenized correctly.
+
+To be able to look up word frequencies in Japanese, you need to additionally
+install mecab-python3, which itself depends on libmecab-dev and its dictionary.
+These commands will install them on Ubuntu:

     sudo apt-get install mecab-ipadic-utf8 libmecab-dev
     pip3 install mecab-python3

+To be able to look up word frequencies in Chinese, you need Jieba, a
+pure-Python Chinese tokenizer:
+
+    pip3 install jieba
+
+These dependencies can also be requested as options when installing wordfreq.
+For example:
+
+    pip3 install wordfreq[mecab,jieba]
+
 ## Usage
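The bracketed syntax in `pip3 install wordfreq[mecab,jieba]` works because the package declares optional dependency groups. As a rough sketch of how such groups are typically declared with setuptools' `extras_require`, reusing the `mecab` and `jieba` option names from the diff (the actual setup.py in wordfreq may declare them differently):

    # Hypothetical sketch, not wordfreq's actual setup.py: shows how the
    # 'mecab' and 'jieba' install options could be declared with setuptools.
    from setuptools import setup

    setup(
        name='wordfreq',
        version='1.0',
        packages=['wordfreq'],
        extras_require={
            # enables: pip3 install wordfreq[mecab]
            'mecab': ['mecab-python3'],
            # enables: pip3 install wordfreq[jieba]
            'jieba': ['jieba'],
        },
    )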
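Once the optional dependencies are installed, frequency lookups in Japanese and Chinese behave like lookups in any other language. A minimal sketch, assuming the `word_frequency(word, lang)` function that the Usage section goes on to document:

    # Sketch assuming wordfreq's word_frequency(word, lang) lookup API.
    from wordfreq import word_frequency

    # Tokenized with MeCab; requires mecab-python3 and libmecab-dev
    print(word_frequency('こんにちは', 'ja'))

    # Tokenized with Jieba; requires the jieba package
    print(word_frequency('谢谢', 'zh'))

    # English lookups need no optional dependencies
    print(word_frequency('the', 'en'))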