I'm starting a new Python environment on a new Ubuntu installation. You
never know when a huge yak will show up and demand to be shaved.
I tried following the directions in the README, and found that a couple
of steps were missing. I've added those.
When you follow those steps, the MeCab Korean dictionary appears to get
installed in `/usr/lib/x86_64-linux-gnu/mecab/dic`, which wasn't one of
the paths we were checking, so I've added that as a search path.
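A minimal sketch of the kind of path search involved. The path list and function name here are illustrative, not wordfreq's actual code:

```python
import os

# Candidate locations for the MeCab dictionary directory. The last entry
# is the newly observed location on Ubuntu. (Illustrative, not exhaustive.)
MECAB_DIC_PATHS = [
    "/usr/lib/mecab/dic",
    "/usr/local/lib/mecab/dic",
    "/usr/lib/x86_64-linux-gnu/mecab/dic",
]


def find_mecab_dictionary(subdir):
    """Return the first existing dictionary directory, or None."""
    for base in MECAB_DIC_PATHS:
        path = os.path.join(base, subdir)
        if os.path.isdir(path):
            return path
    return None
```

Adding one more entry to a list like this is all the fix requires; the search order means existing installations keep finding the same dictionary they did before.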
Significant changes in this data include:
- Added ParaCrawl, a multilingual Web crawl, as a data source.
This supplements the Leeds Web crawl with more modern data.
ParaCrawl seems to provide a more balanced sample of Web pages than
Common Crawl, which we once considered adding, but found that its data
heavily overrepresented TripAdvisor and Urban Dictionary in a way that
was very apparent in the word frequencies.
ParaCrawl has a fairly subtle impact on the top terms, mostly boosting
the frequencies of numbers and months.
- Fixes to inconsistencies where words from different sources were going
through different processing steps. As a result of these
inconsistencies, some word lists contained words that couldn't
actually be looked up because they would be normalized to something
else.
All words should now go through the aggressive normalization of
`lossy_tokenize`.
- Fixes to inconsistencies regarding what counts as a word.
Non-punctuation, non-emoji symbols such as `=` were slipping through
in some cases but not others.
- As a result of the new data, Latvian becomes a supported language and
Czech gets promoted to a 'large' language.
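To illustrate the two consistency fixes above, here is a simplified stand-in for the pipeline, assuming stdlib Unicode handling. The function names are hypothetical, and real `lossy_tokenize` does more than this (including treating emoji specially); this just shows why every word must pass through the same normalization that lookups use, and how symbol-only tokens like `=` can be rejected by Unicode category:

```python
import unicodedata


def is_word_token(token):
    """Reject tokens made entirely of symbol characters (category S*),
    such as '=', which are neither words nor emoji we want to count."""
    return not all(unicodedata.category(ch).startswith("S") for ch in token)


def normalize(token):
    """Aggressive normalization: NFKC plus casefolding, so entries in a
    word list match the form produced at lookup time."""
    return unicodedata.normalize("NFKC", token).casefold()


def process(tokens):
    """Apply the same filtering and normalization to every source."""
    return [normalize(t) for t in tokens if is_word_token(t)]
```

If one data source skipped `normalize` here, its list could contain `Straße` while lookups ask for `strasse`, which is exactly the kind of unreachable entry the fix removes.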
I _almost_ got the description and long_description right for 2.0.1. I
even checked it on the test server. But I didn't notice that I was
handling the first line of README.md specially, and ended up setting the
project description to "wordfreq is a Python library for looking up the
frequencies of words in many".
It'll be right in the next version.
We don't need to set it to any value other than 80 for now, but we will
if we try to distinguish three kinds of Chinese (zh-Hans, zh-Hant, and
unified zh-Hani).
This is the result of re-running exquisite-corpus via wordfreq 2. The
frequencies for most languages were identical. Small changes that move
words by a few places in the list appeared in Chinese, Japanese, and
Korean. There are also even smaller changes in Bengali and Hindi.
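"Words moving a few places" can be checked mechanically by comparing ranks between the old and new frequency-ordered lists. A hedged sketch (the function is hypothetical, not part of exquisite-corpus):

```python
def rank_changes(old_list, new_list, top_n=500):
    """Report (word, old_rank, new_rank) for words in the top_n of the
    new frequency-ordered list whose rank differs from the old list.
    old_rank is None for words that are new to the list."""
    old_rank = {word: i for i, word in enumerate(old_list)}
    changes = []
    for i, word in enumerate(new_list[:top_n]):
        j = old_rank.get(word)
        if j != i:
            changes.append((word, j, i))
    return changes
```

An empty result for a language means its list is rank-identical, which is what most languages showed here.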
The source of the CJK change is that Roman letters are case-folded
_before_ Jieba or MeCab tokenization, which changes their output in a
few cases.
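A toy example of why the order matters, assuming a greedy dictionary-based tokenizer as a stand-in for Jieba or MeCab (the function and vocabulary are invented for illustration, not the real algorithms):

```python
def toy_tokenize(text, vocabulary):
    """Greedy longest-match tokenizer: at each position, take the longest
    substring found in the vocabulary, falling back to one character."""
    tokens = []
    i = 0
    while i < len(text):
        for length in range(len(text) - i, 0, -1):
            piece = text[i:i + length]
            if piece in vocabulary or length == 1:
                tokens.append(piece)
                i += length
                break
    return tokens


vocab = {"nasa"}  # dictionary entries are lowercase

# Folding AFTER tokenization: "NASA" never matches the lowercase entry,
# so it is split character by character, then folded.
after = [t.casefold() for t in toy_tokenize("NASA", vocab)]

# Folding BEFORE tokenization: the folded text matches as one token.
before = toy_tokenize("NASA".casefold(), vocab)
```

The same input yields different token boundaries, which is enough to shift a few word frequencies.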
In Hindi, one word changed frequency in the top 500. In Bengali, none of
those words changed frequency, but the data file is still different.
I don't have as solid an explanation here, except that these languages
use the regex tokenizer, and we just updated the `regex` dependency,
which could affect some edge cases in these languages.