Commit Graph

648 Commits

Author SHA1 Message Date
Robyn Speer
7a32b56c1c Round frequencies to 3 significant digits 2018-06-18 15:21:33 -04:00
Lance Nathan
a95b360563 Merge pull request #57 from LuminosoInsight/version2.1
Version 2.1
2018-06-18 12:06:47 -04:00
Robyn Speer
39a1308770 update table in README: Dutch has 5 sources 2018-06-18 11:43:52 -04:00
Robyn Speer
0280f82496 fix typo in previous changelog entry 2018-06-18 10:52:28 -04:00
Robyn Speer
42efcfc1ad relax the test that assumed the Chinese list has few ASCII words 2018-06-15 16:29:15 -04:00
Robyn Speer
ad0f046f47 fixes to tests, including that 'test.py' wasn't found by pytest 2018-06-15 15:48:41 -04:00
Robyn Speer
a975bcedae update tests to include new languages
Also, it's easy to say `>=` in pytest
2018-06-12 17:55:44 -04:00
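The commit above refers to pytest's plain-assert style: a bare `assert x >= y` is rewritten by pytest to give informative failure output, so no `assertGreaterEqual` helper is needed. A minimal sketch (the language list and test name here are illustrative, not wordfreq's real test data):

```python
# With new languages being added, asserting a lower bound with `>=`
# keeps the test passing as more languages arrive.
SUPPORTED_LANGUAGES = ['en', 'es', 'fr', 'de', 'nl', 'cs', 'lv']

def test_language_count():
    # pytest introspects bare asserts, so a plain comparison suffices
    assert len(SUPPORTED_LANGUAGES) >= 7

test_language_count()
```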
Robyn Speer
4b7e3d9655 bump version to 2.1; add test requirement for pytest 2018-06-12 17:48:24 -04:00
Robyn Speer
3259c4a375 Merge remote-tracking branch 'origin/pytest' into version2.1 2018-06-12 17:46:48 -04:00
Robyn Speer
d5f7335d90 New data import from exquisite-corpus
Significant changes in this data include:

- Added ParaCrawl, a multilingual Web crawl, as a data source.
  This supplements the Leeds Web crawl with more modern data.

  ParaCrawl seems to provide a more balanced sample of Web pages than
  Common Crawl, which we once considered adding; we found that Common
  Crawl's data heavily overrepresented TripAdvisor and Urban Dictionary
  in a way that was very apparent in the word frequencies.

  ParaCrawl has a fairly subtle impact on the top terms, mostly boosting
  the frequencies of numbers and months.

- Fixes to inconsistencies where words from different sources were going
  through different processing steps. As a result of these
  inconsistencies, some word lists contained words that couldn't
  actually be looked up because they would be normalized to something
  else.

  All words should now go through the aggressive normalization of
  `lossy_tokenize`.

- Fixes to inconsistencies regarding what counts as a word.
  Non-punctuation, non-emoji symbols such as `=` were slipping through
  in some cases but not others.

- As a result of the new data, Latvian becomes a supported language and
  Czech gets promoted to a 'large' language.
2018-06-12 17:22:43 -04:00
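The inconsistency fix above hinges on one rule: words must be normalized the same way when building a list as when looking a word up, or some entries become unreachable. A minimal sketch of that idea, assuming NFKC plus case-folding as the aggressive normalization (wordfreq's actual `lossy_tokenize` does more than this):

```python
import unicodedata

def aggressively_normalize(word):
    # Stand-in for wordfreq's lossy normalization: Unicode NFKC
    # compatibility normalization plus case-folding. The real
    # lossy_tokenize step is more aggressive than this.
    return unicodedata.normalize('NFKC', word).casefold()

# A word stored in a list un-normalized can never be found if every
# lookup key is normalized first:
stored = 'Ｃａｆé'   # fullwidth Latin letters, mixed case
lookup_key = aggressively_normalize(stored)
print(lookup_key)   # 'café'
```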
Robyn Speer
b3c42be331 port remaining tests to pytest 2018-06-01 16:40:51 -04:00
Robyn Speer
75b4d62084 port test.py and test_chinese.py to pytest 2018-06-01 16:33:06 -04:00
Robyn Speer
6235d88869 Use data from fixed XC build - mostly changes Chinese 2018-05-30 13:09:20 -04:00
Robyn Speer
5762508e7c commit new data files (Italian changed for some reason) 2018-05-29 17:36:48 -04:00
Robyn Speer
e4cb9a23b6 update data to include xc's processing of ParaCrawl 2018-05-25 16:12:35 -04:00
Robyn Speer
8907423147 Packaging updates for the new PyPI
I _almost_ got the description and long_description right for 2.0.1. I
even checked it on the test server. But I didn't notice that I was
handling the first line of README.md specially, and ended up setting the
project description to "wordfreq is a Python library for looking up the
frequencies of words in many".

It'll be right in the next version.
2018-05-01 17:16:53 -04:00
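The bug described above is a common setup.py pattern gone wrong: deriving the short `description` from the README's first line truncates mid-sentence when the first sentence wraps. A hedged sketch of the failure and the usual fix (the README text and field handling here are illustrative, not wordfreq's actual setup.py):

```python
# Illustrative README; the first sentence wraps across two lines.
readme_text = """wordfreq is a Python library for looking up the
frequencies of words in many languages.

More documentation follows...
"""

# Buggy approach: treating the first line specially truncates the
# summary to "...frequencies of words in many".
bad_description = readme_text.splitlines()[0]

# Safer: keep a one-line summary separately and pass the whole README
# as long_description (with long_description_content_type set for
# Markdown on the new PyPI).
description = ("wordfreq is a Python library for looking up "
               "the frequencies of words in many languages.")
long_description = readme_text
```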
Lance Nathan
316670a234 Merge pull request #56 from LuminosoInsight/japanese-edge-cases
Handle Japanese edge cases in `simple_tokenize`
2018-05-01 14:57:45 -04:00
Robyn Speer
e0da20b0c4 update CHANGELOG for 2.0.1 2018-05-01 14:47:55 -04:00
Robyn Speer
666f7e51fa Handle Japanese edge cases in simple_tokenize 2018-04-26 15:53:07 -04:00
Lance Nathan
18f176dbf6 Merge pull request #55 from LuminosoInsight/version2
Version 2, with standalone text pre-processing
2018-03-15 14:26:49 -04:00
Robyn Speer
d9bc4af8cd update the changelog 2018-03-14 17:56:29 -04:00
Robyn Speer
b2663272a7 remove LAUGHTER_WORDS, which is now unused
This was a fun Twitter test, but we don't do that anymore
2018-03-14 17:33:35 -04:00
Robyn Speer
65811d587e More explicit error message for a missing wordlist 2018-03-14 15:10:27 -04:00
Robyn Speer
2ecf31ee81 Actually use min_score in _language_in_list
We don't need to set it to any value but 80 now, but we will need to if
we try to distinguish three kinds of Chinese (zh-Hans, zh-Hant, and
unified zh-Hani).
2018-03-14 15:08:52 -04:00
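The fix above makes the `min_score` parameter actually participate in the match, so a stricter threshold can later separate close language variants. A minimal sketch of that shape; `match_score` here is a hypothetical stand-in for a langcodes-style closeness score, not the real scoring:

```python
def match_score(desired, supported):
    # Hypothetical 0-100 closeness score; real language-tag matching
    # (as in langcodes) also weighs script and region subtags.
    if desired == supported:
        return 100
    if desired.split('-')[0] == supported.split('-')[0]:
        return 85   # same base language, different subtags
    return 0

def language_in_list(language, targets, min_score=80):
    # min_score stays a real parameter, so a future stricter threshold
    # (e.g. to distinguish zh-Hans / zh-Hant / zh-Hani) can be passed in.
    return any(match_score(language, t) >= min_score for t in targets)
```

At the default threshold `language_in_list('zh-Hant', ['zh'])` matches; raising `min_score` to 90 rejects it.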
Robyn Speer
c57032d5cb code review fixes to wordfreq.tokens 2018-03-14 15:07:45 -04:00
Robyn Speer
de81a23b9d code review fixes to __init__ 2018-03-14 15:04:59 -04:00
Robyn Speer
8656688b0b fix mention of dependencies in README 2018-03-14 15:01:08 -04:00
Robyn Speer
d68d4baad2 Subtle changes to CJK frequencies
This is the result of re-running exquisite-corpus via wordfreq 2.  The
frequencies for most languages were identical. Small changes that move
words by a few places in the list appeared in Chinese, Japanese, and
Korean. There are also even smaller changes in Bengali and Hindi.

The source of the CJK change is that Roman letters are case-folded
_before_ Jieba or MeCab tokenization, which changes their output in a
few cases.

In Hindi, one word changed frequency in the top 500. In Bengali, none of
those words changed frequency, but the data file is still different.
I'm not sure I have such a solid explanation here, except that these
languages use the regex tokenizer, and we just updated the regex
dependency, which could affect some edge cases of these languages.
2018-03-14 11:36:02 -04:00
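The commit above notes that case-folding Roman letters *before* Jieba or MeCab tokenization changes their output in a few cases. A toy illustration of why the order matters (the tokenizer below is invented for the example and does not reflect MeCab's or Jieba's actual rules):

```python
import re

def toy_tokenize(text):
    # Invented tokenizer that, like some segmenters, treats a
    # lowercase-to-uppercase transition as a token boundary.
    return re.findall(r'[A-Z]?[a-z]+|[A-Z]+(?![a-z])', text)

# Folding case AFTER tokenizing splits at the internal capital:
after = [t.casefold() for t in toy_tokenize('iPhone')]   # ['i', 'phone']

# Folding case BEFORE tokenizing removes the boundary entirely:
before = toy_tokenize('iPhone'.casefold())               # ['iphone']
```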
Robyn Speer
0cb36aa74f cache the language info (avoids 10x slowdown) 2018-03-09 14:54:03 -05:00
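The slowdown fixed above came from recomputing per-language settings on every call. A minimal sketch of the caching idea using `functools.lru_cache`; the lookup body here is hypothetical, standing in for whatever expensive language-tag work wordfreq does:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def get_language_info(language):
    # Hypothetical per-language settings; imagine this doing
    # expensive language-tag matching on each miss.
    return {
        'tokenizer': 'jieba' if language.startswith('zh') else 'regex',
        'normal_form': 'NFKC',
    }

# Repeated calls with the same tag now hit the cache instead of
# redoing the work on every tokenize call.
info = get_language_info('en')
assert get_language_info('en') is info   # same cached object
```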
Robyn Speer
b162de353d avoid log spam: only warn about an unsupported language once 2018-03-09 11:50:15 -05:00
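The log-spam fix above can be sketched as remembering which languages have already been warned about. A minimal version of that pattern (the message text is illustrative):

```python
import warnings

_warned_languages = set()

def warn_unsupported(language):
    # Emit at most one warning per unsupported language, instead of
    # one warning per lookup.
    if language not in _warned_languages:
        _warned_languages.add(language)
        warnings.warn(f"wordfreq has no wordlist for {language!r}")
```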
Robyn Speer
c5f64a5de8 update the README 2018-03-08 18:16:15 -05:00
Robyn Speer
d8e3669a73 wordlist updates from new exquisite-corpus 2018-03-08 18:16:00 -05:00
Robyn Speer
53dc0bbb1a Test that we can leave the wordlist unspecified and get 'large' freqs 2018-03-08 18:09:57 -05:00
Robyn Speer
8e3dff3c1c Traditional Chinese should be preserved through tokenization 2018-03-08 18:08:55 -05:00
Robyn Speer
45064a292f reorganize wordlists into 'small', 'large', and 'best' 2018-03-08 17:52:44 -05:00
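The 'small' / 'large' / 'best' reorganization above implies a resolution rule: 'best' should prefer a language's 'large' list and fall back to 'small'. A hedged sketch of that lookup; the availability mapping here is illustrative, not wordfreq's real inventory:

```python
# Illustrative inventory: English has both sizes, Latvian only 'small'.
AVAILABLE = {('en', 'large'), ('en', 'small'), ('lv', 'small')}

def resolve_wordlist(lang, wordlist='best'):
    # 'best' means: take 'large' if the language has one, else 'small'.
    if wordlist == 'best':
        for size in ('large', 'small'):
            if (lang, size) in AVAILABLE:
                return size
        raise LookupError(f"no wordlist available for {lang!r}")
    return wordlist

resolve_wordlist('en')   # 'large'
resolve_wordlist('lv')   # 'small'
```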
Robyn Speer
fe85b4e124 fix az-Latn transliteration, and test 2018-03-08 16:47:36 -05:00
Robyn Speer
a4d9614e39 setup: update version number and dependencies 2018-03-08 16:26:24 -05:00
Robyn Speer
5ab5d2ea55 Separate preprocessing from tokenization 2018-03-08 16:26:17 -05:00
Robyn Speer
72646f16a1 minor fixes to README 2018-02-28 16:14:50 -05:00
Robyn Speer
cd7bfc4060 Merge pull request #54 from LuminosoInsight/fix-deps
Fix setup.py (version number and msgpack dependency)
2018-02-28 12:46:46 -08:00
Robyn Speer
208559ae1e bump version to 1.7.0, belatedly 2018-02-28 15:15:47 -05:00
Robyn Speer
98cb47c774 update msgpack-python dependency to msgpack 2018-02-28 15:14:51 -05:00
Robyn Speer
ec9c94be92 update citation to v1.7 2017-09-27 13:36:30 -04:00
Andrew Lin
95a13ab4ce Merge pull request #51 from LuminosoInsight/version1.7
Version 1.7: update tokenization, update Wikipedia data, add languages
2017-09-08 17:02:05 -04:00
Robyn Speer
b042f2be9d remove unnecessary enumeration from top_n.py 2017-09-08 16:52:06 -04:00
Robyn Speer
fb4a7db6f7 update README for 1.7; sort language list in English order 2017-08-25 17:38:31 -04:00
Robyn Speer
46e32fbd36 v1.7: update tokenization, update data, add bn and mk 2017-08-25 17:37:48 -04:00
Robyn Speer
9dac967ca3 Tokenize by graphemes, not codepoints (#50)
* Tokenize by graphemes, not codepoints

* Add more documentation to TOKEN_RE

* Remove extra line break

* Update docstring - Brahmic scripts are no longer an exception

* approve using version 2017.07.28 of regex
2017-08-08 11:35:28 -04:00
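The grapheme change above (implemented with the `regex` module, per the commit) is about not splitting a base character from its combining marks. A rough stdlib-only approximation of the idea; the real fix uses `regex`'s `\X`, which follows the full Unicode grapheme segmentation rules:

```python
import unicodedata

def grapheme_chunks(text):
    # Approximate grapheme clustering: attach each combining mark to
    # the preceding base character. Much simpler than Unicode's real
    # rules (no ZWJ sequences, no Hangul jamo), but shows the idea.
    chunks = []
    for ch in text:
        if chunks and unicodedata.combining(ch):
            chunks[-1] += ch
        else:
            chunks.append(ch)
    return chunks

# 'é' written as 'e' + COMBINING ACUTE ACCENT stays one unit instead
# of being split into two codepoints:
grapheme_chunks('e\u0301a')   # ['e\u0301', 'a']
```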
Andrew Lin
6c118c0b6a Merge pull request #49 from LuminosoInsight/restore-langcodes
Use langcodes when tokenizing again
2017-05-10 16:20:06 -04:00
Robyn Speer
aa3ed23282 v1.6.1: depend on langcodes 1.4 2017-05-10 13:26:23 -04:00