Commit Graph

71 Commits

Author SHA1 Message Date
Joshua Chin
354f09ec24 removed TOKENIZE_TWITTER
Former-commit-id: 449a656edd
2015-07-17 14:43:14 -04:00
Joshua Chin
d0df4cc9a4 removed TOKENIZE_TWITTER option
Former-commit-id: 00e18b7d4b
2015-07-17 14:40:49 -04:00
Joshua Chin
46b2730601 more README fixes
Former-commit-id: 772c0cddd1
2015-07-17 14:40:33 -04:00
Joshua Chin
3e4643f9c4 fixed README
Former-commit-id: 0a085132f4
2015-07-17 14:35:43 -04:00
Robyn Speer
73bacc659d update the wordfreq_builder README
Former-commit-id: 8633e8c2a9
2015-07-13 11:58:48 -04:00
Robyn Speer
e9d88bf35e add docstrings and remove some brackets 2015-07-07 18:22:51 -04:00
Joshua Chin
1c365e6a50 Removes mention of Rosette from README 2015-07-07 10:32:16 -04:00
Robyn Speer
3eb3e7c388 add 'twitter' as a final build, and a new build dir
The `data/dist` directory is now a convenient place to find the final
built files that can be copied into wordfreq.
2015-07-01 17:45:39 -04:00
Robyn Speer
58c8bda21b cope with occasional Unicode errors in the input 2015-06-30 17:05:40 -04:00
Robyn Speer
deed2f767c remove wiki2tokens and tokenize_wikipedia
These components are no longer necessary. Wikipedia output can and
should be tokenized with the standard tokenizer, instead of the
almost-equivalent one in the Nim code.
2015-06-30 15:28:01 -04:00
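
For illustration of the change above: a minimal sketch of tokenizing a line of Wikipedia output with wordfreq's standard tokenizer, assuming the wordfreq package exposes a tokenize(text, lang) helper as other commits here imply; this is not the project's actual build code.

    # Sketch only: tokenize extracted Wikipedia text with the standard
    # wordfreq tokenizer rather than a separate Nim implementation.
    # Assumes wordfreq provides tokenize(text, lang).
    from wordfreq import tokenize

    line = "Wikipedia is a free online encyclopedia."
    tokens = tokenize(line, 'en')
    print(tokens)  # e.g. ['wikipedia', 'is', 'a', 'free', 'online', 'encyclopedia']
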
Robyn Speer
f17a04aa84 fix comment and whitespace involving tokenize_twitter 2015-06-30 15:18:37 -04:00
Robyn Speer
91d6edd55b Switch to a centibel scale, add a header to the data 2015-06-22 17:38:13 -04:00
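
As a rough illustration of the centibel scale mentioned above (based on the standard definition of a centibel, not the exact code in this commit): a word frequency f is stored as 100 * log10(f), so a frequency of 0.001 becomes -300.

    import math

    def freq_to_centibels(freq):
        # Centibels are hundredths of a bel: 100 * log10(freq).
        # Sketch of the scale described in the commit message, not the
        # project's actual conversion function.
        return round(100 * math.log10(freq))

    print(freq_to_centibels(0.001))  # -300
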
Robyn Speer
3108d24d76 Merge pull request #2 from LuminosoInsight/review-refactor
Adds a number of bugfixes and improvements to wordfreq_builder
2015-06-19 15:29:52 -04:00
Robyn Speer
a83cf82adb restore missing Russian OpenSubtitles data 2015-06-19 12:36:08 -04:00
Joshua Chin
1385b735cf updated freqs_to_dBpack docstring 2015-06-18 10:32:53 -04:00
Joshua Chin
3596434f7f revised read_freqs docstring 2015-06-18 10:28:22 -04:00
Joshua Chin
18b53f6071 updated monolingual_tokenize_file docstring, and removed unused argument 2015-06-18 10:20:54 -04:00
Joshua Chin
34e9512517 tokenize_file should ignore lines with unknown languages 2015-06-18 10:18:57 -04:00
Joshua Chin
2f4fe92c90 Fixed CLD2_BAD_CHAR regex 2015-06-18 10:18:00 -04:00
Joshua Chin
87285b8b90 changed tokenize_file: cld2 returns 'un' instead of None if it cannot recognize the language 2015-06-17 14:19:28 -04:00
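
A hedged sketch of what falling back to 'un' for unrecognized text can look like with pycld2; the helper name and error handling here are illustrative assumptions, not the repository's actual tokenize_file code.

    import pycld2

    def detect_language(text):
        # pycld2.detect returns (isReliable, textBytesFound, details);
        # details[0][1] is a language code, which CLD2 sets to 'un' when
        # it cannot recognize the language.
        try:
            reliable, _num_bytes, details = pycld2.detect(text)
        except pycld2.error:
            return 'un'
        return details[0][1] if reliable else 'un'
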
Joshua Chin
b5bc39c893 tokenize_file: don't join tokens if language is None 2015-06-17 14:18:18 -04:00
Joshua Chin
7fc0ba9092 automatically closes input file in tokenize_file 2015-06-17 11:42:34 -04:00
Joshua Chin
2039b18b71 updated test to check number parsing 2015-06-17 11:30:25 -04:00
Joshua Chin
dad23c117a fixed build process 2015-06-17 11:25:07 -04:00
Joshua Chin
a495de9f65 updated directory of twitter output 2015-06-16 17:32:58 -04:00
Joshua Chin
6f0a082007 removed intermediate twitter file rules 2015-06-16 17:28:09 -04:00
Joshua Chin
42ca1f2523 improved tokenize_file and updated docstring 2015-06-16 17:27:27 -04:00
Joshua Chin
80afc5dc45 renamed pretokenize_twitter to tokenize_twitter, and deleted format_twitter 2015-06-16 17:26:52 -04:00
Joshua Chin
20bc34f224 fixed bugs and removed unused code 2015-06-16 17:25:06 -04:00
Joshua Chin
aa0bef3fb7 changed tokenizer to only strip t.co urls 2015-06-16 16:11:31 -04:00
Joshua Chin
8dd17fded4 Added codepoints U+10FFFE and U+10FFFF to CLD2_BAD_CHAR_RANGE 2015-06-16 16:03:58 -04:00
Joshua Chin
308cdbb4c4 added tests for the tokenizer and language recognizer 2015-06-16 16:00:14 -04:00
Joshua Chin
e57a88b548 added pycld2 dependency 2015-06-16 15:06:22 -04:00
Joshua Chin
7a3cd8068c Replaced Rosette with cld2 language recognizer and wordfreq tokenizer 2015-06-16 14:45:49 -04:00
Robyn Speer
6cd6ab33bc ninja2dot: make a graph of the build process 2015-06-15 13:14:32 -04:00
Robyn Speer
26b03392fe Reorganize and document some functions 2015-06-15 12:40:31 -04:00
Robyn Speer
04ad6720cc okay, apparently you can't mix code blocks and bullets 2015-06-01 11:39:42 -04:00
Robyn Speer
69d9e89bb8 is this indented enough for you, markdown 2015-06-01 11:38:10 -04:00
Robyn Speer
dcc1e87728 add a README 2015-06-01 11:37:19 -04:00
Robyn Speer
296901b93f Tokenize Japanese consistently with MeCab 2015-05-27 17:44:58 -04:00
Robyn Speer
a5954d14df give mecab a larger buffer 2015-05-26 19:34:46 -04:00
Robyn Speer
b9a5e05f87 fix build rules for Japanese Wikipedia 2015-05-26 18:08:57 -04:00
Robyn Speer
353533bba4 fix version in config.py 2015-05-26 18:08:46 -04:00
Robyn Speer
4f738ad78c correct a Leeds bug; add some comments to rules.ninja 2015-05-26 18:08:04 -04:00
Robyn Speer
4513fed60c add Google Books data for English 2015-05-11 18:44:28 -04:00
Robyn Speer
ed4f79b90e move some functions to the wordfreq package 2015-05-11 17:02:52 -04:00
Robyn Speer
414c9ac1f0 use a more general-purpose tokenizer, not 'retokenize' 2015-05-08 12:40:14 -04:00
Robyn Speer
ef69bcec62 build.ninja knows about its own dependencies 2015-05-08 12:40:06 -04:00
Robyn Speer
aa55e32450 Makefile should only be needed for bootstrapping Ninja 2015-05-08 12:39:31 -04:00
Robyn Speer
a69bd518af limit final builds to languages with >= 2 sources 2015-05-07 23:59:04 -04:00