Joshua Chin
6083219fe5
removed bad comment
2015-07-17 14:54:09 -04:00
Joshua Chin
4fa4060036
removed unused scripts
2015-07-17 14:53:18 -04:00
Joshua Chin
631a5f1b71
removed mkdir -p for many cases
2015-07-17 14:45:22 -04:00
Joshua Chin
bc4cedf85a
removed TOKENIZE_TWITTER
2015-07-17 14:43:14 -04:00
Joshua Chin
c80943c677
removed TOKENIZE_TWITTER option
2015-07-17 14:40:49 -04:00
Joshua Chin
753d241b6a
more README fixes
2015-07-17 14:40:33 -04:00
Joshua Chin
0f92367e3d
fixed README
2015-07-17 14:35:43 -04:00
Rob Speer
7f9b7bb5d0
update the wordfreq_builder README
2015-07-13 11:58:48 -04:00
Rob Speer
41dba74da2
add docstrings and remove some brackets
2015-07-07 18:22:51 -04:00
Joshua Chin
b0f759d322
Removes mention of Rosette from README
2015-07-07 10:32:16 -04:00
Rob Speer
10c04d116f
add 'twitter' as a final build, and a new build dir
The `data/dist` directory is now a convenient place to find the final
built files that can be copied into wordfreq.
2015-07-01 17:45:39 -04:00
Rob Speer
37375383e8
cope with occasional Unicode errors in the input
2015-06-30 17:05:40 -04:00
Rob Speer
4771c12814
remove wiki2tokens and tokenize_wikipedia
These components are no longer necessary. Wikipedia output can and
should be tokenized with the standard tokenizer, instead of the
almost-equivalent one in the Nim code.
2015-06-30 15:28:01 -04:00
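As a hedged illustration of the point above, assuming the "standard tokenizer" is the `tokenize` function that lives in the `wordfreq` package (see the "move some functions to the wordfreq package" entry near the end of this log):

```python
# Illustrative only: assumes wordfreq exposes tokenize(text, lang).
from wordfreq import tokenize

line = 'Wikipedia is a free online encyclopedia.'
tokenize(line, 'en')   # roughly -> ['wikipedia', 'is', 'a', 'free', 'online', 'encyclopedia']
```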
Rob Speer
9a2855394d
fix comment and whitespace involving tokenize_twitter
2015-06-30 15:18:37 -04:00
Rob Speer
f305679caf
Switch to a centibel scale, add a header to the data
2015-06-22 17:38:13 -04:00
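For context on the centibel scale in the entry above: a centibel is a hundredth of a bel, so a word frequency is stored as 100 · log10(frequency), a negative number for any frequency below 1. A minimal sketch of the conversion (the function name is illustrative, not the builder's actual code):

```python
import math

def freq_to_centibels(freq):
    """Convert a word frequency (a proportion, 0 < freq <= 1) to centibels: 100 * log10(freq)."""
    return round(100 * math.log10(freq))

freq_to_centibels(0.01)   # -> -200: one occurrence per hundred tokens
freq_to_centibels(1e-6)   # -> -600: one occurrence per million tokens
```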
Rob Speer
d16683f2b9
Merge pull request #2 from LuminosoInsight/review-refactor
Adds a number of bugfixes and improvements to wordfreq_builder
2015-06-19 15:29:52 -04:00
Rob Speer
5bc1f0c097
restore missing Russian OpenSubtitles data
2015-06-19 12:36:08 -04:00
Joshua Chin
3746af1350
updated freqs_to_dBpack docstring
2015-06-18 10:32:53 -04:00
Joshua Chin
59ce14cdd0
revised read_freqs docstring
2015-06-18 10:28:22 -04:00
Joshua Chin
04bf6aadcc
updated monolingual_tokenize_file docstring, and removed unused argument
2015-06-18 10:20:54 -04:00
Joshua Chin
91dd73a2b5
tokenize_file should ignore lines with unknown languages
2015-06-18 10:18:57 -04:00
Joshua Chin
ffc01c75a0
Fixed CLD2_BAD_CHAR regex
2015-06-18 10:18:00 -04:00
Joshua Chin
8277de2c7f
changed tokenize_file: cld2 returns 'un' instead of None if it cannot recognize the language
2015-06-17 14:19:28 -04:00
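'un' is CLD2's own code for an unrecognized language. A minimal sketch of that kind of check, using the pycld2 dependency added further down (the helper name and the reliability test are assumptions, not the project's actual `tokenize_file`):

```python
import pycld2

def detect_language(text):
    """Return a language code for text, or 'un' if CLD2 cannot recognize it."""
    # pycld2.detect returns (isReliable, textBytesFound, details); details holds
    # up to three (languageName, languageCode, percent, score) guesses.
    reliable, _bytes_found, details = pycld2.detect(text)
    if not reliable:
        return 'un'
    return details[0][1]
```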
Joshua Chin
b24f31d30a
tokenize_file: don't join tokens if language is None
2015-06-17 14:18:18 -04:00
Joshua Chin
99d97956e6
automatically closes input file in tokenize_file
2015-06-17 11:42:34 -04:00
Joshua Chin
e50c0c6917
updated test to check number parsing
2015-06-17 11:30:25 -04:00
Joshua Chin
c71e93611b
fixed build process
2015-06-17 11:25:07 -04:00
Joshua Chin
8317ea6d51
updated directory of twitter output
2015-06-16 17:32:58 -04:00
Joshua Chin
da93bc89c2
removed intermediate twitter file rules
2015-06-16 17:28:09 -04:00
Joshua Chin
87f08780c8
improved tokenize_file and updated docstring
2015-06-16 17:27:27 -04:00
Joshua Chin
bea8963a79
renamed pretokenize_twitter to tokenize_twitter, and deleted format_twitter
2015-06-16 17:26:52 -04:00
Joshua Chin
aeedb408b7
fixed bugs and removed unused code
2015-06-16 17:25:06 -04:00
Joshua Chin
64644d8ede
changed tokenizer to only strip t.co urls
2015-06-16 16:11:31 -04:00
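A rough illustration of the entry above; only the intent (strip Twitter's t.co shortened links and nothing else) comes from the commit message, and the exact pattern here is an assumption:

```python
import re

# Hypothetical pattern: t.co links are the only URLs Twitter leaves in tweet text.
TCO_URL_RE = re.compile(r'https?://t\.co/\w+')

TCO_URL_RE.sub('', 'check this out https://t.co/AbC123')   # -> 'check this out '
```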
Joshua Chin
b649d45e61
Added codepoints U+10FFFE and U+10FFFF to CLD2_BAD_CHAR_RANGE
2015-06-16 16:03:58 -04:00
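U+10FFFE and U+10FFFF are the last two codepoints in Unicode, the noncharacters at the end of plane 16. Purely as an illustration of what covering them looks like (the real `CLD2_BAD_CHAR_RANGE` also excludes other characters CLD2 refuses to accept; this is not the project's actual code):

```python
# Every plane ends in two noncharacters, U+xxFFFE and U+xxFFFF;
# plane 16 contributes U+10FFFE and U+10FFFF.
PLANE_END_NONCHARACTERS = ''.join(
    chr(plane * 0x10000 + offset)
    for plane in range(17)           # planes 0-16
    for offset in (0xFFFE, 0xFFFF)
)
BAD_CHAR_CLASS = '[' + PLANE_END_NONCHARACTERS + ']'   # usable inside a regex character class
```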
Joshua Chin
a200a0a689
added tests for the tokenizer and language recognizer
2015-06-16 16:00:14 -04:00
Joshua Chin
1cf7e3d2b9
added pycld2 dependency
2015-06-16 15:06:22 -04:00
Joshua Chin
297d981e20
Replaced Rosette with cld2 language recognizer and wordfreq tokenizer
2015-06-16 14:45:49 -04:00
Rob Speer
b78d8ca3ee
ninja2dot: make a graph of the build process
2015-06-15 13:14:32 -04:00
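ninja2dot presumably reads the ninja build file and emits a Graphviz graph of it. A minimal sketch of the idea (the parsing here is simplified and assumes plain `build output: rule inputs...` lines; it is not the actual script):

```python
import sys

def ninja_to_dot(lines):
    """Print a Graphviz digraph with an edge from each input file to each output it builds."""
    print('digraph build {')
    for line in lines:
        line = line.strip()
        if not line.startswith('build '):
            continue
        # A ninja build statement looks like: build <outputs>: <rule> <inputs...>
        # (implicit and order-only dependencies after '|' are ignored in this sketch).
        outputs, _, rest = line[len('build '):].partition(':')
        parts = rest.split('|')[0].split()
        if not parts:
            continue
        rule, inputs = parts[0], parts[1:]
        for output in outputs.split():
            for inp in inputs:
                print('  "%s" -> "%s" [label="%s"];' % (inp, output, rule))
    print('}')

if __name__ == '__main__':
    ninja_to_dot(sys.stdin)
```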
Rob Speer
56d447a825
Reorganize and document some functions
2015-06-15 12:40:31 -04:00
Rob Speer
3d28491f4d
okay, apparently you can't mix code blocks and bullets
2015-06-01 11:39:42 -04:00
Rob Speer
d202474763
is this indented enough for you, markdown
2015-06-01 11:38:10 -04:00
Rob Speer
9927a8c414
add a README
2015-06-01 11:37:19 -04:00
Rob Speer
cbe3513e08
Tokenize Japanese consistently with MeCab
2015-05-27 17:44:58 -04:00
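MeCab, named in the entry above, is the standard Japanese morphological analyzer. A hedged sketch of calling it through the Python bindings (`-Owakati` is MeCab's space-separated output mode; how the builder actually wraps MeCab is not shown in this log). The "give mecab a larger buffer" entry below most likely raises MeCab's input buffer size so long lines are not rejected.

```python
import MeCab

# '-Owakati' produces space-separated tokens (wakati-gaki);
# '-b 1048576' enlarges the input buffer for long lines (illustrative value).
tagger = MeCab.Tagger('-Owakati -b 1048576')

def tokenize_japanese(text):
    """Split Japanese text into tokens with MeCab."""
    return tagger.parse(text).strip().split(' ')

tokenize_japanese('日本語の文章を分かち書きする')
```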
Rob Speer
536c15fbdb
give mecab a larger buffer
2015-05-26 19:34:46 -04:00
Rob Speer
5de81c7111
fix build rules for Japanese Wikipedia
2015-05-26 18:08:57 -04:00
Rob Speer
3d5b3d47e8
fix version in config.py
2015-05-26 18:08:46 -04:00
Rob Speer
ffd352f148
correct a Leeds bug; add some comments to rules.ninja
2015-05-26 18:08:04 -04:00
Rob Speer
50ff85ce19
add Google Books data for English
2015-05-11 18:44:28 -04:00
Rob Speer
c707b32345
move some functions to the wordfreq package
2015-05-11 17:02:52 -04:00
Rob Speer
d0d777ed91
use a more general-purpose tokenizer, not 'retokenize'
2015-05-08 12:40:14 -04:00