Author | Commit | Message | Date
Joshua Chin | 04bf6aadcc | updated monolingual_tokenize_file docstring, and removed unused argument | 2015-06-18 10:20:54 -04:00
Joshua Chin | 91dd73a2b5 | tokenize_file should ignore lines with unknown languages | 2015-06-18 10:18:57 -04:00
Joshua Chin | ffc01c75a0 | Fixed CLD2_BAD_CHAR regex | 2015-06-18 10:18:00 -04:00
Joshua Chin | 8277de2c7f | changed tokenize_file: cld2 returns 'un' instead of None if it cannot recognize the language | 2015-06-17 14:19:28 -04:00
Joshua Chin | b24f31d30a | tokenize_file: don't join tokens if language is None | 2015-06-17 14:18:18 -04:00
Joshua Chin | 99d97956e6 | automatically closes input file in tokenize_file | 2015-06-17 11:42:34 -04:00
Rob Speer | 855da9974d | Merge pull request #1 from LuminosoInsight/cld2-tokenizer: Replaced usage of Rosette with the CLD2 language recognizer and the wordfreq tokenizer | 2015-06-17 11:33:15 -04:00
Joshua Chin | e50c0c6917 | updated test to check number parsing | 2015-06-17 11:30:25 -04:00
Joshua Chin | c71e93611b | fixed build process | 2015-06-17 11:25:07 -04:00
Joshua Chin | 8317ea6d51 | updated directory of twitter output | 2015-06-16 17:32:58 -04:00
Joshua Chin | da93bc89c2 | removed intermediate twitter file rules | 2015-06-16 17:28:09 -04:00
Joshua Chin | 87f08780c8 | improved tokenize_file and updated docstring | 2015-06-16 17:27:27 -04:00
Joshua Chin | bea8963a79 | renamed pretokenize_twitter to tokenize_twitter, and deleted format_twitter | 2015-06-16 17:26:52 -04:00
Joshua Chin | aeedb408b7 | fixed bugs and removed unused code | 2015-06-16 17:25:06 -04:00
Joshua Chin | 64644d8ede | changed tokenizer to only strip t.co urls | 2015-06-16 16:11:31 -04:00
Joshua Chin | b649d45e61 | Added codepoints U+10FFFE and U+10FFFF to CLD2_BAD_CHAR_RANGE | 2015-06-16 16:03:58 -04:00
Joshua Chin | a200a0a689 | added tests for the tokenizer and language recognizer | 2015-06-16 16:00:14 -04:00
Joshua Chin | 1cf7e3d2b9 | added pycld2 dependency | 2015-06-16 15:06:22 -04:00
Joshua Chin | 297d981e20 | Replaced Rosette with cld2 language recognizer and wordfreq tokenizer | 2015-06-16 14:45:49 -04:00
Rob Speer | b78d8ca3ee | ninja2dot: make a graph of the build process | 2015-06-15 13:14:32 -04:00
Rob Speer | 56d447a825 | Reorganize and document some functions | 2015-06-15 12:40:31 -04:00
Rob Speer | 3d28491f4d | okay, apparently you can't mix code blocks and bullets | 2015-06-01 11:39:42 -04:00
Rob Speer | d202474763 | is this indented enough for you, markdown | 2015-06-01 11:38:10 -04:00
Rob Speer | 9927a8c414 | add a README | 2015-06-01 11:37:19 -04:00
Rob Speer | cbe3513e08 | Tokenize Japanese consistently with MeCab | 2015-05-27 17:44:58 -04:00
Rob Speer | 536c15fbdb | give mecab a larger buffer | 2015-05-26 19:34:46 -04:00
Rob Speer | 5de81c7111 | fix build rules for Japanese Wikipedia | 2015-05-26 18:08:57 -04:00
Rob Speer | 3d5b3d47e8 | fix version in config.py | 2015-05-26 18:08:46 -04:00
Rob Speer | ffd352f148 | correct a Leeds bug; add some comments to rules.ninja | 2015-05-26 18:08:04 -04:00
Rob Speer | 50ff85ce19 | add Google Books data for English | 2015-05-11 18:44:28 -04:00
Rob Speer | c707b32345 | move some functions to the wordfreq package | 2015-05-11 17:02:52 -04:00
Rob Speer | d0d777ed91 | use a more general-purpose tokenizer, not 'retokenize' | 2015-05-08 12:40:14 -04:00
Rob Speer | 35128a94ca | build.ninja knows about its own dependencies | 2015-05-08 12:40:06 -04:00
Rob Speer | d6cc90792f | Makefile should only be needed for bootstrapping Ninja | 2015-05-08 12:39:31 -04:00
Rob Speer | b541fe68e1 | Merge branch 'ninja-build' (conflicts: wordfreq_builder/cmd_count_twitter.py, wordfreq_builder/cmd_count_wikipedia.py) | 2015-05-08 00:01:01 -04:00
Rob Speer | 2f14417bcf | limit final builds to languages with >= 2 sources | 2015-05-07 23:59:04 -04:00
Rob Speer | 1b7a2b9d0b | fix dependency | 2015-05-07 23:55:57 -04:00
Rob Speer | abb0e059c8 | a reasonably complete build process | 2015-05-07 19:38:33 -04:00
Rob Speer | 02d8b32119 | process leeds and opensubtitles | 2015-05-07 17:07:33 -04:00
Rob Speer | 7e238cf547 | abstract how we define build rules a bit | 2015-05-07 16:59:28 -04:00
Rob Speer | d2f9c60776 | WIP on more build steps | 2015-05-07 16:49:53 -04:00
Rob Speer | 16928ed182 | add rules to count wikipedia tokens | 2015-05-05 15:21:24 -04:00
Rob Speer | bd579e2319 | fix the 'count' ninja rule | 2015-05-05 14:06:13 -04:00
Rob Speer | 5787b6bb73 | add and adjust some build steps: more build steps for Wikipedia; rename 'tokenize_twitter' to 'pretokenize_twitter' to indicate that the results are preliminary | 2015-05-05 13:59:21 -04:00
Rob Speer | 61b9440e3d | add wiki-parsing process | 2015-05-04 13:25:01 -04:00
Rob Speer | 34400de35a | not using wordfreq.cfg anymore | 2015-04-30 16:25:42 -04:00
Rob Speer | 5437bb4e85 | WIP on new build system | 2015-04-30 16:24:28 -04:00
Rob Speer | 2a1b16b55c | use script codes for Chinese | 2015-04-30 13:02:58 -04:00
Rob Speer | 4dae2f8caf | define some ninja rules | 2015-04-29 17:13:58 -04:00
Rob Speer | 14e445a937 | WIP on Ninja build automation | 2015-04-29 15:59:06 -04:00
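Several of the commits above (ffc01c75a0, 8277de2c7f, 91dd73a2b5, 99d97956e6, b649d45e61) revolve around the same pattern: strip characters that CLD2 rejects, detect each line's language with pycld2, and skip lines whose language comes back as 'un' (unknown). The following is a minimal illustrative sketch of that pattern, not code from this repository; the function names, the contents of the bad-character regex, and the `language_tokenizer` parameter are assumptions made for the example.

```python
# Hypothetical sketch, not the repository's tokenize_file implementation.
import re
import pycld2

# CLD2 rejects certain codepoints, so strip them before detection.
# This character class is an illustrative guess, not the actual
# CLD2_BAD_CHAR regex referenced in the commits above.
CLD2_BAD_CHARS = re.compile(
    '[\x00-\x08\x0b\x0e-\x1f\x7f'
    '\ufdd0-\ufdef\ufffe\uffff'
    '\U0010fffe\U0010ffff]'
)

def detect_language(text):
    """Return CLD2's best-guess language code, or 'un' if unrecognized."""
    cleaned = CLD2_BAD_CHARS.sub('', text)
    _reliable, _bytes_found, details = pycld2.detect(cleaned)
    return details[0][1]  # e.g. 'en', 'ja', or 'un' for unknown

def tokenize_file_sketch(infile, language_tokenizer):
    """Yield (language, tokens) per line, ignoring unrecognized languages."""
    with infile:  # mirrors the 'automatically closes input file' commit
        for line in infile:
            lang = detect_language(line)
            if lang == 'un':
                continue  # skip lines with unknown languages
            yield lang, language_tokenizer(line, lang)
```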