Joshua Chin | 8277de2c7f | changed tokenize_file: cld2 returns 'un' instead of None if it cannot recognize the language | 2015-06-17 14:19:28 -04:00
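The commit above relies on CLD2's convention of reporting the code 'un' for text whose language it cannot identify. Below is a minimal sketch of that behavior, assuming the pycld2 binding; the helper name detect_language is illustrative, not the project's actual code.

```python
# Illustrative sketch only: pycld2 reports 'un' for unrecognizable text,
# which the commit above treats as the "unknown language" marker.
import pycld2

def detect_language(text):
    # pycld2.detect returns (isReliable, textBytesFound, details); details
    # is a tuple of (languageName, languageCode, percent, score) tuples.
    _reliable, _bytes_found, details = pycld2.detect(text)
    return details[0][1]  # 'un' when no language is recognized

print(detect_language("Bonjour tout le monde"))  # expected: 'fr'
print(detect_language("1234 ... ???"))           # expected: 'un'
```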
Joshua Chin | b24f31d30a | tokenize_file: don't join tokens if language is None | 2015-06-17 14:18:18 -04:00
Joshua Chin | cde8ac5366 | corrected available_languages to return a dict of strs to strs | 2015-06-17 12:43:13 -04:00
    Former-commit-id: 9f288bac31
Joshua Chin | baca17c2ef | changed yield to yield from in iter_wordlist | 2015-06-17 12:38:31 -04:00
    Former-commit-id: 6cc962bfea
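As a note on the 'yield from' change above, here is a generic illustration of the pattern being described, not the real iter_wordlist body.

```python
# Generic sketch, not the project's actual function: 'yield from'
# delegates to another iterable instead of re-yielding item by item.
def iter_wordlist_old(wordlists):
    for wordlist in wordlists:
        for word in wordlist:   # explicit inner loop
            yield word

def iter_wordlist_new(wordlists):
    for wordlist in wordlists:
        yield from wordlist     # same behavior, one statement

assert list(iter_wordlist_new([["a", "b"], ["c"]])) == ["a", "b", "c"]
```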
Joshua Chin | b5a6fcc03b | updated db_to_freq docstring | 2015-06-17 12:24:23 -04:00
    Former-commit-id: 9b30da4dec
Joshua Chin | 5f7f661a1f | added docstrings | 2015-06-17 12:20:50 -04:00
    Former-commit-id: 7e808bf7c1
Joshua Chin | c5d8bac7d5 | removed temporary variable | 2015-06-17 12:16:02 -04:00
    Former-commit-id: 053e4da3e6
Joshua Chin | 99d97956e6 | automatically closes input file in tokenize_file | 2015-06-17 11:42:34 -04:00
Rob Speer | 855da9974d | Merge pull request #1 from LuminosoInsight/cld2-tokenizer | 2015-06-17 11:33:15 -04:00
    Replaced usage of Rosette with the CLD2 language recognizer and the wordfreq tokenizer
Joshua Chin | e50c0c6917 | updated test to check number parsing | 2015-06-17 11:30:25 -04:00
Joshua Chin | c71e93611b | fixed build process | 2015-06-17 11:25:07 -04:00
Joshua Chin | 8317ea6d51 | updated directory of twitter output | 2015-06-16 17:32:58 -04:00
Joshua Chin | da93bc89c2 | removed intermediate twitter file rules | 2015-06-16 17:28:09 -04:00
Joshua Chin | 87f08780c8 | improved tokenize_file and updated docstring | 2015-06-16 17:27:27 -04:00
Joshua Chin | bea8963a79 | renamed pretokenize_twitter to tokenize_twitter, and deleted format_twitter | 2015-06-16 17:26:52 -04:00
Joshua Chin | aeedb408b7 | fixed bugs and removed unused code | 2015-06-16 17:25:06 -04:00
Joshua Chin | 64644d8ede | changed tokenizer to only strip t.co urls | 2015-06-16 16:11:31 -04:00
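A sketch of the behavior described in the commit above, assuming that only links through Twitter's t.co shortener should be removed; the regex and the helper name strip_tco_urls are assumptions, not taken from the project.

```python
# Illustrative sketch only: strip t.co shortener links from tweet text
# while leaving other URLs alone. The pattern is an assumption.
import re

TCO_URL_RE = re.compile(r"https?://t\.co/\w+")

def strip_tco_urls(tweet):
    return TCO_URL_RE.sub("", tweet)

print(strip_tco_urls("nice read https://t.co/abc123 and http://example.com"))
# -> 'nice read  and http://example.com'
```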
Joshua Chin | b649d45e61 | Added codepoints U+10FFFE and U+10FFFF to CLD2_BAD_CHAR_RANGE | 2015-06-16 16:03:58 -04:00
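CLD2_BAD_CHAR_RANGE is a regex character class of codepoints that CLD2 refuses to accept. Below is a hypothetical sketch of the kind of change the commit above describes, extending such a class with U+10FFFE and U+10FFFF; the real constant covers more codepoints than shown here.

```python
# Hypothetical sketch: a character class of codepoints CLD2 rejects,
# extended with the noncharacters U+10FFFE and U+10FFFF. The project's
# actual CLD2_BAD_CHAR_RANGE covers more codepoints than this.
import re

CLD2_BAD_CHAR_RANGE = "[{}]".format("".join([
    "\ufdd0-\ufdef",          # BMP noncharacter block
    "\ufffe\uffff",           # U+FFFE and U+FFFF
    "\U0010fffe\U0010ffff",   # the two newly added codepoints
]))
CLD2_BAD_CHARS_RE = re.compile(CLD2_BAD_CHAR_RANGE)

def remove_bad_chars(text):
    """Drop characters that would make CLD2 reject its input."""
    return CLD2_BAD_CHARS_RE.sub("", text)

assert remove_bad_chars("ok\U0010ffff") == "ok"
```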
Joshua Chin | a200a0a689 | added tests for the tokenizer and language recognizer | 2015-06-16 16:00:14 -04:00
Joshua Chin | 1cf7e3d2b9 | added pycld2 dependency | 2015-06-16 15:06:22 -04:00
Joshua Chin | 297d981e20 | Replaced Rosette with cld2 language recognizer and wordfreq tokenizer | 2015-06-16 14:45:49 -04:00
Rob Speer | b78d8ca3ee | ninja2dot: make a graph of the build process | 2015-06-15 13:14:32 -04:00
Rob Speer | 56d447a825 | Reorganize and document some functions | 2015-06-15 12:40:31 -04:00
Rob Speer | 3d28491f4d | okay, apparently you can't mix code blocks and bullets | 2015-06-01 11:39:42 -04:00
Rob Speer | d202474763 | is this indented enough for you, markdown | 2015-06-01 11:38:10 -04:00
Rob Speer | 9927a8c414 | add a README | 2015-06-01 11:37:19 -04:00
Rob Speer | 9a46b80028 | clearer error on py2 | 2015-05-28 14:05:11 -04:00
    Former-commit-id: ed19d79c5a
Rob Speer | 51f4e4c826 | add installation instructions to the readme | 2015-05-28 14:02:12 -04:00
    Former-commit-id: 0f4ca80026
Rob Speer | 1f41cb083c | update Japanese data; test Japanese and token combining | 2015-05-28 14:01:56 -04:00
    Former-commit-id: 611a6a35de
Rob Speer | d991373c1d | Work on making Japanese tokenization use MeCab consistently | 2015-05-27 18:10:25 -04:00
    Former-commit-id: 05cf94d1fd
Rob Speer | cbe3513e08 | Tokenize Japanese consistently with MeCab | 2015-05-27 17:44:58 -04:00
Rob Speer | 536c15fbdb | give mecab a larger buffer | 2015-05-26 19:34:46 -04:00
Rob Speer | 5de81c7111 | fix build rules for Japanese Wikipedia | 2015-05-26 18:08:57 -04:00
Rob Speer | 3d5b3d47e8 | fix version in config.py | 2015-05-26 18:08:46 -04:00
Rob Speer | ffd352f148 | correct a Leeds bug; add some comments to rules.ninja | 2015-05-26 18:08:04 -04:00
Rob Speer | e4e146f22f | Merge branch 'master' into newbuild | 2015-05-21 20:41:47 -04:00
    Conflicts: setup.py, wordfreq/build.py, wordfreq/config.py
    Former-commit-id: 0e5156e162
Rob Speer | b807d01f8f | rebuild data | 2015-05-21 20:36:15 -04:00
    Former-commit-id: 84e5edcea1
Rob Speer | a1c31d3390 | remove old tests | 2015-05-21 20:36:09 -04:00
    Former-commit-id: 410912d8f0
Rob Speer | 24a8e5531b | allow more language matches; reorder some parameters | 2015-05-21 20:35:02 -04:00
    Former-commit-id: b42594fa5f
Rob Speer | 5b4107bd1d | tests for new wordfreq with full coverage | 2015-05-21 20:34:17 -04:00
    Former-commit-id: df863a5169
Rob Speer | c953fc1626 | update README, another setup fix | 2015-05-13 04:09:34 -04:00
    Former-commit-id: dd41e61c57
Rob Speer | 5cbc0d0f94 | update dependencies | 2015-05-12 12:30:01 -04:00
    Former-commit-id: f13cca4d81
Rob Speer | 6f61cac4cb | restore missing line in setup.py | 2015-05-12 12:24:18 -04:00
    Former-commit-id: bb18f741e2
Rob Speer | 1c65cb9f14 | add new data files from wordfreq_builder | 2015-05-11 18:45:47 -04:00
    Former-commit-id: 35aec061de
Rob Speer | 50ff85ce19 | add Google Books data for English | 2015-05-11 18:44:28 -04:00
Rob Speer | c707b32345 | move some functions to the wordfreq package | 2015-05-11 17:02:52 -04:00
Rob Speer | 9cd6f7c5c5 | WIP: burn stuff down | 2015-05-08 15:28:52 -04:00
    Former-commit-id: 9b63e54471
Rob Speer | d0d777ed91 | use a more general-purpose tokenizer, not 'retokenize' | 2015-05-08 12:40:14 -04:00
Rob Speer | 35128a94ca | build.ninja knows about its own dependencies | 2015-05-08 12:40:06 -04:00
Rob Speer | d6cc90792f | Makefile should only be needed for bootstrapping Ninja | 2015-05-08 12:39:31 -04:00