Tokenizer for Icelandic text
Tokenizer is a tokenizer for Icelandic text. Tokenization is a necessary first step in many natural language processing tasks, such as word counting, parsing, spell checking, corpus generation, and statistical analysis of text.
$ pkg install py311-tokenizer

Origin:       textproc/py-tokenizer
Size:         625KiB
License:      MIT
Maintainer:   otis@FreeBSD.org
Dependencies: 1 package
Required by:  0 packages