FreeBSD.software

py311-tokenizer

3.5.2

Tokenizer for Icelandic text

Tokenizer: a tokenizer for Icelandic text. Tokenization is a necessary first step in many natural language processing tasks, such as word counting, parsing, spell checking, corpus generation, and statistical analysis of text.
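The real package applies Icelandic-specific rules (abbreviations, dates, amounts, and so on); as a rough illustration of what tokenization means here, a naive regex-based splitter can show the idea. `simple_tokenize` is a hypothetical helper for this sketch, not part of the py-tokenizer API:

```python
import re

# Illustrative sketch only, NOT the py-tokenizer implementation:
# split text into word and punctuation tokens, a typical first step
# before word counting, parsing, or spell checking.
TOKEN_RE = re.compile(r"\w+|[^\w\s]")

def simple_tokenize(text):
    """Return a list of word and punctuation tokens."""
    return TOKEN_RE.findall(text)

# Icelandic example: "Halló, heimur!" ("Hello, world!")
print(simple_tokenize("Halló, heimur!"))  # ['Halló', ',', 'heimur', '!']
```

The installed package exposes a far more capable tokenizer that classifies tokens (words, numbers, dates, punctuation) rather than just splitting on a pattern.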

Origin: textproc/py-tokenizer
Category: textproc
Size: 625KiB
License: MIT
Maintainer: otis@FreeBSD.org
Dependencies: 1 package
Required by: 0 packages
$ pkg install py311-tokenizer

Dependencies (1)
