
Corpus Processor


Corpus Processor is a tool for working with corpus linguistics. It converts corpora between different formats for use in Natural Language Processing (NLP) tools.

The first purpose of Corpus Processor, and its only feature implemented so far, is to transform corpora found in Linguateca into the format used to train the Stanford NER.

Linguateca is a source of corpora in Portuguese.

Stanford NER is an implementation of Named Entity Recognition.

Installation

Corpus Processor is a Ruby Gem. To install it, given a working installation of Ruby, run:

$ gem install corpus_processor

Usage

Convert a corpus from the LâMPADA 2.0 format to the Stanford NER format:

$ corpus-processor process [INPUT_FILE [OUTPUT_FILE]]
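
For example, assuming the LâMPADA corpus has been downloaded to a file named colHAREM.xml (a hypothetical name), the following writes the converted corpus to ner-pt_br.training.txt:

$ corpus-processor process colHAREM.xml ner-pt_br.training.txt

As the brackets in the synopsis indicate, both file arguments are optional.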

The classes recognized by default by Corpus Processor are PESSOA (person), LOCAL (location), and ORGANIZACAO (organization). To configure other classes, see the configuration file at lib/corpus-processor/categories/default.yml.
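
One way to locate the installed copy of that file, assuming a standard RubyGems setup, is to list the gem's files:

$ gem contents corpus_processor | grep categories/default.yml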

To run with different configurations, consult the options with:

$ corpus-processor help process

Results

The results of using Corpus Processor with a corpus from the LâMPADA 2.0 / Classic HAREM 2.0 Golden Collection, available from Linguateca, are in this directory:

  • ner-pt_br.training.txt: The corpus from Linguateca converted with Corpus Processor to Stanford NER training format.
  • ner-pt_br.training-partial.txt: The first 95% of the corpus in ner-pt_br.training.txt, used to train Stanford NER for the accuracy test.
  • ner-pt_br.test.txt: The last 5% of the corpus in ner-pt_br.training.txt, used to test the language model.
  • ner-pt_br.prop: The property file, in Stanford NER's format, used to set up training with the whole ner-pt_br.training.txt (a training sketch follows this list).
  • ner-pt_br.partial.prop: The property file, in Stanford NER's format, used to set up training with the partial ner-pt_br.training-partial.txt.
  • ner-pt_br.ser.gz: The resulting language model for Stanford NER trained with ner-pt_br.training.txt.
  • ner-pt_br.ser-partial.gz: The resulting language model for Stanford NER trained with ner-pt_br.training-partial.txt.
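
As a rough sketch of how these files fit together, assuming the Stanford NER distribution's stanford-ner.jar is available locally (paths and versions vary), training and testing look roughly like this:

$ java -cp stanford-ner.jar edu.stanford.nlp.ie.crf.CRFClassifier -prop ner-pt_br.partial.prop
$ java -cp stanford-ner.jar edu.stanford.nlp.ie.crf.CRFClassifier -loadClassifier ner-pt_br.ser-partial.gz -testFile ner-pt_br.test.txt

The second command prints per-entity precision and recall figures like the ones below.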

The performance of the language model on the test set is:

CRFClassifier tagged 4450 words in 1 documents at 3632.65 words per second.
         Entity P       R       F1      TP      FP      FN
       LOCATION 0.5667  0.3953  0.4658  17      13      26
   ORGANIZATION 0.4531  0.2500  0.3222  29      35      87
         PERSON 0.5333  0.7442  0.6214  32      28      11
         Totals 0.5065  0.3861  0.4382  78      76      124
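
Here P, R, and F1 are precision, recall, and F1 score, computed from the true positive (TP), false positive (FP), and false negative (FN) counts. For the Totals row, for example, P = 78 / (78 + 76) ≈ 0.5065, R = 78 / (78 + 124) ≈ 0.3861, and F1 = 2PR / (P + R) ≈ 0.4382.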

This performance is poor compared with other work on the topic, but it has served our purposes well. We will keep trying to improve it.

Suggestions are welcome in this regard.


Note that the transformation performed by Corpus Processor discards a lot of information from the annotated corpus. The corpora used in this process are very rich in annotations; to take full advantage of them, consider using one of the tools found at Linguateca.

Further information about the subject can be found in the following sources:

Diana Santos. "O modelo semântico usado no Primeiro HAREM". In Diana Santos & Nuno Cardoso (eds.), Reconhecimento de entidades mencionadas em português: Documentação e actas do HAREM, a primeira avaliação conjunta na área. Linguateca, 2007, pp. 43-57.
http://www.linguateca.pt/aval_conjunta/LivroHAREM/Cap04-SantosCardoso2007-Santos.pdf

Diana Santos. "Evaluation in natural language processing". European Summer School in Logic, Language and Information (ESSLLI 2007), Trinity College, Dublin, Ireland, 6-17 August 2007.

Read more about the process of training.

Thanks

Contributing

  1. Fork it.
  2. Create your feature branch (git checkout -b my-new-feature).
  3. Commit your changes (git commit -am 'Add some feature').
  4. Push to the branch (git push origin my-new-feature).
  5. Create a new Pull Request.

Changelog

0.3.0

  • Stopped using regular expressions for parsing and started using Nokogiri (see the sketch after this list).
  • Fixed missing punctuation.
  • Fixed inconsistencies in tagging. The issue was caused by <ALT> tags.
  • Accepted category definitions from users.
  • Added several code-quality measures.
  • Added documentation.
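
A minimal sketch of the kind of DOM-based extraction this change enables, using a hypothetical HAREM-style fragment (the real LâMPADA/HAREM markup is considerably richer):

require "nokogiri"

# Hypothetical fragment; entity annotations use EM elements with a CATEG attribute.
xml = '<P>O <EM CATEG="PESSOA">Machado de Assis</EM> nasceu no <EM CATEG="LOCAL">Rio de Janeiro</EM>.</P>'

fragment = Nokogiri::XML.fragment(xml)
fragment.xpath(".//EM").each do |em|
  # Print each annotated entity together with its category.
  puts "#{em.text}\t#{em['CATEG']}"
end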

0.2.0

  • Renamed Harem to LâMPADA, as requested by Linguateca's team.

0.0.1

License

Copyright (c) 2013 Das Dad

MIT License

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.