BESKlus : BERT Extractive Summarization with K-Means Clustering in Scientific Paper

Main Article Content

Feliks Victor Parningotan Samosir
Hapnes Toba
Mewati Ayub

Abstract

This study proposes a method and model for extractive text summarization with contextual embeddings. The model combines a traditional machine learning algorithm, K-Means clustering, with a recent BERT-based architecture, Sentence-BERT (SBERT). Contextual embedding is performed at the sentence level by SBERT; the embedded sentences are then clustered, and each sentence's distance from its cluster centroid is computed. The sentences closest to each centroid become summary candidates. The dataset used in this study is a collection of scientific papers from NeurIPS. Performance evaluation with ROUGE-L yields a score of 15.52% and a BERTScore of 85.55%, surpassing several previous models such as PyTextRank and BERT Extractive Summarizer. These results show that contextual embeddings are well suited to extractive text summarization, which is generally performed at the sentence level.
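The pipeline described in the abstract (embed each sentence, cluster the embeddings, pick the sentence nearest each centroid) can be sketched as follows. This is a minimal illustration, not the authors' implementation: in the paper the embeddings come from SBERT, while here hypothetical fixed toy vectors stand in for SBERT output, and a small k-means with deterministic initialization is implemented using only the standard library.

```python
import math

def dist(a, b):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(vectors, k, iters=20):
    """Plain k-means with deterministic init (first k points as centroids)."""
    centroids = [list(v) for v in vectors[:k]]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: attach each vector to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for i, v in enumerate(vectors):
            nearest = min(range(k), key=lambda c: dist(v, centroids[c]))
            clusters[nearest].append(i)
        # Update step: move each centroid to the mean of its members.
        for c, idxs in enumerate(clusters):
            if idxs:  # leave an empty cluster's centroid unchanged
                dim = len(vectors[0])
                centroids[c] = [sum(vectors[i][d] for i in idxs) / len(idxs)
                                for d in range(dim)]
    return centroids, clusters

def summarize(sentences, embeddings, k):
    """For each cluster, pick the sentence whose embedding lies closest
    to the centroid; return the picks in original document order."""
    centroids, clusters = kmeans(embeddings, k)
    picked = []
    for c, idxs in enumerate(clusters):
        if idxs:
            picked.append(min(idxs,
                              key=lambda i: dist(embeddings[i], centroids[c])))
    return [sentences[i] for i in sorted(picked)]

# Toy example: 2-D stand-ins for sentence embeddings, two clusters.
sentences = ["Intro sentence.", "Intro detail.", "Result sentence.", "Result detail."]
embeddings = [[0.0, 0.0], [1.0, 0.0], [5.0, 5.0], [6.0, 5.0]]
summary = summarize(sentences, embeddings, k=2)
```

With real SBERT vectors, the toy `embeddings` list would be replaced by the output of an encoder such as `model.encode(sentences)` from the sentence-transformers package (an assumption about tooling; this page does not describe the authors' code).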


Article Details

How to Cite
[1]
F. V. P. Samosir, H. Toba, and M. Ayub, “BESKlus: BERT Extractive Summarization with K-Means Clustering in Scientific Paper”, JuTISI, vol. 8, no. 1, pp. 202–, Apr. 2022.
Section
Articles
