Journal of KIISE (정보과학회논문지)


Korean Title (translated): Eojeol-Unit Korean Dependency Parsing Using a Self-Attention Head Recognition Model
English Title: Korean Dependency Parsing using the Self-Attention Head Recognition Model
Authors: Joon-Ho Lim (임준호), Hyun-ki Kim (김현기)
Citation: Vol. 46, No. 1, pp. 22-30 (Jan. 2019)
Abstract
Dependency parsing is the problem of resolving structural ambiguities in natural-language sentences. Recently, various deep learning techniques have been applied to it and have shown high performance. In this paper, we analyze deep-learning-based dependency parsing in three stages. The first stage is the representation step for the word (eojeol), the unit of dependency parsing. The second stage is a context-encoding step that incorporates the surrounding-word information for each word. The last stage is the head-word and dependency-label recognition step. For word representation, we propose a max-pooling method of the kind widely used in CNN models. For contextual representation, we apply the Minimal-RNN Unit, which has lower computational complexity than the LSTM and GRU. Finally, we propose a self-attention head recognition model that incorporates a relative-distance embedding between words for head-word recognition, and we apply multi-task learning so that dependency-label recognition is trained simultaneously. For the evaluation, the SEJONG phrase-structure parsing corpus was converted according to the TTA Standard Dependency Guideline. The proposed model achieved a parsing accuracy of 93.38% UAS and 90.42% LAS.
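The abstract describes a three-stage pipeline: max-pooled embeddings form each eojeol vector, a Minimal-RNN encoder adds context, and a self-attention scorer with a relative-distance embedding picks each eojeol's head while a jointly trained classifier assigns the dependency label. The following is a minimal PyTorch sketch of that pipeline; the MinimalRNN update equations, the way the distance bias enters the attention score, the pairing of dependent and head states for labeling, and all layer names and sizes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MinimalRNNCell(nn.Module):
    # Assumed MinimalRNN update:
    #   z_t = tanh(Phi x_t)
    #   u_t = sigmoid(U [h_{t-1}; z_t])
    #   h_t = u_t * h_{t-1} + (1 - u_t) * z_t
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.phi = nn.Linear(input_dim, hidden_dim)
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, x, h):
        z = torch.tanh(self.phi(x))
        u = torch.sigmoid(self.gate(torch.cat([h, z], dim=-1)))
        return u * h + (1.0 - u) * z

class HeadRecognitionParser(nn.Module):
    def __init__(self, n_morphs, n_labels, morph_dim=100, hidden=200, max_dist=40):
        super().__init__()
        self.hidden = hidden
        self.max_dist = max_dist
        self.morph_emb = nn.Embedding(n_morphs, morph_dim, padding_idx=0)
        self.ctx = MinimalRNNCell(morph_dim, hidden)
        self.q = nn.Linear(hidden, hidden)  # dependent-side projection
        self.k = nn.Linear(hidden, hidden)  # head-side projection
        # one learned bias per clipped relative distance (-max_dist .. +max_dist)
        self.dist_bias = nn.Embedding(2 * max_dist + 1, 1)
        self.label_clf = nn.Linear(2 * hidden, n_labels)  # multi-task label head

    def forward(self, morphs):
        # morphs: [T, M] morpheme ids for each of the T eojeols in one sentence
        T = morphs.size(0)
        # Stage 1: eojeol vector = max-pooling over its morpheme embeddings
        eoj = self.morph_emb(morphs).max(dim=1).values            # [T, morph_dim]
        # Stage 2: contextual encoding with the MinimalRNN cell
        h = eoj.new_zeros(self.hidden)
        states = []
        for t in range(T):
            h = self.ctx(eoj[t], h)
            states.append(h)
        H = torch.stack(states)                                   # [T, hidden]
        # Stage 3: self-attention head scores with a relative-distance bias
        scores = self.q(H) @ self.k(H).t() / self.hidden ** 0.5   # [T, T]
        pos = torch.arange(T)
        rel = (pos[None, :] - pos[:, None]).clamp(-self.max_dist, self.max_dist)
        scores = scores + self.dist_bias(rel + self.max_dist).squeeze(-1)
        # an eojeol cannot be its own head
        scores = scores.masked_fill(torch.eye(T, dtype=torch.bool), float("-inf"))
        head_logp = F.log_softmax(scores, dim=-1)                 # P(head | dependent)
        # Multi-task labeling: classify the (dependent, head) state pair.
        # At train time the gold head, not the argmax, would normally be used.
        pred_head = head_logp.argmax(dim=-1)                      # [T]
        pair = torch.cat([H, H[pred_head]], dim=-1)               # [T, 2*hidden]
        return head_logp, self.label_clf(pair)
```

At train time the two negative log-likelihoods (head and label) would simply be summed, which is one common way to realize the multi-task learning the abstract mentions. For the reported metrics, UAS counts the eojeols whose head is predicted correctly, while LAS additionally requires the dependency label to be correct.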
Keywords: dependency parsing, self-attention, deep learning, natural language processing