
데이터베이스 연구회지 (SIGDB)


Korean Title: 딥러닝의 효율적인 학습을 위한 학습데이터 선별 기법에 관한 연구
English Title: A Study on Training Set Selection Techniques for Efficient Deep Learning
Author(s): Gyeong-don Joo (주경돈), Chulyun Kim (김철연), Ok-Ran Jeong (정옥란)
Citation: Vol. 33, No. 1, pp. 37-47 (Apr. 2017)
Korean Abstract
빅데이터의 증가와 병렬 하드웨어 처리, 학습 알고리즘의 개선은 딥러닝의 발전을 가져왔으며 인공지능이 적용 가능한 영역을 획기적으로 확장시키고 있다. 하지만 이러한 대규모의 학습 데이터에 기반한 딥러닝은 학습시간이 증가되고, 더 나아가 학습 데이터의 구축을 위한 레이블링(labeling) 비용이 데이터 수에 비례하여 증가하게 된다. 따라서 본 논문에서는, 딥러닝의 정확도를 최대한 유지하면서 필요한 학습 데이터의 수를 줄일 수 있도록, 전체 데이터에서 제한된 수의 학습 데이터 후보를 선별하는 기법에 대하여 연구한다. 특히 기존의 능동 학습에 기반한 기법들이 학습 데이터의 수가 매우 적어질 경우 딥러닝의 정확도가 급격히 떨어지는 문제를 해결할 수 있는 새로운 기법을 제안하였으며, 제안된 기법으로 선별된 학습 데이터는 그 수의 변화에 대해 딥러닝이 일관되게 경쟁력 있는 정확도를 가짐을 다양한 실험을 통하여 검증하였다.
English Abstract
The growth of big data, parallel hardware processing, and improved learning algorithms has driven the development of deep learning and is dramatically expanding the range of applications of artificial intelligence. However, such large-scale training data lengthens training time, and furthermore, the labeling cost of building the training set grows in proportion to the volume of data. Therefore, in this paper we study how to select a limited number of training-set candidates from the given data, so as to reduce the size of the training set while maintaining the accuracy of deep learning. In particular, we found that with training sets selected by previous active-learning-based methods, the accuracy of deep learning drops sharply when the training set is very small, and we propose new methods that resolve this fall-off in accuracy. Through experiments, we show that deep learning consistently achieves competitive accuracy with the training sets selected by the proposed method, regardless of training-set size.
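The abstract contrasts the proposed technique with existing active-learning-based selection. As background only, the sketch below shows one common active-learning baseline, least-confidence sampling, which picks the unlabeled samples the current model is least sure about so that they can be labeled first. This is a generic illustration, not the selection method proposed in the paper; the function name, the budget parameter, and the toy softmax data are assumptions made for the example.

```python
import numpy as np

def least_confidence_selection(probs: np.ndarray, budget: int) -> np.ndarray:
    """Return indices of the `budget` samples whose top predicted probability
    is lowest, i.e. where the current model is least confident.
    (Illustrative active-learning baseline, not the paper's proposed method.)"""
    top_prob = probs.max(axis=1)          # confidence in the predicted class
    return np.argsort(top_prob)[:budget]  # least-confident samples first

# Toy usage: fake softmax outputs for 1,000 unlabeled samples over 10 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
to_label = least_confidence_selection(probs, budget=50)  # candidates sent for labeling
print(to_label[:10])
```

In such a baseline, samples near the decision boundary are labeled first; according to the abstract, the paper's contribution targets the regime where this kind of selection becomes unreliable because the labeled training set is extremely small.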
Keywords: Deep Learning (딥러닝), Training Set Selection (학습 데이터 선별), Active Learning (능동 학습), Gamma Distribution (감마 분포)
File Attachment: PDF download