deep learning/paper review
-
[review] TRAC: Trustworthy Retrieval Augmented Chatbot (deep learning/paper review, 2024. 3. 25. 21:46)
Original link: [2307.04642] TRAC: Trustworthy Retrieval Augmented Chatbot (arxiv.org) Although conversational AIs have demonstrated fantastic performance, they often generate incorrect information, or hallucinations. Retrieval augmented generation has emerged as a promising solution to reduce these hallucinations. However, these techniques.. * Summa..
-
[review] Addressing fairness in artificial intelligence for medical imaging (deep learning/paper review, 2024. 3. 24. 18:41)
Original link: Addressing fairness in artificial intelligence for medical imaging | Nature Communications * Insight: This review made me question whether an AI that works universally can really exist. To be general in the medical domain, many kinds of diversity are genuinely required: racial diversity, socioeconomic diversity, age diversity, and more. And because this is a field that performs anomaly detection based on the characteristics of individual subjects, I came to realize that generality itself needs to be defined differently here. In particular, to overcome the biases held by LLMs and the biases they can cause in downstream tasks, simple debiasing..
-
[review] What Constitutes a Faithful Summary? Preserving Author Perspectives in News Summarization (deep learning/paper review, 2024. 3. 21. 18:31)
Original link: [2311.09741] What Constitutes a Faithful Summary? Preserving Author Perspectives in News Summarization (arxiv.org) In this work, we take a first step towards designing summarization systems that are faithful to the author's opinions and perspectives. Focusing on a case study of preserving political ..
-
[review] Do Androids Laugh at Electric Sheep? Humor "Understanding" Benchmarks from The New Yorker Caption Contest (deep learning/paper review, 2024. 3. 21. 18:31)
Original link: [2209.06293] Do Androids Laugh at Electric Sheep? Humor "Understanding" Benchmarks from The New Yorker Caption Contest (arxiv.org) Large neural networks can now generate jokes, but do they really "understand" humor? We challenge AI models with three tasks derived from the New Yorker ..
-
[review] Very Deep Convolutional Networks for Large-Scale Image Recognition (deep learning/paper review, 2024. 3. 20. 00:52)
Original link: [1409.1556] Very Deep Convolutional Networks for Large-Scale Image Recognition (arxiv.org) In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architec..
-
[review] Neural Machine Translation by Jointly Learning to Align and Translate (deep learning/paper review, 2024. 3. 20. 00:36)
Original link: [1409.0473] Neural Machine Translation by Jointly Learning to Align and Translate (arxiv.org) * Background concepts for a quick understanding of Attention: 2024.03.20 - [deep learning/natural language process] - Attention Background. 1. Positional Encoding. Definition: an encoding method that provides positional information to the input sequence of a model that does not take order into account. Purpose: to let the model understand word order and produce more accurate output.. (ainow.tistory.com) * Key summary: RNN Encoder-Deco..
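The positional-encoding definition in the excerpt above can be sketched concretely. This is a minimal sketch of the standard sinusoidal positional encoding from the Transformer literature, not code from the linked post; the function name and dimensions are illustrative assumptions.

```python
import math

def positional_encoding(seq_len, d_model):
    """Return a seq_len x d_model table of sinusoidal position vectors.

    Even dimensions use sin, odd dimensions use cos, with wavelengths
    that grow geometrically with the dimension index, so each position
    in the sequence gets a unique, order-aware vector.
    """
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)        # even dimension
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)  # odd dimension
    return pe

pe = positional_encoding(4, 8)
# position 0 encodes as sin(0)=0 in even dims and cos(0)=1 in odd dims
print(pe[0][:4])  # [0.0, 1.0, 0.0, 1.0]
```

In practice this table is simply added to the token embeddings before the encoder, which is how an order-agnostic model gains word-order information.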
-
[review] Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings (deep learning/paper review, 2024. 3. 20. 00:25)
Original link: [1607.06520] Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings (arxiv.org) The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which..
-
[review] From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models (deep learning/paper review, 2024. 3. 17. 20:18)
Original link: [2305.08283] From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models (arxiv.org) 0. Abstract: Identify the inherent social biases embedded in language models (LMs) pretrained on a variety of media, determine whether, and to what extent, social or political bias arises in the downstream tasks performed with these models, and thereby understand the ripple effects of the biases that LMs inherently acquire. 1. Introduction: In this paper, an LM trained on naturally occurring media bias..