NLP Reading Group

Tehran Institute for Advanced Studies (TeIAS)

Moderator: Mohammad Taher Pilehvar

Wednesdays, 11am-12pm (online on Teams)

Contact the moderator if you are interested in joining the group.


Winter 2023

Date Presenters Topic / Paper
1 February Majid Zarharan
25 January Mohammad AkbarTajari, Fereidoon Mehri
18 January Fereidoon Mehri
  • Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection, ACL 2020.
11 January Sadegh Jafari
  • Improving Neural Language Generation with Spectrum Control, ICLR 2020.
4 January Mohammad AkbarTajari
  • Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation, ICLR 2022.
28 December Fereidoon Mehri
  • Discovering Latent Knowledge in Language Models Without Supervision, ICLR 2023 submission.


Fall 2022

Date Presenters Topic / Paper
21 December Mohammad AkbarTajari
  • Incorporating Residual and Normalization Layers into Analysis of Masked Language Models, EMNLP 2021.
7 December Mohammad AkbarTajari
  • Attention is Not Only a Weight: Analyzing Transformers with Vector Norms, EMNLP 2020.
30 November Fereidoon Mehri
  • Chain of Thought Prompting Elicits Reasoning in Large Language Models, NeurIPS 2022.
23 November Sara Rezaeimanesh
  • Is Sparse Attention More Interpretable?, ACL 2021.
16 November Sadegh Jafari
  • Temporal Analysis of Language through Neural Language Models, ACL 2014.
9 November Fereidoon Mehri
  • PEER: A Collaborative Language Model, ICLR 2023 submission.
2 November Mahdi Zakizadeh, Sara Rezaeimanesh
  • DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference, ACL 2020.
  • Metaphor Generation with Conceptual Mapping, ACL 2021.
19 October Zahra Delbari
  • Understanding Dataset Difficulty with V-Usable Information, ICML 2022.


Summer 2022

Date Presenters Topic / Paper
31 August Mahdi Zakizadeh
  • Exploring Gender Bias in Retrieval Models, Arxiv 2022.
24 August Zahra Delbari
  • DACT-BERT: Differentiable Adaptive Computation Time for an Efficient BERT Inference, NLP Power! 2022.
  • DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference, ACL 2020.
3 August Mahdi Zakizadeh
  • Don't Forget About Pronouns: Removing Gender Bias in Language Models Without Losing Factual Gender Information, GeBNLP 2022.
27 July Zahra Delbari
  • Compacter: Efficient Low-Rank Hypercomplex Adapter Layers, NeurIPS 2021.
13 July Zahra Delbari
  • A Flexible Multi-Task Model for BERT Serving, ACL 2022.
6 July Kaveh Eskandari
  • Perturbation Augmentation for Fairer NLP, Arxiv 2022.
29 June Mehrdad Nasser, Mahdi Zakizadeh
  • Structured Prediction as Translation between Augmented Natural Languages, Arxiv 2021.
  • Fewer Errors, but More Stereotypes? The Effect of Model Size on Gender Bias, GeBNLP 2022.


Spring 2022

Date Presenters Topic / Paper
1 June Maryam Sadat Hashemi
  • Worst of Both Worlds: Biases Compound in Pre-trained Vision-and-Language Models, GeBNLP 2022.
25 May ACL 2022 --
18 May Mahdi Zakizadeh
  • How Gender Debiasing Affects Internal Model Representations, and Why It Matters.
11 May Kaveh Eskandari
  • OPT: Open Pre-trained Transformer Language Models.
4 May --
27 April Mahdi Zakizadeh
  • Towards a Unified View of Parameter-Efficient Transfer Learning.
20 April Mehrdad Nasser, Kaveh Eskandari
  • Transformer Memory as a Differentiable Search Index.
  • BRIO: Bringing Order to Abstractive Summarization.
13 April Mohammad Hossein Khojasteh, Maryam Sadat Hashemi
  • Decomposing Complex Questions Makes Multi-Hop QA Easier and More Interpretable.
  • Vision-and-Language or Vision-for-Language? On Cross-Modal Influence in Multimodal Transformers.
6 April Sara Rajaee
  • Efficient One-Pass End-to-End Entity Linking for Questions.


Winter 2021/2022

Date Presenters Topic / Paper
16 March Mahdi Zakizadeh, Mehrdad Nasser
  • Multitask Prompted Training Enables Zero-Shot Task Generalization.
  • Efficient One-Pass End-to-End Entity Linking for Questions.
9 March Mehrdad Nasser, Kaveh Eskandari
  • Improving Multi-hop Question Answering over Knowledge Graphs using Knowledge Base Embeddings.
  • Challenges in Detoxifying Language Models.
2 March Houman Mehrafarin, Maryam Sadat Hashemi
  • Negated and Misprimed Probes for Pretrained Language Models: Birds Can Talk, But Cannot Fly.
  • Decoupling the Role of Data, Attention, and Losses in Multimodal Transformers.
23 February Mohammad Hossein Khojasteh, Mohammad Ali Modarressi
  • Reinforced History Backtracking for Conversational Question Answering.
  • What do Compressed Large Language Models Forget? Robustness Challenges in Model Compression.
16 February Mohammad Hossein Khojasteh, Mahdi Zakizadeh
  • QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering.
  • Assessing the Reliability of Word Embedding Gender Bias Measures.
9 February Kaveh Eskandari, Sara Rajaee
  • Do Transformers Encode a Foundational Ontology? Probing Abstract Classes in Natural Language.
  • Information-Theoretic Measures of Dataset Difficulty.
2 February Zahra Dehghani, Houman Mehrafarin
  • Don't Search for a Search Method—Simple Heuristics Suffice for Adversarial Text Attacks.
  • Contributions of Transformer Attention Heads in Multi- and Cross-lingual Tasks.
26 January Mehrdad Nasser, Kaveh Eskandari
  • REALM: Retrieval-Augmented Language Model Pre-Training.
  • Simple, Interpretable and Stable Method for Detecting Words with Usage Change Across Corpora.
19 January Maryam Sadat Hashemi, Amin Pourdabiri
  • A Good Prompt Is Worth Millions of Parameters? Low-resource Prompt-based Learning for Vision-Language Models.
  • MEDCOD: A Medically-Accurate, Emotive, Diverse, and COntrollable Dialog System.
12 January Mehrdad Nasser, Sara Rajaee
  • Knowledgeable or Educated Guess? Revisiting Language Models as Knowledge Bases.
  • Towards Interpreting and Mitigating Shortcut Learning Behavior of NLU Models.
5 January Zahra Dehghani, Mohammad Hossein Khojasteh
  • Model Extraction and Adversarial Transferability, Your BERT is Vulnerable!
  • Neural Unification for Logic Reasoning over Natural Language.
29 December Houman Mehrafarin, Maryam Sadat Hashemi
  • Introducing Orthogonal Constraint in Structural Probes.
  • SimVLM: Simple Visual Language Model Pretraining with Weak Supervision.
22 December Rabeeh Karimi Mahabadi
  • PERFECT: Prompt-free and Efficient Language Model Fine-Tuning, Arxiv 2021.
22 December Sara Rajaee, Mohammad Ali Modarressi
  • Transformer Feed-Forward Layers Are Key-Value Memories, EMNLP 2021.
  • Learning from others' mistakes: Avoiding dataset biases without modeling them, ICLR 2021.


Fall 2021

Date Presenters Topic / Paper
15 December Kaveh Eskandari
  • Words of Wisdom: Representational Harms in Learning From AI Communication, LearnTec4EDI 2021.
  • Evaluating Debiasing Techniques for Intersectional Biases, EMNLP 2021.
8 December Fangyu Liu
  • Visually Grounded Reasoning across Languages and Cultures, EMNLP 2021.
24 November Mohammad Hossein Khojasteh, Houman Mehrafarin
  • Editing Factual Knowledge in Language Models, EMNLP 2021.
  • Factual Probing Is [MASK]: Learning vs. Learning to Recall, NAACL 2021.
17 November Zahra Sayedi, Maryam Sadat Hashemi
  • Augmenting Data for Sarcasm Detection with Unlabeled Conversation Context, FigLang, ACL 2020.
  • Effect of Visual Extensions on Natural Language Understanding in Vision-and-Language Models, EMNLP 2021.
10 November Zahra Dehghani
  • Universal Adversarial Attacks with Natural Triggers for Text Classification, NAACL 2021. [slides]
3 November Mahdi Zakizadeh, Amin Pourdabiri
  • Fairness without Demographics through Adversarially Reweighted Learning, NeurIPS 2020. [slides]
  • An animated picture says at least a thousand words: Selecting Gif-based Replies in Multimodal Dialog, Findings of EMNLP 2021. [slides]
27 October Kaveh Eskandari, Sara Rajaee
  • Grokking: Generalization Beyond Overfitting On Small Algorithmic Data Sets, MathAI, ICLR 2021.
  • EBERT: Efficient BERT Inference with Dynamic Structured Pruning, Findings of ACL 2021.
20 October Amin Pourdabiri, Mohammad Ali Modarressi
  • Enriching Pre-trained Language Model with Entity Information for Relation Classification, Arxiv 2019.
  • Beyond Preserved Accuracy: Evaluating Loyalty and Robustness of BERT Compression, EMNLP 2021.
13 October Houman Mehrafarin, Maryam Sadat Hashemi
  • Multilingual BERT Post-Pretraining Alignment, NAACL 2021.
  • Multimodal Few-Shot Learning with Frozen Language Models, Arxiv 2021.
6 October Zahra Dehghani, Mahdi Zakizadeh
  • Style is NOT a single variable: Case Studies for Cross-Style Language Understanding, ACL 2021.
  • Mitigating Unwanted Biases with Adversarial Learning, Arxiv 2018. [slides]
29 September Kaveh Eskandari, Mohammad Ali Modarressi
  • Are Gender-Neutral Queries Really Gender-Neutral? Mitigating Gender Bias in Image Search, EMNLP 2021. [slides]
  • Fine-Tuned Transformers Show Clusters of Similar Representations Across Layers, BlackboxNLP 2021.
22 September Tohid Abedini, Houman Mehrafarin
  • Constituency Parsing with a Self-Attentive Encoder, ACL 2018.
  • Discourse Probing of Pretrained Language Models, NAACL 2021.


Summer 2021

Date Presenters Topic / Paper
15 September Sara Rajaee, Maryam Sadat Hashemi
  • Too Much in Common: Shifting of Embeddings in Transformer Language Models and its Implications, NAACL 2021. [slides]
  • All Bark and No Bite: Rogue Dimensions in Transformer Language Models Obscure Representational Quality, EMNLP 2021.
  • Finetuned Language Models Are Zero-Shot Learners, Arxiv 2021. [slides]
8 September Hosein Mohebbi, Zahra Sayedi
  • Enjoy the Salience: Towards Better Transformer-based Faithful Explanations with Word Salience, EMNLP 2021. [slides]
  • BoB: BERT Over BERT for Training Persona-based Dialogue Models, ACL 2021.
1 September Mahdi Zakizadeh, Amin Pourdabiri
  • On Transferability of Bias Mitigation Effects in Language Model Fine-Tuning, NAACL 2021.
  • Towards Emotional Support Dialog Systems, ACL 2021. [slides]
18 August National holiday --
11 August Hosein Mohebbi, Mohsen Tabasi
  • Probing BERT in Hyperbolic Spaces, ICLR 2021. [slides]
  • Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning, ACL 2021.
4 August Tohid Abedini, Mohammad Ali Modarressi
  • Combating Adversarial Misspellings with Robust Word Recognition, ACL 2019. [slides]
  • On Attention Redundancy: A Comprehensive Study, NAACL 2021.
28 July Houman Mehrafarin, Maryam Sadat Hashemi
  • An Empirical Study on Robustness to Spurious Correlations using Pre-trained Language Models, TACL 2020.
  • Charformer: Fast Character Transformers via Gradient-based Subword Tokenization, Arxiv 2021. [slides]
21 July National holiday --
14 July Kiamehr Rezaee, Zahra Sayedi
  • Generationary or: “How We Went beyond Word Sense Inventories and Learned to Gloss”, EMNLP 2020.
  • All That’s ‘Human’ Is Not Gold: Evaluating Human Evaluation of Generated Text, ACL 2021.
7 July Hosein Mohebbi, Mohsen Tabasi
  • MPNet: Masked and Permuted Pre-training for Language Understanding, NeurIPS 2020. [slides]
  • Comparing Test Sets with Item Response Theory, ACL 2021.
30 June Amin Pourdabiri, Sara Rajaee
  • SIMMC 2.0: A Task-oriented Dialog Dataset for Immersive Multimodal Conversations, Arxiv 2021. [slides]
  • Positional Artefacts Propagate Through Masked Language Model Embeddings, ACL 2021. [slides]
23 June Houman Mehrafarin, Maryam Sadat Hashemi
  • Intermediate-Task Transfer Learning with Pretrained Models for Natural Language Understanding: When and Why Does It Work? ACL 2020.
  • Unsupervised Vision-and-Language Pre-training Without Parallel Images and Captions, NAACL 2021. [slides]


Spring 2021

Date Presenters Topic / Paper
16 June Hosein Mohebbi, Sara Rajaee
  • TR-BERT: Dynamic Token Reduction for Accelerating BERT Inference, NAACL 2021. [slides]
  • How transfer learning impacts linguistic knowledge in deep NLP models? ACL 2021. [slides]
9 June Mohsen Tabasi, Mohammad Ali Modarressi
  • Entailment as Few-Shot Learner, Arxiv 2021.
  • BERT Busters: Outlier LayerNorm Dimensions that Disrupt Transformers, Findings of ACL 2021.
2 June Sara Rajaee, Maryam Sadat Hashemi
  • A Cluster-based Approach for Improving Isotropy in Contextual Embedding Space, ACL 2021.
  • FNet: Mixing Tokens with Fourier Transforms, Arxiv 2021. [slides]
26 May Kiamehr Rezaee
  • Beyond Fine-tuning: Few-Sample Sentence Embedding Transfer, ACL 2020.
19 May Amin Pourdabiri
  • Situated and Interactive Multimodal Conversations, COLING 2020. [slides]
12 May EMNLP deadline --
5 May Zahra Sayedi, Houman Mehrafarin
  • Sequential Latent Knowledge Selection for Knowledge-Grounded Dialogue, ICLR 2020.
  • What Happens To BERT Embeddings During Fine Tuning, BlackboxNLP 2020.
28 April Hosein Mohebbi, Mohammad Ali Modarressi
  • DirectProbe: Studying Representations without Classifiers, NAACL 2021. [slides]
  • Telling BERT’s Full Story: from Local Attention to Global Aggregation, EACL 2021.
21 April Mohsen Tabasi, Zahra Sayedi
  • Static Embeddings as Efficient Knowledge Bases? NAACL 2021.
  • Towards a Human-like Open-Domain Chatbot, Arxiv 2020.
14 April Amin Pourdabiri, Sara Rajaee
  • Emotion Dynamics in Movie Dialogues, Arxiv 2021. [slides]
  • Infusing Finetuning with Semantic Dependencies, TACL 2021.
7 April Kiamehr Rezaee, Houman Mehrafarin
  • AUTOPROMPT: Eliciting Knowledge from Language Models with Automatically Generated Prompts, EMNLP 2020.
  • Making Pre-trained Language Models Better Few-shot Learners, Arxiv 2020.
  • Probing What Different NLP Tasks Teach Machines about Function Word Comprehension, *SEM 2019.


Winter 2020/2021

Date Presenters Topic / Paper
17 March Hosein Mohebbi, Mohammad Ali Modarressi
  • Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference. EACL 2021. [slides]
  • How Many Data Points is a Prompt Worth? NAACL 2021.
  • It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners (Arxiv 2020).
10 March Mohsen Tabasi, Sara Rajaee
  • Interpretation of NLP models through input marginalization. EMNLP 2020.
  • Designing and Interpreting Probes with Control Tasks. EMNLP 2019.
3 March Amin Pourdabiri, Sara Rajaee
  • Generate, Delete and Rewrite: A Three-Stage Framework for Improving Persona Consistency of Dialogue Generation. ACL 2020. [slides]
  • Analyzing Individual Neurons in Pre-trained Language Models. EMNLP 2020. [slides]
24 February Houman Mehrafarin, Kiamehr Rezaee
  • Compressing BERT: Studying the Effects of Weight Pruning on Transfer Learning. Rep4NLP 2020.
  • On Identifiability in Transformers. ICLR 2020.
17 February Hosein Mohebbi, Mohammad Ali Modarressi
  • Amnesic Probing: Behavioral Explanation with Amnesic Counterfactuals. TACL 2021. [slides]
  • First Align, then Predict: Understanding the Cross-Lingual Ability of Multilingual BERT. EACL 2021.
10 February National holiday --
3 February Mohsen Tabasi, Mahsa Razavi
  • Generalization through Memorization: Nearest Neighbor Language Models. ICLR 2020.
  • SMRT Chatbots: Improving Non-Task-Oriented Dialog with Simulated Multiple Reference Training. EMNLP 2020. [slides]
27 January ACL deadline --
20 January Amin Pourdabiri, Sara Rajaee
  • Fine-grained Emotion and Intent Learning in Movie Dialogues (Arxiv 2020). [slides]
  • Unsupervised Distillation of Syntactic Information from Contextualized Word Representations. BlackBoxNLP 2020. [slides]
13 January Houman Mehrafarin, Maryam Sadat Hashemi
  • Attention Interpretability Across NLP Tasks (Arxiv 2019).
  • When BERT Plays the Lottery, All Tickets Are Winning. EMNLP 2020. [slides]
6 January (2021) Kiamehr Rezaee, Mohammad Ali Modarressi
  • Attention is Not Only a Weight: Analyzing Transformers with Vector Norms. EMNLP 2020.
  • How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models.
30 December Mohsen Tabasi, Hosein Mohebbi
  • Experience Grounds Language. EMNLP 2020.
  • Intrinsic Probing through Dimension Selection. EMNLP 2020. [slides]
23 December Student project proposals --

Fall 2020

Date Presenters Topic / Paper
16 December Sara Rajaee, Mahsa Razavi
  • Topology of Word Embeddings: Singularities Reflect Polysemy. *SEM 2020. [slides]
  • Diversifying Dialogue Generation with Non-Conversational Text. ACL 2020. [slides]
9 December Houman Mehrafarin, Amin Pourdabiri
  • Assessing BERT's Syntactic Abilities. 2019.
  • Personal Information Leakage Detection in Conversations. EMNLP 2020. [slides]
2 December Maryam Sadat Hashemi, Kiamehr Rezaee
  • Vokenization: Improving Language Understanding with Contextualized, Visual-Grounded Supervision. EMNLP 2020. [slides]
  • Linguistic Profiling of a Neural Language Model. COLING 2020.
25 November Hosein Mohebbi, Mohsen Tabasi, Sara Rajaee
  • The elephant in the interpretability room: Why use attention as explanation when we have saliency methods? BlackboxNLP 2020. [slides]
  • How do Decisions Emerge across Layers in Neural Models? Interpretation with Differentiable Masking. EMNLP 2020.
  • Asking without Telling: Exploring Latent Ontologies in Contextual Representations. EMNLP 2020. [slides]
18 November Mohammad Ali Modarressi, Samin Fatehi
  • Pretrained Language Model Embryology: The Birth of ALBERT. EMNLP 2020.
  • ETC: Encoding Long and Structured Inputs in Transformers. EMNLP 2020. [slides]
11 November Amin Pourdabiri, Mahsa Razavi
  • Hierarchical Reinforcement Learning for Open-Domain Dialog. AAAI 2020. [slides]
  • Filtering Noisy Dialogue Corpora by Connectivity and Content Relatedness. EMNLP 2020. [slides]
4 November Houman Mehrafarin, Kiamehr Rezaee, Maryam Sadat Hashemi
  • Are Sixteen Heads Really Better Than One? NeurIPS 2019. [slides]
  • Do Explicit Alignments Robustly Improve Multilingual Encoders? EMNLP 2020.
  • Rethinking Attention with Performers. [slides]
28 October Hosein Mohebbi, Mohammad Ali Modarressi
  • A Tale of a Probe and a Parser. ACL 2020.
  • What Do Position Embeddings Learn? An Empirical Study of Pre-Trained Language Model Positional Encoding. EMNLP 2020.
21 October Sara Rajaee, Samin Fatehi, Mahsa Razavi
  • BERT-EMD: Many-to-Many Layer Mapping for BERT Compression with Earth Mover's Distance. EMNLP 2020. [slides]
  • Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection. ACL 2020.
  • Multi-Turn Response Selection for Chatbots with Deep Attention Matching Network. ACL 2018.
14 October Sara Rajaee, Maryam Sadat Hashemi
  • Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them. NAACL 2019. [slides]
  • Cross-Modality Relevance for Reasoning on Language and Vision. ACL 2020. [slides]
7 October Houman Mehrafarin, Kiamehr Rezaee, Mohsen Tabasi
  • What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models. TACL 2020. [slides]
  • Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference.
  • Breaking Through the 80% Glass Ceiling: Raising the State of the Art in Word Sense Disambiguation by Incorporating Knowledge Graph Information. ACL 2020.
30 September Amin Pourdabiri, Maryam Sadat Hashemi, Samin Fatehi
  • Big Bird: Transformers for Longer Sequences. [slides]
  • Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks. ECCV 2020. [slides]
  • Attention over Parameters for Dialogue Systems. NeurIPS 2020 ConvAI workshop. [slides]
23 September Mohammad Ali Modarressi, Hosein Mohebbi
  • Quantifying Attention Flow in Transformers. ACL 2020. [slides]
  • DeFormer: Decomposing Pre-trained Transformers for Faster Question Answering. ACL 2020. [slides]

Summer 2020

Date Presenters Topic / Paper
4 July Houman Mehrafarin, Hosein Mohebbi, Amin Pourdabiri, Ali Modarressi
  • Perturbed Masking: Parameter-free Probing for Analyzing and Interpreting BERT.
  • BERT Loses Patience: Fast and Robust Inference with Early Exit. [slides]
  • Accelerating Natural Language Understanding in Task-Oriented Dialog.
  • Fine-tune BERT with Sparse Self-Attention Mechanism. [slides]
11 July Houman Mehrafarin, Sara Rajaee
  • Linguistic Knowledge and Transferability of Contextual Representations. [slides]
  • Finding Universal Grammatical Relations in Multilingual BERT. [slides]
15 July Mohsen Tabasi, Kiamehr Rezaee, Zahra Sayedi
  • How does BERT’s attention change when you fine-tune? An analysis methodology and a case study in negation scope. [slides]
  • Similarity of Neural Network Representations Revisited.
  • Consistent Dialogue Generation with Self-supervised Feature Learning.
22 July Hosein Mohebbi, Amin Pourdabiri, Ali Modarressi
  • BERT-of-Theseus: Compressing BERT by Progressive Module Replacing. [slides]
  • Beyond Domain APIs: Task-oriented Conversational Modeling with Unstructured Knowledge Access.
  • FastBERT: a Self-distilling BERT with Adaptive Inference Time. [slides]
29 July Sara Rajaee, Maryam Sadat Hashemi, Samin Fatehi
  • On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines. [slides]
  • VisualBERT: A Simple and Performant Baseline for Vision and Language. [slides]
  • Parameter-Efficient Transfer Learning for NLP. [slides]
5 August Mohsen Tabasi, Kiamehr Rezaee, Zahra Sayedi
  • Explaining Black Box Predictions and Unveiling Data Artifacts through Influence Functions. [slides]
  • Beyond Accuracy: Behavioral Testing of NLP Models with CheckList.
  • You Impress Me: Dialogue Generation via Mutual Persona Perception.
12 August Houman Mehrafarin, Hosein Mohebbi, Mohammad Ali Modarresi
  • Probing Linguistic Systematicity, ACL 2020. [slides]
  • ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators, ICLR 2020. [slides]
  • BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension, ACL 2020. [slides]
19 August Sara Rajaee, Samin Fatehi, Mahsa Razavi
  • A Mixture of h-1 Heads is Better than h Heads, ACL 2020. [slides]
  • Longformer: The Long-Document Transformer, 2020. [slides]
  • Transferable Multi-Domain State Generator for Task-Oriented Dialogue Systems, ACL 2019.
26 August Mohsen Tabasi, Kiamehr Rezaee, Amin Pourdabiri
  • The Sensitivity of Language Models and Humans to Winograd Schema Perturbations, ACL 2020. [slides] [notebook]
  • Learning to Speak and Act in a Fantasy Text Adventure Game. EMNLP 2019. [slides]
  • Emerging Cross-lingual Structure in Pretrained Language Models. ACL 2020.
2 September Amin Pourdabiri, Maryam Sadat Hashemi, Samin Fatehi
  • Deploying Lifelong Open-Domain Dialogue Learning. [slides]
  • Reformer: The Efficient Transformer, ICLR 2020. [slides]
  • Revealing the Dark Secrets of BERT, EMNLP 2019. [slides]
9 September Houman Mehrafarin, Kiamehr Rezaee
  • Do Neural Language Models Show Preferences for Syntactic Formalisms? ACL 2020. [slides]
  • Multilingual Alignment of Contextual Word Representations. ICLR 2020.
16 September Mohsen Tabasi, Maryam Sadat Hashemi, Sara Rajaee
  • Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data. ACL 2020. [slides]
  • MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices. ACL 2020. [slides]
  • Knowledge Enhanced Contextual Word Representations. EMNLP 2019. [slides]

Series 1

Spring 2020

Date Presenters Topic / Paper
20 April Mohsen Tabasi
  • BERT Rediscovers the Classical NLP Pipeline. [slides]
  • Universal Adversarial Triggers for Attacking and Analyzing NLP. [slides]
27 April Hosein Mohebbi
  • oLMpics -- On what Language Model Pre-training Captures.
  • Fine-Tuning Pretrained Language Models: Weight Initializations, Data Orders, and Early Stopping. [slides]
4 May Houman Mehrafarin
  • What Does BERT Look At? An Analysis of BERT's Attention. [slides]
11 May Amin Pourdabiri
  • Spying on your neighbors: Fine-grained probing of contextual embeddings for information about surrounding words.
18 May Kiamehr Rezaee
  • How Multilingual is Multilingual BERT?
25 May --
1 June Ali Modarressi
  • BERT-based Lexical Substitution. [slides]
  • Contextual Embeddings: When Are They Worth It? [slides]