---
title: README
emoji: ๐
colorFrom: pink
colorTo: indigo
sdk: static
pinned: false
---
# 🔥 News
<h2>2022</h2>
<hr>
<b>[GlobEnc: Quantifying Global Token Attribution by Incorporating the Whole Encoder Layer in Transformers]()</b> <br>
Ali Modarressi*, Mohsen Fayyaz*, Yadollah Yaghoobzadeh, Mohammad Taher Pilehvar <br>
<small>* Equal Contribution</small><br>
<i class="publication-conference">NAACL 2022</i>
<!-- <br>[[📄 paper]]() [[🖼️ Poster]]() [[🎥 video]]() -->
<b>[Metaphors in Pre-Trained Language Models: Probing and Generalization Across Datasets and Languages](https://arxiv.org/abs/2203.14139)</b> <br>
Ehsan Aghazadeh*, Mohsen Fayyaz* and Yadollah Yaghoobzadeh <br>
<small>* Equal Contribution</small><br>
<i class="publication-conference">ACL 2022</i>
<br>[[📄 paper]](https://arxiv.org/abs/2203.14139) [[🖼️ Poster]](https://mohsenfayyaz.github.io/files/publications/2022_metaphors_in_plms/metaphors_poster_36x48.pdf) [[🎥 video]](https://www.youtube.com/watch?v=UKWFZSiP7OY) [[code]](https://github.com/EhsanAghazadeh/Metaphors_in_PLMs)
<h2>2021</h2>
<hr>
<b>[Not All Models Localize Linguistic Knowledge in the Same Place: A Layer-wise Probing on BERToids' Representations](https://arxiv.org/abs/2109.05958)</b> <br>
Mohsen Fayyaz*, Ehsan Aghazadeh*, Ali Modarressi, Hosein Mohebbi and Mohammad Taher Pilehvar <br>
<small>* Equal Contribution</small><br>
<i class="publication-conference">BlackboxNLP @ EMNLP 2021</i>
<br>[[📄 paper]](https://arxiv.org/abs/2109.05958) [[🖼️ Poster]](https://mohsenfayyaz.github.io/images/posts/2021-09-layer-wise-probing-on-bertoids/NotAllModelsLocalize_poster_36x48.pdf) [[💻 blog]](https://mohsenfayyaz.github.io/posts/layer-wise-probing-on-bertoids/)