Meet Beemo — Benchmark of expert-edited machine-generated outputs
Continuing the topic of AI-generated text detection:
Colleagues from the University of Oslo, MIT Lincoln Laboratory, Penn State University, and Toloka designed a novel benchmark of 2,195 texts generated by ten instruction-finetuned language models (LMs) and edited by expert annotators for various use cases, ranging from creative writing to text summarization. Can it break all current AI-generated text detectors? Yes!
Main info of the dataset:
😛Language: English
🤖Models: Mixtral, Mistral, LLaMa, Gemma, TULU, Zephyr.
✍️Edits were made by expert annotators on the Toloka.ai platform.
🤗https://huggingface.co/datasets/toloka/beemo
Can you build a classifier that will detect these AI-generated texts?
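If you want a quick baseline before reaching for transformers, a tiny bag-of-words classifier is enough to probe how separable the classes are. This is a minimal sketch on toy data, not the benchmark's evaluation setup: the "human"/"machine" labels and example texts are illustrative assumptions, and on Beemo you would load the real texts from the Hugging Face dataset instead.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Deliberately simple tokenization for the sketch: lowercase word tokens.
    return re.findall(r"[a-z']+", text.lower())

class NaiveBayesDetector:
    """Tiny bag-of-words Naive Bayes over 'human' vs 'machine' labels."""

    def __init__(self):
        self.word_counts = {"human": Counter(), "machine": Counter()}
        self.doc_counts = {"human": 0, "machine": 0}

    def fit(self, texts, labels):
        for text, label in zip(texts, labels):
            self.doc_counts[label] += 1
            self.word_counts[label].update(tokenize(text))

    def predict(self, text):
        tokens = tokenize(text)
        vocab = set(self.word_counts["human"]) | set(self.word_counts["machine"])
        total_docs = sum(self.doc_counts.values())
        best_label, best_score = None, -math.inf
        for label in ("human", "machine"):
            total = sum(self.word_counts[label].values())
            score = math.log(self.doc_counts[label] / total_docs)
            for tok in tokens:
                # Laplace smoothing so unseen words do not zero out the score.
                score += math.log(
                    (self.word_counts[label][tok] + 1) / (total + len(vocab))
                )
            if score > best_score:
                best_label, best_score = label, score
        return best_label
```

Shallow lexical features like these are exactly what expert editing tends to break, which is the point of the benchmark.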
Contacts:
Vladislav Mikhailov (vladism@ifi.uio.no)
Ekaterina Artemova (katya-art@toloka.ai)
Second Call for Papers for NLP4PI Workshop
Direct ARR Submission: link
Deadline: August 15th
Previous ARR Cycles Commitment: link
Deadline: August 20th
Notification of Acceptance: September 10, 2024
Join our workshop to explore how cutting-edge NLP technologies can drive social impact and support UN sustainability goals, addressing critical issues like poverty, healthcare, and climate change. We welcome submissions on innovative applications and interdisciplinary collaborations, with a special focus on solutions to combat digital violence in online spaces. Connect with NGO representatives and share your impactful research!
TextDetox CLEF 2024: Test Phase
Our shared task on multilingual text detoxification is ongoing and reaching its final phase😉
We are releasing the parallel pairs for the dev part:
https://huggingface.co/datasets/textdetox/multilingual_paradetox
and new toxic sentences for the test part:
https://huggingface.co/datasets/textdetox/multilingual_paradetox_test
We are waiting for your submission here:
https://codalab.lisn.upsaclay.fr/competitions/18243
till May 12th🤗
You can submit for ANY language! There are 9 of them: English, Spanish, German, Chinese, Arabic, Hindi, Ukrainian, Russian, and Amharic.
A little guide to building Large Language Models in 2024
by Thomas Wolf 🤗
Video [link]
Presentation [link]
TextDetox CLEF 2024
We are glad to invite you to participate in the first-of-its-kind multilingual Text Detoxification shared task!
https://pan.webis.de/clef24/pan24-web/text-detoxification.html
TL;DR
Task formulation: transfer the style of a text from toxic to neutral (e.g., "what the f**k is this about?" -> "what is this about?")
9 Languages: English, Spanish, Chinese, Hindi, Arabic, German, Russian, Ukrainian, and Amharic
🤗 https://huggingface.co/textdetox
More details:
Identifying toxicity in user texts is an active area of research. Today, social networks such as Facebook and Instagram try to address the problem of toxicity, but they usually simply block such texts. We suggest a proactive reaction to toxicity instead: presenting the user with a neutral version of their message that preserves its meaningful content. We call this task text detoxification.
In this competition, we invite you to create detoxification systems for 9 languages from several linguistic families. However, the availability of training corpora differs between the languages. For English and Russian, parallel corpora of several thousand toxic-detoxified pairs (as presented above) are available, so you can fine-tune text generation models on them. For the other languages, no such corpora will be provided for the dev phase. The main challenge of this competition is to perform both supervised and unsupervised cross-lingual detoxification.
You are very welcome to test all modern LLMs on text detoxification and safety with our data as well as experiment with different unsupervised approaches based on MLMs or other paraphrasing methods!
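To get a feel for the unsupervised side, one of the simplest baselines in the detoxification literature is "Delete": remove words found in a toxic lexicon and keep the rest. The lexicon below is a tiny made-up stand-in for illustration; a real system would use proper per-language toxic word lists.

```python
import re

# Tiny stand-in lexicon; a real system would use per-language toxic word lists.
TOXIC_LEXICON = {"stupid", "idiot", "hell", "damn"}

def delete_baseline(text, lexicon=TOXIC_LEXICON):
    """Unsupervised 'Delete' detox baseline: drop lexicon words, keep the rest."""
    kept = [
        tok for tok in text.split()
        if re.sub(r"\W+", "", tok.lower()) not in lexicon
    ]
    return " ".join(kept)

print(delete_baseline("what the hell is this about?"))  # → "what the is this about?"
```

Note how the output is often disfluent (words are dropped, not rewritten), which is precisely why trained detoxification models are worth building.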
The final leaderboard will be built from a manual evaluation of a subset of the test set, performed via crowdsourcing on the Toloka.ai platform.
In the end, you will have an opportunity to write and then present a paper at CLEF 2024 (https://clef2024.imag.fr/) which will take place in Grenoble, France!
Important Dates
February 1, 2024: First data available and run submission opens.
April 22, 2024: Registration closes.
May 6, 2024: Run submission deadline and results out.
May 31, 2024: Participants paper submission.
July 8, 2024: Camera-ready participant papers submission.
September 9-12, 2024: CLEF Conference in Grenoble and Touché Workshop.
Artificial Intelligence 2023 Playlist
Stanford series that brought together Chris Manning, Andrew Ng, Fei-Fei Li, and other researchers from Stanford to discuss the state of NLP:
https://youtube.com/playlist?list=PLoROMvodv4rPEjA3yzoqkq3J321MfH7FZ&si=eUEZC-4K3X0Ap074
I recommend it for casual watching whenever you catch yourself asking "What's next?"
Especially:
* Chris Manning and Andrew Ng's discussion about NLP.
* Andrew Ng and Fei-Fei Li's discussion about human-centered AI.
Ukrainian Toxicity Classification
I am glad to announce the first-of-its-kind dataset for toxicity detection in Ukrainian🇺🇦 (~20k rows):
https://huggingface.co/datasets/ukr-detect/ukr-toxicity-dataset
Together with an xlm-roberta-base model fine-tuned on it:
https://huggingface.co/ukr-detect/ukr-toxicity-classifier
Happy to contribute to Ukrainian NLP💪
This work was done together with the amazing Master's student Valeriia Khylenko!
A Benchmark Dataset to Distinguish Human-Written and Machine-Generated Scientific Papers
SCIENTISTS ARE GOING TO SUBMIT PAPERS WRITTEN BY CHATGPT, THE SCIENCE GONNA DIE
Or not?
Our chair's work on whether we can detect machine-generated or paraphrased articles.
TL;DR: yes, we can, even with logistic regression.
For generation, we tried out: GPT-2, GPT-3, ChatGPT, Galactica, and SciGen.
Each article consists of: Abstract + Intro + Conclusion.
🤗dataset with ~70k rows of scientific texts generated by different models;
There, you can also find fine-tuned 🤗Galactica and 🤗RoBERTa for detection.
The full paper with all tables of results and explainability investigations [link]
A PhD Student’s Perspective on Research in NLP in the Era of Very Large Language Models
As our IFAN project was recommended as one of the promising research directions, I will recommend in return reading this recent paper, which answers the question: "So what now for NLP research if ChatGPT is out?"
Spoiler: the world has not ended, and we still have plenty of work to do!
https://arxiv.org/abs/2305.12544
From my research work and what I also want to explore, my top list of research directions:
1. Fighting misinformation. There are still no working automated fake-news and propaganda detection systems deployed online, while the risk of misinformation spreading keeps increasing.
2. Multilingualism. A usual reminder that there are more languages than just English: at least 7,000 more.
3. Explainability and Interpretability. Do we trust models' decisions? We are still far from 100%. We can integrate these models into decision-making processes only if their behavior is transparent. And consider whether we can even explain every NLP task: the methods are completely different.
4. Fewer resources. Less memory to store and fine-tune models. Less data to learn from, too! Do we really need all these training samples, or do we just need diverse enough data?
5. Human-NLP model interaction. ChatGPT was the first NLP model used not only by specialists but by everyone, because it is more or less pleasant and safe to use. Even when the model cannot answer an input, it still provides a nicely written response. The wrapper is also extremely important. How should we package these models so that users are comfortable working with them? And what about children, if we want to adapt such models for education from an early age?
Be brave, be creative, be inspired✨
Language models can explain neurons in language models
What about using GPT-4 to automatically write explanations for the behavior of neurons in large language models and to score those explanations?
* Explain: Generate an explanation of the neuron’s behavior by showing the explainer model (token, activation) pairs from the neuron’s responses to text excerpts.
* Simulate: Use the simulator model to simulate the neuron's activations based on the explanation.
* Score: Automatically score the explanation based on how well the simulated activations match the real activations.
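The scoring step above can be summarized in a few lines: the simulation-based score is essentially a correlation between the neuron's real activations and the activations simulated from the explanation. Here is a sketch of that idea as plain Pearson correlation; the exact normalization used in the paper may differ.

```python
import math

def explanation_score(real, simulated):
    """Score an explanation as the Pearson correlation between the neuron's
    real activations and the activations simulated from the explanation."""
    n = len(real)
    mean_r = sum(real) / n
    mean_s = sum(simulated) / n
    cov = sum((r - mean_r) * (s - mean_s) for r, s in zip(real, simulated))
    var_r = sum((r - mean_r) ** 2 for r in real)
    var_s = sum((s - mean_s) ** 2 for s in simulated)
    if var_r == 0 or var_s == 0:
        return 0.0  # a constant sequence carries no signal to correlate
    return cov / math.sqrt(var_r * var_s)

# A simulation that tracks the real activations closely scores near 1.
print(explanation_score([0.1, 0.9, 0.2, 0.8], [0.0, 1.0, 0.0, 1.0]))  # ≈ 0.99
```

An explanation whose simulated activations are uncorrelated with the real ones scores near 0, so the pipeline can rank explanations fully automatically.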
Blog from Closed OpenAI: [link]
Paper: [link]
Code and collected dataset of explanations: [link]
Why is text detoxification especially important now?
No chatbot is safe from being toxic at some point (even ChatGPT!). So if you want safe conversations with your users, it is still important to handle toxic language.
With our text detoxification technology, you can:
* Before training your language model or chatbot, you can preprocess scraped training data to remove toxicity. But you should not just throw away toxic samples: you can detoxify them! Then most of the dataset is preserved, and its content is saved.
* You can ensure that the user's message is non-toxic as well. Again, the message is preserved: after detoxification, the conversation will not drift into an unsafe tone.
* You can safeguard your chatbot's answers too! The conversation does not have to stop even if your chatbot generates something toxic: its reply is detoxified, and the user sees a neutral answer.
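As a sketch of the chatbot scenario, the bot can be wrapped so that both the user message and the generated reply pass through a detoxification step. `is_toxic` and `detoxify` below are hypothetical stand-ins; in practice they would be a toxicity classifier and a detoxification model like the ones in the repo.

```python
# Hypothetical stand-ins: in a real system these would be a trained toxicity
# classifier and a detoxification model, not keyword matching.
def is_toxic(text):
    return "stupid" in text.lower()

def detoxify(text):
    return text.lower().replace("stupid", "not great")

def safe_reply(user_message, generate_reply):
    """Wrap a chatbot so both sides of the conversation stay non-toxic."""
    if is_toxic(user_message):
        user_message = detoxify(user_message)  # keep the turn, drop the toxicity
    reply = generate_reply(user_message)
    if is_toxic(reply):
        reply = detoxify(reply)  # the conversation continues with a neutral reply
    return reply

print(safe_reply("this is stupid", lambda m: f"You said: {m}"))
```

The conversation is never cut off: both directions are rewritten rather than blocked.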
Check out all the info about our research and all models in this repo!
MLSS 2023
One day left until applications close for the Machine Learning Summer School with applications in Science!
https://mlss2023.mlinpl.org/
I personally took part in MLSS 2020; even though it was virtual, I got so many insights. This year it takes place in Krakow! Get the chance to listen to lectures from world-famous speakers😉
Guys, really, to those who answered
"My colleague was fired because of ChatGPT" or
"Now I can do all my tasks with ChatGPT":
can you please share your stories? 🙂
It is very intriguing!
🔥Free EACL 2023 for Ukrainian Students🔥
If you are a Ukrainian student, you can apply for free EACL 2023 registration (both online and on-site, but be aware of the deadlines):
*online attendance — available for all Ukrainian students who apply through this form by April 16, 2023
*on-site attendance — available for a very limited number of Ukrainian students who apply through this form by April 7, 2023
Who is eligible to apply:
*students, including PhD students, currently studying at a Ukrainian academic institution
*students, including PhD students, who studied at a Ukrainian academic institution until February 24, 2022, but are currently studying abroad
https://forms.gle/WtMuxmNGoGnzvLRM7
Microsoft Designer
I got access to Microsoft Designer, and it is a super interesting tool. You can generate designs for posters, presentations, Instagram posts, postcards, websites, invitations...
Now the Viber and WhatsApp postcards will know no limits.
Do you want to try to promote one of your products?😉
Crafting Tomorrow’s Headlines: Neural News Generation and Detection in English, Turkish, Hungarian, and Persian
We present a new benchmark for AI-generated text detection, focusing specifically on news🔍
Four languages: English, Turkish, Hungarian, and Persian
Various LLMs: BloomZ, LLaMa, Mistral, Mixtral, and GPT-4
🤗https://huggingface.co/datasets/tum-nlp/neural-news-benchmark
Can you train a classifier that can detect GPT-4 texts?🤔 Or can other LLMs, perhaps, detect AI-generated texts?
Find out in our new preprint:
📜https://arxiv.org/abs/2408.10724
NLP for Positive Impact Workshop
We are thrilled to invite submissions to the Third Workshop on NLP for Positive Impact!
🔗 Workshop Website: https://sites.google.com/view/nlp4positiveimpact
📅 Important Dates:
Submission Deadline: June 15, 2024, 11:59 PM AoE
Commitment Deadline: August 20, 2024
Notification of Acceptance: September 20, 2024
Camera-Ready Papers Due: October 3, 2024
Workshop Date: Co-located with EMNLP 2024 in November, Miami
This workshop is a platform to explore how the skyrocketing field of NLP 🚀 can address critical global issues and support the UN sustainability goals 🌍 We are looking for innovative research that focuses on the societal impact of NLP, including areas like healthcare, education, inequality, climate change, and more.
🌟 Special Theme: Tackling digital violence through NLP and AI 🌟
We encourage interdisciplinary collaborations and value submissions that connect NLP with other fields and NGOs. Submissions should include a discussion on the ethical and societal implications of the work, aiming for a positive impact.
📜 Submission Types:
Case studies of real-world deployments
Position papers proposing new tasks or directions
Literature reviews
Philosophical discussions
Approaches to interdisciplinary collaboration
Ethical considerations
Join us in Miami and share your research with a vibrant community dedicated to using NLP for the greater good. Let's harness the power of language-oriented AI to make a positive difference in the world!
📧 Contact: nlp4pi.workshop@gmail.com
Looking forward to your contributions!
Organizers:
Zhijing Jin (Max Planck Institute & ETH Zurich)
Daryna Dementieva (Technical University of Munich)
Steven Wilson (Oakland University)
Oana Ignat (University of Michigan)
Jieyu Zhao (University of Maryland, College Park)
Joel Tetreault (Dataminr, Inc.)
Rada Mihalcea (University of Michigan)
TextDetox CLEF 2024: Final week of the dev phase
We would like to remind you that this week is the final week of the dev phase of our multilingual TextDetox shared task:
https://pan.webis.de/clef24/pan24-web/text-detoxification.html
🤗https://huggingface.co/textdetox
On April 22nd, the official registration for CLEF 2024 closes, so please register here if you have not done so yet:
https://clef2024-labs-registration.dei.unipd.it/
Also, a reminder that the dev phase leaderboard is still open and you are welcome to make your submission!
Please, submit to Codalab:
https://codalab.lisn.upsaclay.fr/competitions/18243
or to TIRA (as an additional option in case of technical problems):
https://www.tira.io/task/pan24-text-detoxification
Otherwise, stay tuned for the test set release!
Has It All Been Solved? Open NLP Research Questions Not Solved by Large Language Models
PhD application season is starting. If you were afraid that the only topic you would be offered is prompting LLMs, here is good, scientifically backed news for you: there is still plenty to do in NLP!
Amazing colleagues from the University of Michigan have prepared a list of still-open NLP research questions, 45 of them! Including:
* Multilinguality
* Reasoning
* Knowledge Bases
* Language Grounding
* Computational Social Science
* Online Environments
* Child Language Acquisition
* Non-verbal Communication
* Synthetic Datasets
* Interpretability
* Efficient NLP
* NLP in Education
* NLP in Healthcare
* NLP and Ethics
Yes, in some directions we have already come a long way, so other topics are becoming important and finally feasible to explore✨
Check the full text (to appear at COLING):
https://arxiv.org/abs/2305.12544
P.S. And a reminder that we are running an important multilingual shared task on safe language, text detoxification: start your first research experiments now😉
Ukrainian Texts Classification Corpora p2
We continue to enrich datasets for text classification in the Ukrainian language. This time, we worked on translating English-language data into Ukrainian and obtained:
1. Ukrainian NLI corpus: https://huggingface.co/datasets/ukr-detect/ukr-nli-dataset-translated-stanford translated from Stanford SNLI.
2. Ukrainian Formality corpus: https://huggingface.co/datasets/ukr-detect/ukr-formality-dataset-translated-gyafc translated from English GYAFC
3. In addition to the toxicity corpus presented previously, translated data from the English Jigsaw Toxicity Classification dataset https://huggingface.co/datasets/ukr-detect/ukr-toxicity-dataset-translated-jigsaw
You are very welcome to use and test them😉
ELLIS Winter School on Foundation Models
Amsterdam 12-15th March
https://amsterdam-fomo.github.io/
Foundation Models, and their origin, analysis and development have been typically associated with the US and Big Tech. Yet, a critical share of important insights and novel approaches do come from Europe, both within academia and industry. Part of this winter school's goal is to highlight these fresh perspectives and give the students an in-depth look into how Europe is guiding its own research agenda with unique directions and bringing together the community. The workshop will take place at the University of Amsterdam.
Lectures from top researchers from DeepMind, Google Research, and top EU unis.
Deadline to apply: 15th February 2024 23:59 CET
Happy New Year 2024
Thank you for being interested in NLP and my view on it 🤩
For new year, I have some new ideas for the community -- stay tuned 😉
Be professional, believe in yourself, be open for new ideas, and all other positive tokens in your texts 🥳
My PyData&Conf Berlin 2023: Texts Detoxification
It was a pleasure to be part of PyData&Conf Berlin 2023, where amazing scientists and developers from all over Europe came together to discuss and share experience in cutting-edge data science. Of course, there were a lot of talks about LLMs 😉
Firstly, I want to invite you to take a look at my talk about my research on text detoxification. Even with all the recent advances, our models are still relevant for combating toxic speech: [video]
Secondly, I recommend paying attention to other talks that I personally found interesting:
* Keynote talk: Miroslav Šedivý: Lorem ipsum dolor sit amet. A lot of fun facts about different European languages 😃
* Erin Mikail Staples, Nikolai: Improving Machine Learning from Human Feedback. Human feedback is getting a lot of attention right now; this talk showcases a library to help you with it.
* Ines Montani: Incorporating GPT-3 into practical NLP workflows. Told you, a lot of attention to LLMs 😉
* Lev Konstantinovskiy: Prompt Engineering 101. Introduction into LangChain — a powerful library to ease your interaction with LLMs.
* A final recommendation not from NLP: Maren Westermann: How to increase diversity in open source communities. The IT and DS communities are diverse and spread all over the world. Let's communicate respectfully with each other!
Of course, there are way more! The whole playlist [here]😎
On the Impossible Safety of Large AI Models
The hype around LLMs has reached not only NLP-related fields but also the lives of ordinary people and professionals from many other fields. However, I personally have not seen any use case where a model achieves 100%, or 99.999%, or 99.9%... accuracy.
A theoretical proof that it is impossible to build an arbitrarily accurate AI model:
https://arxiv.org/abs/2209.15259
Why? TL;DR:
* User-generated data: user-generated data are both mostly unverified and potentially highly sensitive;
* High-dimensional memorization: want a better score on more data? You need far more parameters. However, contexts are limitless, so... do we need an infinite number of parameters? The complexity of "fully satisfactory" language processing might be orders of magnitude larger than today's LLMs, in which case we may still obtain greater accuracy with larger models.
* Highly heterogeneous users: the distribution of texts generated by one user greatly diverges from the distribution of texts generated by another. More data means more users and, again, more contexts: data that can be difficult to fully grasp and generalize over.
* Sparse, heavy-tailed data per user: even if we consider a single user, their data is not dense enough to generalize from. We should expect especially large empirical heterogeneity in language data, as the samples we obtain from a user can completely stand out from that user's overall language distribution.
As a result, training large AI models (LAIMs) is unlikely to be easier than mean estimation. A usual objective in ML is to estimate a distribution, often assumed to be normal, whose mean we want to estimate. How many combinations of such distributions can we really predict?
+ We need to find a balance between accuracy and privacy.
🤔Pretty challenging task. Will we be able to solve it anyway?
LLMs are everywhere: what other thoughts can we come up with?
This post is the list of alternative sources to read about LLMs and what changes they have brought:
* Choose Your Weapon: Survival Strategies for Depressed AI Academics 🙃 "What should we do now that ChatGPT is here?" has probably been asked by every student and researcher in NLP academia. This position paper offers several ideas for how to carry on😉
* Closed AI Models Make Bad Baselines: we will see how many papers mentioning ChatGPT appear at this ACL. However, closed models are not the way to do benchmarking in research.
* Towards Climate Awareness in NLP Research: as datasets and model sizes grow, our responsibility to the environment also increases. When doing modern research, it is good practice to report how much computation time, resources, and CO2 emissions were used.
* Step by Step Towards Sustainable AI: if you want to round off your reading on responsible AI, I really recommend this AlgorithmWatch issue. Professionals from Hugging Face and several German institutions share their thoughts on the key points we should pay attention to in order to deploy AI safely for humanity and nature.
IFAN: An Explainability-Focused Interaction Framework for Humans and NLP Models
We have talked before about different techniques for explaining ML and NLP models. OK, we have explained a model's output for a specific input and highlighted some tokens. What should happen next?
📌You can use humans to debug and improve your model! Your steps can be:
1. 🔍You identify misclassified samples (for instance, during hate speech detection, you notice that the model is biased against some target words).
2. 📊You explain the model's decisions and see that the model puts too much or too little weight/attention on some words.
3. 📝You edit the explanation, i.e., correct the weights of the word spans that should contribute to the correct label.
4. 🔄You do this for several samples and retrain an adapter layer of your model on the new samples.
5. ✅Now your model's behavior is fixed, i.e., it is debiased!
All this can be done with our platform:
https://ifan.ml/
This is the first solid version; we are still developing many new features for it (for instance, a report page where you can monitor changes in model performance). But we already believe the platform can be a solid step toward human-in-the-loop debugging of NLP models🤖.
📜The corresponding paper about this first version [link]
A Survey of Large Language Models
* General overview;
* Listing by the number of parameters;
* Commonly used corpora for training;
* How pre-training can be done;
* Typical architecture types;
* How to fine-tune;
* How to prompt;
* Tasks it can solve;
* Evaluation setups;
A very comprehensive survey:
https://arxiv.org/abs/2303.18223
📣Urgent news 📣
We need to shut down all AI
We need to turn off all GPU clusters
https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
11 PhD Positions in the MSCA Doctoral Network HYBRIDS
If you are thinking about doing PhD in NLP, check out this possibility!
All positions are connected with fighting fake news and toxic speech. The directions are super interesting; if I were not already a postdoc, I would apply myself😉
There is still a month to apply: the deadline is April 26th.
https://hybridsproject.eu/jobs/
The recommendation from @bbkjunior. Subscribe to his channel @butterflai_effect🤗
Fall is here and it's time to cozy up in our knitwear! Our sweaters are made with the softest yarns, ensuring you'll stay warm and comfortable all season long. From classic cable-knits to trendy oversized cardigans, we have the perfect sweater to match your style. Plus, our prices are affordable, so you don't have to break the bank to stay cozy. Shop now and embrace the chill in style. #CozySweaters #FallFashion #Knitwear #StayWarm #AffordableFashion
— ChatGPT
Img: Microsoft Designer
Prompts by a human (for now)