
Telegram channel towards_nlp - Towards NLP🇺🇦


All the n-grams about Natural Language Processing that are of interest to @iamdddaryna


Towards NLP🇺🇦

My PyConDE & PyData Berlin 2023 Talk: Text Detoxification

It was a pleasure for me to be part of PyConDE & PyData Berlin 2023 — amazing scientists and developers from all over Europe came together to discuss and share experience in cutting-edge data science. Of course, there were a lot of talks about LLMs 😉

Firstly, I want to invite you to take a look at my research on text detoxification. Even with all the recent advances, our models are still relevant for combating toxic speech: [video]

Secondly, I recommend paying attention to other talks that I personally found interesting:
* Keynote talk: Miroslav Šedivý: Lorem ipsum dolor sit amet. A lot of fun facts about different European languages 😃
* Erin Mikail Staples, Nikolai: Improving Machine Learning from Human Feedback. Human feedback is getting a lot of attention right now; they showcase a library that helps you work with it.
* Ines Montani: Incorporating GPT-3 into practical NLP workflows. Told you, a lot of attention to LLMs 😉
* Lev Konstantinovskiy: Prompt Engineering 101. Introduction into LangChain — a powerful library to ease your interaction with LLMs.
* A final recommendation, not from NLP: Maren Westermann: How to increase diversity in open source communities. The IT and DS communities are diverse and spread all over the world. Let's communicate respectfully with each other!

Of course, there are way more! The whole playlist is [here]😎


Towards NLP🇺🇦

On the Impossible Safety of Large AI Models

The LLM hype has reached not only NLP-related fields but also the lives of professionals from many other domains. However, I personally have not seen a single use case where a model achieves 100%, or 99.999%, or 99.9%... accuracy.

A theoretical proof that it is impossible to build an arbitrarily accurate AI model:
https://arxiv.org/abs/2209.15259

Why? TL;DR:

* User-generated data: user-generated data are both mostly unverified and potentially highly sensitive;
* High-dimensional memorization: want to achieve a better score on more data? You need way more parameters. However, contexts are limitless, so... do we need an infinite number of parameters? The complexity of “fully satisfactory” language processing might be orders of magnitude larger than today’s LLMs, in which case we may still obtain greater accuracy with larger models.
* Highly heterogeneous users: the distribution of texts generated by one user greatly diverges from the distribution of texts generated by another. More data means more users and, again, more contexts and more data that can be difficult to fully grasp and generalize.
* Sparse, heavy-tailed data per user: even if we consider only one user, their data is not dense enough to generalize from. We should expect especially large empirical heterogeneity in language data, as the samples we obtain from a user can completely stand out from that user’s language distribution.

As a result, training large AI models (LAIMs) is unlikely to be easier than mean estimation. The usual ML objective amounts to estimating a distribution, often assumed to be a normal one whose mean we want to estimate. How many combinations of such distributions can we hope to estimate?
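To make the mean-estimation analogy concrete, here is a toy simulation (my own illustration, not from the paper): with heavy-tailed data, as in the "sparse heavy-tailed data per user" point above, the empirical mean converges far more slowly than with Gaussian data, so the same accuracy requires far more samples.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# Light-tailed case: standard normal samples, true mean 0.
gaussian_error = abs(rng.standard_normal(n).mean())

# Heavy-tailed case: classic Pareto with shape 1.1 (the mean exists,
# but the variance is infinite), centered so the true mean is 0.
shape = 1.1
true_mean = shape / (shape - 1)
pareto_samples = rng.pareto(shape, n) + 1  # numpy's pareto is Lomax; +1 gives classic Pareto
pareto_error = abs(pareto_samples.mean() - true_mean)

print(f"Gaussian empirical-mean error:     {gaussian_error:.4f}")
print(f"Heavy-tailed empirical-mean error: {pareto_error:.4f}")
# The heavy-tailed error is typically much larger: rare extreme samples
# dominate the estimate, just as rare user-specific texts dominate
# language data.
```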

+ We need to find a balance between accuracy and privacy.

🤔Pretty challenging task. Will we be able to solve it anyway?


Towards NLP🇺🇦

LLMs are everywhere: what other thoughts can we come up with?

This post is a list of alternative sources to read about LLMs and the changes they have brought:

* Choose Your Weapon: Survival Strategies for Depressed AI Academics 🙃 "What should we do now that ChatGPT is here?" is a question probably every student/researcher in NLP academia has asked. This position paper can give you several ideas for how to carry on😉

* Closed AI Models Make Bad Baselines: we will see how many papers mentioning ChatGPT appear at this ACL. However, closed models are not the way to do benchmarking in research.

* Towards Climate Awareness in NLP Research: together with the growth of datasets and model sizes, our responsibility to the environment also increases. In modern research, it is good practice to report how much computation time and resources were used and how much CO2 was emitted.

* Step by Step Towards Sustainable AI: if you want to round off your reading about responsible AI, I really recommend this AlgorithmWatch issue. Professionals from Hugging Face and several German institutions share their thoughts on the key points we should pay attention to in order to deploy AI safely for humanity and nature.


Towards NLP🇺🇦

IFAN: An Explainability-Focused Interaction Framework for Humans and NLP Models

We talked before about different techniques for explaining ML and NLP models. OK, we have explained some model's output on a specific input and highlighted some tokens. What should happen next?

📌You can use humans to debug and improve your model! Your steps can be:
1. 🔍You identify misclassified samples (for instance, in hate speech detection, you notice that the model is biased against some target words).
2. 📊You explain the model's decisions and see that the model puts too much or too little weight/attention on some words.
3. 📝You edit the explanation, i.e. the weights of the word spans that should contribute to the correct label.
4. 🔄You do this for several samples and retrain the adapter layer of your model on the new samples (see the sketch after this list).
5. ✅Now your model's behavior is fixed, i.e. it is debiased!
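To make step 4 more tangible, here is a minimal toy sketch of adapter-style retraining (my own illustration in plain PyTorch, not IFAN's actual code): the backbone stays frozen, and only a small adapter head is updated on the human-corrected samples.

```python
import torch
import torch.nn as nn

# Toy stand-ins: a frozen "backbone" (in a real setup this would be a
# transformer encoder) and a small trainable adapter head.
backbone = nn.Sequential(nn.Linear(768, 768), nn.ReLU())
adapter = nn.Sequential(nn.Linear(768, 64), nn.ReLU(), nn.Linear(64, 2))

for p in backbone.parameters():  # freeze everything except the adapter
    p.requires_grad = False

optimizer = torch.optim.AdamW(adapter.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Hypothetical corrected samples: encoded texts whose explanations a human
# has edited, paired with the correct labels.
features = torch.randn(16, 768)
labels = torch.randint(0, 2, (16,))

for _ in range(20):
    logits = adapter(backbone(features))
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(f"final loss: {loss.item():.3f}")
```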

All this can be done with our platform:
https://ifan.ml/

This is the first solid version; we are still developing many new features for it (for instance, a report page where you can monitor changes in the model's performance). But already now, we believe the platform can be a solid step toward human-in-the-loop debugging of NLP models🤖.

📜The corresponding paper about this first version [link]


Towards NLP🇺🇦

A Survey of Large Language Models

* General overview;
* Listing of models by number of parameters;
* Commonly used corpora for training;
* How pre-training can be done;
* Typical architecture types;
* How to fine-tune;
* How to prompt;
* Tasks that can be solved;
* Evaluation setups;

A very comprehensive survey:
https://arxiv.org/abs/2303.18223


Towards NLP🇺🇦

📣Urgent news 📣

We need to shut down all AI

We need to turn off all GPU clusters

https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/


Towards NLP🇺🇦

11 PhD Positions in the MSCA Doctoral Network HYBRIDS

If you are thinking about doing a PhD in NLP, check out this opportunity!

All positions are connected with fighting fake news and toxic speech. The directions are super interesting; if I were not already a postdoc, I would have applied myself😉

There is still a month to apply: the deadline is April 26th.

https://hybridsproject.eu/jobs/

This recommendation comes from @bbkjunior. Subscribe to his channel @butterflai_effect🤗


Towards NLP🇺🇦

Fall is here and it's time to cozy up in our knitwear! Our sweaters are made with the softest yarns, ensuring you'll stay warm and comfortable all season long. From classic cable-knits to trendy oversized cardigans, we have the perfect sweater to match your style. Plus, our prices are affordable, so you don't have to break the bank to stay cozy. Shop now and embrace the chill in style. #CozySweaters #FallFashion #Knitwear #StayWarm #AffordableFashion
— ChatGPT

Img: Microsoft Designer

Prompts by a human (for now)


Towards NLP🇺🇦

ELLIS Summer Schools

The European Laboratory for Learning and Intelligent Systems (ELLIS) is holding a series of summer schools this summer:
https://ellis.eu/events

I am personally interested in the Summer School on Large-Scale AI for Research and Industry — how to train models and work with data using distributed computation. In the era of LLMs, such skills seem important to obtain😉

Find your school of interest🤘


Towards NLP🇺🇦

GPT-4 Developer Livestream

https://www.youtube.com/live/outcGtbnMuQ?feature=share


Towards NLP🇺🇦

//no model architecture or implementation details 👍


Towards NLP🇺🇦

To all women in this channel — may you have equal opportunities to pursue your career in computer science and become super cool professionals💪


Towards NLP🇺🇦

Introducing ChatGPT and Whisper APIs

>write a blog-post about ChatGPT API release

>We are excited to announce the release of the ChatGPT API, which enables developers to integrate the power of OpenAI's cutting-edge language model into their applications. With this release, developers can access the full capabilities of ChatGPT, a large language model trained on vast amounts of text data, through a simple API.

ChatGPT is a state-of-the-art language model that can understand natural language input and generate human-like responses. It can be used in a variety of applications, such as chatbots, virtual assistants, and customer service automation. By integrating the ChatGPT API into their applications, developers can provide their users with a more natural and intuitive experience.
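For reference, this is roughly how the new API was called with the `openai` Python package at release time (a sketch based on the 2023 SDK; the interface has since changed):

```python
import openai

openai.api_key = "sk-..."  # your API key here

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model behind ChatGPT at release
    messages=[
        {"role": "user", "content": "write a blog-post about ChatGPT API release"},
    ],
)
print(response["choices"][0]["message"]["content"])
```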

📌Release notes [link]


Towards NLP🇺🇦

TowardsNLP Online Meetup

Sooo... We are meeting on February 26th at 6pm (GMT+1)📍

The link will be posted right before the event.


Towards NLP🇺🇦

Also, to give the discussion some structure, you can leave your most desired questions and topics here in the comments👇


Towards NLP🇺🇦

A PhD Student’s Perspective on Research in NLP in the Era of Very Large Language Models

As our IFAN project was recommended as one of the promising research directions, I will recommend in return reading this recent paper, which answers the question: "So what now in NLP research, if ChatGPT is out?"
Spoiler: the world has not ended, and we still have plenty of work to do!

https://arxiv.org/abs/2305.12544

Based on my research work and what I want to explore further, here is my top list of research directions:

1. Misinformation fight. There are still zero working automated fake news and propaganda detection systems online, while the risk of misinformation spreading keeps increasing.
2. Multilingualism. The usual reminder that there are more languages than English. At least 7k more, in fact.
3. Explainability and Interpretability. Do we trust models' decisions? We are still far from 100% trust. We can help integrate these models into decision-making processes only if their behavior is transparent. And consider whether we can even explain every NLP task; the methods differ completely from task to task.
4. Fewer resources. Less memory to store and fine-tune models, and less data to learn from! Do we really need all these training samples, or do we just need diverse enough data?
5. Human-NLP model interaction. What we can admit is that ChatGPT was the first NLP model used not only by specialists but by everyone, because it is more or less pleasant and safe to use. Even if the model cannot properly answer some input, it still provides a nicely written response. The wrapper is also extremely important: how do we package these models so that users are comfortable working with them? And what about children, if we want to adapt such models for education even from early ages?

Be brave, be creative, be inspired✨


Towards NLP🇺🇦

Language models can explain neurons in language models

What about using GPT-4 to automatically write explanations for the behavior of neurons in large language models, and to score those explanations?

* Explain: Generate an explanation of the neuron’s behavior by showing the explainer model (token, activation) pairs from the neuron’s responses to text excerpts.
* Simulate: Use the simulator model to simulate the neuron's activations based on the explanation.
* Score: Automatically score the explanation based on how well the simulated activations match the real activations (a toy sketch of this step follows below).
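Here is a toy sketch of the scoring step (my own illustration; in the paper, both the explanation and the simulated activations come from GPT-4 calls):

```python
import numpy as np

def score_explanation(real_activations, simulated_activations):
    """Score an explanation by how well activations simulated from it
    correlate with the neuron's real activations."""
    real = np.asarray(real_activations, dtype=float)
    sim = np.asarray(simulated_activations, dtype=float)
    return float(np.corrcoef(real, sim)[0, 1])

# Hypothetical data: a neuron's real activations on six tokens, and the
# activations a simulator predicted from the text explanation alone.
real = [0.0, 0.1, 3.2, 0.0, 2.9, 0.2]       # the neuron fires on tokens 3 and 5
simulated = [0.1, 0.0, 2.8, 0.1, 3.1, 0.0]  # a faithful explanation
print(score_explanation(real, simulated))    # close to 1.0 -> good explanation
```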

Blog from Closed OpenAI: [link]
Paper: [link]
Code and collected dataset of explanations: [link]


Towards NLP🇺🇦

Why is text detoxification especially important right now?

No chatbot is safe from turning toxic at some point (even ChatGPT!). So, if you want to have safe conversations with your users, it is still important to handle toxic language.

With our text detoxification technology, you can:

* Before training your language model or chatbot, you can preprocess the scraped training data to ensure that there is no toxicity in it. But you should not just throw toxic samples away: you can detoxify them! Then the major part of the dataset is not lost, and its content is preserved.
* You can ensure that user messages are non-toxic as well. Again, the message itself is preserved; after detoxification, the conversation will not drift into an unsafe tone.
* You can safeguard the answers from your chatbot as well! The conversation does not have to stop even if your chatbot generates something toxic: its reply will be detoxified, and the user will see a neutral answer (see the sketch after this list).
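As a sketch of how a detoxification step could slot into such a pipeline (the checkpoint name is my assumption about the group's models on the Hugging Face Hub; see the repo below for the exact released models):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Checkpoint name is an assumption; check the repo for the released models.
model_name = "s-nlp/bart-base-detox"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def detoxify(text: str) -> str:
    """Rewrite a (possibly toxic) message as a neutral paraphrase."""
    inputs = tokenizer(text, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# The same call works for training data, user messages, and bot replies:
print(detoxify("this is a stupid idea and you are a total idiot"))
```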

Check out all the info about our research and all models in this repo!


Towards NLP🇺🇦

MLSS 2023

One day until applications close for the Machine Learning Summer School on Applications in Science!
https://mlss2023.mlinpl.org/

I personally took part in MLSS 2020; even though it was virtual, I got so many insights. This year it is in Kraków! Get a chance to listen to lectures from world-famous speakers😉


Towards NLP🇺🇦

Guys, really, who answered

"My colleague was fired because of ChatGPT" or
"Now I can do all my tasks with ChatGPT"

Could you please share your stories? 🙂
It is very intriguing!


Towards NLP🇺🇦

🔥Free EACL 2023 for Ukrainian Students🔥

If you are a Ukrainian student, you can apply for free EACL 2023 registration (both online and on-site, but be aware of the deadlines):
*online attendance — available for all Ukrainian students who apply through this form by April 16, 2023
*on-site attendance — available for a very limited number of Ukrainian students who apply through this form by April 7, 2023

Who is eligible to apply:
*students, including PhD students, currently studying at a Ukrainian academic institution
*students, including PhD students, who studied at a Ukrainian academic institution until February 24, 2022, but are currently studying abroad

https://forms.gle/WtMuxmNGoGnzvLRM7


Towards NLP🇺🇦

Microsoft Designer

I got access to Microsoft Designer, and it is a super interesting tool. You can generate designs for posters, presentations, Instagram posts, postcards, websites, invitations...

Now Viber and WhatsApp postcards will know no limits.

Do you want to try promoting one of your products?😉


Towards NLP🇺🇦

Interviews

With Ilya Sutskever (GPT-4 co-creator and author of many other important milestones in AI and NLP research):
https://youtu.be/SjhIlw3Iffs

With Sam Altman (OpenAI CEO):
https://youtu.be/L_Guz73e6fw


Towards NLP🇺🇦

Explainability for NLP

With the rise of LLMs from ClosedAI, research in explainability for NLP is more important than ever. Still, a lot of work remains to be done in the field. However, you can already experiment and try to explain your fine-tuned LLMs on a specific task. For now, the majority of methods have been explored for text classification tasks and are adapted from tabular data.

How can it be done?

1. Baseline approach: leave-one-out explanations. For instance, you have a regression layer as one of the last layers of your model. You can check the tokens with the largest weights, then exclude them from the text and check whether the model's answer changes. If the tokens were indeed important, the answer should change dramatically, as the model can no longer rely on these words to make a correct decision.

2. Local Surrogate (LIME). A modification of the previous idea: you perturb the sentence by removing words and check the result each time. The "importance" of each word is estimated from how much the model's answer differs, by fitting a simple, interpretable surrogate model on these perturbed samples.

3. SHAP (SHapley Additive exPlanations). It is based on game theory, with the main idea of fairly distributing the “payout” (= the prediction) among the features. So, one more modification of the previous approaches, estimating a score that satisfies three properties — local accuracy, missingness, and consistency (see the sketch after this list).
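For instance, a minimal SHAP setup for a fine-tuned text classifier might look like this (a sketch using the public `shap` and `transformers` packages with a standard sentiment checkpoint):

```python
import shap
from transformers import pipeline

# Any text-classification pipeline works; this public sentiment model is
# used purely for illustration.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    top_k=None,  # return scores for all labels, as SHAP expects
)

explainer = shap.Explainer(classifier)
shap_values = explainer(["This movie was painfully boring, but the cast was great."])

# Per-token contributions to each class; in a notebook this renders an
# interactive highlight view of the text.
shap.plots.text(shap_values)
```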

More details on how explainability can be used in general ML can be found in the book "Interpretable Machine Learning". TUM, where I am right now, has already done an overview of explainability methods for NLP; you can check this paper.

If we have explained a model, what comes next? How can we fix a model's misbehavior using such explanations? The continuation of the explainability story will come in further posts😉


Towards NLP🇺🇦

Bing runs on GPT-4

https://blogs.bing.com/search/march_2023/Confirmed-the-new-Bing-runs-on-OpenAI%E2%80%99s-GPT-4


Towards NLP🇺🇦

GPT-4 is here

🥁

https://openai.com/research/gpt-4

To get access to the API, you can sign up for the waiting list.


Towards NLP🇺🇦

Reinforcement Learning Summer School

A 10-day summer school in Barcelona🏝 dedicated to a deep dive into Reinforcement Learning.

* Where: The summer school will take place at the Campus Poblenou of Universitat Pompeu Fabra in Barcelona (Spain).
* When: June 26th to July 5th, 2023
* Suggested target audience: MSc and PhD students who are not yet experts in reinforcement learning but have prior knowledge of machine learning (some notions of reinforcement learning are also necessary). Post-docs, researchers, and professionals working in related fields and willing to learn about reinforcement learning can also apply.
* Application ends on March 27, 2023.

The school has fees! For students, it is 200 Euros.

The official website [link]
The program [link]
Application form [link]


Towards NLP🇺🇦

TowardsNLP Online Meetup

Let us start!

The link to join:
https://tum-conf.zoom.us/j/69545123140?pwd=TWkreHhrTDlvaGhkUzlnaHpTRUhTQT09

We are collecting donations to help refugees in Germany during the call. Send to PayPal: dardem96@gmail.com


Towards NLP🇺🇦

Sorry if your timezone is not covered🙏 If you have questions, leave them in the comments; I will try to cover them and publish the recording later😉


Towards NLP🇺🇦

TowardsNLP Online Meetup

Save the date — February 26th. Let us have an online meeting where we can chat about everything NLP🤗

The only open question I want to discuss is when exactly we should start the call. Please select an option.

Donations collected during the call will go to DaMigra, an organization that helps refugees in Germany.
