🤔💡 How Spotify Built a Scalable Annotation Platform: Insights and Results
Spotify recently shared their case study, How We Generated Millions of Content Annotations, detailing how they scaled their annotation process to support ML and GenAI model development. These improvements enabled the processing of millions of tracks and podcasts, accelerating model creation and updates.
Key Steps:
1️⃣ Scaling Human Expertise:
✅ Core teams: annotators (primary reviewers), quality analysts (resolve complex cases), project managers (team training and liaison with engineers).
✅ Automation: Introduced an LLM-based system to assist annotators, significantly reducing costs and effort.
2️⃣ New Annotation Tools:
✅ Designed interfaces for complex tasks (e.g., annotating audio/video segments or texts).
✅ Developed metrics to monitor progress: task completion, data volume, and annotator productivity.
✅ Implemented a "consistency" metric to automatically flag contentious cases for expert review (a toy sketch follows these steps).
3️⃣ Integration with ML Infrastructure:
✅ Built a flexible architecture to accommodate various tools.
✅ Added CLI and UI for rapid project deployment.
✅ Integrated annotations directly into production ML pipelines.
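Spotify doesn't publish the formula behind its consistency metric, but a minimal Python sketch of one common approach, flagging items whose inter-annotator agreement falls below a threshold, might look like this (the labels, threshold, and helper names are all hypothetical):

```python
from collections import Counter

def agreement(labels: list[str]) -> float:
    """Share of annotators who chose the most common label."""
    top_count = Counter(labels).most_common(1)[0][1]
    return top_count / len(labels)

def flag_contentious(annotations: dict[str, list[str]],
                     threshold: float = 0.7) -> list[str]:
    """Return ids of items whose agreement falls below the threshold."""
    return [item for item, labels in annotations.items()
            if agreement(labels) < threshold]

# Toy example: three annotators label two tracks.
annotations = {
    "track_1": ["explicit", "explicit", "explicit"],  # agreement 1.0 -> OK
    "track_2": ["explicit", "clean", "clean"],        # agreement 0.67 -> flagged
}
print(flag_contentious(annotations))  # ['track_2']
```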
😎 Results:
✅ Annotation volume increased 10x.
✅ Annotator productivity improved 3x.
✅ Reduced time-to-market for new models.
Spotify's scalable and efficient approach demonstrates how human expertise, automation, and robust infrastructure can transform annotation workflows for large-scale AI projects. 🚀
💡 A Quick Selection of GitHub Repositories for Beginners and Beyond
SQL Roadmap for Data Science & Data Analytics - a step-by-step program for learning SQL. This GitHub repository is supplemented with links to learning materials, making it a great resource for mastering SQL
kh-sql-projects - collection of source codes for popular SQL projects catering to developers of all levels, from beginners to advanced. The repository includes PostgreSQL-based projects for systems like library management, student records, hospitals, booking, and inventory. Perfect for hands-on SQL practice!
ds-cheatsheet - repository packed with handy cheat sheets for learning and working in the Data Science field. An excellent resource for quick reference and study
GenAI Showcase - repository showcasing the use of MongoDB in generative artificial intelligence. It includes examples of integrating MongoDB with Retrieval-Augmented Generation (RAG) techniques and various AI models
🧐 Distributed Computing: Hit or Miss
In the article Optimizing Parallel Computing Architectures for Big Data Analytics, the author explains how to efficiently distribute workloads when processing Big Data using Apache Spark (a minimal sketch follows at the end of this section).
🤔 However, the author doesn't address the key advantages and disadvantages of distributed computing, which we inevitably have to navigate.
💡 Advantages:
✅ Scalability: Easily expand computational capacity by adding new nodes.
✅ Fault tolerance: The system remains operational even if individual nodes fail, thanks to replication and redundancy.
✅ High performance: Concurrent data processing across nodes accelerates task execution.
⚠️ Now for the disadvantages:
✅ Management complexity: Coordinating nodes and ensuring synchronized operation requires a sophisticated architecture.
✅ Security: Distributing data makes protecting it from breaches and attacks more challenging.
✅ Data redundancy: Ensuring fault tolerance often requires data replication, increasing storage overhead.
✅ Consistency issues: Maintaining real-time data consistency across numerous nodes is difficult (as per the CAP theorem).
✅ Update challenges: Making changes to a distributed system, such as software updates, can be lengthy and risky.
✅ Limited network bandwidth: High data transfer volumes between nodes can overload the network, slowing down operations.
🥸 Conclusion:
Distributed computing offers immense opportunities for scaling, accelerating computations, and ensuring fault tolerance. However, its implementation comes with a host of technical, organizational, and financial challenges, including managing complex architectures, ensuring data security and consistency, and meeting demanding network infrastructure requirements.
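To make the workload-distribution point concrete, here is a minimal PySpark sketch of an aggregation that Spark parallelizes across nodes; the file paths and column names are placeholders, not anything from the article:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("distributed-agg").getOrCreate()

# Spark splits the input into partitions that executors process in parallel.
df = spark.read.parquet("/data/events.parquet")  # placeholder path

# Repartitioning by key spreads the work evenly across the cluster.
result = (
    df.repartition(64, "user_id")
      .groupBy("user_id")
      .agg(F.count("*").alias("events"),
           F.avg("duration").alias("avg_duration"))
)

result.write.mode("overwrite").parquet("/data/agg_by_user.parquet")
```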
😎💡Top Collection of Useful Data Tools
✅ gitingest — A utility for automating analysis of Git repositories. It collects information about commits, branches, and authors and transforms it into formats convenient for integration with large language models (LLMs). The tool is great for analyzing change histories, building models on top of code, and automating work with repositories.
✅ datasketch — A Python library for optimizing work with large data. It provides probabilistic data structures, including MinHash for Jaccard similarity estimation and HyperLogLog for counting unique items. These tools enable tasks such as finding similar items and cardinality analysis with minimal memory and time consumption (a quick sketch follows this list).
✅Polars — A high-performance library for working with tabular data, developed in Rust with Python support. The library integrates with NumPy, Pandas, PyArrow, Matplotlib, Plotly, Scikit-learn, and TensorFlow. Polars supports filtering, sorting, merging, joining, and grouping data, providing high speed and efficiency for analytics and handling large volumes of data.
✅ SQLAlchemy — A library for working with databases, supporting interaction with PostgreSQL, MySQL, SQLite, Oracle, MS SQL, and other DBMS. It provides tools for object-relational mapping (ORM), simplifying data management by allowing developers to work with Python objects instead of writing SQL queries, while also supporting flexible work with raw SQL for complex scenarios.
✅ SymPy — A library for symbolic mathematics in Python. It allows performing operations on expressions, equations, functions, matrices, vectors, polynomials, and other objects. With SymPy, you can solve equations, simplify expressions, calculate derivatives, integrals, approximations, substitutions, factorizations, and work with logarithms, trigonometry, algebra, and geometry.
✅ DeepChecks — A Python library for automated model and data validation in machine learning. It identifies issues with model performance, data integrity, distribution mismatches, and other aspects. DeepChecks allows for easy creation of custom checks, with results visualized in convenient tables and graphs, simplifying analysis and interpretation.
✅ Scrubadub — A Python library designed to detect and remove personally identifiable information (PII) from text. It can identify and redact data such as names, phone numbers, addresses, credit card numbers, and more. The tool supports rule customization and can be integrated into various applications for processing sensitive data.
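As a taste of the list above, here is a minimal datasketch sketch: MinHash estimates Jaccard similarity between token sets, and HyperLogLog estimates distinct counts in constant memory (the documents are made up for the example):

```python
from datasketch import MinHash, HyperLogLog

docs = {
    "a": "big data processing with spark and kafka".split(),
    "b": "stream processing with kafka and flink".split(),
}

# MinHash: estimate Jaccard similarity without comparing the full sets.
sketches = {}
for name, tokens in docs.items():
    mh = MinHash(num_perm=128)
    for t in tokens:
        mh.update(t.encode("utf8"))
    sketches[name] = mh
print("estimated Jaccard:", sketches["a"].jaccard(sketches["b"]))

# HyperLogLog: estimate the number of unique tokens across all docs.
hll = HyperLogLog()
for tokens in docs.values():
    for t in tokens:
        hll.update(t.encode("utf8"))
print("estimated distinct tokens:", hll.count())
```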
🌎TOP DS-events all over the world in January 2025
Jan 7-8 - HPC Monthly Workshop: Machine Learning and BIG DATA - https://www.psc.edu/resources/training/hpc-workshop-big-data-january-7-8-2025/
Jan 9 - Innovative Practices in Science & Technology - Taipei, Taiwan - https://phdcluster.confx.org/wcipst-9jan-taipei/
Jan 9-10 – ICUSGBD - Seville, Spain - https://conferenceineurope.net/eventdetail/2640428
Jan 10-12 - ACIE 2025 - Phuket, Thailand - https://acie.org/
Jan 10-12 - ICSIM 2025 - Singapore, Singapore - https://www.icsim.org/
Jan 15-17 - IT / Digital Transformation (DX) show - INTEX OSAKA, Japan - https://www.japan-it.jp/osaka/en-gb.html
Jan 21 - ElasticON Tour - La Salle Wagram, Paris - https://dev.events/conferences/elastic-on-6dyjbty
Jan 23-24 - 5th Annual Excellence in Master Data Management & Data Governance Summit - Amsterdam, The Netherlands - https://tbmgroup.eu/etn/5th-annual-excellence-in-master-data-management-data-governance-summit-cross-industry/
Jan 25 – CBIoTML - Atlanta, USA - https://bigdataresearchforum.com/Conference/267/ICBIoTML/
Jan 31-Feb 2 - Artificial Intelligence & Innovation in Healthcare - Dubai, UAE - https://maiconferences.com/artificial-intelligence-in-healthcare/
🧐Multithreading: PostgreSQL vs. MSSQL Server – Pros and Cons
Both PostgreSQL and MSSQL Server are popular databases for web application infrastructure. Here’s a quick comparison of their multithreading models:
PostgreSQL
👍 Pros:
✅ Process-based model ensures isolation and minimizes interference.
✅ Stability and security reduce deadlock risks.
✅ Flexible scaling for individual tasks.
❌ Cons:
✅ High memory usage per process.
✅ Limited performance with many connections.
✅ Challenges with horizontal scaling.
MSSQL Server
👍 Pros:
✅ Thread-based model efficiently utilizes CPU and memory.
✅ High scalability for numerous parallel connections.
✅ Optimized for Windows servers.
✅ Fast thread switching boosts performance in highly concurrent systems.
❌ Cons:
✅ Troubleshooting is harder due to parallel execution.
✅ Higher risk of deadlocks.
✅ Requires advanced administrative effort for thread management.
🤔Which to Choose?
PostgreSQL: For moderate connections, stable loads, and reliability.
MSSQL Server: For high-load systems needing peak scalability and performance.
😎📊Data Trends That Will Transform Business in 2025
The article The Most Powerful Data Trends That Will Transform Business In 2025 highlights key trends shaping the future of data usage.
🤔Here are some of them:
✅ Confidential Computing: Blockchain and homomorphic encryption will enable data analysis without exposing its content. This is a crucial step for secure collaborative analytics between companies.
✅ Growth of Data Marketplaces: Businesses will start monetizing their datasets, creating new revenue streams. Specialized platforms for trading data will emerge.
✅ Expansion of Edge Computing: Processing data at the network edge will reduce latency and enhance security. Technologies like tinyML will transform industries where real-time data processing is critical.
✅ Behavioral Data as a New Asset: Emotional and behavioral data analysis will underpin personalized solutions and decision-making.
🥲TOP fails with different DBMSs: pain, tears
✅PostgreSQL and the vacuum of surprise
Everyone loves PostgreSQL until they encounter the autovacuum. If you forget to configure it correctly, the database starts to slow down so much that it's easier to migrate data to Excel.
✅Cassandra: master of sharding and chaos
Oh, this magical world of distributed data! As long as everything runs smoothly, Cassandra is great. But when one node fails, the cluster becomes a mystery box: which part of the data survived? And cross-DC replication in large networks is a lottery.
✅Firebase Realtime Database
Sounds cool: data synchronized in real time! But once you have tens of thousands of active users, everything becomes hell, because every little query costs a ton of money. And unmonitored updates hit all clients at once.
✅Redis as the main database
Easy, fast, everything in memory. Sounds cool until you realize that nobody set up persistence and recovery. Oops, the server crashed and the data is gone.
🧐Data labeling in 2024: emerging trends and future requirements
Caught an interesting article, Data Labeling in 2023: Emerging Trends and Future Demands for Impactful Results. Here are a few key points:
🤔 Current trends:
✅ Increasing complexity of datasets
✅ A shift toward real-time labeling
✅ Large-scale development of automated tools to complement manual labor
🤔Market forecasts:
✅Expected to grow to $8.22 billion by 2028 at a CAGR of 26.6%
✅Requirements for labeling quality and speed are increasing and will grow exponentially
🤔Technological trends:
✅Adaptive AI
✅Metaverse
✅Industry cloud platforms
✅ Improvements in wireless technologies
Thus, the author expects the data labeling industry to grow rapidly, driven by rising demand for accurate and reliable data for AI and machine learning. Automation, adaptive AI, and new technological solutions will improve the quality and speed of labeling.
🌎TOP DS-events all over the world in December
Dec 2-5 - TIES 2024 - Adelaide, Australia - https://www.isi-next.org/conferences/ties2024/
Dec 3 - Generation AI - Paris, France - https://dev.events/conferences/generation-ai-c4odjomu
Dec 5 - The International AI Summit 2024 - Brussels, Belgium - https://global-aiconference.com/
Dec 2-6 - Data Science Week 2024 - Fort Wayne, USA - https://sites.google.com/view/data-science-week-2024
Dec 2-6 - AWS re:Invent - LAS VEGAS, USA - https://reinvent.awsevents.com/
Dec 9-10 - ICMSCS 2024 (18th edition) - London, United Kingdom - https://waset.org/mathematics-statistics-and-computational-sciences-conference-in-december-2024-in-london
Dec 10 - Global Big Data Conference - Online - https://www.globalbigdataconference.com/
Dec 10 - Prompt Engineering Bulgaria 2024 - Sofia, Bulgaria - https://www.eventbrite.nl/e/prompt-engineering-bulgaria-2024-tickets-796563251127?aff=oddtdtcreator
Dec 11 - AI Heroes - Torino, Italy - https://dev.events/conferences/ai-heroes-xxrqdxu9
Dec 11-12 - The AI Summit New York - New York, USA - https://newyork.theaisummit.com/
Dec 12-13 - AI: 2057 - Dubai, UAE - https://www.globalaishow.com/
Dec 15-18 - IEEE International Conference on Big Data 2024 - Washington, D.C., USA - https://www3.cs.stonybrook.edu/~ieeebigdata2024/
Dec 19 - Normandie.ai 2024 - Rouen, France - https://dev.events/conferences/normandie-ai-2024-e15asbe6
🤖Deus in Machina: Jesus-AI has been installed in a Swiss church
St. Peter's Chapel in Lucerne has launched an AI Jesus project that communicates in 100 languages. The AI is installed in the confessional where visitors can ask questions and receive answers in real time.
Trained on theological texts, Jesus-AI engaged more than 1,000 people in two months, two-thirds of whom described the experience as “spiritual.” However, the experiment has drawn criticism for the superficiality of the answers and the inability to have meaningful conversations with the machine.
🖥Read more here
😎💡AlphaQubit from Google: a new standard for accuracy in quantum computing.
Google DeepMind and Google Quantum AI have unveiled AlphaQubit, a decoder that dramatically improves error correction accuracy in quantum computing. Based on a neural network trained on synthetic and real data from the Sycamore processor, AlphaQubit uses a Transformer architecture to analyze errors.
Tests have shown that AlphaQubit makes 6% fewer errors than tensor-network methods and 30% fewer than correlated matching. However, despite the high level of accuracy, real-world speed and scalability issues remain.
✅Link to blog
🧐Lex Fridman interviews Anthropic CEO Dario Amodei
😎Highlights:
✅Dario expressed optimism about the imminent emergence of AI capable of reaching human level. He noted that development and training costs will rise in the coming years, and that by 2027 clusters worth around $100 billion will likely be built - significantly larger than today's largest supercomputers, which cost around $1 billion.
✅Amodei believes that models will continue to scale, despite the lack of a theoretical explanation for this process - there is, according to him, some "magic" in it.
✅AI models are currently improving at an astonishing rate, especially in areas such as programming, physics, and mathematics. On the SWE-bench benchmark, their success rate was only 2-3% at the beginning of the year and now reaches about 50%. The main concern under these conditions is a possible monopoly on AI, with control concentrated in a small number of large companies, which could threaten the balance of power in society.
🖥You can watch the interview here
😂A Radical Solution from AI
Every day, thousands of programmers can breathe a sigh of relief when AI takes over routine work for them, like writing queries or formatting data😁
🖥ChatGPT was asked to write SQL queries for a store database. The answer was priceless
😎Sometimes AI's views on solving a particular problem are slightly different from human ones
💡A small selection of useful things for working with Big Data
postgres-backup-local is a Docker tool for creating backups of PostgreSQL databases, storing them in the local file system with the ability to flexibly manage copies. With its help, you can back up multiple databases from one server by specifying their names through the POSTGRES_DB environment variable (separated by a comma or space).
The tool supports webhooks before and after backup, automatically manages the rotation and deletion of old copies, and is also available for Linux architectures, including amd64, arm64, arm/v7, s390x, and ppc64le.
EfCore.SchemaCompare is a tool for comparing database schemas in Entity Framework Core (EF Core), allowing you to find and analyze differences between the current database and migrations. It provides a convenient way to track changes in data structures, which helps prevent errors caused by schema mismatches during application development.
Suitable for database versioning, especially useful when developing and upgrading EF Core-based applications.
Greenmask is an open-source tool for PostgreSQL designed for masking, obfuscation, and logical backup of data. It allows you to anonymize sensitive information in database dumps, making it useful for preparing data for use in non-production environments such as development and testing. Greenmask support helps protect data by meeting privacy requirements and reducing the risk of leaks during development.
💡😎 A Small Selection of Big, Fascinating, and Useful Datasets
Sky-T1-data-17k — a diverse dataset designed to train the Sky-T1-32B model, which powers the reasoning capabilities of MiniMax-Text-01. This model consistently outperforms GPT-4o and Gemini-2 in benchmarks involving long-context tasks
XMIDI Dataset — a large-scale music dataset with precise emotion and genre labels. It contains 108,023 MIDI files, making it the largest known dataset of its kind—ideal for research in music and emotion recognition
AceMath-Data — a family of datasets used by NVIDIA to train their flagship model, AceMath-72B-Instruct. This model significantly outperforms GPT-4o and Claude-3.5 Sonnet in solving mathematical problems
📚 A small selection of books on Data Science and Big Data
Software Engineering for Data Scientists - This book explains the mechanisms and practices of software development in Data Science. It also includes numerous implementation examples in Python.
Graph Algorithms for Data Science - The book covers key algorithms and methods for working with graphs in data science, providing specific recommendations for implementation and application. No prior experience with graphs is required. The algorithms are explained in simple terms, avoiding unnecessary jargon, and include visual illustrations to make them easy to apply in your projects.
Big Data Management and Analytics - This book covers all aspects of working with big data, from the basics to detailed practical examples. Readers will learn about selecting data models, extracting and integrating data for big data tasks, modeling data using machine learning methods, scalable Spark technologies, transforming big data tasks into graph databases, and performing analytical operations on graphs. It also explores various tools and methods for big data processing and their applications, including in healthcare and finance.
Advanced Data Analytics Using Python - This book explores architectural patterns in data analytics, text and image classification, optimization methods, natural language processing, and computer vision in cloud environments.
Minimalist Data Wrangling with Python - This book provides both an overview and a detailed discussion of key concepts. It covers methods for cleaning data collected from various sources, transforming it, selecting and extracting features, conducting exploratory data analysis, reducing dimensionality, identifying natural clusters, modeling patterns, comparing data between groups, and presenting results
⚔️ Kafka 🆚 RabbitMQ: Head-to-Head Clash
In the article RabbitMQ vs Kafka: Head-to-head confrontation in 8 major dimensions, the author compares two well-known tools: Apache Kafka and RabbitMQ.
Here are two primary differences between them:
✅ RabbitMQ is a message broker that handles routing and queue management.
✅ Kafka is a distributed streaming platform that focuses on data storage and message replay.
🤔 Key Characteristics:
✅ Message Order: Kafka guarantees ordering within a single partition, while RabbitMQ provides only basic FIFO guarantees (see the sketch after this section).
✅ Routing: RabbitMQ supports complex routing rules, whereas Kafka requires additional processing for message filtering.
✅ Message Retention: Kafka stores messages regardless of their consumption status, while RabbitMQ deletes messages after they are processed.
✅ Scalability: Kafka delivers higher performance and scales more efficiently.
🤔 Error Handling:
✅ RabbitMQ: Offers built-in tools for handling failed messages, such as Dead Letter Exchanges.
✅ Kafka: Error handling requires implementing additional mechanisms at the application level.
In summary, RabbitMQ is well-suited for tasks requiring flexible routing, time-based message management, and advanced error handling, while Kafka excels in scenarios with strict ordering requirements, long-term message storage, and high scalability.
💡 The article also emphasizes that both platforms can be used together to address different needs in complex systems.
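To see the ordering difference in practice: Kafka preserves order only within a partition, so producers that need per-entity ordering send all of an entity's messages with the same key, which routes them to the same partition. A minimal sketch with the kafka-python client (broker address and topic name are placeholders):

```python
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # placeholder broker
    key_serializer=str.encode,
    value_serializer=str.encode,
)

# Messages sharing a key hash to the same partition, so all events
# for user-42 are consumed in the order they were produced.
for event in ["signup", "play", "pause"]:
    producer.send("user-events", key="user-42", value=event)

producer.flush()
```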
🤔What is the difference between Smart Data and Big Data?
In the article What’s Smart data and how it’s different from Big data? the author examines the features of "Smart Data". Below is our take on this concept (it may differ from the author's, or it may coincide🥸).
So, Smart Data is a concept focused on processing, analyzing, and using data with an eye to its relevance, quality, and usefulness for decision-making. Unlike Big Data, where the emphasis is on volume, Smart Data focuses on extracting valuable information from a huge mass of data.
🤔Smart Data Features:
✅Data Quality: Selection of only relevant, accurate and structured data
✅Contextuality: Data is processed taking into account its significance for a specific task
✅Real-time analytics: Smart Data is used to enable quick decision-making
🤔Benefits:
✅Efficiency: Saving resources by working only with the necessary data
✅Personalization: Ability to tailor services to specific needs
✅Fewer Errors: Focus on high data quality reduces the risk of obtaining incorrect results
🥸However, not everything is so rosy; there are also disadvantages:
✅Ethical and legal issues: Working with personal data carries risks of privacy violation and misuse of information. This can lead to fines and loss of trust
✅High dependence on data quality: If the source data is incomplete, inaccurate or outdated, the results of the analysis can be misleading and impair decision making
✅High implementation costs: Requires investment in technology, time and qualified personnel
✅Problems with interpreting results: Even with high-quality data, analytics can be difficult for non-experts to understand, which requires additional training costs for employees
✅Technical failures: Data-processing infrastructure can be vulnerable to failures, which is especially critical for real-time domains such as finance or healthcare
🧐Thus, Smart Data is about the meaningful use of data to achieve specific goals. This concept allows companies not only to cope with information noise, but also to gain competitive advantages. However, implementation requires a well-thought-out strategy and resources, otherwise there is a risk of incurring huge losses
😎💡FineMath: A New Math Dataset by Hugging Face
Hugging Face has released FineMath, a comprehensive dataset for training models on mathematical content. It was constructed using CommonCrawl, a classifier trained on Llama-3.1-70B-Instruct annotations, and a thorough data filtering process.
Compared to OpenWebMath and InfiMM, FineMath shows more consistent accuracy improvements as the dataset size increases, thanks to its high quality and diverse content.
A project utilizing FineMath for training LLMs in math assistance is already live — explore the GitHub repository.
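If you want to inspect the data yourself, here is a minimal sketch with the datasets library; the dataset id and config name are assumptions based on the FineMath dataset card, so verify them there before running:

```python
from datasets import load_dataset

# Stream the corpus instead of downloading it in full.
# Dataset id and config assumed from the dataset card: double-check them.
ds = load_dataset("HuggingFaceTB/finemath", "finemath-4plus",
                  split="train", streaming=True)

for sample in ds.take(3):
    print(sample["text"][:200])
```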
😎A Small Selection of Useful Big Data Repositories
Complete-Advanced-SQL-Series – a repository that provides everything you need to enhance your SQL skills, including over 100 exercises and examples.
ds-cheatsheet – a GitHub repository offering a variety of useful Data Science cheatsheets.
postgres_for_everything – a collection of examples showcasing how PostgreSQL can be used for tasks such as message queues, analytics, access control, GIS, time-series data handling, search, caching, and more.
GenAI Showcase – demonstrates the use of MongoDB in generative AI, featuring examples of integration with Retrieval-Augmented Generation (RAG) and various AI models.
Data-and-ML-Projects – a repository containing over 50 projects across areas like Data Analytics, Data Science, Data Engineering, MLOps, and Machine Learning.
😎🔥A small collection of useful datasets:
Synthia-v1.5-I – a dataset that includes over 20,000 technical questions and answers. It uses system prompts in the Orca style to generate diverse responses, making it a valuable resource for training and testing LLMs on complex technical data.
HelpSteer2 – an English-language dataset designed for training reward models that improve the utility, accuracy, and coherence of responses generated by other LLMs.
LAION-DISCO-12M – includes 12 million links to publicly available YouTube tracks with metadata. The dataset is created to support research in machine learning, sound processing model development, musical data analysis, audio data processing, and training recommender systems and applications.
Universe – a large-scale collection containing astronomical data of various types: images, spectra, and light curves. It is intended for research in astronomy and astrophysics.
😎Google unveiled Willow - a quantum chip with exponential scaling
Google has released Willow, the world's first quantum chip to demonstrate exponential error reduction as the number of qubits grows. This is made possible by the efficient implementation of logical qubits that operate below the quantum error correction threshold; error correction protects information by distributing it across many physical qubits.
Willow features:
✅Record number of qubits: 105, far exceeding previous quantum computers.
✅Calculation speed: Willow solves in 300 seconds a benchmark problem that one of today's fastest supercomputers would need 10 septillion years to complete.
✅ Error minimization: as the number of qubits increases, errors decrease exponentially, solving a major problem in quantum computing over the past 30 years.
While tasks like cracking bitcoin will require 300-400 million qubits, Willow is already setting a new bar in quantum technology.
🔎 Learn more here
😎🔥A selection of tools for Big Data processing
Timeplus Proton is a ClickHouse-based SQL engine designed to process, route, and analyze streaming data from sources such as Apache Kafka and Redpanda, with the ability to transfer aggregated data to other systems.
qsv is a command-line utility designed for quickly indexing, processing, analyzing, filtering, sorting, and merging CSV files. It offers convenient and understandable commands for performing these operations.
WrenAI is an open-source tool that prepares an existing database for working with RAG (Retrieval-Augmented Generation). It allows you to transform text queries into SQL, explore data from the database without writing SQL code, and perform other tasks.
pgroll is an open-source CLI utility for managing schema migrations in PostgreSQL. It provides safe and reversible changes, supporting multiple schema versions at the same time. pgroll handles complex migrations while ensuring that client applications keep working as the database schema is updated.
Valkey is a high-performance open-source in-memory key-value datastore that supports caching and message queues and can be used as a primary database. It operates as a standalone background service or as part of a cluster, providing replication and high availability.
DataEase is an open-source BI tool for creating interactive visualizations and analyzing business metrics. It simplifies access to analytics with an intuitive drag-and-drop interface, making working with data convenient and understandable.
SurrealDB is a modern multi-model database that combines SQL, NoSQL, and graph databases. It supports relational, document, graph, temporal, and key-value data models, providing a unified solution for managing data without the need for different platforms.
LibSQL is a fork of SQLite extended with features such as HTTP and gRPC query handling and transparent replication support. It allows you to build distributed databases with writes on the primary server and reads from replicas. LibSQL secures data transfer via TLS and ships a Docker image for easy deployment.
Redash is an open-source data analytics tool designed to simplify connecting, querying, and visualizing data from a variety of sources. It allows you to create SQL and NoSQL queries, visualize results in the form of graphs and charts, and share dashboards with teams.
💡 SmolTalk: a synthetic English-language dataset for LLM fine-tuning
SmolTalk is a synthetic dataset from Hugging Face designed for supervised fine-tuning (SFT) of LLMs. It consists of 2 million rows and was used to train the SmolLM2-Instruct models.
🔥The dataset combines new and existing datasets
😎New datasets:
✅Smol-Magpie-Ultra (400K rows)
✅Smol-constraints (36K rows)
✅Smol-rewrite (50K rows)
✅Smol-summarize (101K rows)
⚡️Existing datasets:
✅OpenHermes2.5 (100K rows)
✅MetaMathQA (50K rows)
✅NuminaMath-CoT (1.12M rows)
✅Self-Oss-Starcoder2-Instruct (1.12M rows)
✅SystemChats2.0 (30K rows)
✅LongAlign (samples under 16K tokens)
✅Everyday-conversations (50K rows)
✅APIGen-Function-Calling (80K rows)
✅Explore-Instruct-Rewriting (30K rows)
📚Training results:
SmolTalk training delivered significant improvements in model performance, especially on math, programming, and system-prompt-following tasks. It gave better results on the IFEval, BBH, GSM8K, and MATH benchmarks, including when training Mistral-7B.
🤔CUPED: advantages and disadvantages
CUPED (Controlled-experiment Using Pre-Experiment Data) is a data preprocessing technique used to improve the accuracy of A/B test evaluation. CUPED reduces the variance of metrics by utilizing data collected before the experiment, allowing statistically significant differences to be identified more quickly.
Benefits of CUPED:
✅Reduces variance of metrics: Improves test sensitivity by accounting for prior data.
✅Resource savings: Reduces the sample size required to achieve statistical significance.
✅Faster interpretation of results: Reducing noise allows real effects to be found more quickly.
✅Accounting for seasonality: Using data before the experiment helps account for trends and external factors.
Disadvantages of CUPED:
✅Implementation complexity: Requires knowledge of statistics and proper choice of covariates.
✅Dependence on data quality: Pre-experimental data must be reliable and representative.
✅Necessity of covariates: A significant correlation between metric and predictor is required, otherwise the effect will be minimized.
✅Risk of overestimation: If not properly adjusted, may lead to overestimation of the effect.
Thus, CUPED is particularly useful when it is important to maximize the efficiency of experiments but requires careful data preparation and analysis.
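The core adjustment is small enough to show in full. A minimal NumPy sketch with synthetic data, using each user's pre-experiment metric as the covariate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: pre-period metric x, experiment metric y correlated with x.
x = rng.normal(100, 20, size=10_000)          # e.g. pre-period revenue per user
y = 0.8 * x + rng.normal(0, 10, size=10_000)  # experiment-period revenue

# CUPED: subtract the part of y explained by the pre-period covariate.
theta = np.cov(x, y)[0, 1] / np.var(x)
y_cuped = y - theta * (x - x.mean())

print("variance before:", round(y.var(), 1))
print("variance after: ", round(y_cuped.var(), 1))  # much smaller -> more sensitive test
```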
🔎 Optimizing search in MongoDB
MongoDB is a non-relational database that differs from SQL databases such as PostgreSQL or MySQL in its structure. Instead of tables with columns and rows, MongoDB uses collections.
Searching for text in MongoDB relies on special query operators for working with text data. It lets you search collections for text phrases and return documents containing the specified words, and it is often used alongside more complex operations, such as filtering by common attributes like price, author, or age.
In this article, the author also shares his experience with MongoDB, including the challenges of crafting optimal search queries, explained in a way that is accessible to beginners.
The article also mentions Mongoose, a popular ODM (Object Data Modeling) library that simplifies interaction between MongoDB and Node.js/JavaScript. It provides functions for data modeling, schema development, model validation, and data management.
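The article's examples are in Mongoose/Node.js; an equivalent minimal sketch in Python with pymongo looks like this (connection string, collection, and field names are placeholders):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder connection
books = client["shop"]["books"]

# A text index must exist before $text queries will work.
books.create_index([("title", "text"), ("description", "text")])

# Find documents matching the words and sort by relevance score.
cursor = books.find(
    {"$text": {"$search": "data engineering"}},
    {"score": {"$meta": "textScore"}},
).sort([("score", {"$meta": "textScore"})])

for doc in cursor:
    print(doc["title"], doc["score"])
```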
😎The Power of Data: Analyzing Quarterly Revenue Growth for Business Success
💡I recently came across an article in which the author discusses analyzing quarterly revenue growth. He argues that focusing only on annual data can hide trends and slow down decision-making. Quarterly analysis gives a clearer picture of current business performance and surfaces potential problems, such as a revenue dip in a particular period. This granularity helps identify causes (seasonal fluctuations, marketing shortcomings) and act on them faster than annual-only analysis allows. Quarterly data thus lays a foundation for optimizing growth strategies, moving from reactive to more effective data-driven management.
The author also highlights key metrics for analyzing quarterly revenue growth:
✅Customer Acquisition Cost (CAC): It is important to understand the cost of acquiring new customers to optimize marketing and sales efforts, which helps increase ROI and revenue growth.
✅Customer Lifetime Value (CLTV): This metric shows the total revenue a customer brings in over their entire relationship with the company, helping to identify high-yield segments for targeting and retention.
✅Sales Conversion: Analyzing conversion at each stage of the funnel helps identify bottlenecks and improve overall sales efficiency, which contributes to revenue growth.
🖥Link to the article
😎How Spotify accelerated data labeling for ML by 10x
Spotify shared how it accelerated data labeling for machine learning models by pairing large language models (LLMs) with human annotators. Automated initial labeling with LLMs significantly reduced processing time by letting annotators focus on complex or ambiguous cases. This combined approach tripled process throughput and reduced costs. The scalable solution is especially relevant for a rapidly growing platform and is used to monitor compliance with service rules and policies.
💡 Spotify's data annotation strategy is based on three core principles:
✅Scaling human expertise: annotators validate and refine results to improve data accuracy.
✅Annotation tools: creating efficient tools that simplify the work of annotators and allow models to be integrated more quickly into the process.
✅Fundamental infrastructure and integration: the platform is designed to handle large amounts of data in parallel and run dozens of projects simultaneously.
This approach has allowed Spotify to run multiple projects simultaneously, reduce costs, and maintain high accuracy.
More information about Spotify's solution can be found in their whitepaper.
🌎TOP DS-events all over the world in November
Nov 4-8 - PASS Data Community Summit 2024 - Seattle, USA - https://passdatacommunitysummit.com/
Nov 6 - Enterprise AI & Big Data - London, UK - https://whitehallmedia.co.uk/bdanov2024/
Nov 6-8 - PyData NYC, New York, USA - https://pydata.org/nyc2024
Nov 7 - Data Science Day 2024 - https://events.altair.com/data-science-day-2024/
Nov 7 - Data & Analytics Congres 2024 - Liemes, Utrecht - https://datainsightsnetwork.nl/events/dac-2024/
Nov 14 - IMPACT: The Data Observability Summit - Online - https://impactdatasummit.com/
Nov 18-19 - Machine Learning Week Europe - Munich, Germany - https://machinelearningweek.eu/
Nov 18-22 - LEADING GLOBAL AI EVENT - Belgrade, Serbia - https://datasciconference.com/
Nov 18-22 - QCon - San Francisco, USA - https://qconsf.com/
Nov 20 - Tech & AI LIVE 2024 - New York, USA - https://live.technologymagazine.com/tech-ai-newyork-2024/
Nov 20-23 - FMLDS - Sydney, Australia - https://www.fmlds.org/
Nov 20-21 - Data & Analytics Insight Summit - San Diego, USA - https://gdsgroup.com/events/physical-summit/data-analytics-na-nov-24/
Nov 21 - Data Science Summit - Warsaw, Poland - https://dssconf.pl/
Nov 28-29 - AI ML, Data Science & Robotics Conferences 2024 - Porto, Portugal - https://aiml.events/events/ai-ml-data-science-robotics-conferences-2024