- Researchers from the University of Vienna and SBA Research uncovered a massive privacy vulnerability in WhatsApp affecting over 3.5 billion users worldwide, potentially the largest data leak in history.[^3][^5]
- The leak exploited a flaw in WhatsApp’s contact lookup feature — a function designed to let users find others by their phone numbers.[^5][^9]
Functional and Business Requirements
- Goal: Improve individual user engagement (views, likes, comments) on suggested posts, boosting metrics such as Daily Active Users (DAU) and session counts.
- Scope: Focus on non-friend content (from creators, not just connections). The aim is to predict and increase personalized engagement.
- ML Objective: Aligns with the business need but optimizes a correlated surrogate metric (such as per-user engagement probability), not global DAU directly.
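A common way to operationalize such a surrogate objective is a per-impression engagement classifier trained with binary cross-entropy; a minimal sketch (the predictions and labels below are hypothetical illustration data, not from any real system):

```python
import math

def bce_loss(p, y):
    """Binary cross-entropy for one (predicted probability, label) pair."""
    eps = 1e-9  # guard against log(0)
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

# Hypothetical impressions: predicted engagement probability vs. whether
# the user actually engaged (viewed/liked/commented = 1, else 0).
predictions = [0.9, 0.2, 0.7, 0.1]
labels      = [1,   0,   1,   0]

loss = sum(bce_loss(p, y) for p, y in zip(predictions, labels)) / len(labels)
print(f"mean BCE: {loss:.4f}")
```

Ranking suggested posts by this predicted probability, then measuring DAU lift in A/B tests, is how the surrogate metric ties back to the business goal.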
Google's Nested Learning and Meta's Sparse Memory Finetuning are two distinct approaches to continual learning in AI, both aiming to prevent "catastrophic forgetting". Nested Learning is an architectural paradigm, while Sparse Memory Finetuning is a specific training method within existing architectures.

**Google Nested Learning**

- Nested Learning is a novel architectural paradigm that treats a single model as a system of interconnected, multi-level learning problems optimized simultaneously at different rates.
- Core concept: it introduces a Continuum Memory System (CMS), a spectrum of memory modules updating at different frequencies.
- Mechanism: it uses several "layers" of memory:
  - High-frequency layers update often, storing recent, fast-changing information (short-term memory).
  - Low-frequency layers update rarely, storing stable, core knowledge that shouldn't change easily (long-term memory).
- Result: this structural approach lets the model naturally integrate new information.
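The multi-frequency update idea can be shown with a toy scheduler. This is an illustrative sketch only, not Google's actual CMS implementation; the layer names and update periods are invented:

```python
# Toy continuum of memory "layers": each has an update period.
# Fast layers absorb information every step; slow layers only every Nth step.
layers = {
    "short_term": {"period": 1,   "updates": 0},  # high-frequency
    "mid_term":   {"period": 10,  "updates": 0},
    "long_term":  {"period": 100, "updates": 0},  # low-frequency
}

def train_step(step):
    for layer in layers.values():
        if step % layer["period"] == 0:
            layer["updates"] += 1  # stand-in for a real parameter update

for step in range(1000):
    train_step(step)

print({name: layer["updates"] for name, layer in layers.items()})
```

Over 1000 steps the short-term layer updates 1000 times while the long-term layer updates only 10 times, so recent information churns quickly while core knowledge stays stable.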
**Microsoft Agent Lightning** differs from **Unsloth** and the **Hugging Face fine-tuning library** in several key ways:

- **Purpose & scope**:
  - **Agent Lightning** is designed to *automate optimization of AI agents* in real production environments: not just fine-tuning LLMs, but also orchestrating prompts, reward-based RL, supervised fine-tuning, and managing agent workflows and traces. It can interface with multiple agent frameworks (LangChain, CrewAI, etc.) and integrates fine-tuning directly into the agent's operational loop.[1][2][3]
  - **Unsloth** focuses specifically on *speeding up and simplifying LLM fine-tuning*. It optimizes memory to support larger models on smaller GPUs and significantly reduces the time and complexity of traditional supervised fine-tuning tasks.[4][5][6]
  - **Hugging Face's library** (Transformers/Trainer) provides the foundational tools for *fine-tuning and training models* using standard workflows, with flexibility but less opinionated automation for agent-centric R
**Google's Quantum Computing Breakthrough (Willow Chip & Quantum Echoes Algorithm)**

- Historic achievement: Google's Willow chip delivers the first verifiable quantum advantage, solving a real, testable problem about 13,000× faster than the world's best supercomputers.
- Quantum Echoes algorithm: lets scientists run quantum operations forward and backward in time, observing how information spreads or scrambles when a tiny disturbance is introduced (like dropping a pebble in a pond and watching the ripples).
- Why qubits are special: qubits (quantum bits) can be both "0" and "1" at the same time (superposition), and can be linked so that changing one instantly affects another (entanglement).
- Fragility of qubits: qubits are very fragile and get disrupted by heat, vibration, or noise, so the Willow chip runs at near absolute zero to keep them stable.
- Linear Algebra (Matrices): learn matrix properties, matrix multiplication, LU decomposition, and determinants. This is needed for data analysis, processing, and techniques like PCA (Principal Component Analysis) [08:54].
- Probability and Statistics: learn random variables, probability distributions, expectation, variance, covariance, correlation, and Bayes' Rule. This is essential for understanding your data and model results.
- Numerical Computation: learn Gradient Descent, which is used to find a local minimum. The speaker suggests writing code for gradient descent yourself.
- Calculus Basics: learn the Chain Rule, which is at the heart of backpropagation.
- Theory of Machine Learning: learn key terminology and concepts like regression, train/test/validation sets, labels/targets, weights, generalization error, regularization, hyperparameter tuning (using cross-validation), and the bias-variance tradeoff.
- Exercises: https://www.deeplearningbook.org/exercises.html
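Following the suggestion above to code gradient descent by hand, here is a minimal sketch minimizing f(x) = x² (the starting point, learning rate, and iteration count are arbitrary choices):

```python
def grad(x):
    """Derivative of f(x) = x**2."""
    return 2 * x

x = 5.0    # starting point
lr = 0.1   # learning rate
for _ in range(100):
    x -= lr * grad(x)  # step opposite the gradient

print(x)  # converges toward 0, the minimum of f
```

Each step multiplies x by (1 - 2·lr) = 0.8, so after 100 steps x has shrunk essentially to the minimum at 0; try changing `lr` to see divergence when the step size is too large.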
**Bits vs Qubits (with analogy)**

- A classical bit is like a coin lying flat: only "heads" (0) or "tails" (1).
- A qubit is like a spinning coin in the air: while it spins, it can be both heads and tails at the same time (superposition).

**Superposition and Probability**

- A qubit "in the air" can be, say, 70% likely to land heads and 30% likely to land tails. It holds both possibilities until you catch (measure) the coin, at which point it picks one.
Here is supporting content for searching or downloading research papers, with each tool's specific use:

### Websites and Tools

- **Consensus**: https://get.consensus.app/neha
  *AI-powered search and summarization tool; highlights open-access status and offers a free Pro trial for research-paper search.*
- **Arxiv**: https://arxiv.org/
  *Preprint repository for free academic papers in physics, computer science, math, and more.*
- **Biorxiv**: https://www.biorxiv.org/
  *Preprint repository for biology and life-sciences papers.*
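arXiv also exposes a public export API for programmatic search. A sketch that builds a query URL for it (the endpoint and parameter names follow arXiv's API; the actual fetch is left commented out so the snippet runs offline):

```python
from urllib.parse import urlencode

# arXiv export API endpoint: http://export.arxiv.org/api/query
params = {
    "search_query": "all:quantum computing",  # free-text search
    "start": 0,
    "max_results": 5,
}
url = "http://export.arxiv.org/api/query?" + urlencode(params)
print(url)

# To actually download the Atom feed of matching papers:
# from urllib.request import urlopen
# feed_xml = urlopen(url).read().decode()
```

The API returns an Atom XML feed whose entries include titles, abstracts, and PDF links, which is handy for scripted literature surveys.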
1. Ubuntu 20.04 -> default Python 3.8 -> requires pyenv to handle newer Python versions.
2. Ubuntu 24.04 -> default Python 3.12 -> at the time of writing, PyTorch did not provide prebuilt wheels for Python 3.12 (cp312) via the official website or PyPI.
3. Installed Ubuntu 22.04 -> default Python 3.10.

wsl --install -d Ubuntu-22.04
wsl --setdefault Ubuntu-22.04