Created November 28, 2025 01:56
Introduction to artificial neurons
| { | |
| "cells": [ | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "view-in-github", | |
| "colab_type": "text" | |
| }, | |
| "source": [ | |
| "<a href=\"https://colab.research.google.com/gist/RodolfoFerro/34b7f50775a338e33f5a24d5ca0862f5/introducci-n-a-las-neuronas-artificiales.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "R6jO_1gISKxk" | |
| }, | |
| "source": [ | |
| "# 🧠 Introduction to artificial neurons\n", | |
| "\n", | |
| "> **Description:** Content notebook for the Introduction to Artificial Neurons workshop, UDL 2025. <br>\n", | |
| "> **Author:** [Rodolfo Ferro](https://github.com/RodolfoFerro) <br>\n", | |
| "> **Contact:** [X](https://twitter.com/rodo_ferro) / [Instagram](https://www.instagram.com/rodo_ferro/)\n", | |
| "\n", | |
| "\n", | |
| "## Contents\n", | |
| "\n", | |
| "### Section I\n", | |
| "\n", | |
| "1. Brief history\n", | |
| "2. Threshold Logic Unit (TLU)\n", | |
| "3. Activation and bias → The perceptron\n", | |
| "\n", | |
| "### Section II\n", | |
| "\n", | |
| "4. Learning in neurons\n", | |
| "5. Training a neuron\n", | |
| "6. Inference → Predictions\n", | |
| "\n", | |
| "### Section III – Challenge\n", | |
| "\n", | |
| "7. The dataset\n", | |
| "8. Data preparation\n", | |
| "9. Building the model\n", | |
| "10. Training the model\n", | |
| "11. Evaluation and prediction" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "xNVG2PnSEtQN" | |
| }, | |
| "source": [ | |
| "## **Section I**" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "tPk1Rkc4FZ5g" | |
| }, | |
| "source": [ | |
| "### **A brief history of neural networks**\n", | |
| "\n", | |
| "The story arguably begins with the McCulloch and Pitts neuron model of 1943, the **Threshold Logic Unit (TLU)**, also called the **Linear Threshold Unit**. It was the first modern neuron model and has inspired the development of many later ones. (You can read more [here](https://es.wikipedia.org/wiki/Neurona_de_McCulloch-Pitts).)\n", | |
| "\n", | |
| "After the TLU, the story continues with an artificial neuron equipped with an **activation function**: the **perceptron**, developed in the late 1950s by the scientist **Frank Rosenblatt**." | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "uehq48zoSocy" | |
| }, | |
| "source": [ | |
| "### **So, what is an artificial neuron?**\n", | |
| "\n", | |
| "An artificial neuron is a mathematical function conceived as a model of how biological neurons work. (You can read a bit more [here](https://en.wikipedia.org/wiki/Artificial_neuron).)\n", | |
| "\n", | |
| "> The general model of an **artificial neuron** takes several **inputs** $x_1, x_2,..., x_n $ and produces an **output** $\\hat{y}$.\n", | |
| "\n", | |
| "The inputs were proposed to carry associated **weights** $w_1, w_2, ..., w_n$: real numbers that we can interpret as the relative importance of each piece of input information when computing the neuron's output value.\n", | |
| "\n", | |
| "The neuron's output, $0$ or $1$ (in the case of the TLU), is determined by whether the weighted sum,\n", | |
| "\n", | |
| "$$\\displaystyle\\sum_{j}w_jx_j,$$\n", | |
| "\n", | |
| "<!-- $\\textbf{w}_{Layer}\\cdot\\textbf{x} =\n", | |
| "\\begin{bmatrix}\n", | |
| "w_{1, 1} & w_{1, 2} & \\cdots & w_{1, n}\\\\\n", | |
| "w_{2, 1} & w_{2, 2} & \\cdots & w_{2, n}\\\\\n", | |
| "\\vdots & \\vdots & \\ddots & \\vdots\\\\\n", | |
| "w_{m, 1} & w_{m, 2} & \\cdots & w_{m, n}\\\\\n", | |
| "\\end{bmatrix} \\cdot\n", | |
| "\\begin{bmatrix}\n", | |
| "x_1\\\\\n", | |
| "x_2\\\\\n", | |
| "\\vdots\\\\\n", | |
| "x_n\n", | |
| "\\end{bmatrix}$ -->\n", | |
| "\n", | |
| "(for $j \\in \\{1, 2, ..., n\\}$) is below or above a **limit value** that for now we will call the **threshold**. (This is the formal definition of a TLU and how it works.)\n", | |
| "\n", | |
| "Put another way, an artificial neuron can be seen as a system that makes decisions based on the evidence it is given." | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "q33kCpXyFgJ_" | |
| }, | |
| "source": [ | |
| "#### **Let's implement a TLU**" | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": null, | |
| "metadata": { | |
| "id": "cLBMuek3lBHd" | |
| }, | |
| "outputs": [], | |
| "source": [ | |
| "import numpy as np\n", | |
| "\n", | |
| "\n", | |
| "class TLU:\n", | |
| " def __init__(self, inputs, weights):\n", | |
| " \"\"\"Class constructor.\n", | |
| "\n", | |
| " Parameters\n", | |
| " ----------\n", | |
| " inputs : list\n", | |
| " List of input values.\n", | |
| " weights : list\n", | |
| " List of weight values.\n", | |
| " \"\"\"\n", | |
| "\n", | |
| " self.inputs = None # TODO: np.array <- inputs\n", | |
| " self.weights = None # TODO: np.array <- weights\n", | |
| "\n", | |
| " def predict(self, threshold):\n", | |
| " \"\"\"Function that operates inputs @ weights.\n", | |
| "\n", | |
| " Parameters\n", | |
| " ----------\n", | |
| " threshold : int\n", | |
| " Threshold value for decision.\n", | |
| " \"\"\"\n", | |
| "\n", | |
| " # TODO: Inner product of data\n", | |
| " return None" | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": null, | |
| "metadata": { | |
| "id": "t42O74IdmKIw" | |
| }, | |
| "outputs": [], | |
| "source": [ | |
| "# Now we initialize the input values and weights\n", | |
| "inputs, weights = [], [1, 1, 1]\n", | |
| "\n", | |
| "questions = [\n", | |
| " \"🌀 What is the speed? \",\n", | |
| " \"🌀 Heart rate? \",\n", | |
| " \"🌀 Breathing? \"\n", | |
| "]\n", | |
| "\n", | |
| "for question in questions:\n", | |
| " # Read an input value\n", | |
| " i = int(input(question))\n", | |
| " inputs.append(i)\n", | |
| "\n", | |
| " # Read the associated weight\n", | |
| " # w = int(input(\"🌀 And its associated weight is... \"))\n", | |
| " # weights.append(w)\n", | |
| " print()\n", | |
| "\n", | |
| "threshold = int(input(\"🌀 And our threshold/limit will be: \"))" | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": null, | |
| "metadata": { | |
| "id": "ZHjy-k33oNFm" | |
| }, | |
| "outputs": [], | |
| "source": [ | |
| "tlu = None # TODO Instantiate TLU\n", | |
| "# TODO Apply decision function with threshold" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "gUCCwUG6DgCX" | |
| }, | |
| "source": [ | |
| "### **Bias and activation functions – The perceptron**\n", | |
| "\n", | |
| "_Before moving on, we introduce two more concepts: the **bias** and the **activation function**._\n", | |
| "\n", | |
| "The thresholding decision that the neuron performs can be written mathematically as:\n", | |
| "\n", | |
| "$$ f(\\textbf{x}) =\n", | |
| " \\begin{cases}\n", | |
| " 0 & \\text{if $\\displaystyle\\sum_{j}w_jx_j <$ threshold} \\\\\n", | |
| " 1 & \\text{if $\\displaystyle\\sum_{j}w_jx_j \\geq$ threshold} \\\\\n", | |
| " \\end{cases}$$\n", | |
| "\n", | |
| "where $j \\in \\{1, 2, ..., n\\}$, and $\\textbf{x} = (x_1, x_2, ..., x_n)$.\n", | |
| "\n", | |
| "Moving the threshold to the left-hand side and writing it as $b$, we obtain:\n", | |
| "\n", | |
| "$$ f(\\textbf{x}) =\n", | |
| " \\begin{cases}\n", | |
| " 0 & \\text{if $\\displaystyle\\sum_{j}w_jx_j + b < 0$} \\\\\n", | |
| " 1 & \\text{if $\\displaystyle\\sum_{j}w_jx_j + b \\geq 0$} \\\\\n", | |
| " \\end{cases}$$\n", | |
| "\n", | |
| "where $\\textbf{x} = (x_1, x_2, ..., x_n)$ and $j \\in \\{1, 2, ..., n\\}$.\n", | |
| "\n", | |
| "This $b$ is known as the **bias**, and it describes *how prone the neuron is to __fire__*.\n", | |
| "\n", | |
| "Notably, this mathematical description matches a step function (the [_Heaviside_](https://es.wikipedia.org/wiki/Funci%C3%B3n_escal%C3%B3n_de_Heaviside) function), which is an **activation function**: a function that gates the flow of information according to the inputs and weights, firing the processed result towards the output. The step function looks like this:\n", | |
| "\n", | |
| "<center>\n", | |
| " <img src=\"https://upload.wikimedia.org/wikipedia/commons/4/4a/Funci%C3%B3n_Cu_H.svg\" width=\"40%\" alt=\"Heaviside step function\">\n", | |
| "</center>\n", | |
| "\n", | |
| "We can, however, make a neuron respond more gradually to its data (inputs, weights, bias) by using a [sigmoid](https://es.wikipedia.org/wiki/Funci%C3%B3n_sigmoide) function instead. (Rosenblatt's original perceptron used the step activation; sigmoid units are its later, smooth refinement.) The sigmoid function looks like this:\n", | |
| "\n", | |
| "<center>\n", | |
| " <img src=\"https://upload.wikimedia.org/wikipedia/commons/6/66/Funci%C3%B3n_sigmoide_01.svg\" width=\"40%\" alt=\"Sigmoid function\">\n", | |
| "</center>\n", | |
| "\n", | |
| "This function is smooth, so it has a different \"sensitivity\" to abrupt changes in the input values. Moreover, its inputs are no longer restricted to $1$'s and $0$'s: they can be any real numbers. The sigmoid function is described by the following expression:\n", | |
| "\n", | |
| "$$f(z) = \\dfrac{1}{1+e^{-z}}$$\n", | |
| "\n", | |
| "Or, written in terms of the inputs, weights, and bias:\n", | |
| "\n", | |
| "$$f(z) = \\dfrac{1}{1+\\exp{\\left\\{-\\left(\\displaystyle\\sum_{j}w_jx_j +b\\right)\\right\\}}}$$" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "0G1MY4HQFsEd" | |
| }, | |
| "source": [ | |
| "#### **Back to the example**" | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": null, | |
| "metadata": { | |
| "id": "qSn8VaEoDtHo" | |
| }, | |
| "outputs": [], | |
| "source": [ | |
| "# We modify the class to add the activation function\n", | |
| "class Perceptron(TLU):\n", | |
| " def predict(self, bias):\n", | |
| " \"\"\"Function that operates inputs @ weights.\n", | |
| "\n", | |
| " Parameters\n", | |
| " ----------\n", | |
| " bias : int\n", | |
| " The bias value for operation.\n", | |
| " \"\"\"\n", | |
| "\n", | |
| " # TODO: Inner product of data + bias\n", | |
| " # TODO: Apply sigmoid function f(z) = 1 / (1 + e^(-z))\n", | |
| " z = None\n", | |
| " return" | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": null, | |
| "metadata": { | |
| "id": "ogPy6NpfERfJ" | |
| }, | |
| "outputs": [], | |
| "source": [ | |
| "bias = int(input(\"· The new bias will be: \"))\n", | |
| "perceptron = None # TODO Instantiate Perceptron\n", | |
| "# TODO Apply decision function with threshold" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "mRGlbVZsFxdk" | |
| }, | |
| "source": [ | |
| "> This is the neuron we will use for the topics that follow." | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "NvmIk2G9EgOQ" | |
| }, | |
| "source": [ | |
| "<center>\n", | |
| " *********\n", | |
| "</center>" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "YnY-np7LE3lS" | |
| }, | |
| "source": [ | |
| "## **Section II**" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "I7-Ja9DK9cIA" | |
| }, | |
| "source": [ | |
| "### Learning in neurons\n", | |
| "\n", | |
| "Let's see how a single neuron can be trained to make a prediction.\n", | |
| "\n", | |
| "For this problem we will build a simple perceptron, as proposed by Rosenblatt, using the sigmoid function.\n", | |
| "\n", | |
| "#### **Problem statement:**\n", | |
| "\n", | |
| "We want to show a simple neuron a set of examples so it can learn how a function behaves. The set of examples is the following:\n", | |
| "\n", | |
| "- `(1, 0)` should return `1`.\n", | |
| "- `(0, 1)` should return `1`.\n", | |
| "- `(0, 0)` should return `0`.\n", | |
| "\n", | |
| "So, if we feed the neuron the value `(1, 1)`, it should be able to predict the number `1`.\n", | |
| "\n", | |
| "> **Key question:** This function corresponds to a logic gate. Can you guess which one?\n", | |
| "\n", | |
| "#### What do we need to do?\n", | |
| "\n", | |
| "Program and train a neuron to make predictions.\n", | |
| "\n", | |
| "Concretely, we are going to:\n", | |
| "\n", | |
| "- Build the class and its constructor.\n", | |
| "- Define the sigmoid function and its derivative.\n", | |
| "- Define the number of epochs for training.\n", | |
| "- Solve the problem and predict the value for the desired input." | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": null, | |
| "metadata": { | |
| "id": "2NKx40hxqmo4" | |
| }, | |
| "outputs": [], | |
| "source": [ | |
| "import numpy as np\n", | |
| "from tqdm.notebook import tqdm\n", | |
| "\n", | |
| "\n", | |
| "class TrainableNeuron():\n", | |
| " def __init__(self, n):\n", | |
| " \"\"\"Class constructor.\n", | |
| "\n", | |
| " Parameters\n", | |
| " ----------\n", | |
| " n : int\n", | |
| " Input size.\n", | |
| " \"\"\"\n", | |
| "\n", | |
| " # Fix a random seed so the experiment is reproducible\n", | |
| " np.random.seed(123)\n", | |
| "\n", | |
| " # TODO: Use 2 * np.random.random((n, 1)) - 1 to generate values in (-1, 1)\n", | |
| " # TODO: Use 2 * np.random.random() - 1 to generate values in (-1, 1)\n", | |
| " self.weights = None\n", | |
| " self.bias = None\n", | |
| "\n", | |
| " def sigmoid(self, z):\n", | |
| " \"\"\"Sigmoid function.\n", | |
| "\n", | |
| " Parameters\n", | |
| " ----------\n", | |
| " z : float\n", | |
| " Input value to sigmoid function.\n", | |
| " \"\"\"\n", | |
| "\n", | |
| " # TODO: Return the result of f(z) = 1 / (1 + e^(-z))\n", | |
| " return None\n", | |
| "\n", | |
| " def predict(self, X):\n", | |
| " \"\"\"Prediction function. Applies input function to inputs tensor.\n", | |
| "\n", | |
| " Parameters\n", | |
| " ----------\n", | |
| " inputs : list\n", | |
| " List of inputs to apply sigmoid function.\n", | |
| " \"\"\"\n", | |
| " # TODO: Apply self.sigmoid to (X . self.weights) + self.bias\n", | |
| " return None\n", | |
| "\n", | |
| " def loss(self, y_train, y_pred):\n", | |
| " \"\"\"Function to compute loss function.\n", | |
| "\n", | |
| " Parameters\n", | |
| " ----------\n", | |
| " y_train : np.array\n", | |
| " The array with output from training.\n", | |
| " y_pred : np.array\n", | |
| " The array with output from prediction.\n", | |
| "\n", | |
| " Returns\n", | |
| " -------\n", | |
| " float\n", | |
| " The loss applied to the prediction.\n", | |
| " \"\"\"\n", | |
| " # Binary cross-entropy (log-loss)\n", | |
| " eps = 1e-8\n", | |
| " log_loss = -np.mean(y_train * np.log(y_pred + eps) +\n", | |
| " (1 - y_train) * np.log(1 - y_pred + eps))\n", | |
| "\n", | |
| " # You could also try a simpler error, e.g. the\n", | |
| " # mean squared error: np.mean((y_train - y_pred) ** 2)\n", | |
| "\n", | |
| " return log_loss\n", | |
| "\n", | |
| "\n", | |
| " def train(self, X, y, epochs, lr=0.1):\n", | |
| " \"\"\"Training function.\n", | |
| "\n", | |
| " Parameters\n", | |
| " ----------\n", | |
| " X : np.array\n", | |
| " Array of features for training.\n", | |
| " y : np.array\n", | |
| " Array of labels for training.\n", | |
| " epochs : int\n", | |
| " Number of iterations for training.\n", | |
| " lr : float\n", | |
| " Learning rate for training. Default value is 0.1.\n", | |
| "\n", | |
| " Returns\n", | |
| " -------\n", | |
| " history : np.array\n", | |
| " An array containing the training history.\n", | |
| " \"\"\"\n", | |
| "\n", | |
| " # Training history\n", | |
| " history = []\n", | |
| "\n", | |
| " # Reshape the labels into a column vector\n", | |
| " n = len(X)\n", | |
| " y_train = y.reshape((n, 1))\n", | |
| "\n", | |
| " for _ in tqdm(range(epochs)):\n", | |
| " # TODO: Make a prediction with the current parameters\n", | |
| " y_pred = None\n", | |
| "\n", | |
| " # Measure how good the prediction is,\n", | |
| " # compute the error, and store it in\n", | |
| " # the training history\n", | |
| " error = self.loss(y_train, y_pred)\n", | |
| " history.append(error)\n", | |
| "\n", | |
| " # Compute the gradients\n", | |
| " dz = y_pred - y_train\n", | |
| " dw = (X.T @ dz) / n\n", | |
| " db = np.sum(dz) / n\n", | |
| "\n", | |
| " # Update weights and bias using the\n", | |
| " # gradients and the learning rate\n", | |
| " self.weights -= lr * dw\n", | |
| " self.bias -= lr * db\n", | |
| "\n", | |
| " history = np.array(history)\n", | |
| "\n", | |
| " return history" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "Ym_oEzbhxYKT" | |
| }, | |
| "source": [ | |
| "### Generating the samples\n", | |
| "\n", | |
| "Now we can build a list of examples based on the problem description." | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": null, | |
| "metadata": { | |
| "id": "BYW9aYSCxc1q" | |
| }, | |
| "outputs": [], | |
| "source": [ | |
| "# Training samples\n", | |
| "input_values = [] # TODO: Define the input values as a list of tuples\n", | |
| "output_values = [] # TODO: Define the expected outputs\n", | |
| "\n", | |
| "X = np.array(input_values)\n", | |
| "y = np.array(output_values).T.reshape((3, 1))" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "DJUYV8H-xf7Y" | |
| }, | |
| "source": [ | |
| "### Training the neuron\n", | |
| "\n", | |
| "To train it, we first define a neuron. By default, it will contain random weights (since it has not been trained yet):" | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": null, | |
| "metadata": { | |
| "id": "cThkcQGMxrX8" | |
| }, | |
| "outputs": [], | |
| "source": [ | |
| "# TODO: Create a neuron instance\n", | |
| "neuron = None\n", | |
| "\n", | |
| "print(\"Initial (random) weights:\")\n", | |
| "print(neuron.weights)\n", | |
| "print()\n", | |
| "print(\"Initial (random) bias:\")\n", | |
| "print(neuron.bias)" | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": null, | |
| "metadata": { | |
| "id": "WnuCP6eHxtQk" | |
| }, | |
| "outputs": [], | |
| "source": [ | |
| "# TODO:\n", | |
| "# Tweak the number of training epochs to see how the\n", | |
| "# neuron performs.\n", | |
| "epochs = 0 # Try something like 10_000\n", | |
| "learning_rate = 0.01\n", | |
| "\n", | |
| "# Train the neuron for that many epochs\n", | |
| "history = neuron.train(X, y, epochs, lr=learning_rate)\n", | |
| "\n", | |
| "print()\n", | |
| "print()\n", | |
| "print(\"Weights after training:\")\n", | |
| "print(neuron.weights)\n", | |
| "print()\n", | |
| "print(\"Bias after training:\")\n", | |
| "print(neuron.bias)" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "7KFucScQncbe" | |
| }, | |
| "source": [ | |
| "We can evaluate the neuron's training." | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": null, | |
| "metadata": { | |
| "id": "8vhWL1nLnZ-R" | |
| }, | |
| "outputs": [], | |
| "source": [ | |
| "import plotly.express as px\n", | |
| "\n", | |
| "\n", | |
| "eje_x = np.arange(len(history))\n", | |
| "\n", | |
| "fig = px.line(\n", | |
| " x=eje_x,\n", | |
| " y=history,\n", | |
| " title=\"Training history\",\n", | |
| " labels=dict(x=\"Epochs\", y=\"Error\")\n", | |
| ")\n", | |
| "fig.show()" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "7vPb5a65x0bA" | |
| }, | |
| "source": [ | |
| "### Making predictions" | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": null, | |
| "metadata": { | |
| "id": "YlhaCvTeyeYt" | |
| }, | |
| "outputs": [], | |
| "source": [ | |
| "# Make predictions to verify the expected result\n", | |
| "one_one = np.array((1, 1))\n", | |
| "\n", | |
| "print(\"Prediction for (1, 1):\")\n", | |
| "neuron.predict(one_one)" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "WnjqyURkFH7H" | |
| }, | |
| "source": [ | |
| "> **Key question:** What do the data used for training look like? What would happen if we tried to use the XOR gate?\n" | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": null, | |
| "metadata": { | |
| "id": "_rp9_fj1FKkT" | |
| }, | |
| "outputs": [], | |
| "source": [ | |
| "import plotly.graph_objects as go\n", | |
| "\n", | |
| "\n", | |
| "# Build a grid\n", | |
| "x = np.linspace(-0.5, 1.5, 201)\n", | |
| "y = np.linspace(-0.5, 1.5, 201)\n", | |
| "xy = np.meshgrid(x, y)\n", | |
| "zz = np.array(list(zip(*(g.flat for g in xy))))\n", | |
| "\n", | |
| "# Predict over the grid of values\n", | |
| "surface = neuron.predict(zz).flatten()\n", | |
| "\n", | |
| "fig = go.Figure(data=[go.Scatter3d(\n", | |
| " x=zz[:, 0],\n", | |
| " y=zz[:, 1],\n", | |
| " z=surface,\n", | |
| " mode=\"markers\",\n", | |
| " marker=dict(\n", | |
| " size=1,\n", | |
| " color=surface,\n", | |
| " colorscale=\"Viridis\",\n", | |
| " opacity=0.8\n", | |
| " )\n", | |
| ")])\n", | |
| "\n", | |
| "# Tight layout\n", | |
| "fig.update_layout(margin=dict(l=0, r=0, b=0, t=0))\n", | |
| "fig.show()" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "0IHtR4uPEaCO" | |
| }, | |
| "source": [ | |
| "\n", | |
| "<center>\n", | |
| " *********\n", | |
| "</center>" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "7NE4e3KuEVst" | |
| }, | |
| "source": [ | |
| "## **Section III – Task(s)**" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "source": [ | |
| "### The dataset: student pass/fail\n", | |
| "\n", | |
| "We will simulate data for 500 students with these features:\n", | |
| "\n", | |
| "- 🕐 Hours of study per week (centered around 5h)\n", | |
| "- 📝 Previous grades (mean close to 7.5)\n", | |
| "\n", | |
| "We will create a \"passing index\" combining both factors. This will be the \"secret formula\" 🧪 used to label who passes and who does not. (Will the neuron manage to discover it?)\n", | |
| "\n", | |
| "If that index is > 0.7, we will consider that the student passes. 🎯\n", | |
| "\n", | |
| "→ And that is where our labels come from. ✅ / ❌\n" | |
| ], | |
| "metadata": { | |
| "id": "cDv_Pu8PmnWK" | |
| } | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "import numpy as np\n", | |
| "import matplotlib.pyplot as plt\n", | |
| "\n", | |
| "\n", | |
| "# Fix a random seed so the experiment can be replicated\n", | |
| "np.random.seed(123)\n", | |
| "\n", | |
| "# Sample size\n", | |
| "n = 500\n", | |
| "\n", | |
| "\n", | |
| "# We create two variables from normal distributions:\n", | |
| "# → Hours of study ~ N(𝜇=5, 𝜎=2)\n", | |
| "# → Previous grade average ~ N(𝜇=7.5, 𝜎=1)\n", | |
| "horas_estudio = np.random.normal(5, 2, n).clip(0, 10)\n", | |
| "calif_previas = np.random.normal(7.5, 1, n).clip(0, 10)\n", | |
| "\n", | |
| "# With these, we build the input matrix (to train the neuron)\n", | |
| "X = np.column_stack((horas_estudio, calif_previas))\n", | |
| "\n", | |
| "# Generate the label: the passing index\n", | |
| "indice_aprob = (0.4 * horas_estudio + 0.6 * calif_previas) / 10\n", | |
| "\n", | |
| "# If the passing index is greater than 0.7, we consider\n", | |
| "# the student to have passed\n", | |
| "y_true = (indice_aprob > 0.7).astype(int)\n", | |
| "\n", | |
| "# Total number of passing students in the simulated data:\n", | |
| "print(f\"Total passing students: {sum(y_true)}\")\n", | |
| "print(f\"Passing rate: {sum(y_true)/n*100:.2f}%\")" | |
| ], | |
| "metadata": { | |
| "id": "-4GP3LJUrH9G" | |
| }, | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "source": [ | |
| "### 🔍 Visualizing the data\n", | |
| "\n", | |
| "Before training, the data look as follows.\n", | |
| "\n", | |
| "Each point is a student, colored by whether they passed (🟢) or not (🔴).\n", | |
| "\n", | |
| "**We can see that there is some separation...**\n", | |
| "This separation points to a region of the plane that splits the passing students from the failing ones.\n", | |
| "\n", | |
| "Will the neuron be able to discover it on its own? 🤔" | |
| ], | |
| "metadata": { | |
| "id": "5wMChnYcrYQY" | |
| } | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "# Set the plot style\n", | |
| "plt.style.use(\"seaborn-v0_8\")\n", | |
| "\n", | |
| "# Create a figure\n", | |
| "fig = plt.figure(figsize=(7, 6), dpi=300)\n", | |
| "\n", | |
| "# Add the scatter plot of the data\n", | |
| "scatter = plt.scatter(\n", | |
| " horas_estudio, calif_previas,\n", | |
| " c=y_true, cmap='RdYlGn', alpha=0.6\n", | |
| ")\n", | |
| "\n", | |
| "# Set the plot limits\n", | |
| "x_min, x_max = horas_estudio.min() - 1, horas_estudio.max() + 1\n", | |
| "y_min, y_max = calif_previas.min() - 1, calif_previas.max() + 1\n", | |
| "plt.xlim(x_min, x_max)\n", | |
| "plt.ylim(y_min, y_max)\n", | |
| "\n", | |
| "# Add axis labels and the plot title\n", | |
| "plt.xlabel(\"Hours of study\")\n", | |
| "plt.ylabel(\"Previous grades\")\n", | |
| "plt.title(\"Scatter plot of the data\")\n", | |
| "plt.colorbar(scatter, label=\"0 = Fails, 1 = Passes\")\n", | |
| "plt.show()" | |
| ], | |
| "metadata": { | |
| "id": "h7ueCQfZrSPL" | |
| }, | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "source": [ | |
| "**Note:** From the plot above we can make several interesting observations:\n", | |
| "- The minimum number of study hours is 0, yet nobody at that level passes. The first passing student appears at roughly 4 hours of study, which suggests that studying is an important factor for passing.\n", | |
| "- On the other hand, while we can spot some students who pass despite a low previous grade average, those same students show a considerable number of study hours, again suggesting that studying helps push the passing index up.\n" | |
| ], | |
| "metadata": { | |
| "id": "eFHxZCdRrhbF" | |
| } | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "source": [ | |
| "### 📉 Training\n", | |
| "\n", | |
| "We train the neuron to learn to tell who passes and who does not.\n", | |
| "\n", | |
| "To do so, we use gradient descent with cross-entropy as the loss function.\n", | |
| "\n", | |
| "Result: the loss decreases with every epoch. It is learning!\n", | |
| "\n", | |
| "**🔽 Below, the plot of the decreasing prediction error.**" | |
| ], | |
| "metadata": { | |
| "id": "iHP5z60YrkPB" | |
| } | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "# TODO: Create a neuron instance\n", | |
| "neuron = None\n", | |
| "\n", | |
| "print(\"Initial (random) weights:\")\n", | |
| "print(neuron.weights)\n", | |
| "print()\n", | |
| "print(\"Initial (random) bias:\")\n", | |
| "print(neuron.bias)" | |
| ], | |
| "metadata": { | |
| "id": "HzgNNFJ0rtd8" | |
| }, | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "# Train a neuron\n", | |
| "epochs = 0 # Try something like 100_000\n", | |
| "learning_rate = 0.01\n", | |
| "\n", | |
| "history = neuron.train(X, y_true, epochs=epochs, lr=learning_rate)" | |
| ], | |
| "metadata": { | |
| "id": "bzrvBL1ar3Gj" | |
| }, | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "import plotly.express as px\n", | |
| "\n", | |
| "\n", | |
| "eje_x = np.arange(len(history))\n", | |
| "\n", | |
| "fig = px.line(\n", | |
| " x=eje_x,\n", | |
| " y=history,\n", | |
| " title='Training history',\n", | |
| " labels=dict(x='Epochs', y='Error')\n", | |
| ")\n", | |
| "fig.show()" | |
| ], | |
| "metadata": { | |
| "id": "AZG32AwUr-27" | |
| }, | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "source": [ | |
| "### ✨ The learned decision boundary\n", | |
| "\n", | |
| "**And it succeeds!**\n", | |
| "\n", | |
| "After training, the neuron learns a decision boundary that separates the two groups.\n", | |
| "\n", | |
| "That line (learned through the weights $w$ and the bias $b$) is what it uses to decide whether someone passes.\n", | |
| "\n", | |
| "🧠 This is, in general, how an artificial neural network learns: by adjusting the parameters of each neuron in the network." | |
| ], | |
| "metadata": { | |
| "id": "J4hmAfvKsxHu" | |
| } | |
| }, | |
| { | |
| "cell_type": "code", | |
| "source": [ | |
| "# Retrieve the parameters\n", | |
| "w1 = neuron.weights[0, 0]\n", | |
| "w2 = neuron.weights[1, 0]\n", | |
| "b = neuron.bias\n", | |
| "\n", | |
| "# Create the figure\n", | |
| "fig = plt.figure(figsize=(7, 6), dpi=300)\n", | |
| "\n", | |
| "scatter = plt.scatter(\n", | |
| " horas_estudio, calif_previas,\n", | |
| " c=y_true, cmap='RdYlGn', alpha=0.6\n", | |
| ")\n", | |
| "\n", | |
| "# Plot limits\n", | |
| "x_min, x_max = horas_estudio.min() - 1, horas_estudio.max() + 1\n", | |
| "y_min, y_max = calif_previas.min() - 1, calif_previas.max() + 1\n", | |
| "plt.xlim(x_min, x_max)\n", | |
| "plt.ylim(y_min, y_max)\n", | |
| "\n", | |
| "plt.xlabel(\"Hours of study\")\n", | |
| "plt.ylabel(\"Previous grades\")\n", | |
| "plt.title(\"Student classification: pass or fail?\")\n", | |
| "plt.colorbar(scatter, label=\"0 = Fails, 1 = Passes\")\n", | |
| "\n", | |
| "# Decision boundary\n", | |
| "x_vals = np.linspace(x_min, x_max, 100)\n", | |
| "\n", | |
| "if w2 == 0:\n", | |
| " plt.axvline(\n", | |
| " x=-b / w1, color='k',\n", | |
| " linestyle='--', label=\"Decision boundary\"\n", | |
| " )\n", | |
| "else:\n", | |
| " y_boundary = -(w1 * x_vals + b) / w2\n", | |
| " plt.plot(x_vals, y_boundary, 'k--', label=\"Decision boundary\")\n", | |
| "\n", | |
| "plt.legend()\n", | |
| "plt.tight_layout()\n", | |
| "plt.show()\n" | |
| ], | |
| "metadata": { | |
| "id": "T5LHvRlksyXC" | |
| }, | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "source": [ | |
| "---" | |
| ], | |
| "metadata": { | |
| "id": "5YSwGPuWDBFm" | |
| } | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "source": [ | |
| "## Reto: Un conjunto de datos más complejo" | |
| ], | |
| "metadata": { | |
| "id": "HQL4_yToH1iI" | |
| } | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "1Z7JrTygMDSx" | |
| }, | |
| "source": [ | |
| "### El dataset a utilizar: Naranjas vs. Manzanas\n", | |
| "\n", | |
| "El dataset ha sido una adaptación de datos encontrados en [Kaggle](https://www.kaggle.com/datasets/theblackmamba31/apple-orange). Dicho dataset está compuesto por conjuntos de imágenes de naranjas y manzanas que serán un utilizados para entrenar una neurona artificial.\n" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "UVg0AU2-Fqzr" | |
| }, | |
| "source": [ | |
| "Para cargar los datos, primero los descargaremos de un repositorio donde previamente los preparé para ustedes.\n", | |
| "\n", | |
| "Puedes explorar directamente los archivos fuente del [repositorio en GitHub – `apple-orange-dataset`](https://github.com/RodolfoFerro/apple-orange-dataset).\n", | |
| "\n", | |
| "Puedes también explorar el [script](https://github.com/RodolfoFerro/apple-orange-dataset/blob/main/script.py) que he utilizado para la preparación de los mismos." | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": null, | |
| "metadata": { | |
| "id": "1S81FXVEFzQo" | |
| }, | |
| "outputs": [], | |
| "source": [ | |
| "!wget https://raw.githubusercontent.com/RodolfoFerro/apple-orange-dataset/main/training_data.csv\n", | |
| "!wget https://raw.githubusercontent.com/RodolfoFerro/apple-orange-dataset/main/testing_data.csv" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "CxfNdPU3NQge" | |
| }, | |
| "source": [ | |
| "### Preparación de los datos\n" | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": null, | |
| "metadata": { | |
| "id": "4fh3DURvLBvA" | |
| }, | |
| "outputs": [], | |
| "source": [ | |
| "import pandas as pd\n", | |
| "\n", | |
| "\n", | |
| "training_df = pd.read_csv('training_data.csv')\n", | |
| "testing_df = pd.read_csv('testing_data.csv')\n", | |
| "\n", | |
| "training_df" | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": null, | |
| "metadata": { | |
| "id": "8IWxRHjQ4GS4" | |
| }, | |
| "outputs": [], | |
| "source": [ | |
| "training_df['class_str'] = training_df['class'].astype('str')\n", | |
| "training_df['hover'] = [text.split('/')[-1] for text in training_df['filename']]\n", | |
| "\n", | |
| "testing_df['class_str'] = testing_df['class'].astype('str')\n", | |
| "testing_df['hover'] = [text.split('/')[-1] for text in testing_df['filename']]\n", | |
| "\n", | |
| "training_df" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "h7SGMNlqx8Dx" | |
| }, | |
| "source": [ | |
| "### Exploración de los datos" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "wRHZdY0B4NNB" | |
| }, | |
| "source": [ | |
| "Podemos verificar si el conjunto de datos está balanceado:" | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": null, | |
| "metadata": { | |
| "id": "dOvDsf0V3i7D" | |
| }, | |
| "outputs": [], | |
| "source": [ | |
| "training_df.groupby('class').count()" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "5MVOWcHT4Qiz" | |
| }, | |
| "source": [ | |
| "Podemos explorar cómo se ven los datos en un gráfico 3D:" | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": null, | |
| "metadata": { | |
| "id": "RXINRt1ox_-G" | |
| }, | |
| "outputs": [], | |
| "source": [ | |
| "import plotly.express as px\n", | |
| "\n", | |
| "\n", | |
| "fig = px.scatter_3d(\n", | |
| " training_df,\n", | |
| " x='r', y='g', z='b',\n", | |
| " color='class_str',\n", | |
| " symbol='class_str',\n", | |
| " color_discrete_sequence=['#be0900', '#ffb447'],\n", | |
| " opacity=0.5,\n", | |
| " hover_data=['hover']\n", | |
| ")\n", | |
| "fig.show()" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "L8aw6ijc3QZ7" | |
| }, | |
| "source": [ | |
| "Puedes explorar las imágenes y sus valores de color utilizando el color picker que ofrece Google: https://g.co/kgs/uarXyu\n", | |
| "\n", | |
| "> **Pregunta clave:** ¿Los datos son linealmente separables? Con lo que hemos explorado hasta ahora, ¿basta una neurona para resolver el problema planteado?" | |
| ] | |
| }, | |
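| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "A quick, informal way to probe the key question above (a sketch, assuming `training_df` has already been loaded): fit an off-the-shelf linear classifier and look at its training accuracy. If a linear model cannot get close to 100% training accuracy, the data are likely not perfectly linearly separable." | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": null, | |
| "metadata": {}, | |
| "outputs": [], | |
| "source": [ | |
| "# Rough linear-separability check (sketch; assumes training_df is loaded)\n", | |
| "from sklearn.linear_model import LogisticRegression\n", | |
| "\n", | |
| "\n", | |
| "X = training_df[['r', 'g', 'b']].values / 255.\n", | |
| "y = training_df['class'].values\n", | |
| "\n", | |
| "# If a linear model scores ~1.0 here, the classes are (close to)\n", | |
| "# linearly separable in RGB space\n", | |
| "linear_model = LogisticRegression().fit(X, y)\n", | |
| "print('Linear model training accuracy:', linear_model.score(X, y))" | |
| ] | |
| }, | |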
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "npjrVs7jUBC3" | |
| }, | |
| "source": [ | |
| "### Creación de una neurona artificial\n" | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": null, | |
| "metadata": { | |
| "id": "eHmZ4nnccToB" | |
| }, | |
| "outputs": [], | |
| "source": [ | |
| "# TODO: Crear una instancia de neurona\n", | |
| "neuron = TrainableNeuron(3)\n", | |
| "neuron.weights" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "B4DmYPVAUJ2d" | |
| }, | |
| "source": [ | |
| "### Entrenamiento del modelo\n", | |
| "\n", | |
| "Para entrenar el modelo, simplemente utilizamos el método `.train()` del modelo.\n", | |
| "\n", | |
| "Antes de entrenar los datos, procedemos a escalarlos a valores en [0, 1]." | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": null, | |
| "metadata": { | |
| "id": "_0o5NZsB7ORw" | |
| }, | |
| "outputs": [], | |
| "source": [ | |
| "training_inputs = training_df[['r', 'g', 'b']].values / 255.\n", | |
| "training_output = training_df['class'].values\n", | |
| "\n", | |
| "training_inputs, training_output" | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": null, | |
| "metadata": { | |
| "id": "KX3X_t7B73NV" | |
| }, | |
| "outputs": [], | |
| "source": [ | |
| "history = neuron.train(training_inputs, training_output, epochs=10_000, lr=0.01) #TODO: Train a neuron" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "_2oyTh_jMAIM" | |
| }, | |
| "source": [ | |
| "### Evaluación y predicción\n", | |
| "\n", | |
| "Podemos evaluar el entrenamiento de la neurona." | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": null, | |
| "metadata": { | |
| "id": "buRgAf7xLvln" | |
| }, | |
| "outputs": [], | |
| "source": [ | |
| "import plotly.express as px\n", | |
| "\n", | |
| "\n", | |
| "eje_x = np.arange(len(history))\n", | |
| "\n", | |
| "fig = px.line(\n", | |
| " x=eje_x,\n", | |
| " y=history,\n", | |
| " title='Historia de entrenamiento',\n", | |
| " labels=dict(x='Épocas', y='Error')\n", | |
| ")\n", | |
| "fig.show()" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "KMX5Gjzu92e1" | |
| }, | |
| "source": [ | |
| "\n", | |
| "> **Pregunta clave:** ¿Qué sucede con la historia de entrenamiento?\n", | |
| "\n", | |
| "> **Pro-tip:** Exploremos con una nueva función de pérdida, qué tal la utilizada usualemente en una regresión logística: https://developers.google.com/machine-learning/crash-course/logistic-regression/model-training" | |
| ] | |
| }, | |
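| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "As a sketch of that pro-tip (not part of the original training code): the loss used in logistic regression is the log loss, also called binary cross-entropy. A minimal NumPy version follows; the clipping constant `eps` is an arbitrary choice to avoid taking the log of 0." | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": null, | |
| "metadata": {}, | |
| "outputs": [], | |
| "source": [ | |
| "import numpy as np\n", | |
| "\n", | |
| "\n", | |
| "def binary_cross_entropy(y_true, y_pred, eps=1e-12):\n", | |
| "    # Log loss, as used in logistic regression; eps avoids log(0)\n", | |
| "    y_pred = np.clip(y_pred, eps, 1 - eps)\n", | |
| "    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))\n", | |
| "\n", | |
| "\n", | |
| "# Sanity check: confident correct predictions give a small loss,\n", | |
| "# while uninformative 0.5 predictions give log(2), about 0.693\n", | |
| "print(binary_cross_entropy(np.array([0, 1]), np.array([0.01, 0.99])))\n", | |
| "print(binary_cross_entropy(np.array([0, 1]), np.array([0.5, 0.5])))" | |
| ] | |
| }, | |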
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "ZsC5ELq7Ad-F" | |
| }, | |
| "source": [ | |
| "Para predecir un color de ejemplo:" | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": null, | |
| "metadata": { | |
| "id": "kLqvq2cnUfdD" | |
| }, | |
| "outputs": [], | |
| "source": [ | |
| "# Preparamos los datos\n", | |
| "sample_index = 0\n", | |
| "\n", | |
| "input_sample = testing_df[['r', 'g', 'b']].iloc[sample_index].values\n", | |
| "# input_sample = np.array([])\n", | |
| "print('Color real:', input_sample)\n", | |
| "\n", | |
| "input_sample = input_sample / 255.\n", | |
| "print('Color transformado:', input_sample)\n", | |
| "\n", | |
| "real_class = testing_df[['class']].iloc[sample_index].values\n", | |
| "print('Clase real:', real_class)" | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": null, | |
| "metadata": { | |
| "id": "l8mB_4-T6l7G" | |
| }, | |
| "outputs": [], | |
| "source": [ | |
| "neuron.predict(input_sample).tolist()" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "8ubrtbZdoJ-m" | |
| }, | |
| "source": [ | |
| "Para evaluar esta tarea, vamos a utilizar funciones de scikit-learn para la que nos permitirán realizar la evaluación de resultados en el conjunto de pruebas. (Utilizar [`sklearn.metrics.accuracy_score`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html#sklearn.metrics.accuracy_score))" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "hMCddqlrYosR" | |
| }, | |
| "source": [ | |
| "<center>\n", | |
| " *********\n", | |
| "</center>" | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": null, | |
| "metadata": { | |
| "id": "20x0UwqUAtdz" | |
| }, | |
| "outputs": [], | |
| "source": [ | |
| "import plotly.express as px\n", | |
| "\n", | |
| "\n", | |
| "fig = px.scatter_3d(\n", | |
| " testing_df,\n", | |
| " x='r', y='g', z='b',\n", | |
| " color='class_str',\n", | |
| " symbol='class_str',\n", | |
| " color_discrete_sequence=['#be0900', '#ffb447'],\n", | |
| " opacity=0.5,\n", | |
| " hover_data=['hover']\n", | |
| ")\n", | |
| "fig.show()" | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": null, | |
| "metadata": { | |
| "id": "tccP9w_EBGvG" | |
| }, | |
| "outputs": [], | |
| "source": [ | |
| "def get_predictions(testing_df, threshold=0.5):\n", | |
| " testing_inputs = testing_df[['r', 'g', 'b']].values / 255.\n", | |
| " testing_output = testing_df['class'].values\n", | |
| "\n", | |
| " predictions = []\n", | |
| " for test_input in testing_inputs:\n", | |
| " if neuron.predict(test_input)[0] <= threshold:\n", | |
| " prediction = 0\n", | |
| " else:\n", | |
| " prediction = 1\n", | |
| " predictions.append(prediction)\n", | |
| " predictions = np.array(predictions)\n", | |
| "\n", | |
| " return testing_output, predictions" | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": null, | |
| "metadata": { | |
| "id": "JZvNFNY4B-Z9" | |
| }, | |
| "outputs": [], | |
| "source": [ | |
| "from sklearn.metrics import accuracy_score\n", | |
| "\n", | |
| "\n", | |
| "testing_output, predictions = get_predictions(testing_df, threshold=0.5)\n", | |
| "result = accuracy_score(testing_output, predictions)\n", | |
| "print(f'Accuracy: {result * 100:.6}%')" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "fYFSRK0P_c1d" | |
| }, | |
| "source": [ | |
| "> **Pregunta clave:** ¿Qué sucede si cambiamos el _threshold_ a 0.7? A veces conviene explorar el valor de umbral que seleccionamos y no siempre dar por hecho que 0.5 va a funcionar todas las veces. <br><br>\n", | |
| "> Lee más aquí: https://ploomber.io/blog/threshold/" | |
| ] | |
| }, | |
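| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "Acting on that advice (a sketch, reusing `get_predictions`, `testing_df`, and `accuracy_score` from the cells above): sweep a few candidate thresholds and compare the resulting accuracies." | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": null, | |
| "metadata": {}, | |
| "outputs": [], | |
| "source": [ | |
| "# Sweep candidate thresholds and compare test accuracy (sketch)\n", | |
| "# Assumes get_predictions, testing_df and accuracy_score are defined above\n", | |
| "for threshold in [0.3, 0.5, 0.7, 0.9]:\n", | |
| "    testing_output, predictions = get_predictions(testing_df, threshold=threshold)\n", | |
| "    acc = accuracy_score(testing_output, predictions)\n", | |
| "    print(f'Threshold {threshold:.1f} -> Accuracy: {acc * 100:.2f}%')" | |
| ] | |
| }, | |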
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "QKp_PZ_NDqbS" | |
| }, | |
| "source": [ | |
| "> **Para resolver la tarea, el reto es:** Mejor accuracy obtenido en la clase.\n", | |
| "\n", | |
| "**Puedes explorar:**\n", | |
| "- Utilizar 1 a 3 variables (de las dadas).\n", | |
| "- Investigar e implementar una nueva función para estimar el error.\n", | |
| "- Realizar transformaciones en los datos.\n", | |
| "- Entrenar por más épocas.\n", | |
| "- Mover el umbral para definir la clase.\n", | |
| "- Explorar otras funciones de activación.\n", | |
| "- Generar tu nuevo dataset de datos a partir de las imágenes originales." | |
| ] | |
| }, | |
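| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "As a concrete example of the transformations bullet above (a sketch, not a required solution): dividing each pixel's RGB values by their sum yields chromaticity coordinates, which discard overall brightness and often make color-based classes easier to separate. The small constant in the denominator is an arbitrary guard against division by zero." | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": null, | |
| "metadata": {}, | |
| "outputs": [], | |
| "source": [ | |
| "# Example transformation (sketch): RGB -> chromaticity coordinates\n", | |
| "# Each row is rescaled so its channels sum to (almost exactly) 1\n", | |
| "rgb = training_df[['r', 'g', 'b']].values.astype(float)\n", | |
| "chromaticity = rgb / (rgb.sum(axis=1, keepdims=True) + 1e-9)\n", | |
| "\n", | |
| "print(rgb[0], '->', chromaticity[0])" | |
| ] | |
| }, | |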
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "hSdbQU3e6-Ky" | |
| }, | |
| "source": [ | |
| "--------\n", | |
| "\n", | |
| "> Contenido creado por **Rodolfo Ferro**, 2025. <br>\n", | |
| "> Puedes contactarme a través de Insta ([@rodo_ferro](https://www.instagram.com/rodo_ferro/)) o X ([@rodo_ferro](https://twitter.com/rodo_ferro))." | |
| ] | |
| } | |
| ], | |
| "metadata": { | |
| "accelerator": "GPU", | |
| "colab": { | |
| "provenance": [], | |
| "include_colab_link": true | |
| }, | |
| "kernelspec": { | |
| "display_name": "Python 3", | |
| "name": "python3" | |
| } | |
| }, | |
| "nbformat": 4, | |
| "nbformat_minor": 0 | |
| } |