TensorFlow beginnings
import tensorflow as tf

'''
Test Driving TensorFlow: First Time
This is a basic program that plays around with Tensors and the
basics of building TensorFlow graphs.
23.06.2017 | Lucas Barbosa | Open Source Software (C)
'''
'''
"You are the rock upon which I will build my church"
Let us build a computational graph, which is just a series of
TensorFlow operations arranged into a graph of nodes.
1) Let's actually get some nodes started
'''
node1 = tf.constant(3.0, dtype=tf.float32)
node2 = tf.constant(15.4, dtype=tf.float32)
print(node1, node2)
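# Note: this prints the tensors' graph metadata, not their values,
# e.g. something like Tensor("Const:0", shape=(), dtype=float32)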
'''
"Sometimes you gotta run before you can walk"
The above print statement did the equivalent of python's built-in
type() function; however, we want to print or evaluate the value
within the nodes.
To get the desired results, we need to run this computational
graph through a TensorFlow session. A session encapsulates the control
and state of the TensorFlow runtime.
'''
tfSession = tf.Session()
print(tfSession.run([node1, node2]))
'''
We can build more complicated computational graphs by combining Tensor
nodes with operations (yes, operations are also nodes).
Let's add two constants into a new node.
'''
node3 = tf.add(node1, node2)
print("First node operation: ", tfSession.run(node3))
'''
**Not to forget TensorBoard, a tool to visualise computational
graphs built with TensorFlow.
'''
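'''
A minimal sketch of dumping our graph for TensorBoard (assuming the
TF 1.x summary API; the "logs" directory name is an arbitrary choice).
Afterwards, run `tensorboard --logdir logs` to view the graph.
'''
tb_writer = tf.summary.FileWriter("logs", tfSession.graph)
tb_writer.close()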
'''
Right now our graph is actually not that exciting, as it always
produces constant results (that being our addition operation). We can
generalise an operation to a node and retrieve the values LATER. This
involves the use of a placeholder, which is a promise to provide a
value later.
All we need to do is specify the nature of the value which will be
provided later:
tf.placeholder(<dtype>)
'''
alphaNode = tf.placeholder(tf.float64)
betaNode = tf.placeholder(tf.float64)
adder_node = alphaNode + betaNode  # set up an adder node on the placeholders
'''
Once we have the placeholders ready to 'place hold', we can put tmp
values into them using the feed_dict parameter, joined with the
operation we wish to perform, in this case being the adder_node.
'''
print(tfSession.run(adder_node, feed_dict={alphaNode: 56.34,
                                           betaNode: 64.89}))
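# 56.34 + 64.89, so this should print (approximately) 121.23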
'''
Let's make our graph a little more complex with some new nodes.
'''
pow_node = (alphaNode * alphaNode) * (betaNode * betaNode)
mul_node = alphaNode * betaNode
mod_node = alphaNode % betaNode
# We can store the result of a session run in a regular Python variable
ped_node = tfSession.run(pow_node, feed_dict={alphaNode: 2.4,
                                              betaNode: 45.4})
print("pow_node: %s" % ped_node)
ped_node = tfSession.run(mul_node, feed_dict={alphaNode: 2.4,
                                              betaNode: 45.4})
print("mul_node: %s" % ped_node)
ped_node = tfSession.run(mod_node, feed_dict={alphaNode: 50,
                                              betaNode: 5})
print("mod_node: %s" % ped_node)
'''
So far all that we have achieved has been outputting some numbers, and
doing things python can do on its own. Let's start getting more
sophisticated with our approach. Constants and placeholders can be used
to change the inputs, and get a resulting change in the outputs.
However, now we want a model that can take arbitrary inputs and get new
outputs. To make a model trainable, we need to use TensorFlow Variables
to add trainable parameters to a graph.
They are constructed with an initial value and a type.
'''
m = tf.Variable([.3], dtype=tf.float64)
b = tf.Variable([-.3], dtype=tf.float64)
x = tf.placeholder(tf.float64)
linear_model = m * x + b  # y = mx + b, kept as a graph tensor
'''
*Something important to consider is that Variables, unlike constants,
are not initialized until we explicitly state the following:
var_init = tf.global_variables_initializer()
tfSession.run(var_init)
Until this line is called the Variables will remain uninitialized!
'''
var_init = tf.global_variables_initializer()
tfSession.run(var_init)
print(tfSession.run(linear_model, feed_dict={x: [1, 2, 3, 4]}))
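# With m = 0.3 and b = -0.3 this should print approximately [0. 0.3 0.6 0.9]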
'''
Okay cool, we got a linear model all set up, but we can't stop there;
we still don't know how good it is. To evaluate the model on training
data, we need placeholders which hold our desired values, and after
that we have to write a loss function which measures how far apart
our current model is from the provided data.
We'll use a standard loss model for linear regression, which sums the
squares of the deltas between the current model and the provided data.
'''
desired_y = tf.placeholder(tf.float64)
powed_deltas = tf.square(linear_model - desired_y)
loss_functor = tf.reduce_sum(powed_deltas)
linear_loss = tfSession.run(loss_functor, feed_dict={x: [1, 2, 3, 4],
                                                     desired_y: [0, -1, -2, -3]})
print(linear_loss)
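# Working through the squared deltas by hand:
# (0-0)^2 + (0.3+1)^2 + (0.6+2)^2 + (0.9+3)^2 = 0 + 1.69 + 6.76 + 15.21 = 23.66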
'''
We can tell that our linear model isn't perfect, so we can 're-assign'
values to our gradient m and intercept b to reduce the loss.
'''
fixM = tf.assign(m, [-1.])
fixb = tf.assign(b, [1.])
tfSession.run([fixM, fixb])
linear_loss = tfSession.run(loss_functor, feed_dict={x: [1, 2, 3, 4],
                                                     desired_y: [0, -1, -2, -3]})
print(linear_loss)
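'''
With m = -1 and b = 1 the model fits the data exactly, so the loss
drops to 0.0. Guessing the 'perfect' parameters by hand defeats the
point, though. As a minimal sketch (assuming the TF 1.x tf.train API
and an arbitrarily chosen learning rate of 0.01), gradient descent can
learn m and b for us:
'''
optimizer = tf.train.GradientDescentOptimizer(0.01)
train_step = optimizer.minimize(loss_functor)

tfSession.run(var_init)  # reset m and b to their initial guesses
for _ in range(1000):
    tfSession.run(train_step, feed_dict={x: [1, 2, 3, 4],
                                         desired_y: [0, -1, -2, -3]})
print(tfSession.run([m, b]))  # should converge towards [-1.] and [1.]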