start new:
tmux
start new with session name:
tmux new -s myname
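attach to the named session created above (a natural companion command; `myname` is the session from the line above):
tmux attach -t myname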
To whom it may concern,

I wanted to follow up on my previous emails.

My understanding after consulting with others is that what we do in the nixpkgs derivation for CUDA does not preclude binary caching & redistribution, as we only modify the library metadata such as the dynamic section (e.g. DT_RUNPATH, setting the interpreter, setting the RPATH). As I understand it, object code refers to machine code that is executed by the processor, and thus my understanding is that we leave the object code untouched. I previously shared these post-distribution patches on 2/3/2020 for your review.

Furthermore, my understanding under Section 2.3 is that we are OK to redistribute the SDK in full as long as this redistribution only happens under Linux.

Thus, we plan on proceeding with setting up a binary cache for distributing CUDA and packages requiring CUDA using nixpkgs.
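For concreteness, a minimal sketch of the kind of metadata edits described above, assuming patchelf (the tool nixpkgs conventionally uses for this; the library name and store paths are placeholders, not actual paths):

patchelf --print-rpath libcudart.so    # inspect the current DT_RUNPATH
patchelf --set-rpath '/nix/store/<hash>-gcc-libs/lib' libcudart.so    # rewrite DT_RUNPATH only
patchelf --set-interpreter '/nix/store/<hash>-glibc/lib/ld-linux-x86-64.so.2' some-binary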
# Wrapper carrying a function plus phantom input/output type parameters I and O.
struct TypedFunction{T <: Function, I, O}
    f::T
end
(f::TypedFunction)(args...) = f.f(args...)

# Going from (Int, Int) to Tuple{Int, Int}: splat the tuple of types into Tuple{...}.
# Type parameters cannot be mutated after construction (Core.svec assignment fails).
tupleize(args) = Tuple{args...}
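A quick illustrative check of the pieces above (in this sketch the I and O type parameters must be supplied explicitly, since the constructor cannot infer them from f alone):

tupleize((Int, Int)) === Tuple{Int, Int}   # true

tf = TypedFunction{typeof(+), Tuple{Int, Int}, Int}(+)
tf(1, 2)   # 3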
| """ Trains an agent with (stochastic) Policy Gradients on Pong. Uses OpenAI Gym. """ | |
| import numpy as np | |
| import cPickle as pickle | |
| import gym | |
| # hyperparameters | |
| H = 200 # number of hidden layer neurons | |
| batch_size = 10 # every how many episodes to do a param update? | |
| learning_rate = 1e-4 | |
| gamma = 0.99 # discount factor for reward |
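The snippet cuts off after the hyperparameters. As a hedged sketch of how the gamma discount factor is typically consumed in this kind of policy-gradient setup (the function name is illustrative, not necessarily what the original file uses):

def discount_rewards(r):
  """ take a 1D float array of rewards and compute the discounted return """
  discounted_r = np.zeros_like(r)
  running_add = 0
  for t in reversed(range(0, r.size)):
    if r[t] != 0: running_add = 0 # reset the running sum at a game boundary (Pong-specific)
    running_add = running_add * gamma + r[t]
    discounted_r[t] = running_add
  return discounted_r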
| """ | |
| DESCRIPTION: | |
| Using SublimeREPL, this plugin allows one to easily transfer AND | |
| evaluate blocks of python code. The code automatically detect python | |
| blocks and executes only code lines, omitting empty space and comments. | |
| One can skips space, comment blocks and comment lines by executing on | |
| empty lines, comments etc. | |
| REQUIRES: | |
| working with only 2 groups in the window. the main group (group 0) |
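The docstring is cut off here. As a rough sketch of the block-detection idea it describes (independent of the actual SublimeREPL API, which is not shown): take the contiguous run of non-blank lines around the cursor, then drop blank and comment lines before sending the rest for evaluation.

def python_block(lines, cursor):
    """Contiguous non-blank run of lines around `cursor`,
    with blank and pure-comment lines removed (sketch only)."""
    def is_code(line):
        s = line.strip()
        return bool(s) and not s.startswith('#')
    # widen from the cursor until a blank line on either side
    start = cursor
    while start > 0 and lines[start - 1].strip():
        start -= 1
    end = cursor
    while end < len(lines) - 1 and lines[end + 1].strip():
        end += 1
    return [l for l in lines[start:end + 1] if is_code(l)]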