Łukasz Myśliński lmyslinski

View GitHub Profile
1. Set up a mock workflow server - it needs to receive a call from the reasoning API, do some sleep/processing work, and return a randomized(?) response matching the expected schema.
2. Prepare the DB for custom skills storage - we need to store workflow references, config, and any additional fields that might be required.
Q: Should we use JSON here? It makes sense, since the workflow config is likely to be highly customizable.
Q: Should we explore using LangGraph here?
3. Implement calling the workflow API and pausing the skill execution while waiting for the response.
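Step 1 above can be sketched as a plain handler function, independent of any web framework. This is a minimal sketch: the field names (`workflow_id`, `status`, `result`) are assumptions standing in for whatever the real expected schema turns out to be.

```python
import random
import time

def handle_workflow_call(workflow_id: str, delay_s: float = 0.01) -> dict:
    """Simulate the mock workflow endpoint: sleep to mimic processing,
    then return a randomized response matching the expected schema."""
    time.sleep(delay_s)  # stand-in for real processing work
    return {
        "workflow_id": workflow_id,  # echoed back so the caller can correlate
        "status": random.choice(["succeeded", "failed"]),  # randomized outcome
        "result": {"score": round(random.random(), 3)},  # illustrative payload
    }
```

Wrapping this in an actual HTTP endpoint (FastAPI, Flask, etc.) is then a thin layer on top; the randomized status also gives the skill-execution side both branches to test against.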
import numpy as np
import tensorflow as tf

with tf.Session() as sess:
    # Set up all the tensors, variables, and operations.
    input = tf.constant(x_with_bias)
    target = tf.constant(np.transpose([y]).astype(np.float32))
    weights = tf.Variable(tf.random_normal([2, 1], 0, 0.1))
    tf.global_variables_initializer().run()  # initialize_all_variables is deprecated
    yhat = tf.matmul(input, weights)      # predictions: X . w
    yerror = tf.subtract(yhat, target)    # residuals (tf.sub was renamed in TF 1.0)
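The forward computation in the snippet is easier to see in plain NumPy. This is a sketch with made-up example data for `x_with_bias` (features plus a bias column) and `y` (labels), which the original snippet assumes are defined elsewhere.

```python
import numpy as np

# Illustrative inputs: three samples, each with a bias term and one feature.
x_with_bias = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]], dtype=np.float32)
y = np.array([0.0, 1.0, 2.0], dtype=np.float32)

target = y.reshape(-1, 1)                            # column vector, like np.transpose([y])
weights = np.random.normal(0.0, 0.1, (2, 1)).astype(np.float32)

yhat = x_with_bias @ weights                         # tf.matmul(input, weights)
yerror = yhat - target                               # tf.subtract(yhat, target)
```

Minimizing the squared entries of `yerror` with respect to `weights` is then ordinary linear regression, which is what the TF graph above sets up.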