raytroop

@raytroop
raytroop / noise_generator.py
Created December 8, 2025 02:40 — forked from m-schubert/noise_generator.py
Generate noise in Python with a specific colour / power spectral density
# Copyright 2022 Ioces Pty. Ltd.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
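The preview stops at the license header. As a rough sketch of what this kind of generator does, one way to produce noise with a chosen power-law PSD is to shape white noise in the frequency domain; the function name, signature, and exponent convention below are assumptions, not m-schubert's actual code:

import numpy as np

def colored_noise(n_samples, exponent, fs=1.0):
    """Noise whose PSD follows f**exponent (hypothetical interface)."""
    # Shape white Gaussian noise in the frequency domain.
    spectrum = np.fft.rfft(np.random.randn(n_samples))
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    # PSD scales as |spectrum|**2, so scale amplitudes by f**(exponent / 2);
    # leave the DC bin alone to avoid raising zero to a negative power.
    scaling = np.ones_like(freqs)
    scaling[1:] = freqs[1:] ** (exponent / 2.0)
    noise = np.fft.irfft(spectrum * scaling, n=n_samples)
    return noise / noise.std()

pink = colored_noise(2**16, exponent=-1.0)  # 1/f ("pink") noise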
import uvm_pkg::*;

class my_seq_item extends uvm_sequence_item;
  rand logic [7:0] addr;
  rand logic [7:0] data;

  constraint addr_range_cn {
    addr inside {[10:20]};
  }
  constraint data_range_cn {
    data inside {[0:255]};  // body truncated in the gist preview; full range assumed
  }
endclass
@raytroop
raytroop / pycurses.py
Created October 16, 2019 13:22 — forked from claymcleod/pycurses.py
Python curses example
import sys, os
import curses


def draw_menu(stdscr):
    k = 0
    cursor_x = 0
    cursor_y = 0

    # Clear and refresh the screen for a blank canvas
    stdscr.clear()
    stdscr.refresh()
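The preview is cut off before the gist's event loop; the conventional way to launch a function like draw_menu is through curses.wrapper, which initialises the terminal and restores it even if the function raises:

def main():
    curses.wrapper(draw_menu)

if __name__ == "__main__":
    main()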
@raytroop
raytroop / static_inline_example.md
Created December 5, 2018 16:32 — forked from htfy96/static_inline_example.md
static inline vs inline vs static in C++

In this article we compare the behavior of static, inline, and static inline free functions in compiled binaries. All of the following tests were done under g++ 7.1.1 on Linux amd64 (ELF64).

Test sources

header.hpp

#pragma once

inline int only_inline() { return 42; }
static int only_static() { return 42; }
@raytroop
raytroop / tf_gradient_clip_lr_decay.py
Created September 2, 2018 08:31 — forked from InnerPeace-Wu/tf_gradient_clip_lr_decay.py
ways to do gradients clipping and learning rate decay in tensorflow
import tensorflow as tf

# apply exponential decay to the learning rate
global_step = tf.Variable(0, trainable=False)
starter_learning_rate = lr  # initial learning rate
learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,
                                           decay_steps, decay_rate, staircase=True)
optimizer = tf.train.AdamOptimizer(learning_rate)
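The preview ends before the gradient-clipping half of the gist. A sketch of the usual TF1 pattern, picking up from the optimizer above (loss is assumed to be defined elsewhere, and the clip_norm value is arbitrary):

# Clip gradients by global norm before applying them, stepping
# global_step so the learning-rate decay advances.
grads_and_vars = optimizer.compute_gradients(loss)
grads, variables = zip(*grads_and_vars)
clipped_grads, _ = tf.clip_by_global_norm(grads, clip_norm=5.0)
train_op = optimizer.apply_gradients(zip(clipped_grads, variables),
                                     global_step=global_step)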