3D DOM viewer, copy-paste this into your console to visualise the DOM topographically.
note: these are designed primarily as a re-install guide for myself (writing things down helps me memorize the knowledge), so don't take any of this on blind faith: some areas are well tested and the docs are quite robust; other items, less so. YMMV
With the release of the ChatGPT model and follow-up large language models (LLMs), there was a lot of discussion of the importance of "RLHF training", that is, "reinforcement learning from human feedback".
I was puzzled for a while as to why RL (Reinforcement Learning) is better than learning from demonstrations (a.k.a. supervised learning) for training language models. Shouldn't learning from demonstrations (or, in language model terminology, "instruction fine-tuning": learning to imitate human-written answers) be sufficient? I came up with a theoretical argument that was somewhat convincing. But I came to realize there is an additional argument which not only supports the case for RL training, but also requires it, in particular for models like ChatGPT. This additional argument is spelled out in (the first half of) a talk by John Schulman from OpenAI. This post pretty much
The following are appendices from Optics By Example, a comprehensive guide to optics from beginner to advanced! If you like the content below, there's plenty more where that came from; pick up the book!
All the other guides I've seen (https://github.com/drduh/YubiKey-Guide being the most popular) tell you to use the YubiKey's smart card features with GnuPG via gpg-agent.
STOP THE MADNESS!
OpenSSH has supported PKCS#11 tokens since version 5.4, and OpenSC provides a PKCS#11 library that talks to the YubiKey. This means that all you need to do is install OpenSC and tell SSH to use that library as your identity.
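A minimal sketch of what that looks like in practice (the library path is an assumption; it depends on how OpenSC was installed, e.g. Homebrew vs. the official installer):

```
# Install OpenSC (Homebrew shown here; the official .pkg installer also works)
brew install opensc

# List the public key(s) the YubiKey exposes through the PKCS#11 module
ssh-keygen -D /usr/local/lib/opensc-pkcs11.so

# Use the module for a single connection...
ssh -I /usr/local/lib/opensc-pkcs11.so user@example.com

# ...or permanently, in ~/.ssh/config:
#   Host *
#     PKCS11Provider /usr/local/lib/opensc-pkcs11.so
```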
Setting up ssh public key authentication on macOS using a YubiKey 4
I largely followed Florin's blog post, but have a few notes to add regarding issues I encountered:
Basic setup notes
I used a YubiKey 4, while the blog describes using a YubiKey NEO. I'm sure a YubiKey 5 would also work. I'm also running macOS 10.13.6.
I installed GPGTools as recommended. However, as I'll note later, it seems that gpg-agent only automatically starts when gpg is used; for ssh, you'll need to ensure it's running.
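If you stick with gpg-agent for SSH, a rough sketch of keeping it running and reachable (this assumes GnuPG 2.1+ as bundled with GPGTools; gpgconf reports the actual socket path on your machine):

```
# ~/.gnupg/gpg-agent.conf
enable-ssh-support

# In your shell profile: point SSH at gpg-agent's socket and launch the agent if it isn't running
export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
gpgconf --launch gpg-agent
```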
Before generating your keys, decide what key size you want to use. If you run the `list` command inside `gpg --card-edit`, look for the Key attributes line to see what is currently selected. On my YubiKey 4, it defaulted to 2048 bits for all keys:
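For reference, the line to look for appears roughly as below (output abridged; the rsa2048 values are what my card reported, and the exact formatting varies between GnuPG versions):

```
gpg/card> list
...
Key attributes ...: rsa2048 rsa2048 rsa2048
...
```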
There isn't a way to configure DND for the weekend in Slack. However, you can send the `/dnd 24 hours` command to disable notifications for 24 hours using the [undocumented API][api] and [legacy tokens][token]. We're going to use that undocumented API together with IFTTT to automate this.
Steps
Generate a [token] for your account
Navigate to your personal DM and copy the URL (e.g. `DAYM93S0N` from `acme.slack.com/messages/DAYM93S0N/convo/GBG661YR2-1536698414.000100/`)
Update the following URL with your information (`TOKEN` looks like `xoxp-4030334043-4959593942-3030320302032-sdf23rfasdf23rsf` and `CHANNEL` looks like `DAYM93S0N`)
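The exact URL from the original steps isn't reproduced above; as a sketch, the undocumented call generally takes the shape below. Treat the workspace name, endpoint, and parameter values as assumptions to adapt to your own workspace and token:

```
https://acme.slack.com/api/chat.command?token=TOKEN&channel=CHANNEL&command=%2Fdnd&text=24%20hours
```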