Run this command to have Git remember your password:

git config --global credential.helper 'cache --timeout 28800'

The command above tells Git to cache your credentials for 8 hours (28800 seconds).
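If you want to drop the cached credentials before the timeout expires, the cache helper can be told to quit:

# Clear the cached credentials immediately by stopping the cache daemon.
git credential-cache exit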
#!/usr/bin/env bash
# Installs ffmpeg from source (HEAD) with libaom and libx265, as well as a few
# other common libraries.
# The resulting binary will be at ~/bin/ffmpeg.

sudo apt update && sudo apt upgrade -y
mkdir -p ~/ffmpeg_sources ~/bin
export PATH="$HOME/bin:$PATH"
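Once the build steps finish, a quick sanity check (assuming the script completed and placed the binary where the header comment says) is to run the new binary directly:

# Confirm the freshly built binary exists and report its version and build flags.
~/bin/ffmpeg -version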
###
# HAProxy configuration for Eventmq Web-node.
# Configured to serve:
#   - 100k websocket connections
#   - 2k (2% of WS) streaming connections (5k fullconn)
#   - 100 (0.1% of WS) xhr connections (5k fullconn)
###
global
    log 127.0.0.1 local2 info
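Before rolling out changes to a configuration tuned for these connection counts, it is worth validating the file first. A minimal sketch, assuming the config lives at the conventional /etc/haproxy/haproxy.cfg path:

# Check the configuration for syntax errors without starting the proxy.
haproxy -c -f /etc/haproxy/haproxy.cfg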
# Deleting all keys in redis that match a glob-style pattern
# General form
redis-cli [options] KEYS "prefix:*" | xargs redis-cli [options] DEL
# If authorization is needed
redis-cli -a <AUTH> KEYS "prefix:*" | xargs redis-cli -a <AUTH> DEL
# If no authorization is needed
redis-cli KEYS "prefix:*" | xargs redis-cli DEL
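Note that KEYS blocks the server while it walks the entire keyspace, which can hurt on large datasets. A gentler variant, assuming a redis-cli build recent enough to support --scan and --pattern:

# Non-blocking alternative: iterate keys with SCAN instead of KEYS.
redis-cli --scan --pattern "prefix:*" | xargs redis-cli DEL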
-Xmx10g
m2.xlarge (4 virtual cores)
Both Jetty and Netty execute the same code: generate 8k of random bits and compute a SHA-1 hash, returning it over the wire.
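As a rough illustration of that per-request workload (not the benchmark code itself, and assuming "8k of random bits" means 1 KiB of random data):

# Sketch of the workload: hash 1024 bytes of random data with SHA-1.
head -c 1024 /dev/urandom | sha1sum | awk '{print $1}'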
INTERNAL (benchmark tool runs on the same machine)
--------
Jetty:
import java.io.FileDescriptor;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.PrintStream;

public class HelloWorld {
    private static HelloWorld instance;

    public static void main(String[] args) {
        instantiateHelloWorldMainClassAndRun();
If you want, I can try to help with pointers on how to improve the indexing speed you get. It's quite easy to increase it significantly by following some simple guidelines, for example:

- Use create in the index API (assuming you can).
- Relax the real-time aspect from 1 second to something a bit higher (index.engine.robin.refresh_interval).
- Increase the indexing buffer size (indices.memory.index_buffer_size); it defaults to 10% of the heap.
- Increase the number of dirty operations that trigger an automatic flush (so the translog won't get really big, even though it's FS based) by setting index.translog.flush_threshold (defaults to 5000).
- Increase the memory allocated to the elasticsearch node. By default it's 1g.
- Start with a lower replica count (even 0), and once the bulk loading is done, increase it to the value you want using the update_settings API (see the sketch after this list). This will improve things since fewer shards will likely be allocated to each machine.
- Increase the number of machines you have so
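As a sketch of the replica-count tip above, and assuming an index named myindex on a node at localhost:9200 (both placeholders), raising the replica count after the bulk load via the update settings API might look like:

# Hypothetical example: restore replicas once bulk indexing has finished.
curl -XPUT 'http://localhost:9200/myindex/_settings' -d '{
    "index" : { "number_of_replicas" : 1 }
}'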
Not for everyone. Each programmer has their own sense of what makes good coding music.
(From most influential to least)