The Builder pattern, used in Java to avoid constructors with many arguments, is not needed in Scala because constructors can be called with named (and default) arguments.
I also want to add Builder-style constraints so that a configuration that does not meet the conditions becomes a compile error.
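A small Scala sketch of both ideas (the HttpClientConfig and ConfigBuilder names are made up for illustration, not from any particular library): named arguments with default values cover the usual Builder use case, and a phantom-type builder turns a missing required field into a compile error.
case class HttpClientConfig(
  host: String,
  port: Int = 80,
  useTls: Boolean = false,
  timeoutMs: Int = 5000
)
// Named arguments + defaults replace the Builder:
//   val cfg = HttpClientConfig(host = "example.com", useTls = true)

// Phantom-type builder: build() only compiles once the required host has been set.
sealed trait HostState
sealed trait HostSet extends HostState
sealed trait HostUnset extends HostState

class ConfigBuilder[S <: HostState] private (host: Option[String], port: Int) {
  def withHost(h: String): ConfigBuilder[HostSet] = new ConfigBuilder[HostSet](Some(h), port)
  def withPort(p: Int): ConfigBuilder[S] = new ConfigBuilder[S](host, p)
  def build()(implicit ev: S =:= HostSet): HttpClientConfig = HttpClientConfig(host.get, port)
}
object ConfigBuilder {
  def apply(): ConfigBuilder[HostUnset] = new ConfigBuilder[HostUnset](None, 80)
}
// ConfigBuilder().withPort(8080).build()          // does not compile: host was never set
// ConfigBuilder().withHost("example.com").build() // compiles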
#!/bin/bash
# Put under /etc/autoMemoryReclaim.sh
# set variables at the top
low_cpu_usage=50 # Note: We work with integer percentages (e.g., 50%)
idle_time=2 # Minutes
cached_memory_limit=1000 # MB
percent_memory_to_reclaim=5 # Percentage as an integer
wait_period=3
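# The reclaim loop itself is not shown above. The lines below are only a minimal
# sketch of how such a loop could use these variables, assuming a cgroup-v2
# system where writing "<N>M" to /sys/fs/cgroup/memory.reclaim asks the kernel
# to reclaim that much memory; the top/free parsing and the simplified
# idle-time handling are illustrative, not from the original script.
while true; do
  cpu_usage=$(top -bn1 | awk '/%Cpu/ {print int(100 - $8)}')   # integer CPU usage (100 - idle)
  cached_memory=$(free -m | awk '/Mem:/ {print $6}')           # buff/cache in MB
  if [ "$cpu_usage" -lt "$low_cpu_usage" ] && [ "$cached_memory" -gt "$cached_memory_limit" ]; then
    used_memory=$(free -m | awk '/Mem:/ {print $3}')           # used memory in MB
    reclaim_mb=$(( used_memory * percent_memory_to_reclaim / 100 ))
    echo "${reclaim_mb}M" > /sys/fs/cgroup/memory.reclaim
    sleep $(( idle_time * 60 ))    # back off after a reclaim (idle_time is in minutes)
  fi
  sleep $(( wait_period * 60 ))    # assumes wait_period is also in minutes
done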
oc project kubernauts
podname=$(oc get pods | grep postgres-$(oc get dc postgres -o jsonpath='{.status.latestVersion}') | grep -v deploy | grep Running | awk '{print $1}')
oc rsh -c postgres $podname
OR
oc get pods | grep postgres
oc exec -it postgres-pod-name bash
from starlette.applications import Starlette
from starlette.routing import Route
from starlette.responses import PlainTextResponse
import httpx
import aiohttp
HOST, PORT = "localhost", 8000
URL = f"http://{HOST}:{PORT}/"
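# The snippet above only sets up imports and constants. Below is a minimal
# sketch of how they might be wired together (the homepage/fetch_* names are
# made up for illustration): a trivial Starlette app on HOST:PORT, plus one
# request each with httpx and aiohttp against URL.
async def homepage(request):
    return PlainTextResponse("Hello, world!")

app = Starlette(routes=[Route("/", homepage)])

async def fetch_with_httpx():
    async with httpx.AsyncClient() as client:
        resp = await client.get(URL)
        return resp.text

async def fetch_with_aiohttp():
    async with aiohttp.ClientSession() as session:
        async with session.get(URL) as resp:
            return await resp.text()

# Serve the app with e.g.: uvicorn <module>:app --host localhost --port 8000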
name := "HBaseDemo"
version := "0.1"
scalaVersion := "2.11.12"
libraryDependencies += "com.typesafe" % "config" % "1.3.2"
libraryDependencies += "org.apache.hadoop" % "hadoop-common" % "2.7.0"
libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.7.0"
libraryDependencies += "org.apache.hbase" % "hbase-client" % "1.1.8"
References: https://gist.github.com/noelboss/3fe13927025b89757f8fb12e9066f2fa
More info: https://www.digitalocean.com/community/tutorials/how-to-use-git-hooks-to-automate-development-and-deployment-tasks
https://www.youtube.com/watch?v=6mYtJu0E47U
The following steps are for developers who want to push code from their local machine to their server with git and have git automatically pull the update into the server's working folder. How it works (see the post-receive hook sketch below):
- On your local machine you do your normal coding; when you are done, you push the new code.
- Git sends the commits from your local machine to the bare repository on your server.
- A git hook on the server receives the update and checks it out into the server's working folder.
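A typical server-side hook for this setup, following the pattern described in the references above (the repository and working-folder paths are placeholders to adjust for your server):
#!/bin/bash
# Lives at <bare-repo>/hooks/post-receive on the server; must be executable.
TARGET="/var/www/myapp"        # working folder the server serves from (placeholder)
GIT_DIR="/home/git/myapp.git"  # bare repository that receives the push (placeholder)
BRANCH="master"

while read oldrev newrev ref; do
    if [ "$ref" = "refs/heads/$BRANCH" ]; then
        echo "Ref $ref received. Deploying $BRANCH branch to $TARGET..."
        git --work-tree="$TARGET" --git-dir="$GIT_DIR" checkout -f "$BRANCH"
    else
        echo "Ref $ref received. Doing nothing: only the $BRANCH branch may be deployed."
    fi
done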
---
apiVersion: v1
kind: ConfigMap
metadata: {name: content}
data:
  index.html: '<html><body><h1>Hello world</h1></body></html>'
---
apiVersion: apps/v1beta2
kind: Deployment
metadata: {name: lb}
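# The Deployment above is truncated after its metadata. A possible continuation
# (assumption: one nginx container serving the ConfigMap above as its document
# root; the image and mount path are illustrative, not from the original):
spec:
  replicas: 1
  selector:
    matchLabels: {app: lb}
  template:
    metadata:
      labels: {app: lb}
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: content
          mountPath: /usr/share/nginx/html
      volumes:
      - name: content
        configMap: {name: content}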
After automatically updating Postgres to 10.0 via Homebrew, the pg_ctl start command didn't work.
The error was "The data directory was initialized by PostgreSQL version 9.6, which is not compatible with this version 10.0."
The database files have to be upgraded before starting the server; these are the steps that had to be followed (see the pg_upgrade sketch after the brew commands):
# need to have both 9.6.x and the latest 10.0 installed, and keep 10.0 as the default
brew unlink postgresql
brew install [email protected]
brew unlink [email protected]
brew link postgresql
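# The brew steps above only make sure both versions are installed with 10.0 as
# the default; the 9.6 data directory still has to be migrated. A sketch of one
# way to do it with pg_upgrade (the paths and the <9.6.x> Cellar version are
# illustrative placeholders for a default Homebrew layout, not from the
# original notes):
initdb /usr/local/var/postgres10 -E utf8   # create a fresh 10.x data directory
pg_upgrade -b /usr/local/Cellar/[email protected]/<9.6.x>/bin \
           -B /usr/local/Cellar/postgresql/10.0/bin \
           -d /usr/local/var/postgres \
           -D /usr/local/var/postgres10
# then point the server at (or move) the new data directory and start it with pg_ctl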
package com.chetan.poc.hbase
/**
  * Created by chetan on 24/1/17.
  */
import org.apache.spark._
import org.apache.hadoop.hbase.{CellUtil, HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.hbase.client._
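// Only the imports are shown above. A minimal sketch of how the hbase-client
// 1.1.x API from those imports is typically used (the table, row and column
// names are illustrative, not from the original code; TableInputFormat and the
// Spark import would come into play when reading the table as an RDD via
// SparkContext.newAPIHadoopRDD):
object HBaseDemo {
  def main(args: Array[String]): Unit = {
    val conf = HBaseConfiguration.create()                     // picks up hbase-site.xml from the classpath
    val connection = ConnectionFactory.createConnection(conf)
    try {
      val table = connection.getTable(TableName.valueOf("demo_table"))
      val result = table.get(new Get(Bytes.toBytes("row1")))
      val value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("col"))
      if (value != null) println(Bytes.toString(value))        // getValue returns null if the cell is absent
      table.close()
    } finally {
      connection.close()
    }
  }
}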