Say you want a Moat Pro "common" package that looks like this:

```
moatpro
    moatpro
        __init__.py
        models
        lib
        db
        ...etc...
    setup.py
```
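A minimal `setup.py` for that layout might look like the sketch below; the version number and the empty dependency list are placeholders rather than the real Moat Pro configuration.

```python
# Hypothetical setup.py sketch for the layout above; version and
# install_requires are placeholders, not Moat Pro's actual values.
from setuptools import setup, find_packages

setup(
    name="moatpro",
    version="0.1.0",           # assumed version
    packages=find_packages(),  # finds the moatpro package and its subpackages
    install_requires=[],       # real dependencies would go here
)
```

With that in place, `pip install -e .` from the outer directory makes `import moatpro.models` (and the rest) available to any other project.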
```
/*
Unfortunately, `npx prisma db pull` has two shortcomings:

1. It doesn't automatically convert to PascalCase / camelCase and add @map and @@map. Thankfully,
   it does leave existing @map/@@map alone.
2. It automatically converts enums back to snake_case, even if they were already PascalCase / camelCase.

This script ameliorates that. It works as a state machine, going over the file line by line:

- It will convert enum names to PascalCase
- It will convert model names to PascalCase and add an @@map
- It will convert field names to camelCase and add a @map
*/
```
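To make the state-machine idea concrete, here is a rough Python sketch of the same approach; the casing helpers, regexes, and exact rules below are assumptions for illustration, not the actual script.

```python
# Rough Python sketch of the line-by-line state machine described above.
# The regexes and casing helpers are assumptions, not the real script.
import re

def to_pascal(name):
    # user_profile -> UserProfile
    return "".join(part.capitalize() for part in name.split("_"))

def to_camel(name):
    # user_profile -> userProfile
    pascal = to_pascal(name)
    return pascal[0].lower() + pascal[1:]

def fix_schema(lines):
    out, state = [], None          # state: None, "model", or "enum"
    for line in lines:
        header = re.match(r"^(model|enum)\s+(\w+)\s*\{", line)
        if header:
            state, old = header.group(1), header.group(2)
            new = to_pascal(old)
            out.append("%s %s {" % (state, new))
            if state == "model" and new != old:
                out.append('  @@map("%s")' % old)   # keep the original table name
            continue
        if line.strip() == "}":
            state = None
            out.append(line)
            continue
        if state == "model" and "@map" not in line:   # leave existing @map/@@map alone
            field = re.match(r"^(\s+)(\w+)(\s+\S.*)$", line)
            if field:
                indent, old, rest = field.groups()
                new = to_camel(old)
                if new != old:
                    line = '%s%s%s @map("%s")' % (indent, new, rest, old)
        out.append(line)
    return out
```

Running the pulled `schema.prisma` through something like this after each `db pull` would restore the naming conventions without disturbing hand-written @map/@@map attributes.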
```python
# Get our data (uses the TF 1.x MNIST tutorial helper, input_data.py)
import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

import tensorflow as tf

def weight_variable(shape):
    # Initialize weights with a little noise to break symmetry
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)
```
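For reference, this helper is called with a shape list when building a layer; the 784x10 shape below is just an illustration:

```python
# Illustrative call: weights for a 784-input, 10-output dense layer.
W = weight_variable([784, 10])
```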
```python
from datetime import date

class DateRange(object):
    def __init__(self, start_date, end_date):
        self.start_date = start_date
        self.end_date = end_date

    def __repr__(self):
        return "<%s: %s - %s>" % (self.__class__.__name__,
                                  self.start_date,
                                  self.end_date)
```
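A quick usage example, with arbitrary dates, shows what the repr looks like:

```python
# Construct a range and inspect its repr (dates are arbitrary examples).
dr = DateRange(date(2014, 1, 1), date(2014, 12, 31))
print(dr)  # <DateRange: 2014-01-01 - 2014-12-31>
```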
```python
import string

f = open("/usr/share/dict/words")
word_list = set(f.read().split('\n'))

def is_word(word):
    return word in word_list

candidates_checked = 0

def longest_word_chain(starting_word='', min_length=1):
```