hey, thanks for submitting the csv-to-sheets project. i've reviewed the codebase and wanted to share some feedback.
the project has a number of bugs and issues that would cause problems in real usage. there are several places where errors are silently swallowed, meaning the user would see broken or incomplete results with no indication that something went wrong. there are also code paths that would crash the app outright on certain inputs, and some logic that exists in the code but doesn't actually do anything. beyond the bugs, there are no tests anywhere in the project, which makes it hard to trust that things work correctly or to make changes safely.
i'd recommend going through the issues below with a focus on error handling: thinking through what happens when things fail, not just when they succeed. that's a critical skill and it's something we look for.
1. encoding fallback decodes garbage silently

latin-1 maps every byte (0x00-0xFF) to a character and never raises, so a renamed zip uploaded as .csv decodes into mojibake and gets pushed to sheets, with no log entry on the fallback. prefer cp1252 if the goal is supporting excel exports: it actually rejects invalid bytes.
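to illustrate the difference (the bytes below simulate a zip header plus values that cp1252 leaves unmapped):

```python
# latin-1 maps all 256 byte values, so decoding binary junk "succeeds".
# 0x50 0x4B 0x03 0x04 is a zip magic header; 0x81 and 0x8D are bytes
# that cp1252 does not map to any character.
zip_like = bytes([0x50, 0x4B, 0x03, 0x04, 0x81, 0x8D])

text = zip_like.decode("latin-1")   # no error, just mojibake
print(repr(text))

try:
    zip_like.decode("cp1252")       # cp1252 rejects the unmapped bytes
except UnicodeDecodeError:
    print("cp1252 rejected the fake csv")
```

same input, but one codec fails loudly instead of shipping garbage to the sheet.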
also, csv.reader returns plain lists indexed by position. csv.DictReader yields rows keyed by header, which is cleaner for column handling.
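for example (sample data is illustrative):

```python
import csv
import io

# csv.DictReader keys each row by the header row, so columns are
# accessed by name instead of magic indexes like row[1]
sample = io.StringIO("name,email\nada,ada@example.com\n")
for row in csv.DictReader(sample):
    print(row["email"])
```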
2. duplicated auth paths, hosted mode re-authenticates every request

there are two completely different code paths for local vs hosted auth, with no tests for either. hosted mode doesn't persist refreshed tokens, so it re-authenticates on every request. as it stands there's no confidence either path works reliably.
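a minimal sketch of the shape i'd expect, with a stand-in token store; all names here are illustrative, not from the project:

```python
import time

class TokenStore:
    """stand-in for wherever tokens actually live (disk, secret manager, db)."""
    def __init__(self):
        self._cached = None

    def load(self):
        return self._cached

    def save(self, token, expires_at):
        self._cached = {"token": token, "expires_at": expires_at}

def get_access_token(store, refresh, now=time.time):
    # reuse the persisted token until it expires; refresh exactly once
    # instead of re-authenticating on every request
    cached = store.load()
    if cached and cached["expires_at"] > now():
        return cached["token"]
    token, expires_at = refresh()
    store.save(token, expires_at)
    return token
```

with something like this, both auth paths can share one entry point, which also makes them testable with a fake `refresh`.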
3. retry logic treats permanent errors as transient

Exception is a superclass of HttpError, so catching both just catches Exception. more importantly, the loop retries permanent errors (403, 404) the same as transient ones. retrying a 403 three times with backoff is wasted time.
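a sketch of retrying only transient statuses; the `HttpError` class below is a minimal stand-in for `googleapiclient.errors.HttpError`, and the status set and backoff schedule are illustrative:

```python
import time

class HttpError(Exception):
    """minimal stand-in for googleapiclient.errors.HttpError."""
    def __init__(self, status):
        super().__init__(f"HTTP {status}")
        self.status = status

TRANSIENT = {429, 500, 502, 503, 504}  # worth retrying with backoff

def call_with_retry(fn, attempts=3, base_delay=1.0):
    for attempt in range(attempts):
        try:
            return fn()
        except HttpError as err:
            # permanent errors (403, 404, ...) fail fast; only transient
            # ones earn another attempt
            if err.status not in TRANSIENT or attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```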
CELL_LIMIT defined inline here and separately in main.py:26. the check in upload_data_to_sheet is dead code in the current flow since main.py already rejects oversized data before this is called. if kept as defensive code, extract the constant so the two can't drift.
4. failed batches produce a sheet with silent gaps
the user already has a 202 response with a sheet url in hand. if the upload then fails, they see an empty or partially filled sheet with no indication anything went wrong, and there's no way to track progress. bad design imo.
5. manual chunked reading to enforce the upload limit

starlette already supports max_part_size on request.form(), which rejects oversized uploads at the parsing level, so the manual chunked reading isn't needed.
6. no rate limiting, no CORS config

not a problem for a test project, but something to keep in mind: anyone can hit /upload and burn through the google api quota.
general
no tests anywhere in this repo. no unit tests, no integration tests, nothing. any change risks breaking existing behavior with zero safety net.
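even one test like this would have caught the encoding issue. `decode_upload` is a hypothetical name standing in for the project's decode helper, shown here with the cp1252 fallback suggested above:

```python
import pytest

def decode_upload(raw: bytes) -> str:
    # stand-in for the project's decoder: utf-8 first, cp1252 fallback,
    # so binary junk raises instead of silently round-tripping to sheets
    try:
        return raw.decode("utf-8")
    except UnicodeDecodeError:
        return raw.decode("cp1252")

def test_binary_upload_is_rejected():
    with pytest.raises(UnicodeDecodeError):
        decode_upload(bytes([0x50, 0x4B, 0x03, 0x04, 0x81]))

def test_utf8_passthrough():
    assert decode_upload("café".encode("utf-8")) == "café"
```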
except Exception is used throughout the codebase. it swallows everything, including bugs you'd want to surface. catch the specific exceptions you expect and let the rest propagate.
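the pattern looks like this (`parse_rows` is an illustrative helper, not from the project):

```python
import csv
import io

def parse_rows(text: str) -> list[dict]:
    # catch only the failures we can explain to the caller; anything
    # unexpected (an actual bug) propagates instead of being swallowed
    try:
        return list(csv.DictReader(io.StringIO(text)))
    except csv.Error as err:
        raise ValueError(f"malformed csv: {err}") from err
```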
docker-compose.yml is unnecessary. this is a single fastapi app with no database, no redis, no separate services. a Dockerfile is all you need.
nothing wrong with using ai to write code, but i'd have expected it to produce tests and catch most of these bugs.