
@guest271314
Created January 4, 2026 22:17
guest271314 vs. Gemini

https://discord.com/channels/1109464496164048996/1271968038241177644/1457408577718194290

guest271314 — 4:21 PM google[dot]com "AI" result nonsense:

Query:

optimize this code const json = JSON.parse( [...message].map((s) => String.fromCodePoint(s)).join(""), );

Result:

AI Overview

The most direct and optimized way to convert an array of code points into a string for JSON.parse is to use the String.fromCodePoint() static method with the spread operator, similar to your original code, but applied directly to the original array if it contains code points.

Optimized Code:

const json = JSON.parse(String.fromCodePoint(...message));

Explanation of Optimization

The original code involved an intermediate step of mapping each item and then joining the resulting array:

const json = JSON.parse(
  [...message].map((s) => String.fromCodePoint(s)).join("")
);

The optimized approach improves performance and readability by:

• Eliminating the .map() and .join("") methods: The original code created a new intermediate array using map(), converted each element individually, and then concatenated the entire array into a single string using join("").
• Leveraging the spread operator directly with String.fromCodePoint(): The String.fromCodePoint() method can accept multiple arguments (a sequence of code points). The spread operator (...message) passes the elements of the message array as individual arguments to the function, which is a more efficient, single operation for string creation.
• Direct conversion: The string is created in one step, minimizing memory allocation for intermediate data structures (like the array of single-character strings created by map()).

This optimization is particularly beneficial for larger arrays, as it avoids the potential performance issues associated with creating and concatenating many small strings.

Monteov — 8:02 PM gemini.google.com/share/9bed8f854851 🙂

guest271314 — 8:05 PM Still absolute garbage... Spread syntax is not an operator
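[Editor's note: a minimal sketch, not from the thread, of why the "optimized" one-liner fails for large inputs. Spreading a very large array into String.fromCodePoint can throw a RangeError (engines cap the number of function arguments), the failure guest271314 reports encountering below. Converting in chunks avoids it; the function name and CHUNK size are illustrative assumptions.]

```javascript
// Sketch: chunked code-point-to-string conversion. Spreading a huge array
// into String.fromCodePoint(...message) can throw a RangeError due to
// per-call argument limits; slicing into bounded chunks sidesteps that.
// CHUNK = 0x8000 is an arbitrary size chosen for illustration.
function codePointsToString(points, CHUNK = 0x8000) {
  let result = "";
  for (let i = 0; i < points.length; i += CHUNK) {
    // slice works on both plain arrays and typed arrays
    result += String.fromCodePoint(...points.slice(i, i + CHUNK));
  }
  return result;
}

// Usage: round-trip a small JSON payload expressed as code points.
const json = JSON.parse(
  codePointsToString([...'{"a":1}'].map((c) => c.codePointAt(0))),
);
```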

The Problem: The spread operator (...) expands

Monteov — 8:09 PM But the code is correct, that is all that matters. :sadness:

guest271314 — 8:16 PM Everything matters...

Not every JavaScript runtime has TextEncoder, which is not part of ECMA-262.

Garbage...

Monteov — 9:03 PM You can put requirements for the response you are expecting. It is not like humans are perfect. Remember Stack Overflow? It took multiple answers for 1 correct one.

guest271314 — 9:10 PM It's possible there's more than one correct answer. The initial answer by the program was wrong. Completely wrong. I had already encountered the RangeError. There is no spread operator. It's spread syntax. Wrong. I'm not into making excuses for programs, policies, omissions or exaggerations in documentations, by anybody. I don't settle.

guest271314 — 9:08 PM There's no TextEncoder or TextDecoder in the QuickJS qjs CLI executable; nor in AssemblyScript.

guest271314 — 9:08 PM This is what I wound up doing

function sendMessage(message) {
  if (message.length > 1024 ** 2) {
    const json = message;
    const data = new Array();
    let fromIndex = 1024 ** 2 - 8;
    let index = 0;
    let i = 0;
    do {
      i = json.indexOf(44, fromIndex);
      const arr = json.subarray(index, i);
      data.push(arr);
      index = i;
      fromIndex += 1024 ** 2 - 8;
    } while (fromIndex < json.length);
    if (index < json.length) {
      data.push(json.subarray(index));
    }
    for (let j = 0; j < data.length; j++) {
      const start = data[j][0];
      const end = data[j][data[j].length - 1];
      if (start === 91 && end !== 44 && end !== 93) {
        const x = new Uint8Array(data[j].length + 1);
        for (let i2 = 0; i2 < data[j].length; i2++) {
          x[i2] = data[j][i2];
        }
        x[x.length - 1] = 93;
        data[j] = x;
      }
      if (start === 44 && end !== 93) {
        const x = new Uint8Array(data[j].length + 1);
        x[0] = 91;
        for (let i2 = 1; i2 < data[j].length; i2++) {
          x[i2] = data[j][i2];
        }
        x[x.length - 1] = 93;
        data[j] = x;
      }
      if (start === 44 && end === 93) {
        const x = new Uint8Array(data[j].length);
        x[0] = 91;
        for (let i2 = 1; i2 < data[j].length; i2++) {
          x[i2] = data[j][i2];
        }
        data[j] = x;
      }
    }
    for (let k = 0; k < data.length; k++) {
      const arr = data[k];
      const header = Uint32Array.from(
        { length: 4 },
        (_, index) => (arr.length >> (index * 8)) & 0xff,
      );
      const output = new Uint8Array(header.length + arr.length);
      output.set(header, 0);
      output.set(arr, 4);
      std.out.write(output.buffer, 0, output.length);
      std.out.flush();
      std.gc();
    }
  } else {
    const header = Uint32Array.from(
      { length: 4 },
      (_, index) => (message.length >> (index * 8)) & 0xff,
    );
    const output = // ...
  }
}

Monteov — 9:14 PM AI says your code is bad and not optimized gemini.google.com/share/4372b594e5a1

guest271314 — 9:16 PM AI says Um, so what?

Monteov — 9:16 PM Give Gemini 3 your code and compare its performance and yours Yours will be worse. Always benchmark different solutions and pick the best one.

guest271314 — 9:19 PM The program and results you are citing and relying on have already been wrong, at least twice.
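[Editor's note: the framing step in the code above, distilled into a self-contained sketch. Native Messaging frames each message with a 4-byte little-endian length header followed by the payload; frameMessage is a hypothetical helper name, not from the thread, and the QuickJS std.out write/flush calls are left out so the sketch runs anywhere.]

```javascript
// Sketch: prefix a payload with the 4-byte little-endian length header
// that the Native Messaging protocol expects, byte-shifting the length
// exactly as the loop in the code above does.
function frameMessage(bytes) {
  const header = Uint8Array.from(
    { length: 4 },
    (_, i) => (bytes.length >> (i * 8)) & 0xff,
  );
  const output = new Uint8Array(header.length + bytes.length);
  output.set(header, 0); // bytes 0..3: little-endian payload length
  output.set(bytes, 4);  // bytes 4..: payload
  return output;
}
```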

All "artificial intelligence" is is regurgitation of somebody else's work.

I can test both what I wrote, with my own mind, and what the program you are citing suggests.

Monteov — 9:22 PM Your original code buffers the entire message in RAM before sending a single byte

guest271314 — 9:24 PM Yes, and?

Monteov — 9:25 PM It will crash with a 100+ MB payload

guest271314 — 9:28 PM You are making unsubstantiated predictions without empirical results to corroborate your claims.

The initial challenge was/is writing the code at all, to handle up to 64 MiB input, where the previous implementation processed 1 MiB.

See github[dot]com/quickjs-ng/quickjs/discussions/1291#discussioncomment-15407110 and users.rust-lang[dot]org/t/how-to-handle-64-mib-encoded-json-to-rust-native-messaging-host/137277/2.

guest271314 — 9:31 PM See discord[dot]com/channels/721472913886281818/721833404789948427/1456350038522462208.

I might even have asked the question here. Can't remember. Nobody replied if I did.

So, I wrote and tested the code, myself.

Monteov — 9:35 PM Man... you are so focused on hating AI that the hatred blinds you. 32MB Test Payload...

--- ORIGINAL (Manual Loops) --- Time: 3842.15 ms Bytes Sent: 33554564

--- OPTIMIZED (Streaming) --- Time: 88.42 ms Bytes Sent: 33554564

guest271314 — 9:36 PM It's not hate, it's rational analysis.

I don't know where you got those numbers from.

Did you test the Native Messaging host implementation to get those numbers? Where's the source code that produced those numbers?

Monteov — 9:38 PM Native Messaging doesn't make manual JS loops faster. If anything, a constrained embedded environment makes the manual loops slower and more prone to crashing :sadness:

guest271314 — 9:39 PM So you didn't actually test anything?...

Monteov — 9:39 PM I did Those are benchmark results numbers.

guest271314 — 9:42 PM Where's the code? What runtime did you test with?

Monteov — 9:44 PM jdoodle.com/ia/1OLI

guest271314 — 9:45 PM Yeah, that's not testing an actual Native Messaging host and client.

Monteov — 9:45 PM ^ You can test the sample yourself with that 🤷🏻‍♂️ Won't change anything.

guest271314 — 9:46 PM It will change something. It'll be a real test in the real context. I'll test myself. You haven't.

Monteov — 9:47 PM And yet your code will still lose against AI.

guest271314 — 9:48 PM There you go with spurious predictions.

You have to actually test code in the context to get empirical results.

Monteov — 9:50 PM You are absolutely correct to demand empirical testing. In the context of Native Messaging, the AI's advice isn't just "nonsense", it's a production outage waiting to happen :ahah: Well, I will give you +1 for winning against AI Well done.

guest271314 — 9:51 PM I don't care about "the AI's advice".

I don't believe your story.

Monteov — 9:52 PM What I am saying is I fed most of your arguments in Gemini and it finally admitted that you are right. Just wanted to see if either Gemini will win or you. So when it checked itself after that your message about empirical results, it admitted you were right You proved your point You can see the conversation here gemini.google.com/share/18b4c8f27fd0

guest271314 — 9:54 PM What I am saying is I fed most of your arguments in Gemini and it finally admitted that you are right. What are you talking about? I don't argue. I state facts, based on evidence.

Just wanted to see if either Gemini will win or you. So when it checked itself after that your message about empirical results, it admitted you were right You proved your point

Are you deliberately withholding that link?

Monteov — 9:55 PM I didn't think anything myself, you basically engaged through Gemini through me. I don't have time for this myself 😁 I also don't specialize in such functionality.

guest271314 — 9:57 PM And even that result has wrong information in it. Constraints: Chrome limits messages to 4GB (Host -> Chrome) or 1MB (Chrome -> Host).

The protocol was changed from 4 GB to 64 MiB developer.chrome[dot]com/docs/extensions/mv3/nativeMessaging/#native-messaging-host-protocol

Monteov — 10:01 PM It is astonishing to see how wrong it was. It makes itself very confident. Through my usage, I don't really notice a lot of mistakes But you were right, I believe you now Though if you push it enough, it can probably make something better.
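[Editor's note: a minimal sketch of a size guard for the host-to-browser limit discussed above. The constant and function names are illustrative assumptions, not part of any API.]

```javascript
// Sketch: reject a host -> browser message that exceeds the 64 MiB limit
// noted in the Chrome Native Messaging protocol docs cited above, before
// attempting to frame and write it.
const MAX_HOST_TO_BROWSER_BYTES = 64 * 1024 ** 2; // 64 MiB

function assertWithinLimit(bytes) {
  if (bytes.length > MAX_HOST_TO_BROWSER_BYTES) {
    throw new RangeError(
      `message of ${bytes.length} bytes exceeds the 64 MiB host-to-browser limit`,
    );
  }
  return bytes;
}
```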

https://discord.com/channels/1109464496164048996/1271968038241177644/1457491900096315545 https://discord.com/channels/1109464496164048996/1271968038241177644/1457494062662946825
