// Upload a JavaScript object to S3 as a JSON file using the AWS SDK for Node.js.
var AWS = require('aws-sdk');
AWS.config.update({ region: 'us-east-1' });

var s3 = new AWS.S3();

var obj = {
    firstname: "Navjot",
    lastname: "Dhanawat"
};

// Serialize the object and wrap the resulting string in a Buffer for the request body.
var buf = Buffer.from(JSON.stringify(obj));

var data = {
    Bucket: 'bucket-name',
    Key: 'filename.json',
    Body: buf,
    ContentType: 'application/json',
    ACL: 'public-read'
};

s3.upload(data, function (err, result) {
    if (err) {
        console.log('Error uploading data: ', err);
    } else {
        console.log('Successfully uploaded!');
    }
});
good stuff thanks 👍
Could you please give me some advice on why we use Buffer.from to convert a string to a Buffer?
Thanks! I had the same idea of using Buffer but I wasn't sure about the S3 configuration to store it as JSON.
Cheers mate :)
Could you please give me some advice on why we use Buffer.from to convert a string to a Buffer?
Basically, as I understand it, to transfer data across the internet (in this case, the content of a file) it needs to be represented as chunks of binary data that can be segmented and indexed while the transfer is in progress. A plain string would not do the trick: although at the machine level it can of course be read as binary, it is not in the right format, and the variable is technically just a reference to the memory range where the characters are stored. Buffer.from actually creates, or rather replicates, the binary data that represents the content of the string and makes it available as a standalone value that can be sent as the request body.
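To make that concrete, here is a small Node.js illustration (no AWS involved, just the standard Buffer API) of what Buffer.from produces from the JSON string:

var obj = { firstname: "Navjot", lastname: "Dhanawat" };
var json = JSON.stringify(obj);      // '{"firstname":"Navjot","lastname":"Dhanawat"}'
var buf = Buffer.from(json);         // copies the string's UTF-8 bytes into a Buffer

console.log(Buffer.isBuffer(json));  // false - a string is not a Buffer
console.log(Buffer.isBuffer(buf));   // true
console.log(buf.length);             // number of bytes in the payload
console.log(buf.toString());         // round-trips back to the original JSON string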
thank you
Thanks brother
Thanks, I was looking for Buffer.from ... :)