Problem uploading larger files
Uploading files (both small and large) worked fine at first, but recently I hit a problem uploading a larger file (4.4 MB). The returned result has status 200, indicating the upload succeeded, but the Location it contains is not valid and not accessible. The error occurs only with larger files (4 MB and up); a small file of 0.11 MB works fine. Here is my implementation:
import { Upload } from "@aws-sdk/lib-storage";

try {
  const upload = new Upload({
    client: s3Client,
    params: {
      Bucket: "ultimate",
      Body: fileStream,
      Key: prefix + "/" + Date.now() + "_" + file.originalname,
      ContentType: file.mimetype,
      ACL: "public-read",
    },
    queueSize: 5, // number of parts uploaded concurrently
    partSize: 5 * 1024 * 1024, // 5 MB (the S3 minimum part size)
  });
  const result: any = await upload.done();
  console.log(result);
} catch (error) {
  console.log(error);
}
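One way to take the returned Location out of the equation is to build the public URL yourself from the bucket and key. A minimal sketch, assuming Linode Object Storage's virtual-hosted URL scheme (https://&lt;bucket&gt;.&lt;cluster&gt;.linodeobjects.com/&lt;key&gt;); the cluster name below is an example:

```typescript
// Hypothetical helper: construct the public object URL ourselves instead of
// relying on result.Location. Each path segment is percent-encoded so that
// spaces or special characters in file.originalname cannot produce a broken URL.
function publicObjectUrl(bucket: string, cluster: string, key: string): string {
  const encodedKey = key.split("/").map(encodeURIComponent).join("/");
  return `https://${bucket}.${cluster}.linodeobjects.com/${encodedKey}`;
}

// Example: a key containing a space, as originalname often does.
console.log(publicObjectUrl("ultimate", "us-east-1", "uploads/1700000000000_my photo.png"));
// -> https://ultimate.us-east-1.linodeobjects.com/uploads/1700000000000_my%20photo.png
```

If this URL works while result.Location does not, the object itself is fine and the issue is only in how the Location string is formed.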
3 Replies
Are you doing multi-part uploads for larger files? Not doing that could cause issues like this.
Individual object uploads are limited to a size of 5GB each, though larger object uploads can be facilitated with multipart uploads. s3cmd and Cyberduck do this for you automatically if a file exceeds this limit as part of the uploading process.
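For a sense of when multipart kicks in: the SDK's Upload helper splits the body once it exceeds the configured part size. A hypothetical helper (not part of the SDK) illustrating the arithmetic:

```typescript
// With the S3 minimum part size of 5 MB, a body larger than partSize is
// split into ceil(size / partSize) parts; anything smaller goes up as a
// single PUT.
const PART_SIZE = 5 * 1024 * 1024; // 5 MB

function partCount(sizeBytes: number, partSize: number = PART_SIZE): number {
  return Math.max(1, Math.ceil(sizeBytes / partSize));
}

console.log(partCount(4.4 * 1024 * 1024)); // 4.4 MB -> 1 (below the threshold)
console.log(partCount(5 * 1024 * 1024 * 1024)); // 5 GB -> 1024 parts
```

Note that a 4.4 MB file with a 5 MB part size should go up as a single part, so a multipart-specific limit is unlikely to be the cause here.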
Hopefully that helps!
-Micah
Thanks for your reply. As mentioned in my question, my file is only 4.4 MB. The problem I am facing is that the file is uploaded to Linode Object Storage, but the returned Location is not valid and not accessible. This error occurs only when a file of 4 MB or larger is uploaded; with a 0.11 MB file, the upload works and the returned Location is valid.
Thanks
Would you be able to provide the exact error output regarding the invalid location or the file not being accessible? Based on the line:
Key: prefix + "/" + Date.now() + "_" + file.originalname
Being able to upload but not access/interact with objects placed in your buckets suggests that you may be using invalid characters in your naming convention.
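To rule that out, you could restrict the key to characters that are generally safe in S3-style object names before building it. A minimal sketch (the function name and character set are illustrative, not from your code):

```typescript
// Hypothetical sanitizer: replace anything outside alphanumerics and
// - _ . / with an underscore before using the string as an object Key.
function sanitizeKey(key: string): string {
  return key.replace(/[^A-Za-z0-9._\/-]/g, "_");
}

// Spaces, accents, and parentheses in an original filename all get replaced.
console.log(sanitizeKey("uploads/1700_my résumé (final).pdf"));
```

You would then build the key as, e.g., prefix + "/" + Date.now() + "_" + sanitizeKey(file.originalname).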
Otherwise, you said that your uploads and file accessibility had worked until recently; have you made any changes to your command syntax, upload/access strategy, types of files, etc. that could be creating issues now?