Object Storage upload: [ERR_STREAM_WRITE_AFTER_END]: write after end

Hey guys! I'm using the aws-sdk for NodeJS with Linode Object Storage, but I get the following error when uploading one particular file: [ERR_STREAM_WRITE_AFTER_END]: write after end.

import aws from 'aws-sdk';

// `env` holds configuration values loaded elsewhere in the app
const s3 = new aws.S3({
  endpoint: env.AWS_S3_ENDPOINT,
  accessKeyId: env.AWS_ACCESS_KEY_ID,
  secretAccessKey: env.AWS_SECRET_ACCESS_KEY,
});

interface UploadArg {
  body: aws.S3.Body;
  key: string;
}

function putObject({ body, key }: UploadArg): Promise<unknown> {
  const request: aws.S3.PutObjectRequest = {
    Body: body,
    Key: key,
    Bucket: env.AWS_S3_BUCKET,
  };

  return s3.putObject(request).promise();
}

// ...

putObject({ body: SomeBuffer, key: 'foo' });

I tried changing the key to something very simple, and I tried the workaround from https://github.com/aws/aws-sdk-js/issues/1300#issuecomment-282779101 (also varying maxSockets from 1 to 2000), but had no luck. Unfortunately I can't open an issue on the aws-sdk repo, since I don't actually use AWS. Any ideas how to debug this and make the error go away? Thank you!
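
For reference, the agent override I tried looks roughly like this (a sketch based on that issue comment - the exact settings there may differ, and I varied maxSockets between 1 and 2000):

import https from 'https';

// Custom agent handed to the SDK via httpOptions; keepAlive and maxSockets
// are the knobs I experimented with, without success.
const agent = new https.Agent({ keepAlive: true, maxSockets: 2000 });

const s3 = new aws.S3({
  endpoint: env.AWS_S3_ENDPOINT,
  accessKeyId: env.AWS_ACCESS_KEY_ID,
  secretAccessKey: env.AWS_SECRET_ACCESS_KEY,
  httpOptions: { agent },
});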

15 Replies

Hi @finom

Are you still seeing this error?

There was a temporary issue with the Object Storage service within the last half hour or so.

We are also seeing this error with our backups (which use the S3 API against Object Storage). The backups last succeeded 5 days ago, which matches the timeline of that temporary issue. Each attempt since (at 24-hour intervals) has failed with the same error.

@andysh yes, I still see the same error. I'm going to ask my client if I can share the file so you can check it yourself, if you don't mind.

It turns out the file isn't confidential. Here you go: https://filebin.net/9ei9mhbi1wadeuva/2020-Personal-tax-checklist.pdf?t=fyg2e1jp (the link will expire in 1 week). @andysh, can you try uploading it to Linode Object Storage? I'm not sure whether this is an aws-sdk issue.

Hi @finom

I'm not too familiar with NodeJS.

Please can you provide the full code sample, in particular how you read the file to upload - i.e. where SomeBuffer is populated?

Hi @andysh,

We have the same issue with Cloudron backups to Linode Object Storage - https://docs.cloudron.io/backups/#linode-object-storage

After much investigation, I narrowed the issue down to a strange Linode Object Storage misbehavior with HTTP Expect: 100-continue.

  • For payloads > 1MB, aws-sdk-js will automatically send an Expect: 100-continue header.

  • Linode servers respond with HTTP/1.1 100 Continue multiple times, so the response looks like this:

HTTP/1.1 100 Continue

HTTP/1.1 100 CONTINUE
Date: Wed, 24 Mar 2021 01:31:03 GMT
Connection: keep-alive

HTTP/1.1 200 OK
Content-Length: 0
ETag: "08db8030b2f1434199d74a4e8189c954"
Accept-Ranges: bytes
x-amz-request-id: tx00000000000000db14129-00605a9657-1f74dbc-default
Date: Wed, 24 Mar 2021 01:31:04 GMT
Connection: close
  • As seen above, the 100 CONTINUE is sent twice. I read the HTTP standard but I'm not sure whether this is allowed; either way, it causes aws-sdk-js to start uploading the stream twice because of this code.

FWIW, other S3-compatible storage providers do not send 100 Continue twice.

I have submitted a PR upstream - https://github.com/aws/aws-sdk-js/pull/3674. In the meantime, you can try npm install git+https://github.com/cloudron-io/aws-sdk-js.git#continue_once.
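
If patching the SDK isn't an option, another possible stopgap is to strip the Expect header before the request is sent, so the SDK writes the body immediately instead of waiting for a 100 Continue. An untested sketch (note this skips the handshake the S3 docs describe for large uploads, so treat it as a workaround only):

function putObjectWithoutExpect(params: aws.S3.PutObjectRequest): Promise<unknown> {
  const req = s3.putObject(params);
  // 'build' fires after the request is constructed but before it is signed
  // and sent; removing the header means the server never sends 100 Continue.
  req.on('build', () => {
    delete req.httpRequest.headers['Expect'];
  });
  return req.promise();
}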

@andysh are you from linode? Or should I report this via some other channel?

@cloudron nice work!

I’m not from Linode, no, so the best bet would be to submit a support ticket. I believe Linode uses Ceph for object storage so this may be an upstream issue.

I’ve used Object Storage since it was in beta, with PHP, Go, Rclone, Restic and Cyberduck (sometimes uploading single files >500MB) and never encountered an issue, so these other libraries/clients must be handling it.

@andysh thanks! I have opened a ticket #15378267

Hey @cloudron - thanks for posting the ticket number and for taking the time to dig into this. I replied to your ticket and passed this along to our Object Storage team so they can take a look.

> I’ve used Object Storage since it was in beta, with PHP, Go, Rclone, Restic and Cyberduck (sometimes uploading single files >500MB) and never encountered an issue, so these other libraries/clients must be handling it.

The issue is specific to the aws js sdk's use of 100-continue; uploads work fine if it is not sent. It's part of the S3 spec - https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html#API_PutObject_Examples
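
You can observe the mechanism without the SDK at all. A minimal sketch (the endpoint and bucket/key are placeholders, and a real request would also need to be signed, so treat this purely as an illustration of the client side):

import https from 'https';

const body = Buffer.alloc(2 * 1024 * 1024); // > 1MB, matching the SDK's threshold

const req = https.request({
  host: 'us-east-1.linodeobjects.com', // placeholder cluster
  path: '/some-bucket/some-key',       // placeholder bucket/key
  method: 'PUT',
  headers: {
    'Content-Length': String(body.length),
    Expect: '100-continue',
  },
});

let continues = 0;
// Node emits 'continue' once per 100 Continue the server sends. aws-sdk-js
// writes the body on every 'continue', so a second one triggers
// ERR_STREAM_WRITE_AFTER_END; here we guard and just count them.
req.on('continue', () => {
  continues += 1;
  console.log(`100 Continue #${continues}`);
  if (continues === 1) {
    req.write(body);
    req.end();
  }
});

req.on('response', (res) => {
  console.log('status:', res.statusCode);
  res.resume();
});
req.on('error', console.error);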

Thanks @cloudron for fixing this issue, it was a big help to me. 👍

It seems AWS js SDK v3 is affected by this same bug.

I created a bug report.
https://github.com/aws/aws-sdk-js-v3/issues/2538
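
For reference, a v3 upload that reportedly hits the same error looks roughly like this (endpoint, bucket and credentials are placeholders):

import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({
  region: 'us-east-1',                             // placeholder
  endpoint: 'https://us-east-1.linodeobjects.com', // placeholder
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  },
});

// Per the bug report above, v3 also sends Expect: 100-continue for PutObject,
// so bodies over the threshold hit the same double-continue code path.
s3.send(new PutObjectCommand({
  Bucket: 'some-bucket', // placeholder
  Key: 'foo',
  Body: Buffer.alloc(2 * 1024 * 1024),
})).catch(console.error);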

Running into this issue as well; would love to hear an update, since the last we heard from staff was 3 months ago.
