
fix: memory store #3834

Open: ronag wants to merge 12 commits into main from the memory-store branch
Conversation

ronag (Member) commented Nov 13, 2024

Simplify and fix memory leak


ronag (Member, Author) commented Nov 13, 2024

@flakey5

flakey5 (Member) commented Nov 14, 2024

The test for locking is failing

ronag (Member, Author) commented Nov 15, 2024

I don't see why we need any locking here... I might need some help fixing the tests though.

ronag force-pushed the memory-store branch 2 times, most recently from 977c314 to 2c1c839 on November 15, 2024 at 09:57
ronag force-pushed the memory-store branch 5 times, most recently from 530ec18 to df0d75b on November 15, 2024 at 11:12
await readResponse(result)

notEqual(store.createWriteStream(request, requestValue), undefined)
})
A Member commented:

Why was this test removed? Is this functionality not needed anymore?

ronag (Member, Author) commented Nov 15, 2024

IMHO we don't need locking. The behavior is removed.

ronag force-pushed the memory-store branch 2 times, most recently from b570a78 to 57e0416 on November 15, 2024 at 12:31
mcollina (Member) left a comment

lgtm

mcollina (Member) commented

cc @flakey5

ronag force-pushed the memory-store branch 12 times, most recently from db4512e to 551e79d on November 15, 2024 at 13:26
ronag force-pushed the memory-store branch 3 times, most recently from f426a4c to 70308fc on November 15, 2024 at 13:47
flakey5 (Member) commented Nov 15, 2024

With how we have the cache store currently set up, locking is needed so we don't read from a response that's currently being written to

For example if a request is made to some server, it's possible for another request to be made to the same resource before the first request finishes. If that happens currently (with locking), the cache store will see that the first request is still in flight since it's locked and return undefined.

Without locking, the cache store will see that the first request has some cached details (from the CachedResponse object) but won't know that it's still in flight. The cache store would then return the incomplete response to the user as if it were complete. The body would also be missing, since we only set it in the value once the write stream finishes.

Or, in other words, here is what could happen without locking:

  • request 1 is made to http://localhost
    • cache store writes the data of CachedResponse into memory (status code, headers, ...)
    • write stream continues
  • request 2 is made to http://localhost before request 1 finishes
    • cache store sees the contents from CachedResponse, thinks the result is complete, and returns it
    • client doesn't get the full response (only gets things set in CachedResponse: status code, headers, ...)
  • client waiting on request 1 eventually gets the full response

Another, smaller issue is that without locking we could end up with two write streams for the same resource. Currently, because we return undefined from get if the resource is locked, the cache handler ultimately will try to make another write stream for a locked resource. However, we again check whether the value is locked in createWriteStream and, if it is, we return undefined. (This is a smaller concern that just affects resource consumption. The different write streams would race to write to this.#values, but that operation is about as atomic as we can make it.)

Optionally, we could just not write the response into this.#values until we have the full thing (which is what was done here); however, the way it's set up currently will help us out when we figure out how we want to add back request deduplication in the future. This also doesn't solve the smaller issue of multiple write streams in the cache store for the same resource.

I think we should stick with the locking; however, either works.

ronag (Member, Author) commented Nov 15, 2024

I still think we should skip locking. Unless you have a strong opinion about it.
