One thing I've always missed in Scrapoxy is the ability to support multiple clients. It would be great to see it implemented here.
In Scrapoxy you could set (min, required, max) scaling, and it works well as long as there is just one client application using the proxy cloud. But as soon as you want to share the same cloud between multiple applications, you run into the problem that they conflict with each other: e.g. when one application has finished crawling, it can't just downscale the cloud, because it's still being used by another application.
Ideally this requires centralized logic that manages requests from multiple client applications. It would need to track the most recently requested scaling for each client and combine them. A very simple approach would be to take the maximum of the min/required/max parameters across all clients and use that as the scaling. That way, the cloud would only downscale once the last client sends a downscale request. You can imagine the logic becoming more complex, though, e.g. when one client asks to destroy an instance that another client is still using.
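A minimal sketch of that max-combining idea in Python (the `Scaling` class, the `combine` helper, and the client IDs are made up for illustration; none of this is existing Scrapoxy API):

```python
from dataclasses import dataclass

@dataclass
class Scaling:
    min: int
    required: int
    max: int

def combine(requests: dict[str, Scaling]) -> Scaling:
    """Combine each client's most recent scaling request by taking the
    per-parameter maximum, so the cloud only shrinks once every client
    has asked for less."""
    return Scaling(
        min=max(s.min for s in requests.values()),
        required=max(s.required for s in requests.values()),
        max=max(s.max for s in requests.values()),
    )

# Two clients: one still crawling, one finished and asking to scale to zero.
requests = {
    "crawler-a": Scaling(min=2, required=5, max=10),
    "crawler-b": Scaling(min=0, required=0, max=0),  # finished
}
print(combine(requests))  # Scaling(min=2, required=5, max=10)
```

Here crawler-b's downscale request has no effect until crawler-a also scales down, which is exactly the "last client wins the downscale" behavior described above.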
As an extra feature, it should ideally handle stale clients: if a client has not communicated for a while, its requirements should be disregarded, to avoid leaving dangling instances when a client unexpectedly disappears.