
Logs not sent on application closing #43

Open
bmaury-bouet opened this issue Apr 22, 2020 · 3 comments

Comments

@bmaury-bouet

Description
I tried this plugin and I must say it works great, thank you!
But I've noticed that the last log in my application doesn't appear in ES.

Configuration
I use this plugin to provide log shipping for the build system that we use: Quickbuild (www.pmease.com)

Runtime (please complete the following information):

  • Module name and version: log4j2-elasticsearch-hc-1.4.1
  • Framework/server/module system used:
  • JVM: OpenJDK 8 HotSpot
  • OS: Windows 10

Additional context

@rfoltyns
Owner

Thank you for the good word! :) Happy to help.

Shipping last logs may be a bit tricky. Once Log4j2 Lifecycle.stop() is called, ElasticsearchAppender sets Log4j2 lifecycle state to STOPPED immediately, dropping all further log events on the floor. However, log4j2-elasticsearch lifecycle is still waiting for last logs to arrive from FailoverPolicy, delaying actual shutdown by AsyncBatchDelivery.shutdownDelayMillis (5 seconds by default!) to flush last batches properly. You can follow this process by setting the status logger level to INFO or DEBUG:

<Configuration status="INFO">
...
</Configuration>

So, you may have hit the 5000 ms window when the Root Logger (or maybe even the last open Console appender) was still able to log some messages, but ElasticsearchAppender was already closed, so the log was never enqueued for delivery.

You may try to set shutdownDelayMillis to 0, but some 'in-flight' logs may be lost anyway, depending on where they are on the call stack. If they reach the BulkEmitter queue before ElasticsearchAppender.stop() is called, at least one delivery attempt will take place.
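For reference, a minimal sketch of where shutdownDelayMillis would sit in an XML configuration. The surrounding element names (Elasticsearch, IndexName, HC) and the serverUris value are illustrative placeholders based on the HC module's examples; only shutdownDelayMillis is the attribute discussed above:

<Elasticsearch name="elasticsearchAsyncBatch">
    <IndexName indexName="log4j2"/>
    <AsyncBatchDelivery shutdownDelayMillis="0">
        <HC serverUris="http://localhost:9200"/>
    </AsyncBatchDelivery>
</Elasticsearch>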

Long story short: delivery guarantee on shutdown is difficult to achieve even when shutting down gracefully.

@bmaury-bouet
Author

Hi @rfoltyns ,

No problem :)
It makes sense. I just wanted to make sure whether this could be addressed or not.

Thanks a lot for the reply

@rfoltyns
Owner

No problem. I hope it was enough to help you make a decision on your next steps.

I had some more thoughts on logging on shutdown, though. Since logging should be torn down at the very end of the application's lifetime, IMHO any logging happening after the teardown should be considered obsolete. If it's critical for your business, can it happen a bit earlier?

Another option would be to switch to programmatic config (see example) and control the lifecycle of AsyncBatchDelivery and ItemSourceFactory on your own (that was one of the main reasons behind a separate, independent lifecycle in log4j2-elasticsearch). You could stop() right before the exit from your main thread, get more control during the shutdown, and even deliver some documents AFTER Log4j2 is stopped. You'd have to extend some classes and override lifecycle methods, but if you need these logs, maybe it's worth it.
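To make the ordering concrete, here is a self-contained sketch of the idea: the application owns the delivery component and calls stop() only after the last business log, so anything logged afterwards is dropped by design. StubBatchDelivery is a hypothetical stand-in for the library's AsyncBatchDelivery, not the actual class:

```java
import java.util.ArrayList;
import java.util.List;

public class ShutdownOrder {

    // Hypothetical stand-in for the library's AsyncBatchDelivery.
    static class StubBatchDelivery {
        private boolean stopped = false;
        private final List<String> delivered = new ArrayList<>();

        void deliver(String log) {
            if (!stopped) {          // logs arriving after stop() are dropped
                delivered.add(log);
            }
        }

        void stop() {                // application-owned flush-and-stop
            stopped = true;
        }

        int deliveredCount() {
            return delivered.size();
        }
    }

    public static void main(String[] args) {
        StubBatchDelivery delivery = new StubBatchDelivery();

        delivery.deliver("last business log");  // arrives before shutdown

        // Stop right before exiting the main thread, after the last log:
        delivery.stop();

        delivery.deliver("post-shutdown log");  // too late: dropped by design

        System.out.println("delivered=" + delivery.deliveredCount());
    }
}
```

Running it prints `delivered=1`: the log written before stop() is kept, the one written after is discarded, which is the ordering you would be taking control of with a programmatic config.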
