From 188c1c4158db53fd94555a9423732c21c8610c0d Mon Sep 17 00:00:00 2001 From: Jessica Kerr Date: Mon, 5 Aug 2024 16:49:27 -0500 Subject: [PATCH] Update deployed link --- README.md | 54 ++++++++++++++++++++++++++++-------------------------- 1 file changed, 28 insertions(+), 26 deletions(-) diff --git a/README.md b/README.md index 086eb61..7d17739 100644 --- a/README.md +++ b/README.md @@ -1,9 +1,8 @@ - # O11yDay Meminator This contains a sample application for use in the Observability Day workshops. -See it in action: [o11yday.jessitron.honeydemo.io](http://o11yday.jessitron.honeydemo.io) +See it in action: [meminator.honeydemo.io](http://meminator.honeydemo.io) It generates images by combining a randomly chosen picture with a randomly chosen phrase. @@ -11,7 +10,7 @@ It generates images by combining a randomly chosen picture with a randomly chose 1. Hello! Welcome to Advanced Instrumentation with OpenTelemetry. A few [slides](https://docs.google.com/presentation/d/1jNJCuns5wrL9sOJfT8yAaQ5HR5bc_e1d6i88oGspe2k/edit?usp=sharing) 2. Look at this app. It has default instrumentation. -3. Run this app. +3. Run this app. 4. Connect this app to Honeycomb. 5. See what the traces look like. 6. Improve the traces. @@ -71,25 +70,27 @@ The app begins with automatic instrumentation installed. Test the app, look at t Here's my daily for looking at the most recent traces: -* log in to Honeycomb -* (you should be in the same environment where you got the API key; if you're not sure, there's [my little app](https://honeycomb-whoami.glitch.me) that calls Honeycomb's auth endpoint and tells you.) +- log in to Honeycomb +- (you should be in the same environment where you got the API key; if you're not sure, there's [my little app](https://honeycomb-whoami.glitch.me) that calls Honeycomb's auth endpoint and tells you.) See the data: -* Click New Query on the left -* At the top, it says 'New Query in <dropdown>' -- click the dropdown and pick the top option, "All datasets in ..." 
-* click 'Run Query'. Now you have a count of all events (trace spans, logs, and metrics). If it's 0, you're not getting data :sad:
-* If you want to take a look at all the data, click on 'Events' under the graph.
+- Click New Query on the left
+- At the top, it says 'New Query in <dropdown>' -- click the dropdown and pick the top option, "All datasets in ..."
+- Click 'Run Query'. Now you have a count of all events (trace spans, logs, and metrics). If it's 0, you're not getting data :sad:
+- If you want to take a look at all the data, click on 'Events' under the graph.
 
 Get more info (optional):
-* change the time to 'Last 10 minutes' to zoom in on just now.
-* In the query, click under 'GROUP BY' and add 'service.name' as a group-by field. GROUP BY means "show me the values please."
-* 'Run Query' again. (alt-enter also does it)
-* Now see the table under the graph. You should see all 4 services from this app listed.
+
+- Change the time to 'Last 10 minutes' to zoom in on just now.
+- In the query, click under 'GROUP BY' and add 'service.name' as a group-by field. GROUP BY means "show me the values please."
+- 'Run Query' again. (alt-enter also does it)
+- Now see the table under the graph. You should see all 4 services from this app listed.
 
 Get to a trace:
-* In the graph, click on one of the lines. It brings up a popup menu.
-* In the menu, click "View Trace"
+
+- In the graph, click on one of the lines. It brings up a popup menu.
+- In the menu, click "View Trace"
 
 This should take you to a trace view!
 
@@ -103,7 +104,7 @@ While that's going, show them the few intro slides.
 
 Then walk them through getting an API Key in Honeycomb. I tell them to create a new team, unless they already have a personal team for play.
 
-Tell them to put the API key in .env, and then restart the app. If they see traces in Honeycomb, victory.
+Tell them to put the API key in .env, and then restart the app. If they see traces in Honeycomb, victory.
 
 ### Flow through improving the traces
 
@@ -116,14 +117,15 @@ There's more here than fits in 1.5 hours.
 2. Notice, maybe that some fail, or maybe that some are slower than others.
 3. See that you don't have important data like "which image was it?"
 4. Go to backend-for-frontend-python/server.py and **add attributes to the current span**.
-4. Rerun just that service: `./run backend-for-frontend`
-5. Maybe notice that there are some metrics coming in, in unknown_metrics. Look at the events in them, at the fields they have available. They're useless. Talk about how these would be better as attributes on the spans.
-5. Remove the metrics in backend-for-frontend-python/Dockerfile. This is an opportunity to talk about how otel is added from the outside in python.
-6. In meminator-python/server.py, un-comment-out the CustomSpanProcessor bit at the top. Show how the custom processor is adding the free space in /tmp, which it measures at most 1x/sec.
-7. Maybe notice (in the traces) that there's a blank space in meminator. After it downloads the file, what does it do?
-8. in meminator-python/server.py, create a span around the subprocess call.
+5. Rerun just that service: `./run backend-for-frontend`
+6. Maybe notice that there are some metrics coming in, in unknown_metrics. Look at the events in them, at the fields they have available. They're useless. Talk about how these would be better as attributes on the spans.
+7. Remove the metrics in backend-for-frontend-python/Dockerfile. This is an opportunity to talk about how OpenTelemetry is added from the outside in Python.
+8. In meminator-python/server.py, uncomment the CustomSpanProcessor bit at the top. Show how the custom processor adds the free space in /tmp to each span, measuring it at most once per second.
+9. Maybe notice (in the traces) that there's a blank space in meminator. After it downloads the file, what does it do?
+10. In meminator-python/server.py, create a span around the subprocess call.
 #### Node
+
 There are different problems in node.js
 
 1. change PROGRAMMING_LANGUAGE in .env to nodejs
@@ -131,10 +133,10 @@ There are different problems in node.js
 3. Push go (or run the loadgen) and look at traces.
 4. The obvious major problem is that the traces aren't connected. Propagation is broken.
 5. Drill from backend-for-frontend-nodejs/index.ts into fetchFromService. It uses `fetch`. The autoinstrumentation (right now) doesn't include this.
-4. The code is there to implement propagation manually, in case you want to talk about that. But the library exists now
-5. Go to backend-for-frontend-nodejs/tracing.ts, and add UndiciInstrumentation. Rerun the service & see connected traces. Notice the values in span.kind
-6. Maybe notice that there's a crapton of blahblah from fs-instrumentation. Show them library.name
-7. Disable the fs-instrumentation in backend-for-frontend-nodejs/tracing.ts. Note that i leave it on for meminator because meminator does meaningful stuff in the filesystem. It's already off for phrase-picker and image-picker.
+6. The code is there to implement propagation manually, in case you want to talk about that. But the library exists now.
+7. Go to backend-for-frontend-nodejs/tracing.ts, and add UndiciInstrumentation. Rerun the service & see connected traces. Notice the values in span.kind.
+8. Maybe notice that there's a crapton of blahblah from fs-instrumentation. Show them library.name.
+9. Disable the fs-instrumentation in backend-for-frontend-nodejs/tracing.ts. Note that I leave it on for meminator because meminator does meaningful stuff in the filesystem. It's already off for phrase-picker and image-picker.
 
 The story of a feature flag: nodejs has a feature flag around the imagemagick call, with a separate implementation that runs 25% of the time (or whatever is set in featureFlags.ts). We want to know whether the new way is faster.
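The tracing.ts changes for the Node flow (add UndiciInstrumentation, disable fs-instrumentation) look roughly like this. A sketch against the OpenTelemetry JS SDK packages, not the repo's actual file -- the real tracing.ts also configures an exporter, resource attributes, etc.:

```typescript
// Sketch: connect fetch()-based traces and quiet the fs noise.
import { NodeSDK } from "@opentelemetry/sdk-node";
import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node";
import { UndiciInstrumentation } from "@opentelemetry/instrumentation-undici";

const sdk = new NodeSDK({
  instrumentations: [
    // fetch() in Node is built on undici, which the auto-instrumentations
    // bundle didn't cover at the time -- adding this creates client spans
    // and propagates trace context, so the traces connect.
    new UndiciInstrumentation(),
    getNodeAutoInstrumentations({
      // fs spans are noise for this service (filter by library.name to
      // see them). Leave fs on for meminator, which does meaningful
      // filesystem work.
      "@opentelemetry/instrumentation-fs": { enabled: false },
    }),
  ],
});

sdk.start();
```

Disabling an instrumentation by its package-name key in the `getNodeAutoInstrumentations` config is the supported pattern, which is why the same toggle is already off for phrase-picker and image-picker.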