
Bring back total_system_memory? #228

Closed
juris opened this issue Jan 18, 2019 · 11 comments

Comments

@juris (Contributor) commented Jan 18, 2019

I understand that you've dropped total_system_memory because node_exporter has the same functionality.

However, not every Redis instance comes with node_exporter. In my case it is AWS ElastiCache Redis. Yes, I know I can use CloudWatch exporter, but we have multiple ElastiCache Redises running and CloudWatch metrics cost money :(

Would you consider bringing total_system_memory metric back?

@oliver006 oliver006 changed the title Make total_system_memory great again Bring back total_system_memory? Jan 18, 2019
@oliver006 (Owner)

I think we should bring back total_system_memory (probably as total_system_memory_bytes). It was removed in https://github.com/oliver006/redis_exporter/pull/203/files#diff-a75393c3715eaef61ada7312397db7f2L78, and in retrospect I would have preferred to keep it, because there are more users like you who run e.g. AWS ElastiCache Redis.
Open a PR if you like, or I might be able to get to it in the next couple of days.
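For context, Redis reports this value in the `# Memory` section of `INFO`, so the exporter can pick it up from a response it already fetches. A minimal stdlib-only sketch of that parsing step, assuming a hypothetical helper name and a trimmed sample response (this is illustrative, not the exporter's actual code):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseTotalSystemMemory extracts the total_system_memory field from a
// Redis INFO response. INFO lines are "field:value" pairs separated by CRLF.
func parseTotalSystemMemory(info string) (float64, bool) {
	for _, line := range strings.Split(info, "\r\n") {
		if strings.HasPrefix(line, "total_system_memory:") {
			v, err := strconv.ParseFloat(strings.TrimPrefix(line, "total_system_memory:"), 64)
			if err != nil {
				return 0, false
			}
			return v, true
		}
	}
	return 0, false
}

func main() {
	// Trimmed example of what `redis-cli INFO memory` returns.
	sample := "# Memory\r\nused_memory:1014816\r\ntotal_system_memory:16724733952\r\ntotal_system_memory_human:15.58G\r\n"
	if b, ok := parseTotalSystemMemory(sample); ok {
		// The exporter would surface this with a _bytes suffix per
		// Prometheus naming conventions.
		fmt.Printf("redis_total_system_memory_bytes %.0f\n", b)
	}
}
```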

@oliver006 (Owner)

Closed by #229

@SuperQ (Contributor) commented Mar 28, 2019

I'm extremely annoyed by this. I understand the desire to have this metric, but however inconvenient the alternative is, this violates best practices for Prometheus exporters.

At a minimum, this should be behind a feature flag and disabled by default.

@SuperQ (Contributor) commented Mar 28, 2019

Might I also suggest using the cloudwatch_exporter? The costs are not that large.

@oliver006 (Owner)

Thanks for your feedback, @SuperQ. I understand that exporting total_system_memory is not in line with best practices.
But the metric is readily available from Redis, and not everyone can or wants to run another exporter just for that one metric. It's not just about the dollar cost but also the cognitive and maintenance overhead, and for me that outweighs the concerns about best practices.

@SuperQ (Contributor) commented Mar 29, 2019

The problem is: where does that convenience end? Should the redis exporter also include system CPU information?

For the majority of users that are already running the node_exporter, this exporter now exposes duplicate information.

This is why the Prometheus project is designed the way it is. Separate concerns for separate systems.

Please put this metric behind a default-disabled feature flag so that the majority of users don't get duplicate data.
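The pattern being asked for here is straightforward: a boolean flag that defaults to off, so the common case (node_exporter already scraping the host) produces no duplicate series. A minimal stdlib-only sketch, assuming a hypothetical flag name and metric-building function (the actual flag in redis_exporter may be named differently):

```go
package main

import (
	"flag"
	"fmt"
)

// Hypothetical flag; disabled by default so users who already run
// node_exporter don't get duplicate system-level series.
var includeSystemMetrics = flag.Bool("include-system-metrics", false,
	"Also export system-level metrics such as redis_total_system_memory_bytes")

// buildExposition returns the metric lines to serve on /metrics.
// System-level metrics are appended only when the flag is enabled.
func buildExposition(usedMemory, totalSystemMemory float64) []string {
	lines := []string{
		fmt.Sprintf("redis_memory_used_bytes %.0f", usedMemory),
	}
	if *includeSystemMetrics {
		lines = append(lines,
			fmt.Sprintf("redis_total_system_memory_bytes %.0f", totalSystemMemory))
	}
	return lines
}

func main() {
	flag.Parse()
	for _, l := range buildExposition(1014816, 16724733952) {
		fmt.Println(l)
	}
}
```

With the default, only the Redis-specific line is emitted; passing `-include-system-metrics` opts in to the extra series.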

@oliver006 (Owner)

As I said, I understand why you're raising this issue, I just don't think it's that big a problem.

Let me think about the feature flag suggestion.

@SuperQ (Contributor) commented Apr 9, 2019

I will see about adding the feature flag.

@oliver006 (Owner)

Here's a thought I had: there are a couple of open items that are all somewhat backward-incompatible (ConstMetrics, putting binary releases in a folder, a feature flag to enable this total_system_memory metric, maybe some general cleanup, anything else that comes to mind?). I was thinking about lumping them all into one release and making it 1.0.0.

That way it's clear that it's a new major version, that some things won't work the same as before, and that people can stay on 0.3x if they so desire.

I'm a little pressed for time, and the ConstMetrics issue especially might take me a bit, but I think that'd be the cleanest approach.

Thoughts?

@SuperQ (Contributor) commented Apr 9, 2019

Yea, I think moving things toward a 1.0 would be good.

@oliver006 (Owner)

👍

@oliver006 oliver006 mentioned this issue May 10, 2019