If MongoDB's WiredTiger cache consumes all the memory available to a container in Red Hat Mobile Application Platform (RHMAP), memory pressure and Nagios alerts follow. This article describes how to configure the WiredTiger cache size to prevent these high-memory-usage issues and the resulting alerts. Failover itself happens quickly (within seconds), and all the drivers we use to connect to MongoDB handle it internally by reconnecting to the new primary; a failover triggers an alert, and we then investigate what happened. We have also found that MongoDB scales easily, either vertically by adding more resources (memory, SSDs) or horizontally by adding new shards.
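As a sketch (the value shown is illustrative, not a recommendation), the WiredTiger cache can be capped in mongod.conf so it stays below the container's memory limit:

```yaml
# mongod.conf -- illustrative value, tune to your container's limit
storage:
  wiredTiger:
    engineConfig:
      # Cap the WiredTiger cache well below the container limit so the
      # OS, connections, and other mongod allocations still fit.
      cacheSizeGB: 1.5
```

The same setting is available as the `--wiredTigerCacheSizeGB` command-line option to mongod.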
I have the following scenario for an alert-notification application, for which we are going to use MongoDB:
Number of writes per day: 20 million
Data size of 1 month of data: 150 GB
Index (default primary index) size of 1 month of data: 15 GB
I want to keep 12 months of data. How much RAM and disk are recommended for my application? I will add whatever additional memory is needed to accommodate the working set.
Uday
1 Answer
The physical memory used by MongoDB depends on the data being accessed: MongoDB only loads into physical memory the data that is actually touched. Even though the entire database is memory-mapped, it is not all loaded into RAM (virtual memory vs. resident memory). The actual RAM used is determined by the number of OS pages required for the data and indexes your queries and commands touch.
The following link has some information on this:
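A minimal Python sketch of that virtual-vs-resident distinction (the file name and size are arbitrary): mapping a file reserves virtual address space, but pages only become resident when they are touched.

```python
import mmap
import os
import tempfile

# Illustrative demo: a 256 MB sparse file is memory-mapped, yet only
# the single page read below is faulted into physical RAM.
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "data.bin")
    size = 256 * 1024 * 1024                 # 256 MB sparse file
    with open(path, "wb") as f:
        f.truncate(size)                     # allocates no data blocks

    with open(path, "rb") as f:
        mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        # The whole 256 MB counts toward virtual size here, but only
        # the page containing byte 0 becomes resident when read.
        first_byte = mm[0]
        mm.close()

print(f"mapped {size} bytes, first byte = {first_byte}")
```

This is the same mechanism behind MongoDB's memory-mapped storage: mapped size can far exceed resident size until the data is read.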
If you are going to be storing approximately 165 GB per month (150 GB of data plus 15 GB of indexes) and need to retain that data for a 12-month period, then you are looking at a need for at least 2 TB of disk, possibly more, especially if the numbers grow beyond what is stated above.
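The arithmetic behind that estimate can be sketched as follows (illustrative only; it ignores replication, journal, and storage-engine overhead):

```python
# Back-of-envelope capacity estimate from the figures in the question.
DATA_GB_PER_MONTH = 150    # stated data size per month
INDEX_GB_PER_MONTH = 15    # stated index size per month
MONTHS_RETAINED = 12

total_gb = (DATA_GB_PER_MONTH + INDEX_GB_PER_MONTH) * MONTHS_RETAINED
index_gb = INDEX_GB_PER_MONTH * MONTHS_RETAINED

print(f"raw disk: {total_gb} GB (~{total_gb / 1000:.2f} TB)")
print(f"indexes alone: {index_gb} GB")
```

That yields 1980 GB of raw storage, hence "at least 2 TB"; the 180 GB of indexes is a rough lower bound on what you would want the working set's RAM budget to cover.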
Bill
Can somebody tell me why MongoDB doesn't consume more than roughly 300-400 MB of memory when there is 2 GB available and dataSize is currently about 4 GB with a bit under 5 million documents?
Even with queries that process a lot of documents, memory consumption doesn't spike. I have a few other processes running on the same server, but according to New Relic, memory consumption is never over 500 MB.
I don't know if it matters in this case, but the server is virtualized with KVM. We are using the 64-bit version, so no 32-bit limits apply.
Edit: cat /proc/meminfo
jimmy
1 Answer
If you run free -m or cat /proc/meminfo, you should be able to see the breakdown of memory usage. Buffers relates to how much RAM is used to cache disk blocks. Cached is similar to Buffers, except it holds cached pages from reading files.
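A small Python sketch of reading that breakdown (the sample text below is illustrative; on a live host you would read the real /proc/meminfo file instead):

```python
# Parse the Buffers/Cached breakdown from /proc/meminfo-style output.
SAMPLE = """\
MemTotal:        2048000 kB
MemFree:          120000 kB
Buffers:          350000 kB
Cached:           900000 kB
"""

def meminfo_kb(text):
    """Return {field: value in kB} for each 'Name: value kB' line."""
    info = {}
    for line in text.splitlines():
        name, rest = line.split(":", 1)
        info[name] = int(rest.split()[0])
    return info

mem = meminfo_kb(SAMPLE)
# Buffers + Cached is RAM the kernel is using for disk caching; it is
# reclaimable, but large values can crowd out mongod's resident set.
cache_mb = (mem["Buffers"] + mem["Cached"]) // 1024
print(f"kernel disk cache: {cache_mb} MB")
```

Comparing that figure against mongod's resident size shows how much RAM the page cache, rather than the database process itself, is holding.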
If these are high, it would indicate that your read-ahead settings are too large and MongoDB cannot use the remaining memory - degrading performance.
Nick