For a very long time, I had a doubt in my mind about whether an "in-memory" cache is a justified solution when the number of items to be cached is quite high. I guess I had my own reasons for this, primarily that I had never tried to put so much data into a process-level cache.
Of course, if it were 2005, we might still be running on 32-bit hardware, where memory constraints mattered as much as optimized processing logic. But now that memory is cheap, it is usually fine to trade memory for better performance.
So, I ran an experiment.
// Requires: using System; using System.Diagnostics; using System.Linq; using System.Runtime.Caching;
var memoryCache = MemoryCache.Default;

// Baseline: entry count and the process's private memory before any writes.
Console.WriteLine(memoryCache.Count());
Console.WriteLine(Process.GetCurrentProcess().PrivateMemorySize64);

Stopwatch sw = new Stopwatch();
sw.Start();
for (int i = 0; i < 1000 * 1000; i++)
{
    // Each entry is a small Tuple<string, bool> with a one-day absolute expiration.
    memoryCache.Add("key" + i.ToString(), new Tuple<string, bool>("key" + i.ToString(), true), new CacheItemPolicy() { AbsoluteExpiration = DateTime.UtcNow.AddDays(1) });
}
sw.Stop();
Console.WriteLine($"Time taken to write 1M values to cache {sw.ElapsedMilliseconds}");
Console.WriteLine(memoryCache.Count());
Console.WriteLine(Process.GetCurrentProcess().PrivateMemorySize64);

// Grab the default instance again and read everything back.
memoryCache = MemoryCache.Default;
Console.WriteLine(memoryCache.Count());
sw.Restart();
for (int i = 0; i < 1000 * 1000; i++)
{
    var val = memoryCache.Get("key" + i.ToString()) as Tuple<string, bool>;
    if (val == null)
    {
        Console.WriteLine("Found null");
    }
}
sw.Stop();
Console.WriteLine($"Time taken to read 1M values from cache {sw.ElapsedMilliseconds}");
I wanted to check the impact on memory usage and whether there is any apparent issue with keeping 1M records in an in-memory cache.
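PrivateMemorySize64 covers the whole process, so it also includes GC overhead and any unmanaged allocations, not just the cached objects. As a rough cross-check, one could also sample the managed heap before and after the write loop; below is a minimal sketch, where MeasureManagedBytes is just an illustrative helper name, not part of the experiment above.

// A rough managed-heap probe; MeasureManagedBytes is only an illustrative name.
static long MeasureManagedBytes()
{
    // Force a full collection so the figure reflects live objects only.
    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();
    return GC.GetTotalMemory(forceFullCollection: true);
}

// Usage around the write loop:
// long before = MeasureManagedBytes();
// ... add the 1M entries ...
// long after = MeasureManagedBytes();
// Console.WriteLine($"Managed heap grew by {(after - before) / (1024 * 1024)} MB");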
Well, there were no apparent issues.
0
19460096
Time taken to write 1M values to cache 4167
1000000
757383168
1000000
Time taken to read 1M values from cache 676
As you can see, there is not much delay in writing or reading that many records: roughly 4.2 seconds to write 1M entries and well under a second to read them all back. Memory usage definitely jumps, from about 19 MB to about 757 MB, which works out to somewhere around 700 to 750 bytes per cached entry including the key strings, tuples, and cache bookkeeping, but I guess we planned to use more memory anyway.
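One thing worth noting: MemoryCache.Default grows until the runtime's own trimming heuristics kick in. If that 750 MB jump were a concern, a cache instance can be created with an explicit memory cap; the sketch below assumes the standard System.Runtime.Caching configuration keys (CacheMemoryLimitMegabytes, PollingInterval) and a hypothetical 256 MB limit. With a cap in place, entries may be evicted under memory pressure, so the "Found null" branch in the read loop would become a real possibility.

using System;
using System.Collections.Specialized;
using System.Runtime.Caching;

class BoundedCacheSketch
{
    static void Main()
    {
        // Cap the cache's own memory estimate at ~256 MB and enforce the limit every 30 seconds.
        var config = new NameValueCollection
        {
            { "CacheMemoryLimitMegabytes", "256" },
            { "PollingInterval", "00:00:30" }
        };

        using (var boundedCache = new MemoryCache("bounded", config))
        {
            boundedCache.Add("key1",
                new Tuple<string, bool>("key1", true),
                new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.UtcNow.AddDays(1) });

            Console.WriteLine(boundedCache.GetCount());
        }
    }
}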