# LRU Cache LeetCode Solution

Difficulty Level: Medium

Frequently asked in: Adobe, Amazon, Apple, Bloomberg, ByteDance, eBay, Facebook, Goldman Sachs, Google, Microsoft, Morgan Stanley, Oracle, PayPal, Salesforce, Snapchat, Splunk, Twilio, VMware, Walmart Labs, Zillow

## Question

Design a data structure that follows the constraints of a Least Recently Used (LRU) cache.

Implement the `LRUCache` class:

• `LRUCache(int capacity)` Initialize the LRU cache with positive size `capacity`.
• `int get(int key)` Return the value of the `key` if the key exists, otherwise return `-1`.
• `void put(int key, int value)` Update the value of the `key` if the `key` exists. Otherwise, add the `key-value` pair to the cache. If the number of keys exceeds the `capacity` from this operation, evict the least recently used key.

The functions `get` and `put` must each run in `O(1)` average time complexity.

### Example 1:

#### Input

```
["LRUCache", "put", "put", "get", "put", "get", "put", "get", "get", "get"]
[[2], [1, 1], [2, 2], [1], [3, 3], [2], [4, 4], [1], [3], [4]]
```

#### Output

```
[null, null, null, 1, null, -1, null, -1, 3, 4]
```

#### Explanation

```
LRUCache lRUCache = new LRUCache(2);
lRUCache.put(1, 1); // cache is {1=1}
lRUCache.put(2, 2); // cache is {1=1, 2=2}
lRUCache.get(1);    // return 1
lRUCache.put(3, 3); // LRU key was 2, evicts key 2, cache is {1=1, 3=3}
lRUCache.put(4, 4); // LRU key was 1, evicts key 1, cache is {4=4, 3=3}
lRUCache.get(3);    // return 3
lRUCache.get(4);    // return 4
```

#### Constraints:

• `1 <= capacity <= 3000`
• `0 <= key <= 10^4`
• `0 <= value <= 10^5`
• At most `2 * 10^5` calls will be made to `get` and `put`.

## LRU Cache LeetCode Solution

The `LinkedHashMap` method `removeEldestEntry` is very commonly used in cache data structures where the size of the cache is limited to a certain threshold. In such cases, `removeEldestEntry` can be overridden to automatically remove the eldest entry when the size exceeds the threshold (defined, for example, by a `MAX_ENTRIES` constant).

```
public LinkedHashMap(int initialCapacity,
                     float loadFactor,
                     boolean accessOrder) {
    super(initialCapacity, loadFactor);
    this.accessOrder = accessOrder;
}
```

An instance of HashMap has two parameters that affect its performance: initial capacity and load factor. The capacity is the number of buckets in the hash table, and the initial capacity is simply the capacity at the time the hash table is created. The load factor is a measure of how full the hash table is allowed to get before its capacity is automatically increased. When the number of entries in the hash table exceeds the product of the load factor and the current capacity, the hash table is rehashed (that is, internal data structures are rebuilt) so that the hash table has approximately twice the number of buckets.

As a general rule, the default load factor (.75) offers a good tradeoff between time and space costs. Higher values decrease the space overhead but increase the lookup cost (reflected in most of the operations of the HashMap class, including get and put). The expected number of entries in the map and its load factor should be taken into account when setting its initial capacity, so as to minimize the number of rehash operations. If the initial capacity is greater than the maximum number of entries divided by the load factor, no rehash operations will ever occur.
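The sizing rule above can be sketched in a few lines. Assuming the default load factor of 0.75 and taking the problem's `capacity` bound of 3000 as the expected entry count, an initial capacity above `expectedEntries / loadFactor` keeps the map below its resize threshold, so no rehash ever occurs:

```java
import java.util.HashMap;
import java.util.Map;

public class CapacitySizing {
    public static void main(String[] args) {
        int expectedEntries = 3000; // e.g. the maximum capacity from the constraints
        float loadFactor = 0.75f;
        // The map rehashes when size exceeds capacity * loadFactor, so a
        // capacity of at least expectedEntries / loadFactor avoids it entirely.
        int initialCapacity = (int) Math.ceil(expectedEntries / loadFactor);
        Map<Integer, Integer> map = new HashMap<>(initialCapacity, loadFactor);
        System.out.println(initialCapacity); // 4000
    }
}
```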

### Java Solution for LRU Cache LeetCode

```
class LRUCache extends LinkedHashMap<Integer, Integer> {
    private final int capacity;

    public LRUCache(int capacity) {
        // accessOrder = true makes iteration order follow access order,
        // so the eldest entry is always the least recently used one
        super(capacity, 0.75F, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
        // Evict the LRU entry as soon as the cache grows past capacity
        return size() > capacity;
    }

    public void put(int key, int value) {
        super.put(key, value);
    }

    public int get(int key) {
        // getOrDefault counts as an access in access-order mode,
        // so a hit moves the entry to the most-recently-used position
        return super.getOrDefault(key, -1);
    }
}
```
```
Your input:
["LRUCache","put","put","get","put","get","put","get","get","get"]
[[2],[1,1],[2,2],[1],[3,3],[2],[4,4],[1],[3],[4]]

Output:
[null,null,null,1,null,-1,null,-1,3,4]

Expected:
[null,null,null,1,null,-1,null,-1,3,4]
```
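As a quick sanity check, the example sequence from the problem statement can be replayed against the class locally; the `Main` wrapper below is a hypothetical driver, not part of the LeetCode harness:

```java
import java.util.LinkedHashMap;
import java.util.Map;

class LRUCache extends LinkedHashMap<Integer, Integer> {
    private final int capacity;

    public LRUCache(int capacity) {
        super(capacity, 0.75F, true); // accessOrder = true for LRU behavior
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
        return size() > capacity;
    }

    public void put(int key, int value) {
        super.put(key, value);
    }

    public int get(int key) {
        return super.getOrDefault(key, -1);
    }
}

public class Main {
    public static void main(String[] args) {
        LRUCache c = new LRUCache(2);
        c.put(1, 1);
        c.put(2, 2);
        System.out.println(c.get(1)); // 1
        c.put(3, 3);                  // evicts key 2
        System.out.println(c.get(2)); // -1
        c.put(4, 4);                  // evicts key 1
        System.out.println(c.get(1)); // -1
        System.out.println(c.get(3)); // 3
        System.out.println(c.get(4)); // 4
    }
}
```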