How is Valkey different from other key-value stores?
- Valkey follows a different evolution path among key-value stores: values can contain complex data types, with atomic operations defined on those data types. Valkey data types are closely related to fundamental data structures and are exposed to the programmer as such, without additional abstraction layers.
- Valkey is an in-memory but persistent-on-disk database, so it represents a different trade-off where very high write and read speed is achieved, with the limitation that data sets can't be larger than memory. Another advantage of in-memory databases is that the memory representation of complex data structures is much simpler to manipulate compared to the same data structures on disk, so Valkey can do a lot with little internal complexity. At the same time the two on-disk storage formats (RDB and AOF) don't need to be suitable for random access, so they are compact and always generated in an append-only fashion (even the AOF log rotation is an append-only operation, since the new version is generated from the copy of data in memory). However, this design also involves different challenges compared to traditional on-disk stores. Since the main data representation is in memory, Valkey operations must be carefully handled to make sure there is always an updated version of the data set on disk.
What's the Valkey memory footprint?
To give you a few examples (all obtained using 64-bit instances):
- An empty instance uses ~ 3 MB of memory.
- 1 million small keys -> string value pairs use ~ 85 MB of memory.
- 1 million keys -> hash values, each representing an object with 5 fields, use ~ 160 MB of memory.
Testing your use case is trivial. Use the valkey-benchmark utility to generate random data sets, then check the space used with the INFO memory command.
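For example, a quick check might look like this (a minimal sketch; the flag values are illustrative, not recommendations):

```
# Populate the instance with 1 million SETs over a random keyspace of
# 1 million keys, each with a 100-byte value:
valkey-benchmark -t set -n 1000000 -r 1000000 -d 100

# Then inspect the memory actually used:
valkey-cli INFO memory | grep used_memory_human
```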
64-bit systems will use considerably more memory than 32-bit systems to store the same keys, especially if the keys and values are small. This is because pointers take 8 bytes in 64-bit systems. But of course the advantage is that you can have a lot of memory in 64-bit systems, so in order to run large Valkey servers a 64-bit system is more or less required. The alternative is sharding.
Why does Valkey keep its entire dataset in memory?
In the past, developers experimented with Virtual Memory and other systems in order to allow datasets larger than RAM, but in the end we are very happy if we can do one thing well: data served from memory, disk used for storage. So for now there are no plans to create an on-disk backend for Valkey. Most of what Valkey is, after all, is a direct result of its current design.
If your real problem is not the total RAM needed, but the fact that you need to split your data set into multiple Valkey instances, please read the partitioning page in this documentation for more info.
Can you use Valkey with a disk-based database?
Yes, a common design pattern involves keeping very write-heavy, small data in Valkey (as well as data you need the Valkey data structures for, to model your problem in an efficient way), and putting big blobs of data into an SQL or eventually consistent on-disk database. Similarly, Valkey is sometimes used to keep in memory another copy of a subset of the same data stored in the on-disk database. This may look similar to caching, but it is actually a more advanced model, since normally the Valkey dataset is updated together with the on-disk DB dataset, rather than refreshed on cache misses.
How can I reduce Valkey's overall memory usage?
A good practice is to consider memory consumption when mapping your logical data model to the physical data model within Valkey. These considerations include using specific data types, key patterns, and normalization.
Beyond data modeling, there is more info in the Memory Optimization page.
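As a small illustration of such a consideration (the user:1000 key is a hypothetical example), grouping an object's fields into a single hash lets Valkey use a compact internal encoding for small hashes, instead of several top-level string keys:

```
# Store an object's fields in one hash rather than separate string keys:
valkey-cli HSET user:1000 name "Ada" email "ada@example.com"

# Small hashes are stored with a compact encoding (typically "listpack"):
valkey-cli OBJECT ENCODING user:1000
```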
What happens if Valkey runs out of memory?
Valkey has built-in protections that allow users to set a limit on memory usage, using the maxmemory option in the configuration file. If this limit is reached, Valkey will start to reply with an error to write commands (but will continue to accept read-only commands).
You can also configure Valkey to evict keys when the max memory limit is reached. See the eviction policy docs for more information on this.
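For example, a sketch of such a configuration (the 2gb limit and the LRU policy are illustrative choices, not recommendations):

```
# These directives can go in valkey.conf, or be set at runtime as shown here:
valkey-cli CONFIG SET maxmemory 2gb
valkey-cli CONFIG SET maxmemory-policy allkeys-lru   # evict least-recently-used keys
```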
Background saving fails with a fork() error on Linux?
Short answer: echo 1 > /proc/sys/vm/overcommit_memory
:)
And now the long one:
The Valkey background saving scheme relies on the copy-on-write semantics of the fork system call in modern operating systems: Valkey forks, creating a child process that is an exact copy of the parent. The child process dumps the DB on disk and finally exits. In theory the child should use as much memory as the parent, being a copy, but actually, thanks to the copy-on-write semantics implemented by most modern operating systems, the parent and child processes will share the common memory pages. A page will be duplicated only when it changes in the child or in
the parent. Since in theory all the pages may change while the child process is
saving, Linux can't tell in advance how much memory the child will take, so if
the overcommit_memory
setting is set to zero the fork will fail unless there is
as much free RAM as required to really duplicate all the parent memory pages.
If you have a Valkey dataset of 3 GB and just 2 GB of free
memory it will fail.
Setting overcommit_memory
to 1 tells Linux to relax and perform the fork in a
more optimistic allocation fashion, and this is indeed what you want for Valkey.
You can refer to the proc(5) man page for explanations of the available values.
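To make the setting survive a reboot, you can also place it in a sysctl configuration file (the file name below is an arbitrary choice):

```
# Apply immediately:
sudo sysctl -w vm.overcommit_memory=1

# Persist across reboots:
echo "vm.overcommit_memory = 1" | sudo tee /etc/sysctl.d/99-valkey-overcommit.conf
```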
Are Valkey on-disk snapshots atomic?
Yes, the Valkey background saving process is always forked when the server is outside of the execution of a command, so every command reported to be atomic in RAM is also atomic from the point of view of the disk snapshot.
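For instance, you can trigger a snapshot manually and verify when the last one completed:

```
# Ask the server to fork and save a snapshot in the background:
valkey-cli BGSAVE

# Unix timestamp of the last successful save:
valkey-cli LASTSAVE
```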
How can Valkey use multiple CPUs or cores?
It's not very frequent that CPU becomes your bottleneck with Valkey, as usually Valkey is either memory or network bound. For instance, when using pipelining a Valkey instance running on an average Linux system can deliver 1 million requests per second, so if your application mainly uses O(N) or O(log(N)) commands, it is unlikely to be CPU bound.
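You can observe the effect of pipelining with valkey-benchmark (the pipeline depth of 16 is an arbitrary example):

```
# Send 16 commands per round trip instead of 1:
valkey-benchmark -t get,set -P 16 -n 1000000 -q
```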
However, to maximize CPU usage you can start multiple instances of Valkey in the same box and treat them as different servers. At some point a single box may not be enough anyway, so if you want to use multiple CPUs you can start thinking of some way to shard earlier.
You can find more information about using multiple Valkey instances in the Partitioning page.
As of version 4.0, Valkey has started implementing threaded actions. For now this is limited to deleting objects in the background and blocking commands implemented via Valkey modules. For subsequent releases, the plan is to make Valkey more and more threaded.
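One user-visible example of this is lazy freeing: UNLINK removes a key like DEL does, but reclaims the memory of large values in a background thread (big:set below is a hypothetical key name):

```
# Non-blocking deletion of a potentially large key:
valkey-cli UNLINK big:set
```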
What is the maximum number of keys a single Valkey instance can hold? What is the maximum number of elements in a Hash, List, Set, and Sorted Set?
Valkey can handle up to 2^32 keys, and was tested in practice to handle at least 250 million keys per instance.
Every hash, list, set, and sorted set can hold 2^32 elements.
In other words your limit is likely the available memory in your system.
Why does my replica have a different number of keys than its primary instance?
If you use keys with a limited time to live (Valkey expires), this is normal behavior. This is what happens:
- The primary generates an RDB file on the first synchronization with the replica.
- The RDB file will not include keys already expired in the primary but which are still in memory.
- These keys are still in the memory of the Valkey primary, even if logically expired. They'll be considered non-existent, and their memory will be reclaimed later, either incrementally or explicitly on access. While these keys are not logically part of the dataset, they are accounted for in the INFO output and in the DBSIZE command.
- When the replica reads the RDB file generated by the primary, this set of keys will not be loaded.
Because of this, it's common for users with many expired keys to see fewer keys in the replicas. However, logically, the primary and replica will have the same content.
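You can observe the difference directly (the host names here are hypothetical); the primary may report a higher count because logically expired keys have not yet been reclaimed:

```
# Logical key count as reported on each node:
valkey-cli -h primary.example.com DBSIZE
valkey-cli -h replica.example.com DBSIZE
```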
Why did the Linux Foundation start the Valkey project?
Read about the history of Valkey.