What is an In-Memory Database?

An in-memory database stores all of an organization's or individual's data in the main memory of a computer.

Data analysis on an in-memory database is fast compared to traditional databases, which use secondary storage devices such as hard disks or solid-state drives. The central processing unit (CPU) of a computer has direct access only to the data stored in main memory, so a computer can read and write data in main memory much faster than data on a secondary storage device. That is what makes in-memory databases so fast.

[Figure: in-memory database diagram]

Organizations use an in-memory database for applications that demand high-speed database operations. Real-time bidding for advertisement spots is one example. In real-time ad bidding, a bidding platform puts an advertisement spot up for auction while the user is loading a web page. The platform collects bid data from several bidders, selects a winning bid based on a set of rules, and displays the winning bidder's advertisement. All of this has to happen within milliseconds, while the web page is still loading, and an in-memory database lets the bidding platform perform these data operations with millisecond latency.
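A rough sketch of that flow, assuming the bid data is held in a toy in-memory store built on a Python dictionary (the class and function names are illustrative; a real bidding platform would typically use a production in-memory database such as Redis or Aerospike):

```python
import time

class BidStore:
    """Keeps all bids for open auctions in main memory for fast access."""

    def __init__(self):
        self._bids = {}  # auction_id -> list of (bidder_id, amount)

    def record_bid(self, auction_id, bidder_id, amount):
        self._bids.setdefault(auction_id, []).append((bidder_id, amount))

    def winning_bid(self, auction_id):
        bids = self._bids.get(auction_id, [])
        return max(bids, key=lambda b: b[1]) if bids else None


def run_auction(store, auction_id, incoming_bids, deadline_ms=100):
    """Accept bids until the deadline expires, then return the highest bid."""
    start = time.monotonic()
    for bidder_id, amount in incoming_bids:
        if (time.monotonic() - start) * 1000 > deadline_ms:
            break  # the page is loading; stop accepting new bids
        store.record_bid(auction_id, bidder_id, amount)
    return store.winning_bid(auction_id)


store = BidStore()
print(run_auction(store, "ad-slot-42", [("dsp-a", 1.20), ("dsp-b", 1.85)]))
```

Because every lookup and insert is a plain memory operation, the winning bid can be selected well inside the page-load deadline.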

Why Do Organizations Need an In-Memory Database?

With the advent of the Internet of Things (IoT) and the growth of cloud-based solutions, organizations need to process data in real time. Millions of devices like health and security monitors generate data every second, and it is crucial to analyze this data as it arrives. Organizations need high-performing database solutions to process their real-time data. In-memory databases help organizations improve their productivity by speeding up database operations, and they help them leverage the benefits of big data. If an organization needs one of the following, it should consider adopting an in-memory database:

  • The organization needs to leverage the real-time advantages of big data.
  • The organization collects data regularly and needs fast access.
  • Data persistence isn’t a big issue for the organization.

In-Memory Database vs. Disk-Based Database

  • An in-memory database allows faster reads and writes than a traditional, disk-based database. In a traditional database, each operation requires a read from or write to the disk, which involves an input/output (I/O) operation; this extra step slows disk-based databases down (see the timing sketch after this list).
  • Data in a traditional disk-based database is persistent, whereas the main memory used by an in-memory database is volatile, so data may be lost in the case of a system failure. In-memory databases use various techniques to overcome this volatility.
  • In traditional databases, the structures used to store data are complex, because random access to secondary storage is slow and data access from disk has to be made efficient. Traditional databases use data structures such as B-trees for this. In an in-memory database, the storage structures are simple because random access to main memory is already highly efficient.
  • Traditional disk-based databases can only run on systems with a secondary storage device, and many embedded devices don't have one. In-memory databases can run efficiently in the main memory of these embedded devices, which is why organizations working in the Internet of Things sector often choose them.
  • In-memory databases are more suitable for applications with smaller data requirements. Even though RAM sizes have grown significantly, they are still typically in the multi-gigabyte range. Traditional disk-based databases have no such restriction and can operate on multi-terabyte datasets.
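The following sketch gives a feel for the read/write gap, using Python's built-in sqlite3 module, which can run either entirely in memory (":memory:") or against a file on disk. The absolute numbers depend on the hardware; the point is only the relative difference when every transaction has to reach the disk.

```python
import os
import sqlite3
import tempfile
import time

# Compares insert latency for the same workload against an in-memory SQLite
# database and a disk-backed one. Timings vary by machine; only the relative
# gap matters here.

def time_inserts(db_path, rows=5_000):
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE readings (id INTEGER PRIMARY KEY, value REAL)")
    start = time.perf_counter()
    for i in range(rows):
        conn.execute("INSERT INTO readings (value) VALUES (?)", (i * 0.5,))
        conn.commit()  # commit each row so disk writes are actually forced
    elapsed = time.perf_counter() - start
    conn.close()
    return elapsed

fd, disk_path = tempfile.mkstemp(suffix=".db")
os.close(fd)
os.remove(disk_path)  # let SQLite create the file itself

print(f"in-memory: {time_inserts(':memory:'):.2f}s")
print(f"on-disk:   {time_inserts(disk_path):.2f}s")
```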

How Does an In-Memory Database Ensure Data Durability?

It is a prerequisite for every database to guarantee the ACID properties:

  • Atomicity
  • Consistency
  • Isolation
  • Durability

While in-memory databases guarantee the first three properties, additional steps need to be taken to ensure durability. Durability dictates that all data should remain intact even if there is a system or power failure. In-memory databases rely on volatile main memory, which loses its contents when the system is powered off. They therefore use various techniques to ensure that data isn't erased by a power loss or system failure.

Snapshotting

An in-memory database creates periodic snapshots of the database and stores them on non-volatile disk storage. A snapshot is a copy of the whole database at a particular point in time. While periodic snapshotting preserves data, it does not on its own ensure durability: if the system fails after a snapshot has been saved, all changes made since that snapshot are lost.
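A minimal sketch of the idea, assuming the database state fits in a Python dictionary (real databases use far more compact formats, and the function names here are illustrative only):

```python
import os
import pickle
import tempfile

def take_snapshot(state: dict, snapshot_path: str) -> None:
    """Write a point-in-time copy of the in-memory state to disk atomically."""
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(snapshot_path) or ".")
    with os.fdopen(fd, "wb") as f:
        pickle.dump(state, f)
        f.flush()
        os.fsync(f.fileno())           # make sure the bytes reach the disk
    os.replace(tmp_path, snapshot_path)  # atomic swap: the old snapshot stays valid until the new one is complete

def load_snapshot(snapshot_path: str) -> dict:
    """Recover the last snapshot after a restart; changes made after it are lost."""
    with open(snapshot_path, "rb") as f:
        return pickle.load(f)

db = {"user:1": {"name": "Ada"}, "user:2": {"name": "Lin"}}
take_snapshot(db, "db.snapshot")
print(load_snapshot("db.snapshot"))
```

The atomic rename is the key detail: a crash during snapshotting never corrupts the previous snapshot, only the changes made since it was written are at risk.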

Transaction Logging

In this method, the in-memory database keeps a record of every modification made to the database. The transaction log contains the details of every insert and update operation and is stored in a non-volatile file that can be used to recover the database after a failure. In databases that perform thousands of operations per minute, transaction logging adds overhead to the system's performance and storage capacity, so most in-memory databases keep the transaction log only until the next snapshot is taken and then discard it.
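A toy version of the technique, assuming a dictionary as the in-memory store and a JSON-lines file as the log (names are illustrative, not any particular product's API):

```python
import json
import os

# Every change is appended to a non-volatile log before it is applied in
# memory; replaying the log after a restart rebuilds the lost state.

LOG_PATH = "transactions.log"

def apply_and_log(db: dict, key: str, value, log_path: str = LOG_PATH) -> None:
    """Record the change durably, then apply it to the in-memory store."""
    with open(log_path, "a") as log:
        log.write(json.dumps({"op": "set", "key": key, "value": value}) + "\n")
        log.flush()
        os.fsync(log.fileno())  # make sure the entry survives a power failure
    db[key] = value

def recover(log_path: str = LOG_PATH) -> dict:
    """Rebuild the in-memory database by replaying the transaction log."""
    db = {}
    try:
        with open(log_path) as log:
            for line in log:
                entry = json.loads(line)
                if entry["op"] == "set":
                    db[entry["key"]] = entry["value"]
    except FileNotFoundError:
        pass  # no log yet, start empty
    return db

db = recover()
apply_and_log(db, "sensor:17", 21.4)
print(recover())  # after a crash, replay reproduces the same state
```

The per-write fsync is what makes the log durable, and it is also where the performance and storage overhead mentioned above comes from.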

Non-volatile Random-Access Memory (NVRAM)

Another way of ensuring data durability is to use non-volatile random-access memory, which retains data even after power is shut down. NVRAM is a popular way for in-memory databases to achieve durability; common forms include battery-backed static RAM and electrically erasable programmable read-only memory (EEPROM).


Essential Features of an In-Memory Database

With the increase in real-time applications, demand for in-memory databases has also increased. Multiple in-memory databases are available on the market, and it can be difficult to choose the right solution. The following are the essential features an in-memory database should possess to be useful in current market scenarios.

Ready for Cloud Migration

Most organizations are moving to the cloud and prefer the software-as-a-service (SaaS) business model, so an in-memory database also needs to support the cloud model. This means it should be offered as a database-as-a-service: developers and users draw on a shared pool of database resources on demand and return them to the pool when they are finished. Cloud-ready in-memory databases are fast, scalable, and flexible.

Ready for the Internet of Things

If an organization wants to explore the Internet of Things market, it needs a real-time database designed to support it. The Internet of Things involves huge numbers of sensors (like health and security monitors) continuously sending real-time data. The in-memory database should be ready to handle the millions of messages sent by sensors across the globe, and it should store and retrieve this data quickly enough for analytics systems to make fast decisions.
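One way to picture that ingestion pattern is a hypothetical in-memory store that keeps the latest reading and a short history per device (class and method names are invented for illustration):

```python
import time
from collections import deque

class SensorStore:
    """Holds sensor readings entirely in main memory for low-latency queries."""

    def __init__(self, history: int = 100):
        self._latest = {}        # device_id -> (timestamp, value)
        self._history = {}       # device_id -> bounded queue of recent readings
        self._max_history = history

    def ingest(self, device_id: str, value: float) -> None:
        now = time.time()
        self._latest[device_id] = (now, value)
        self._history.setdefault(
            device_id, deque(maxlen=self._max_history)
        ).append((now, value))

    def latest(self, device_id: str):
        return self._latest.get(device_id)

store = SensorStore()
store.ingest("heart-monitor-001", 72.0)
store.ingest("door-sensor-042", 1.0)
print(store.latest("heart-monitor-001"))
```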

ACID Compliant

Every database should fulfill the requirements for atomicity, consistency, isolation, and durability. These features are essential for data integrity. From a business point of view, having a database that complies with the ACID properties will help the organization to make correct decisions. Using a proprietary implementation of an in-memory database that does not have ACID properties may lead to inaccurate data and erroneous decisions. Make your choice carefully.

What Are the Advantages of In-Memory Databases?

Performance

In-memory databases store data in the computer's main memory, which the processor can access directly. The read and write operations of an in-memory database are therefore much faster than those of traditional disk-based databases.

Simplicity

Traditional databases have complex storage structures because they must optimize read/write operations on the secondary storage device, for example by placing data in contiguous blocks on disk so that retrieval is efficient. With an in-memory database, random access is already highly efficient, so the data structures used to store the data are quite simple.
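As a purely illustrative sketch, an in-memory key-value store can be little more than a hash table, with no block-oriented indexing such as a B-tree:

```python
class SimpleKVStore:
    """A minimal in-memory store: a plain hash table is often enough."""

    def __init__(self):
        self._data = {}          # direct in-memory hash table

    def put(self, key, value):
        self._data[key] = value  # O(1) on average, no I/O involved

    def get(self, key):
        return self._data.get(key)

kv = SimpleKVStore()
kv.put("session:abc", {"user": "ada", "cart": ["book"]})
print(kv.get("session:abc"))
```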

Use in Embedded Systems

An in-memory database doesn't need a secondary storage device, which is highly useful for today's mobile and embedded systems. Embedded devices like gaming consoles or smart TVs often cannot accommodate a secondary storage device, so they can use an in-memory database instead. This property is especially significant now that the Internet of Things is popular and many devices have no secondary storage at all.

What Are the Disadvantages of In-Memory Databases?

Data Durability

While random-access memory facilitates high-speed data operations, it also makes in-memory databases susceptible to data loss. The data in an in-memory database is held only in volatile memory, so in the case of a system crash there is a chance of losing it.

Solution: We have seen in the earlier section how in-memory databases use different techniques like snapshotting, transaction logs, and non-volatile random-access memory to solve the data durability issue.

Size of Data

As the size of the random-access memory of a computer is limited (typically in gigabytes), in-memory databases may not be able to handle huge data requirements.

Solution: This issue has multiple solutions. Connecting multiple computers in a grid or cluster and spreading the data across their combined main memory is one way of getting past the single-machine limit. In addition, modern 64-bit servers can address far more RAM than earlier machines, so the amount of data that fits on a single node keeps growing.
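A minimal sketch of the grid idea, simulating three nodes as Python dictionaries and hashing each key to decide which node's memory holds the record (node names and routing are placeholders; a real deployment would route requests over the network):

```python
import hashlib

NODES = ["node-a", "node-b", "node-c"]   # each node holds part of the data in RAM
shards = {node: {} for node in NODES}    # simulated per-node in-memory stores

def node_for(key: str) -> str:
    """Map a key to a node by hashing, so data spreads evenly across the grid."""
    digest = int(hashlib.sha1(key.encode()).hexdigest(), 16)
    return NODES[digest % len(NODES)]

def put(key: str, value) -> None:
    shards[node_for(key)][key] = value

def get(key: str):
    return shards[node_for(key)].get(key)

put("device:123", {"temp": 21.5})
print(node_for("device:123"), get("device:123"))
```

With this kind of sharding, adding more machines adds more usable main memory, which is how clustered in-memory databases scale beyond the RAM of a single computer.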