How to Conduct SSD Forensic | How do SSDs operate | What do we know?

Computer Forensics · Jay Ravtole · March 28, 2024


Do disk forensics approaches such as block-level analysis apply to solid-state devices and flash drives?

Flash drives completely eliminate the concept of a block or any rotating portion, so what happens now?

Do your delete, erase, wipe, and format concepts continue to function similarly? And so on.

Why is it needed?

With technology growing at a rapid pace, criminals are becoming more technologically advanced and destructive, and crime has risen in both rate and complexity. Digital forensics can help solve many different kinds of cases, and forensic investigators have long relied on traditional methods developed for hard drives. Today, however, people use solid-state drives, which compete with hard disk drives thanks to technological advances, and the approaches that successfully recovered evidence from hard drives do not work on solid-state drives.

As noted above, the question can be answered in two points:

  • Current best practices mostly apply to rotating magnetic media, such as conventional hard drives.
  • Solid state drives (SSDs) have unique behaviours and create new issues.

Both do the same thing (by offering a mechanism to store files on a computer system), but in different ways. Hard drives utilize magnetic spinning platters, whereas SSDs use flash memory chips.

Traditional Drives

A traditional spinning hard drive is a computer’s primary non-volatile storage medium. That is, information on it does not “vanish” when the system is turned off, as data in RAM does. A hard drive is essentially a set of metal platters with a magnetic coating that stores data such as historical reports, vintage movies, or your digital music collection. A read/write head on an arm accesses the data as the platters spin. Hard drives have been the most widely used storage devices for decades. The platters are their most crucial component: they are made of a durable material such as glass or aluminum and covered with a thin film of metal that can be magnetized and demagnetized. The read/write head does not touch the platter surface; a thin layer of air or fluid sits between them, reducing wear and tear.

A standard solid-state drive (SSD, commonly known as a solid-state disk) is a solid-state storage device that uses integrated circuit assemblies as memory to store data permanently. SSD technology is primarily based on electronic interfaces that are compatible with classic block input/output (I/O) hard disk drives (HDDs), allowing for easy replacement in popular applications.

What do we know?

New I/O interfaces, such as SATA Express and M.2, have been developed to meet the special requirements of SSD technology.

At this point in computer evolution, the operation of ordinary hard drives is widely understood: bits of data are recorded on magnetic media using repositionable recording R/W heads. The data can be retrieved randomly by moving the heads over a specific cylinder. All of these activities are easily controlled by drive control commands, which, for example, allow a sector of data to be read or written.

  • Data on traditional hard drives is often recoverable.
  • Formatting, quick formatting, deleting, erasing, and wiping all play distinct roles. A quick format does not wipe an HDD clean; no complete purging occurs.
  • Even a full one-pass overwrite does not guarantee permanent data loss.
  • The drive does not relocate data blocks on its own; changing the physical position of data requires OS involvement.
  • Incoming data is not optimized or modified; the drive stores exactly the raw data it receives.
  • These properties are common across all hard disks.
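These HDD properties are what make classic block-level analysis work: logical sector N of a disk image reliably corresponds to the same physical location every time. A minimal sketch of reading one sector from a raw `dd`-style image illustrates the idea (the image path and 512-byte sector size are assumptions, not values from this article):

```python
# Minimal sketch of block-level analysis on a raw disk image.
# 512-byte sectors are typical for HDDs; the path is hypothetical.
SECTOR_SIZE = 512

def read_sector(image_path, sector_number):
    """Read one sector from a raw disk image at a fixed offset.

    On an HDD image, logical sector N maps to the same data every
    time; an SSD's internal remapping gives no such guarantee.
    """
    with open(image_path, 'rb') as f:
        f.seek(sector_number * SECTOR_SIZE)
        return f.read(SECTOR_SIZE)
```

On an SSD, the same logical read still works, but the controller decides which physical flash page actually backs that sector, which is precisely what the next section examines.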

In comparison, flash chips are less widely understood. Complicating matters, flash memory implementation techniques cause data to be stored within the SSD in an order that appears to scatter the sectors of any file across arbitrary physical sectors (there is no internal linear mapping of sectors in an SSD). To learn about SSD forensics, we must first understand flash and how it works. When it comes to data recovery and analysis, we are familiar with hard disks; for SSDs, we must first understand how information is stored, what happens when data is edited, and what happens when data is lost. The answers to these questions are the key to understanding the difficulties in recovering data and how to overcome them.
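The absence of a linear mapping can be shown with a toy flash translation layer (FTL). All names and sizes below are hypothetical; real controllers are far more complex, but the core behavior is the same: the host addresses logical sectors, and the controller maps each one to whatever physical page happens to be free.

```python
# Toy flash translation layer (FTL): logical sectors map to arbitrary
# physical pages, so a file's data need not be physically contiguous.
class ToyFTL:
    def __init__(self, num_pages):
        self.free_pages = list(range(num_pages))  # physical pages still empty
        self.mapping = {}                         # logical sector -> physical page
        self.flash = {}                           # physical page -> stored data

    def write(self, logical_sector, data):
        # Flash cannot overwrite in place: take a fresh physical page,
        # repoint the logical sector at it, and mark the old copy stale.
        new_page = self.free_pages.pop(0)
        old_page = self.mapping.get(logical_sector)
        if old_page is not None:
            self.flash[old_page] = None  # stale copy lingers until erased
        self.mapping[logical_sector] = new_page
        self.flash[new_page] = data

    def read(self, logical_sector):
        return self.flash[self.mapping[logical_sector]]
```

Note the forensic implication: rewriting a logical sector lands on a different physical page, and the stale copy of the old data survives inside the flash until the controller erases it.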

How do SSDs operate?

Solid-state drives use memory known as “flash memory,” which is similar to RAM. Unlike RAM, however, which loses its contents when the computer is turned off or power is withdrawn, SSD memory remains intact through a power loss. SSDs send and receive data via an electrical grid. The grid is divided into sections called pages, which is where data is kept; pages are grouped together into blocks. An SSD can write only to empty pages within a block, whereas on a hard drive data can be written to any spot on the magnetic platter at any moment, allowing easy overwriting. SSDs cannot overwrite data in place; instead, they must find an empty page and write the data there. When enough pages in a block are marked as unused, the SSD reads the block’s remaining valid content into memory, erases the entire block, and then writes the retained content back to the freshly erased block, leaving no unused pages.
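The erase-and-rewrite cycle described above can be sketched in a few lines. The block size and page representation here are hypothetical simplifications: pages are either empty, stale (superseded data), or valid, and reclaiming space means copying the live pages out, erasing the whole block at once, and writing them back.

```python
# Sketch of the block-level erase cycle: an erase resets every page in
# the block, so valid pages must be copied out and rewritten afterward.
PAGES_PER_BLOCK = 4  # hypothetical; real blocks hold far more pages

def garbage_collect(block):
    """block: list of pages, each None (empty), ('stale', data) or ('valid', data).

    Returns the block after erasing it and rewriting only the valid pages.
    """
    valid = [p for p in block if p is not None and p[0] == 'valid']
    erased = [None] * PAGES_PER_BLOCK   # erase clears the whole block at once
    for i, page in enumerate(valid):    # rewrite only the still-live data
        erased[i] = page
    return erased
```

After collection, the stale pages, which may still hold forensically interesting remnants of deleted files, are gone for good.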

To completely understand how an SSD works and the forensic issues it presents, we must first understand its two most crucial components: the controller and the NAND flash memory. These components, along with a few others, are mounted on a printed circuit board (PCB) enclosed in the solid-state drive’s casing. The flash chips are the actual memory blocks.


The controller is an embedded processor that connects the flash memory components to the host computer. It runs the code provided by the SSD’s firmware, i.e., its micro operating system, to satisfy data requests from the host. The controller determines how the SSD performs and what functions it offers. Its most common functions and features are reading, writing, error checking, erasing, garbage collection, encryption, wear-leveling, overprovisioning, and RAISE (Redundant Array of Independent Silicon Elements).

Mainstream SSD controllers include the following electrical parts, often integrated within a single Integrated Circuit (IC):

  • Embedded processor, usually a 32-bit microcontroller
  • Electrically erasable firmware ROM
  • System RAM
  • Support for external RAM, commonly DDR/DDR2 SDRAM
  • Error Correction Code (ECC) circuitry
  • A flash component interface, typically a standard such as the Open NAND Flash Interface (ONFI)

Host electrical interfaces often include SATA, USB, SAS, or a mix.


EPROMs and EEPROMs were where it all started. In the early 1980s, before cell phones, tablets, and digital cameras existed, Dr Fujio Masuoka, a scientist working for Toshiba in Japan, was studying the limitations of EPROM and E2PROM chips.

An EPROM (Erasable Programmable Read Only Memory) is a type of memory chip that, unlike RAM, does not lose data when power is removed – in technical terms, it is non-volatile. It does this by storing data in “cells” made up of floating-gate transistors. EPROMs could have data put into them (known as programming), and this data could also be wiped using ultraviolet light so that new data could be written. This cycle of programming and erasing is known as the program/erase cycle (or PE cycle), and it is significant because it can only occur a finite number of times per device, resulting in a limited number of write operations. However, while the EPROM’s reprogrammability was valuable in laboratories, it was not a solution for consumer devices – incorporating an ultraviolet light source into a device would make it unwieldy and commercially unviable. What was needed was something both readable and writable without such constraints.

EEPROMs (E2PROMs), a subsequent development, could be erased with an electric field rather than light. This is what made the technology feasible for consumers: erasure could now readily take place inside a packaged product. Unlike EPROMs, E2PROMs could also erase specific bytes rather than the full device. However, E2PROMs had a drawback: each cell required at least two transistors, as opposed to the one transistor required by EPROMs. In other words, they stored less data and had a lower density.

EPROMs had higher density, while E2PROMs could be electrically reprogrammed; ideally we would have both. What if a new design could combine both benefits while avoiding their drawbacks? Dr Masuoka’s idea accomplished just that. It used only one transistor per cell, boosting density (the quantity of data stored) while still allowing electrical reprogramming. This new design is the basis of our contemporary SSDs.

The new design met this goal by allowing cells to be erased and programmed only in groups, rather than individually. This not only provides the density benefits of EPROMs and the electrically reprogrammable nature of E2PROMs, but also results in faster access times: issuing a single command to program or erase a large number of cells takes less time than issuing one command per cell.

However, two technologies rarely combine so neatly without a drawback, and that was true here as well: a single erase operation affects many more cells than a single program operation. It is this fact, above all, that produces the behavior we see from devices built on flash memory.
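The program/erase asymmetry can be captured in miniature. This is a hedged sketch of how NAND cells behave, not any vendor's interface: programming a page can only pull bits from 1 to 0, while restoring a 0 back to 1 requires erasing an entire block, which resets every byte in it to all 1s.

```python
# NAND's key asymmetry, in miniature: programming clears bits (1 -> 0)
# within a page, while erasing resets a whole block to all 1s at once.
ERASED = 0xFF  # a freshly erased flash byte is all 1s

def program(byte, value):
    # Programming can only pull bits low; it cannot restore a 0 to 1.
    return byte & value

def erase_block(block):
    # Erase operates on the entire block, never on a single byte.
    return [ERASED] * len(block)
```

This is why in-place overwrites are impossible on flash: once a bit is programmed to 0, only a block-wide erase can bring it back.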


The Flash Drives use two types of memory technology:

  • NAND-based Flash
  • NOR-Based Flash

Both are regarded as premier non-volatile flash memory technologies. NAND and NOR flash fulfill quite different design requirements due to their distinct characteristics. NOR provides quicker read speeds and random-access capability, making it ideal for code storage in devices like smartphones and fitness bands, but it has slower write and erase operations than NAND. NOR also has a lower bit density than NAND; because code storage typically requires lower-density memory than file storage, NOR’s larger cell size is not a concern in these applications.

NAND, on the other hand, has faster write/erase capability than NOR, though its random read speed is slower. NAND is more than adequate for the majority of consumer applications, including movies, music, documents, and games, and it is the preferred technology for file storage due to its faster write/erase speeds, higher achievable densities, and lower cost per bit than NOR. For these reasons, NAND is commonly used to store large amounts of data in devices such as USB flash drives, MP3 players, multi-function cell phones, and digital cameras. The SSDs we use are NAND flash-based.

Stochastic forensics

Stochastic forensics is a method to forensically reconstruct digital activity lacking artifacts, by analyzing emergent properties resulting from the stochastic nature of modern computers.

The way these modern SSDs operate leaves little room for optimistic affirmations.

With SSD drives, the only thing that can be presumed is that an investigator has access to the disk’s existing data. Retrieval, however, is an entirely different issue than with HDDs. Deleted files and data the suspect tried to remove (e.g., by formatting the drive in “Quick Format” mode) may be lost permanently in a matter of minutes. Even if the computer is turned off shortly after a destructive command is issued (for example, a few minutes after performing a Quick Format), there is no straightforward way to prevent the disk from destroying the data once power is restored.

The golden age of forensics is coming to an end. “Given the pace of development in SSD memory and controller technology, and the increasing growth of manufacturers, drives, and firmware versions, it will probably never be possible to remove or narrow this new grey area within the forensic and legal domain,” the researchers, who are from Murdoch University in Australia, wrote. “It seems possible that the golden age for forensic recovery and analysis of deleted data and deleted metadata may now be ending.”

SSD Obstacles

The issue with SSD storage devices is that they use flash memory chips, and flash memory chips suffer from two nearly disastrous flaws:

  1. Limited Read/Write Cycles: Over time, repeated write operations wear them down. Standard flash lifetimes are up to 100,000 cycles per block before failure occurs.
  2. Block-based Operations: All write operations to the flash memory chips must be performed on a block-by-block basis. Overwriting old data with new data is practically impossible (unless you have a lot of time). (A block within a flash memory chip is comparable to, but not identical to, a sector of information.) As a result, for optimal SSD performance, we must constantly have a fresh supply of empty SSD blocks ready to be filled with new data. It would be too time-consuming to always clear a block before writing (or overwriting) new data into it.
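Limited write endurance is why controllers perform wear-leveling: rather than reusing the same blocks, they steer new writes toward the least-worn free block so that erase cycles are spread evenly. The selection policy below is a deliberately simplified sketch with hypothetical counts; real firmware also weighs data temperature and free-space pressure.

```python
# Hedged sketch of wear-leveling: direct each new write to the free
# block with the fewest erase cycles so far. Counts are hypothetical.
def pick_block(erase_counts, free_blocks):
    """Return the free block that has endured the fewest erases."""
    return min(free_blocks, key=lambda b: erase_counts[b])
```

The forensic side effect is that even "idle" data gets shuffled between physical blocks over the drive's lifetime, entirely outside the operating system's knowledge or control.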
