What is the first step required in preparing a computer for forensics investigation?

Forensics Process

Leighton R. Johnson III, in Computer Incident Response and Forensics Team Management, 2014

There are many methods and techniques that define the steps of a forensics investigation; however, in my experience performing investigations and teaching higher-level forensics courses, the following methodology works best. The basic steps of a forensics investigation are as follows:

1.

Prepare—Specific forensics training, overarching corporate policies and procedures, as well as practice investigations and examinations will prepare you for an “event.” Specialized forensics or incident handling certifications are considered of great value for forensics investigators.

2.

Identify—When approaching an incident scene, review what is occurring on the computer screen. If data is being deleted, pull the power plug from the wall; otherwise, perform real-time capture of the system's volatile data first.

3.

Preserve—Once the system-specific volatile data is retrieved, turn off the machine, remove it from the scene, and power it up in an isolated environment. Perform a full system bit-stream image capture of the data on the machine, remembering to hash the image against the original data for verification purposes (a hashing sketch follows this list).

4.

Select—Once you have a verified copy of the available data, start the investigation by selecting potential evidence files, datasets, and locations where data could be stored. Isolate event-specific data from normal system data for further examination.

5.

Examine—Look for potential hidden storage locations of data such as slack space, unallocated space, and in front of File Allocation Table (FAT) space on hard drives. Remember to look in registry entries or root directories for additional potential indicators of data storage activity.

6.

Classify—Evaluate data in potential locations for relevance to the current investigation. Is the data directly related to the case, does it support the events of the case, or is it unrelated to the case?

7.

Analyze—Review data from relevant locations. Ensure data is readable, legible, and relevant to the investigation. Evaluate it for type of evidence: Is it direct evidence of the alleged issue, or is it merely related to the issue?

8.

Present—Correlate all data reviewed to the investigation papers (warrants, corporate documents, etc.). Prepare the data report for presentation, either in a court of law or to corporate officers.
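To make the hash verification in step 3 concrete, the following is a minimal sketch in Python (standard library only); the device path and image file name are hypothetical placeholders, and reading a raw device typically requires elevated privileges.

import hashlib

def sha256_of(path, chunk_size=1 << 20):
    # Read in chunks so very large evidence images do not exhaust memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical paths: the original source device and the captured bit-stream image.
original_hash = sha256_of("/dev/sdb")
image_hash = sha256_of("evidence/disk.img")

if original_hash == image_hash:
    print("Verified: image matches the original source,", image_hash)
else:
    print("MISMATCH: the image does not match the original source")

Recording both digests in the case notes at acquisition time lets any later examiner rerun the same calculation and confirm that the working copy is still unaltered.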


URL: https://www.sciencedirect.com/science/article/pii/B9781597499965000108

Portable Virtualization, Emulators, and Appliances

Diane Barrett, Gregory Kipper, in Virtualization and Forensics, 2010

Publisher Summary

This chapter discusses some of the various virtual environments that can be run on portable devices such as thumb drives, iPods, and cell phones. The use of virtualization is growing in the individual use market, as many corporate organizations use devices such as the IronKey. Virtualized environments, especially those run from a removable drive, can make forensics investigation more difficult. Technological advances in virtualization tools can transform removable media into a portable personal computer (PC) that can be carried around in a shirt pocket or on a lanyard around a neck. Running operating systems (OSes) and applications in this fashion leaves very little evidence on the host system. Currently most portable environments save at least some part of the information in the system registry or configuration files. In “Portable Desktop Applications Based on P2P Transportation and Virtualization,” Zhang, Wang, and Hong propose an application that can work without installation by making a two-part application. One part is portable and enables the application to run in a sandbox where it can access and store the data associated with it, and the second part can run in an isolation mode. Other similar environments include Feather-weight Virtual Machine and Progressive Deployment System.


URL: https://www.sciencedirect.com/science/article/pii/B9781597495578000047

Information Security Essentials for IT Managers

Albert Caballero, in Computer and Information Security Handbook, 2009

Security Monitoring Mechanisms

Security monitoring involves real-time or near-real-time monitoring of events and activities happening on all your organization’s important systems at all times. To properly monitor an organization for technical events that can lead to an incident or an investigation, an organization usually uses a security information and event management (SIEM) and/or log management tool. These tools are used by security analysts and managers to filter through tons of event data and to identify and focus on only the most interesting events.

Understanding the regulatory and forensic impact of event and alert data in any given enterprise takes planning and a thorough understanding of the quantity of data the system will be required to handle. The better logs can be stored, understood, and correlated, the better the possibility of detecting an incident in time for mitigation. In this case, what you don’t know will hurt you. Responding to incidents, identifying anomalous or unauthorized behavior, and securing intellectual property have never been more important. Without a solid log management strategy it becomes nearly impossible to have the necessary data to perform a forensic investigation, and without monitoring tools, identifying threats and responding to attacks against confidentiality, integrity, or availability become much more difficult. For a network to be compliant and an incident response or forensics investigation to be successful, it is critical that a mechanism be in place to do the following:

Securely acquire and store raw log data for as long as possible from as many disparate devices as possible while providing search and restore capabilities of these logs for analysis.

Monitor interesting events coming from all important devices, systems, and applications in as near real time as possible.

Run regular vulnerability scans on your hosts and devices and correlate these vulnerabilities to intrusion detection alerts or other interesting events, identifying high-priority attacks as they happen and minimizing false positives.

SIEM and log management solutions in general can assist in security information monitoring (see Figure 14.21) as well as regulatory compliance and incident response by:


Figure 14.21. Security monitoring.

Aggregate and normalize event data from unrelated network devices, security devices, and application servers into usable information (a normalization sketch follows this list).

Analyze and correlate information from various sources such as vulnerability scanners, IDS/IPS, firewalls, servers, and so on, to identify attacks as soon as possible and help respond to intrusions more quickly.

Conduct network forensic analysis on historical or real-time events through visualization and replay of events.

Create customized reports for better visualization of your organizational security posture.

Increase the value and performance of existing security devices by providing a consolidated event management and analysis platform.

Improve the effectiveness and help focus IT risk management personnel on the events that are important.

Meet regulatory compliance and forensics requirements by securely storing all event data on a network for long-term retention and enabling instant accessibility to archived data.
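As an illustration of the aggregation and normalization capability listed first above, here is a minimal sketch in Python; the two raw log formats, field names, and sample events are hypothetical simplifications of what a real SIEM parses.

import re
from datetime import datetime, timezone

# Hypothetical raw events from two unrelated sources.
RAW_EVENTS = [
    ("firewall", "2024-03-01 10:15:02 DENY src=10.0.0.5 dst=203.0.113.9 port=445"),
    ("webserver", '10.0.0.7 - - [01/Mar/2024:10:15:03 +0000] "GET /admin HTTP/1.1" 403'),
]

def normalize(source, line):
    # Map a raw log line onto a single common schema used for correlation.
    if source == "firewall":
        m = re.match(r"(\S+ \S+) (\w+) src=(\S+) dst=(\S+) port=(\d+)", line)
        ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
        return {"time": ts, "source": source, "action": m.group(2),
                "src_ip": m.group(3), "dst_ip": m.group(4)}
    if source == "webserver":
        m = re.match(r'(\S+) .*\[([^\]]+)\] "(\S+) (\S+)[^"]*" (\d+)', line)
        ts = datetime.strptime(m.group(2), "%d/%b/%Y:%H:%M:%S %z")
        return {"time": ts, "source": source, "action": m.group(5),
                "src_ip": m.group(1), "dst_ip": None}
    raise ValueError(f"unknown source: {source}")

# Normalized events from disparate devices can now be sorted and correlated on one timeline.
events = sorted((normalize(s, l) for s, l in RAW_EVENTS), key=lambda e: e["time"])
for e in events:
    print(e)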


URL: https://www.sciencedirect.com/science/article/pii/B9780123743541000145

Information Security Essentials for IT Managers

Albert Caballero, in Computer and Information Security Handbook (Second Edition), 2013

Security Monitoring Mechanisms

Security monitoring involves real-time or near-real-time monitoring of events and activities happening on all your organization’s important systems at all times. To properly monitor an organization for technical events that can lead to an incident or an investigation, an organization usually uses a security information and event management (SIEM) and/or log management tool. These tools are used by security analysts and managers to filter through tons of event data and to identify and focus on only the most interesting events.

Understanding the regulatory and forensic impact of event and alert data in any given enterprise takes planning and a thorough understanding of the quantity of data the system will be required to handle (see checklist: “An Agenda for Action When Implementing a Critical Security Mechanism”). The better logs can be stored, understood, and correlated, the better the possibility of detecting an incident in time for mitigation. In this case, what you don’t know will hurt you. Responding to incidents, identifying anomalous or unauthorized behavior, and securing intellectual property have never been more important.

An Agenda for Action when Implementing a Critical Security Mechanism

Without a solid log management strategy, it becomes nearly impossible to have the necessary data to perform a forensic investigation; and without monitoring tools, identifying threats and responding to attacks against confidentiality, integrity, or availability become much more difficult. For a network to be compliant and an incident response or forensics investigation to be successful, it is critical that a mechanism be in place to do the following (check all tasks completed):

_____1.

Securely acquire and store raw log data for as long as possible from as many disparate devices as possible while providing search and restore capabilities of these logs for analysis.

_____2.

Monitor interesting events coming from all important devices, systems, and applications in as near real time as possible.

_____3.

Run regular vulnerability scans on your hosts and devices, and correlate these vulnerabilities to intrusion detection alerts or other interesting events, identifying high-priority attacks as they happen and minimizing false positives (a correlation sketch follows this checklist). SIEM and log management solutions in general can assist in security information monitoring (see Figure 21.21) as well as regulatory compliance and incident response.


Figure 21.21. Security monitoring.

_____4.

Aggregate and normalize event data from unrelated network devices, security devices, and application servers into usable information.

_____5.

Analyze and correlate information from various sources such as vulnerability scanners, IDS/IPS, firewalls, servers, and so on, to identify attacks as soon as possible and help respond to intrusions more quickly.

_____6.

Conduct network forensic analysis on historical or real-time events through visualization and replay of events.

_____7.

Create customized reports for better visualization of your organizational security posture.

_____8.

Increase the value and performance of existing security devices by providing a consolidated event management and analysis platform.

_____9.

Improve the effectiveness and help focus IT risk management personnel on the events that are important.

_____10.

Meet regulatory compliance and forensics requirements by securely storing all event data on a network for long-term retention and enabling instant accessibility to archived data.
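To illustrate the correlation described in item 3, here is a minimal sketch in Python; the scan findings, alert records, and the CVE-based matching rule are hypothetical simplifications of how a SIEM prioritizes events.

# Hypothetical vulnerability scan findings, keyed by host and CVE.
scan_findings = [
    {"host": "10.0.0.5", "cve": "CVE-2017-0144", "service": "smb"},
    {"host": "10.0.0.8", "cve": "CVE-2021-44228", "service": "http"},
]

# Hypothetical IDS alerts that reference the CVE they attempt to exploit.
ids_alerts = [
    {"host": "10.0.0.5", "cve": "CVE-2017-0144", "signature": "ETERNALBLUE attempt"},
    {"host": "10.0.0.9", "cve": "CVE-2021-44228", "signature": "log4j JNDI probe"},
]

# An alert aimed at a host the scanner already flagged as vulnerable to the same
# CVE is treated as high priority; everything else is lower priority.
vulnerable = {(f["host"], f["cve"]) for f in scan_findings}

for alert in ids_alerts:
    priority = "HIGH" if (alert["host"], alert["cve"]) in vulnerable else "low"
    print(f'{priority:4} {alert["host"]} {alert["signature"]}')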


URL: https://www.sciencedirect.com/science/article/pii/B9780123943972000210

Open Source Cloud Storage Forensics

Darren Quick, ... Kim-Kwang Raymond Choo, in Cloud Storage Forensics, 2014

Cloud forensics framework

The digital forensics framework (see Chapter 2) is discussed as follows (see Figure 2.1 for a summary). One of the key features of this framework is its iterative nature which is essential in our research—the client is used both to identify the existence of cloud storage and to recover any data synced/cached on the client. As such, forensic analysis of the client is carried out before analysis of the server environment.

Commence (scope): This phase outlines a number of factors to determine the scope of the forensic investigation such as the persons involved, any data or evidence already seized, keyword terms, any urgent time frames, and other relevant information.

Preparation: Preparation is primarily concerned with ensuring that the relevant resources (both in terms of personnel and technical resources) are available to conduct the investigation. Preparation also includes other aspects of an investigation, such as timely response, time frame, personnel, duties, and locations of interest.

Evidence source identification and preservation: This phase is concerned with identifying sources of evidence in a digital forensics investigation. During the first iteration, sources of evidence identified will likely be a physical device (e.g., desktop computers, laptops, and mobile devices). During the second iteration, this phase is concerned with identifying cloud services/providers relevant to the case, possible evidence stored with the cloud provider, and processes for preservation of this potential evidence. Regardless of the identified source of evidence, forensic investigators need to ensure the proper preservation of the evidence.

Collection: This phase is concerned with the actual capture of the data. There are various methods of collection suited for the various cloud computing platforms and deployment models. For example, Infrastructure as a Service (IaaS) may provide an export of the virtual hard disk and memory provided to the user while Software as a Service (SaaS) may only provide a binary export of the data stored on the hosted software environment.

McKemmish (1999) suggested that extraction could be separate from processing, and we believe that due to the complications of cloud computing data collection (e.g., the significant potential for the cloud service to be hosted outside of the law enforcement agency's (LEA's) jurisdiction and the potential for technical measures such as data striping to complicate collection), this separation from timely preservation is critical; hence the separate collection step.

Generally cloud servers are physically located in a different jurisdiction from that of the investigating LEA and/or suspect. It is, therefore, important for the agency collecting the evidence in one jurisdiction for use in a criminal prosecution taking place in another jurisdiction to work and cooperate closely with their foreign counterparts to ensure that the methods used in the collection are in full accordance with applicable laws, legal principles, and rules of evidence of the jurisdiction in which the evidence is ultimately to be used (UNODC, 2012).

Examination and analysis: This phase is concerned with the examination and analysis of forensic data. It is during this phase that cloud computing usage would most likely be discovered based upon the examination and analysis of physical devices and this would lead to a second (or more) iteration(s) of the process.

Presentation: This phase is concerned with legal presentation of the evidence collected. This phase remains very similar to the frameworks of McKemmish and NIST (as discussed in Martini & Choo, 2012). In general, the report should include information on all processes, the tools and applications used, and any limitations to prevent false conclusions from being reached (see US NIJ, 2004).

Complete: This phase allows a practitioner to review their findings with a view to determining if further analysis should be completed to meet the needs of the investigator or legal counsel. If no further analysis is required, this phase deals with appropriate completion processes for the case including archiving evidential data and review of processes for use in future cases.

This chapter mainly focuses on the analysis stage of the framework, with some discussion of the pertinent evidence source identification, preservation, and collection aspects.
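As a small illustration of evidence source identification on a client during the first iteration, the sketch below (Python, run against a read-only mounted image) looks for folder names commonly created by cloud storage clients; the folder list, the Users/* profile layout, and the mount point are illustrative assumptions rather than an exhaustive method.

from pathlib import Path

# Illustrative folder names that common cloud storage clients create in a user profile.
CLOUD_FOLDER_HINTS = ["Dropbox", "Google Drive", "OneDrive", "ownCloud"]

def find_cloud_artifacts(mount_point):
    # Scan user profile directories under a mounted image for cloud client folders.
    hits = []
    for user_dir in Path(mount_point).glob("Users/*"):
        for hint in CLOUD_FOLDER_HINTS:
            candidate = user_dir / hint
            if candidate.is_dir():
                hits.append(candidate)
    return hits

# Hypothetical mount point of a forensic image mounted read-only.
for path in find_cloud_artifacts("/mnt/evidence"):
    print("Possible cloud storage client folder:", path)

A hit here is only an indicator that a cloud service is relevant to the case; it would trigger the second iteration of the framework against the provider side.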


URL: https://www.sciencedirect.com/science/article/pii/B9780124199705000065

Information Security Essentials for IT Managers

Albert Caballero, in Managing Information Security (Second Edition), 2014

Incidence Response and Forensic Investigations

Network forensic investigation is the investigation and analysis of all the packets and events generated on any given network in the hope of identifying the proverbial needle in a haystack. Tightly related is incident response, which entails reacting in a timely manner to an identified anomaly or attack across the system. To be successful, both network investigations and incident response rely heavily on proper event and log management techniques. Before an incident can be responded to, there is the challenge of determining whether an event is a routine system event or an actual incident. This requires that there be some framework for incident classification (the process of examining a possible incident and determining whether or not it requires a reaction). Initial reports from end users, intrusion detection systems, host- and network-based malware detection software, and systems administrators are all ways to track and detect incident candidates.[40]
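As a minimal sketch of incident classification, the following Python example gathers candidate events from several detection sources and applies a simple escalation rule; the sources, severities, and threshold are hypothetical, since real criteria come from the organization's IR plan.

# Hypothetical incident candidates reported by different detection sources.
candidates = [
    {"source": "ids", "event": "port scan from 198.51.100.7", "severity": 3},
    {"source": "user report", "event": "phishing email with attachment", "severity": 5},
    {"source": "antivirus", "event": "PUA quarantined on workstation-12", "severity": 2},
]

# Illustrative rule: anything at or above this severity is classified as an incident.
SEVERITY_THRESHOLD = 4

def classify(candidate):
    return "incident" if candidate["severity"] >= SEVERITY_THRESHOLD else "routine event"

for c in candidates:
    print(f'{classify(c):13} | {c["source"]:11} | {c["event"]}')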

An Agenda for Action when Implementing a Critical Security Mechanism

Without a solid log management strategy, it becomes nearly impossible to have the necessary data to perform a forensic investigation; and without monitoring tools, identifying threats and responding to attacks against confidentiality, integrity, or availability become much more difficult. For a network to be compliant and an incident response or forensics investigation to be successful, it is critical that a mechanism be in place to do the following (check all tasks completed):

_____1.

Securely acquire and store raw log data for as long as possible from as many disparate devices as possible while providing search and restore capabilities of these logs for analysis.

_____2.

Monitor interesting events coming from all important devices, systems, and applications in as near real time as possible.

_____3.

Run regular vulnerability scans on your hosts and devices, and correlate these vulnerabilities to intrusion detection alerts or other interesting events, identifying high-priority attacks as they happen and minimizing false positives. SIEM and log management solutions in general can assist in security information monitoring (see Figure 1.21) as well as regulatory compliance and incident response.


Figure 1.21. Security monitoring.

_____4.

Aggregate and normalize event data from unrelated network devices, security devices, and application servers into usable information.

_____5.

Analyze and correlate information from various sources such as vulnerability scanners, IDS/IPS, firewalls, servers, and so on, to identify attacks as soon as possible and help respond to intrusions more quickly.

_____6.

Conduct network forensic analysis on historical or real-time events through visualization and replay of events.

_____7.

Create customized reports for better visualization of your organizational security posture.

_____8.

Increase the value and performance of existing security devices by providing a consolidated event management and analysis platform.

_____9.

Improve the effectiveness and help focus IT risk management personnel on the events that are important.

_____10.

Meet regulatory compliance and forensics requirements by securely storing all event data on a network for long-term retention and enabling instant accessibility to archived data.

As mentioned in earlier sections, the phases of an incident usually unfold in the following order: preparation, identification (detection), containment, eradication, recovery, and lessons learned. The preparation phase requires a detailed understanding of information systems and the threats they face; to perform proper planning, an organization must develop predefined responses that guide users through the steps needed to properly respond to an incident (a minimal playbook sketch appears at the end of this section). Predefining incident responses enables rapid reaction without confusion or wasted time and effort, which can be crucial for the success of an incident response. Identification occurs once an actual incident has been confirmed and properly classified as an incident that requires action. At that point the IR team moves from identification to containment. In the containment phase, a number of action steps are taken by the IR team and others. These steps to respond to an incident must occur quickly and may occur concurrently, including notification of key personnel, the assignment of tasks, and documentation of the incident. Containment strategies focus on two tasks: first, stopping the incident from getting any worse, and second, recovering control of the system if it has been hijacked.

Once the incident has been contained and system control regained, eradication can begin, and the IR team must assess the full extent of damage to determine what must be done to restore the system. Immediate determination of the scope of the breach of confidentiality, integrity, and availability of information and information assets is called incident damage assessment. Those who document the damage must be trained to collect and preserve evidence in case the incident is part of a crime investigation or results in legal action.

Once the extent of the damage has been determined, the recovery process begins to identify and resolve the vulnerabilities that allowed the incident to occur in the first place. The IR team must address the issues found and determine whether they need to install, replace, or upgrade the safeguards that failed to stop or limit the incident or that were missing from the system in the first place. Finally, a discussion of lessons learned should always be conducted to prevent similar future incidents from occurring and to review what could have been done differently.[41]
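To illustrate the predefined responses mentioned earlier in this section, here is a minimal Python sketch; the incident categories and action lists are hypothetical placeholders for whatever the organization's IR plan actually prescribes.

# Hypothetical predefined responses, keyed by incident category.
PLAYBOOKS = {
    "malware": [
        "Notify IR lead and system owner",
        "Isolate affected host from the network (containment)",
        "Capture volatile data and a disk image before cleaning (eradication prep)",
    ],
    "credential compromise": [
        "Notify IR lead and identity team",
        "Disable affected accounts and force password resets",
        "Review authentication logs for lateral movement",
    ],
}

def respond(category):
    # Return the predefined steps for a category so responders can act without delay.
    steps = PLAYBOOKS.get(category)
    if steps is None:
        return ["No predefined playbook; escalate to IR lead for ad hoc handling"]
    return steps

for i, step in enumerate(respond("malware"), start=1):
    print(f"{i}. {step}")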


URL: https://www.sciencedirect.com/science/article/pii/B9780124166882000015

Domain 1: Security and Risk Management (e.g., Security, Risk, Compliance, Law, Regulations, Business Continuity)

Eric Conrad, ... Joshua Feldman, in CISSP Study Guide (Third Edition), 2016

Evidence

Evidence is one of the most important legal concepts for information security professionals to understand. Information security professionals are commonly involved in investigations, and often have to obtain or handle evidence during the investigation. Some types of evidence carry more weight than others; however, information security professionals should attempt to provide all evidence, regardless of whether that evidence proves or disproves the facts of a case. While there are no absolute means to ensure that evidence will be allowed and helpful in a court of law, information security professionals should understand the basic rules of evidence. Evidence should be relevant, authentic, accurate, complete, and convincing. Evidence gathering should emphasize these criteria.

Real Evidence

The first, and most basic, category of evidence is that of real evidence. Real evidence consists of tangible or physical objects. A knife or bloody glove might constitute real evidence in some traditional criminal proceedings. However, with most computer incidents, real evidence is commonly made up of physical objects such as hard drives, DVDs, USB storage devices, or printed business records.

Direct Evidence

Direct evidence is testimony provided by a witness regarding what the witness actually experienced with her five senses. The witnesses must have experienced what they are testifying to, rather than have gained the knowledge indirectly through another person (hearsay, see below).

Circumstantial Evidence

Circumstantial evidence is evidence which serves to establish the circumstances related to particular points or even other evidence. For instance, circumstantial evidence might support claims made regarding other evidence or the accuracy of other evidence. Circumstantial evidence provides details regarding circumstances that allow for assumptions to be made regarding other types of evidence. This type of evidence offers indirect proof, and typically cannot be used as the sole evidence in a case. For instance, if a person testified that she directly witnessed the defendant create and distribute malware this would constitute direct evidence. If the forensics investigation of the defendant’s computer revealed the existence of source code for the malware, this would constitute circumstantial evidence.

Corroborative Evidence

In order to strengthen a particular fact or element of a case there might be a need for corroborative evidence. This type of evidence provides additional support for a fact that might have been called into question. This evidence does not establish a particular fact on its own, but rather provides additional support for other facts.

Hearsay

Hearsay evidence constitutes second-hand evidence. As opposed to direct evidence, which someone has witnessed with her five senses, hearsay evidence involves indirect information. Hearsay evidence is normally considered inadmissible in court. Numerous rules including Rules 803 and 804 of the Federal Rules of Evidence of the United States provide for exceptions to the general inadmissibility of hearsay evidence that is defined in Rule 802.

Business and computer generated records are generally considered hearsay evidence, but case law and updates to the Federal Rules of Evidence have established exceptions to the general rule of business records and computer generated data and logs being hearsay. The exception defined in Rule 803 provides for the admissibility of a record or report that was “made at or near the time by, or from information transmitted by, a person with knowledge, if kept in the course of a regularly conducted business activity, and if it was the regular practice of that business activity to make the memorandum, report, record or data compilation.”[1]

An additional consideration important to computer investigations pertains to the admissibility of binary disk and physical memory images. The Rule of Evidence that is interpreted to allow for disk and memory images to be admissible is actually not an exception to the hearsay rule, Rule 802, but is rather found in Rule 1001, which defines what constitutes originals when dealing with writings, recordings, and photographs. Rule 1001 states that “if data are stored in a computer or similar device, any printout or other output readable by sight, shown to reflect the data accurately, is an ‘original’.”[2] This definition has been interpreted to allow for both forensic reports as well as memory and disk images to be considered even though they would not constitute the traditional business record exception of Rule 803.

Best Evidence Rule

Courts prefer the best evidence possible. Original documents are preferred over copies; conclusive tangible objects are preferred over oral testimony. Recall that the five desirable criteria for evidence suggest that, where possible, evidence should be relevant, authentic, accurate, complete, and convincing. The best evidence rule prefers evidence that meets these criteria.

Secondary Evidence

With computer crimes and incidents best evidence might not always be attainable. Secondary evidence is a class of evidence common in cases involving computers. Secondary evidence consists of copies of original documents and oral descriptions. Computer-generated logs and documents might also constitute secondary rather than best evidence. However, Rule 1001 of the United States Federal Rules of Evidence can allow for readable reports of data contained on a computer to be considered original as opposed to secondary evidence.


URL: https://www.sciencedirect.com/science/article/pii/B9780128024379000023

Information Security Essentials for Information Technology Managers

Albert Caballero, in Computer and Information Security Handbook (Third Edition), 2017

Security Monitoring

Security monitoring involves real-time or near-real-time monitoring of events and activities happening on all mission-critical systems. To properly monitor an organization for security events that can lead to an incident or an investigation, an organization usually uses a Security Information and Event Management (SIEM) tool. Security analysts and managers must filter through tons of event data, identifying and focusing on only the most interesting events.

Understanding the regulatory and forensic impact of event and alert data in any given enterprise takes planning and a thorough understanding of the quantity of data the system will be required to handle (see checklist: “An Agenda for Action When Implementing a Critical Security Mechanism”). The better logs can be stored, understood, and correlated, the better the possibility of detecting an incident in time for mitigation. Responding to incidents, identifying anomalous or unauthorized behavior, and securing intellectual property have never been more important.

An Agenda for Action When Implementing a Critical Security Mechanism

Without a solid log management strategy it becomes nearly impossible to have the necessary data to perform a forensic investigation, and without monitoring tools, identifying threats and responding to attacks become much more difficult. For an incident response and forensics investigation to be successful, it is important that certain mechanisms be in place; for example, an organization may want to implement some of the following (check all tasks completed):

_____1.

Securely acquire and store raw log data for as long as possible from as many disparate devices as possible while providing search and restore capabilities of these logs for analysis.

_____2.

Monitor interesting events coming from all important devices, systems, and applications in as near real time as possible.

_____3.

Run regular vulnerability scans and correlate these vulnerabilities to intrusion detection alerts or other interesting events, identifying high-priority attacks as they happen and minimizing false positives.

_____4.

Aggregate and normalize event data from unrelated network devices, security devices, and application servers into usable information.

_____5.

Analyze and correlate information from various sources such as vulnerability scanners, IDS/IPS, firewalls, servers, and so on, to identify attacks as soon as possible and help respond to intrusions more quickly.

_____6.

Conduct network forensic analysis on historical or real-time events through visualization and replay of events.

_____7.

Create customized reports for better visualization of your security posture.

_____8.

Increase the value and performance of existing security devices by providing a consolidated event management and analysis platform.

_____9.

Improve the effectiveness and help focus IT risk management personnel on the events that are important.

Security monitoring is a key component in gaining the visibility necessary to identify incidents quickly and in having the information necessary to respond and remediate. Monitoring any environment is difficult, but additional challenges crop up in the cloud that are not easily overcome, primarily when it comes to monitoring parts of the infrastructure that are in the control of the provider rather than the data owner or subscriber. One major challenge in gaining visibility into what's happening in your cloud environment is the inability to analyze network traffic and perform basic packet capture or install intrusion detection systems. As an alternative to monitoring activity in this fashion, new cloud access security technologies have emerged that leverage APIs to constantly query a particular cloud service and log every activity that happens in that instance of the cloud. With this type of monitoring, Indicators of Compromise (IoC) can be identified and reported as anomalies. In addition to calling out these anomalies, such as logging in with the same credentials at the same time from geographically disparate regions, these security technologies can also apply machine-learning algorithms to trend the behavior of every user and alert when something out of the ordinary happens.

Every cloud provider publishes a subset of APIs that allows subscribers to query the cloud instance for different data; the problem arises when the subscriber needs to monitor more granular information than what the provider's APIs support. If sufficiently granular security information is available, it can be compared to activity provided by threat feeds and watch lists, which can provide insight into malicious behavior that has been observed in other customer and cloud environments. These technologies and techniques should be implemented in addition to the regular security-monitoring tools that are used to monitor traditional IT infrastructures. Some of the important cloud security-monitoring techniques that should be considered for implementation above and beyond traditional controls are as follows:

Secure APIs: Secure APIs are automated queries that allow for the monitoring of cloud activities and actions.

Cloud Access Security Brokers (CASB): CASBs are platforms that leverage secure cloud APIs for many cloud services enabling subscribers to have a centralized location for the monitoring and inspection of all their cloud events.

Anomaly detection: Methodologies for identifying and alerting on activities that are not considered normal and have never been seen before in an effort to prevent a security breach before it gets out of control.

Machine learning: This is the automation of longstanding techniques that have been used to identify anomalies in the past. The correlation of events was largely manual in the past but many platforms have incorporated the ability to automatically develop anomaly criteria without user intervention.

Threat intelligence: This term refers to threat feeds, watch lists, and other mechanisms by which threats to a particular environment have been identified and are communicated to end users, security tools, and customers.

Behavioral detection: It is common for many security tools nowadays to first learn the behavior of users, systems, and networks before they start generating alerts for unauthorized activity. This type of behavioral detection goes beyond blanket anomaly detection and creates a profile for each object using the cloud. Where it might be normal for an administrator to transfer 10 GB of data every day to and from the cloud without any alarm sounding, a typical end user performing the same action would fire an alarm because they have never performed that type of action before (a minimal baseline sketch follows this list).
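Here is a minimal sketch of the per-user baseline behind behavioral detection, in Python; the transfer histories, today's values, and the three-times-the-average rule are hypothetical stand-ins for the statistical or machine-learning profiles real products build.

from statistics import mean

# Hypothetical history of daily gigabytes transferred to/from the cloud, per user.
history = {
    "admin_jane": [9.5, 10.2, 10.0, 9.8, 10.4],
    "enduser_bob": [0.1, 0.2, 0.1, 0.0, 0.2],
}

# Illustrative rule: alert when today's volume exceeds the user's average by 3x
# (and by at least 1 GB, to avoid alerting on tiny baselines).
def is_anomalous(user, todays_gb):
    baseline = mean(history[user])
    return todays_gb > max(3 * baseline, baseline + 1.0)

today = {"admin_jane": 10.1, "enduser_bob": 10.1}  # both transfer about 10 GB today
for user, gb in today.items():
    status = "ALERT" if is_anomalous(user, gb) else "normal"
    print(f"{status:6} {user}: {gb} GB (baseline {mean(history[user]):.1f} GB)")

With these numbers the administrator's transfer stays within profile while the end user's identical transfer fires an alert, mirroring the example in the bullet above.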


URL: https://www.sciencedirect.com/science/article/pii/B9780128038437000247

Forensics Tools

Leighton R. Johnson III, in Computer Incident Response and Forensics Team Management, 2014

Types of Forensics Tools

Different kinds and types of tools are needed by the analyst, examiner, or investigator as they begin to look at, review, and analyze the various data sources during their investigation. Here is a listing of suggested tool types available today, but please understand this list is by NO means all-inclusive. There are many other areas being discovered each day, with new tools coming to the workplace all the time, so do NOT view this as covering all areas.

File System Navigation tool

Many operating systems come with an embedded file navigation mechanism. There are also many external third-party tools available for use during investigations. Each has features and components which allow searching for specific file extensions, metadata about files, and other file parameters. These tools provide quick identification of files which meet the needed criteria.

Hashing tool

Each and every time an evidence component is captured, it should be cryptographically hashed to ensure its integrity. This process is known as “hashing” because a one-way cryptographic function reduces the file's contents to a fixed-length output using algorithms known as hashes. These algorithms fall into two primary families: Message Digest (MD) outputs and Secure Hash Algorithm (SHA) outputs. The MD output depends upon the type of algorithm used, with the most common one known as MD5. SHA output is slightly longer, and its algorithms are more resistant to collisions. Remember, the primary purpose is integrity: to scientifically demonstrate that the data has been unaltered when reviewing and examining it. The integrity hash does not indicate where in the data an alteration has occurred; by recalculating the integrity hash at a later time, one can determine whether the data in the disk image has been changed.

Binary Search tool

The tools used for binary search examine files to reveal the bit patterns within them. These tools look for specific patterns and types of data sequences found in known, and sometimes unknown, types of files. Data may be altered during storage and transmittal, and the examiner must be aware of this and look for it when evaluating files and data components; these types of tools assist in that endeavor.
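As a small illustration of this kind of pattern search, here is a minimal Python sketch that scans a file for a few well-known file signatures (magic bytes); the signature list is intentionally tiny and the evidence file path is a hypothetical placeholder.

# A few well-known file signatures (magic bytes) and the types they indicate.
SIGNATURES = {
    b"\xff\xd8\xff": "JPEG image",
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"PK\x03\x04": "ZIP container (also DOCX/XLSX/APK)",
}

def scan_for_signatures(path):
    # Report every offset at which a known signature appears in the file.
    data = open(path, "rb").read()  # fine for small files; stream for large evidence
    for sig, label in SIGNATURES.items():
        offset = data.find(sig)
        while offset != -1:
            print(f"{label} signature at offset {offset}")
            offset = data.find(sig, offset + 1)

scan_for_signatures("evidence/unknown.bin")  # hypothetical evidence file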

Imaging tool

One of the basic requirements for any forensics investigation is to capture the data in a format that allows for examination of the complete dataset being retrieved. This process is called disk imaging. There are two primary areas where this process is applied in forensics: bit-copy imaging, which covers the entire medium where the data is found, and filesystem imaging, which captures the data structures as they are defined and stored.

Bit Copy

Disk imaging is the process of making a bit-by-bit copy of a disk. Imaging (in more general terms) can apply to anything that can be considered a bit-stream, e.g., a physical or logical volume, network streams, or file directories. There are many tools and programs available to conduct these bit-stream imaging activities, and I have listed several within this section. Always ensure your organization has tested and validated the tools before usage in a real-time capture event.
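The sketch below shows the essence of a bit-stream copy with an integrity hash computed during acquisition, in Python with hypothetical source and destination paths; dedicated imaging tools add hardware write-blocking, bad-sector handling, and verified image formats, so treat this only as an illustration of the concept.

import hashlib

def bit_stream_image(source, destination, chunk_size=1 << 20):
    # Copy a source device or file byte for byte while hashing the data as it is read.
    digest = hashlib.sha256()
    with open(source, "rb") as src, open(destination, "wb") as dst:
        for chunk in iter(lambda: src.read(chunk_size), b""):
            digest.update(chunk)
            dst.write(chunk)
    return digest.hexdigest()

# Hypothetical paths: a raw device (read access required) and the output image file.
acquisition_hash = bit_stream_image("/dev/sdb", "evidence/disk.img")
print("SHA-256 recorded at acquisition time:", acquisition_hash)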

File System

Within the UNIX and Linux operating systems, there is the concept of capturing an image of the filesystem as a copy of the entire state of the computer in a nonvolatile form such as a file. The operating system can then use this system image if it is shut down and later restored to exactly the same state as the original. In these cases, system file images can be, and often are, used for full system backups. Laptop computer hibernation is an example that uses an image of the entire machine’s RAM.

Deep Retrieval tool

A forensics-based tool designed to retrieve data that has been deleted or “erased” for long periods of time, as well as more recent material. Most current data recovery tools are also known as deep retrieval tools and provide the mechanisms to obtain and retrieve data from past uses, deletions, and hiding of files and folders, so long as the drive has not been reformatted. Some of these tools allow for the hardware recovery of damaged drives, utilizing various aspects of the physics of the media, the actual magnetic data platters, etc. Deep retrieval recovery can also involve the addition or replacement of physical components on the drives, followed by retrieval of the data for use in evidence recovery efforts.

File Chain Navigation tool

A tool designed to trace dependencies and linking of files as they are found in the directories throughout the computer. This tool assists in determining possible alternate data streams and binding of files and libraries to executables.

Case Management Systems

There are many forensics case management software packages available in the industry. Several of these packages are well known, such as EnCase, The Sleuth Kit, and Forensic Toolkit (FTK), and others are not so well known but just as functional. Always make sure the investigators are utilizing the organization-approved case management system during each event and examination. If possible, obtain certifications for the investigators on each system used by the organization. The systems available cover end-to-end requirements for forensics investigations, including case tracking of individual evidence components, data carving of evidence, found and identified evidence components, etc. Each case management system performs these activities in a slightly different manner, which is usually what makes them unique, so always ensure the case tools match the case needs and criteria.

Specific Examination Tools

There are many specific forensics tools that have evolved to cover particular areas of investigation; they include:

Steganography

This class of tools assists in identifying images and files that have had data hidden inside them. There are a number of steganographic tools available in the marketplace and for free on the Internet.

Internet history

This class of tool is used for examining the cookies and Internet history files created through the use of browsers. Internet Evidence Finder (IEF) is just one of the toolsets available. Each browser creates and stores cookies and history in its own format, so always ensure the tools used support the browser in question.
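As a small example of this kind of examination, the sketch below uses Python's sqlite3 module to list recently visited URLs from a copy of a Firefox profile's places.sqlite history database (always work on a copy, never the original evidence); the file path is a hypothetical placeholder, and the column names assume the standard places.sqlite layout.

import sqlite3
from datetime import datetime, timezone

# Hypothetical path to a copy of the browser history database from a suspect profile.
DB_COPY = "evidence/places.sqlite"

con = sqlite3.connect(DB_COPY)
rows = con.execute(
    "SELECT url, title, last_visit_date FROM moz_places "
    "WHERE last_visit_date IS NOT NULL ORDER BY last_visit_date DESC LIMIT 10"
)
for url, title, last_visit in rows:
    # Firefox stores last_visit_date as microseconds since the Unix epoch.
    visited = datetime.fromtimestamp(last_visit / 1_000_000, tz=timezone.utc)
    print(visited.isoformat(), title or "(no title)", url)
con.close()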

Log Management Tools

The arena of log management tools has exploded over the past few years. Most implementations are included in the class of toolsets known as security information and event management (SIEM) systems. There are many such tools available that receive log files from various devices on the network, correlate them by time and event, and provide dashboard review, detailed analysis, and deep data search capabilities within the package.

Volatile Data Capture Tools

There are a number of new data capture tools available today that allow capture of data from running devices in “real time,” retrieving and retaining data from the areas of a machine that either cease to exist or are lost when the device or machine is powered down and turned off.

One such tool is Helix First Response developed by e-fense. This is a USB tool with its own method of enablement which does not interfere with the operating system running on the suspect device at the time of capture; therefore, there is no alteration of the system processes or memory when retrieving data.


URL: https://www.sciencedirect.com/science/article/pii/B9781597499965000145

What are the five (5) steps of digital forensics?

Digital forensics is the process of identifying, preserving, analyzing, and documenting digital evidence:
Identification. First, find the evidence, noting where it is stored.
Preservation. …
Analysis. …
Documentation. …
Presentation.

What are the most important steps of a computer forensic examination?

The process of digital forensics includes (1) identification, (2) preservation, (3) analysis, (4) documentation, and (5) presentation.

What are the steps in the digital forensic process?

There are nine steps that digital forensic specialists usually take while investigating digital evidence:
First Response
Search and Seizure
Evidence Collection
Securing of the Evidence
Data Acquisition
Data Analysis
Evidence Assessment
Documentation and Reporting