Logging

Cards (12)

  • Logging
    • used to provide a "source of truth" for activity that occurs on a network. Logging is most commonly used for, but not limited to, incident response and security monitoring. During the incident response process, a user may be held accountable for an action or behavior, and logging plays a crucial role in proving that user's actions.
  • Logging:
    • Accountability is the final pillar of the Identification, Authentication, Authorization, and Accountability (IAAA) model. The model is used to protect and maintain confidentiality, integrity, and availability of information.
    • Accountability holds users and peers on a network responsible for their actions. Logging is a large part of this pillar and maintains a record of activities.
  • To ensure the efficacy of accountability, logs and other data sources must be protected, and their authenticity must be proved. If it cannot be proven that a log was kept in its original state, it loses its integrity for accountability and the incident response process.
  • Logging aids any member involved in the incident response process. Depending on the log source, it may provide different benefits or visibility into a network or device. Some examples may include:
    • Files created.
    • Emails sent.
    • Other TTPs (Tactics, Techniques, and Procedures) as outlined by the MITRE ATT&CK framework.
    Because logs play an important role in incident response, they must be authentic and, when analyzed, identical to when they were produced.
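  A common way to support this requirement is to record a cryptographic hash of a log when it is produced and re-hash it at analysis time; if the digests match, the log has not been altered. A minimal sketch in Python, with an invented log line for illustration:

  ```python
  import hashlib

  def sha256_of(data: bytes) -> str:
      """Return the SHA-256 digest of raw log bytes as hex."""
      return hashlib.sha256(data).hexdigest()

  # Hash the log when it is produced and store the digest separately
  # (e.g., on write-once media or a separate, protected system).
  original = b"2024-01-15T10:03:22Z alice CREATED /srv/share/report.docx\n"
  recorded_digest = sha256_of(original)

  # At analysis time, re-hash the evidence and compare.
  assert sha256_of(original) == recorded_digest   # unchanged, so integrity holds
  tampered = original.replace(b"alice", b"mallory")
  assert sha256_of(tampered) != recorded_digest   # any edit changes the digest
  ```

  In practice the stored digests themselves must also be protected, otherwise an attacker who can rewrite a log can rewrite its hash as well.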
  • Security Information and Event Management
    • A Security Information and Event Management system (SIEM) is a tool used to collect, index, and search data from various endpoints and network locations.
    • SIEMs have many features and capabilities, often for specific use cases. Below is a summary of the benefits and features that a SIEM can offer at the most basic level:
    • Real-time log ingestion.
    • Alerting against abnormal activities.
    • 24/7 monitoring and visibility.
    • Data insights and visualization.
    • Ability to investigate past incidents.
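  At its simplest, "alerting against abnormal activities" is a rule evaluated over ingested events. A toy sketch, where the event format and the failed-login threshold are invented for illustration:

  ```python
  from collections import Counter

  # Hypothetical parsed events, as a SIEM indexer might store them.
  events = [
      ("bob", "login_failed"), ("bob", "login_failed"), ("bob", "login_failed"),
      ("bob", "login_failed"), ("bob", "login_failed"), ("alice", "login_ok"),
  ]

  FAILED_LOGIN_THRESHOLD = 5  # illustrative rule: 5+ failures trigger an alert

  # Count failures per user and flag anyone at or above the threshold.
  failures = Counter(user for user, action in events if action == "login_failed")
  alerts = [user for user, count in failures.items() if count >= FAILED_LOGIN_THRESHOLD]
  print(alerts)  # ['bob']
  ```

  Real SIEM rules are far richer (time windows, correlation across sources, baselining), but they reduce to the same idea: a condition evaluated continuously over indexed data.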
  • The process by which data arrives from a device at the indexer is commonly known as data ingestion.
    • Types of data ingestion
    • Agent/forwarder
    • Port-forwarding
    • Syslog
    • Upload
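    Syslog is one of the simplest ingestion paths to demonstrate: an endpoint sends messages over the network to a listener on the collector/indexer. A minimal sketch using Python's standard library, where the collector address and the message contents are placeholders:

    ```python
    import logging
    import logging.handlers

    # Point the handler at a collector; 127.0.0.1:514/UDP is the
    # conventional syslog target (a real SIEM would listen remotely).
    handler = logging.handlers.SysLogHandler(address=("127.0.0.1", 514))
    logger = logging.getLogger("demo-endpoint")
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)

    # Each call emits one syslog datagram the SIEM can ingest and index.
    logger.info("user=alice action=file_created path=/srv/share/report.docx")
    ```

    Agents and forwarders work similarly but add features such as batching, reliable delivery, and local parsing before the data reaches the indexer.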
    • When dealing with storage concerns, it is often not attackers we must worry about, but technical faults; for example, an index is accidentally deleted, or a storage device is corrupted.
  • While keeping this data authentic and secure is important for accountability, it often has overlapping themes with compliance. Compliance and regulations go hand in hand; one such regulation may be that log data must be archived or stored for X amount of time. This plays into accountability again, where non-repudiation must be applied to a log source for compliance. For example, an audit requires the past six months of X log source. As a stakeholder, you must guarantee that those log sources reflect the activity of the network.
    • Cold storage is a process or standard for storing data, which can be summarized as storing a large quantity of data optimally.
    • Cold storage is rarely accessed and thus does not require high-performance storage devices. Examples of cold storage may include low-cost hard drives or even tape drives! Conversely, hot storage holds data that is accessed often and requires higher performance, which may consist of solid-state drives and, in some cases, high-performance hard drives. There may be other levels of access and performance throughout the life cycle of data, which can be referred to as warm storage.
  • The standard for how long data stays in each phase will depend on regulatory requirements and company guidelines. An example of a storage process may be that data is stored hot for six months, warm for three months, and cold for three years. Depending on the data, it may be indefinitely stored in cold storage.
    • Payment Card Industry Data Security Standard (PCI DSS) is one example of a standard that requires audit logs to be stored for a year and kept immediately available for 90 days to remain compliant.
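  The lifecycle above can be sketched as a simple age-based tiering rule; the month boundaries mirror the example policy in the text (six months hot, three months warm, then cold):

  ```python
  def storage_tier(age_in_months: int) -> str:
      """Classify data by age using the example policy from the text:
      hot for six months, warm for the next three, cold afterwards."""
      if age_in_months < 6:
          return "hot"    # frequently accessed: SSDs / high-performance disks
      if age_in_months < 9:
          return "warm"   # intermediate access and performance
      return "cold"       # rarely accessed: low-cost disks or tape

  print(storage_tier(1))   # hot
  print(storage_tier(7))   # warm
  print(storage_tier(24))  # cold
  ```

  A real policy would be driven by the applicable regulation (such as PCI DSS's retention requirements) rather than hard-coded thresholds.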
  • Log Sources:
    • Manual log sources
    • Any log that is manually written or composed by an author
    • Change logs
    • Automated log sources
    • Any logs that are automatically generated by default, for example, by a configuration, a tool, or a developer
    • System logs
    • Application logs
    • Other types of logs
    • Some logs may not be categorized but are often required for compliance
    • Email logs
    • Messaging or other communication
  • A good log source may not consist of only one log. Due to the nature of a network, it may take multiple log types to create one quality log source, for example, a firewall log and a system log used together to hold each other accountable. That is, the validity of one log can be proven using the other, and vice versa.
    • A log source could also collect too much information; if several types of logs capture the same data or create the same alerts, the duplication increases noise, adds storage complexity, and has other consequences.
  • Using multiple log types and sources is beneficial for validating logs and creating a complete story of an incident. This concept is more formally known as correlation: building a relationship between pieces of data, such as two logs.
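  A minimal illustration of correlation: join a firewall log and a system log on shared fields (here a timestamp and an IP address) so that each record corroborates the other. The field names and records are invented for the sketch:

  ```python
  # Hypothetical records from two independent log sources.
  firewall_log = [
      {"time": "10:03:22", "src_ip": "10.0.0.5", "action": "allow", "dst_port": 443},
  ]
  system_log = [
      {"time": "10:03:22", "user": "alice", "event": "outbound_connection", "ip": "10.0.0.5"},
  ]

  # Correlate on timestamp + IP: each source validates the other's record,
  # and the pair tells a fuller story (which user drove which connection).
  correlated = [
      (fw, sys_ev)
      for fw in firewall_log
      for sys_ev in system_log
      if fw["time"] == sys_ev["time"] and fw["src_ip"] == sys_ev["ip"]
  ]
  print(len(correlated))  # 1
  ```

  In a SIEM this join is typically expressed in the search language over indexed fields rather than written by hand, but the principle is the same.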