CISSP CBK 6 – Security Architecture & Models

Security Model

Is a statement that outlines the requirements necessary to properly support a certain security policy.

Computer Architecture

CPU – Central Processing Unit: Is a microprocessor. Contains a control unit, an ALU / Arithmetic Logic Unit and primary storage. The instructions and data the CPU needs are held in the primary storage unit, a temporary memory area that holds instructions to be interpreted by the CPU and data to be processed.

Buffer overflow – Data being processed enters the CPU in blocks. If the software instructions do not properly set the boundaries for how much data can come in as a block, extra data can slip in and be executed.

Real storage – As instructions and data are processed, they are moved back to the system’s memory space / real storage.


RAM / Random Access Memory – Is a volatile memory, because when power is lost -> information is lost.

Types of ram:
– Static RAM – Retains stored data without the need to be continually refreshed.
– Dynamic RAM – Requires that the data held within it be periodically refreshed, because the data dissipates and decays.

ROM / Read-only memory – Is a nonvolatile memory. Software that is stored within ROM is called firmware.

EEPROM / Electrically erasable programmable read-only memory – Holds data that can be electrically erased or rewritten. (EPROM proper is erased with ultraviolet light rather than electrically.)

Cache memory: Is a part of RAM that is used for high-speed writing and reading activities.

PLD – Programmable Logic Device: An integrated circuit with connections or internal logic gates that can be changed through a programming process.

Memory Mapping

Real or primary memory – Memory directly addressable by the CPU and used for the storage of instructions and data associated with the program that is being executed.

Secondary memory – Is a slower memory (such as magnetic disks) that provides non-volatile storage.

Sequential memory – Memory from which information must be obtained by sequential searching from the beginning rather than directly accessing the location (magnetic tape, etc.)

Virtual memory – Uses secondary memory in conjunction with primary memory to present the CPU with a larger apparent address space than the real memory provides.

Memory addressing:

Register addressing – Addressing the registers within a CPU or other special purpose registers that are designated in the primary memory.

Direct addressing – Addressing a portion of primary memory by specifying the actual address of the memory location. The memory addresses are usually limited to the memory page that is being executed or page zero.

Absolute addressing – Addressing all of the primary memory space.

Indexed addressing – Developing a memory address by adding the contents of the address defined in the program’s instruction to that of an index register. The computed, effective address is used to access the desired memory location. Thus, if an index register is incremented or decremented, a range of memory locations can be accessed.

Implied addressing – Used when operations that are internal to the processor must be performed such as clearing a carry bit that was set as a result of an arithmetic operation. Because the operation is being performed on an internal register that is specified within the instruction itself, there is no need to provide an address.

Indirect addressing – Addressing where the address location that is specified in the program instruction contains the address of the final desired location.
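The direct, indexed and indirect modes above can be sketched against a toy memory array in Python (purely illustrative – not a real instruction set):

```python
# Toy memory: a list of cells indexed by address (illustrative only).
memory = [0] * 16
memory[5] = 42        # value stored at address 5
memory[9] = 5         # address 9 holds a pointer to address 5

def direct(addr):
    # Direct addressing: the instruction carries the actual address.
    return memory[addr]

def indexed(base, index_register):
    # Indexed addressing: effective address = base + index register.
    return memory[base + index_register]

def indirect(addr):
    # Indirect addressing: the addressed cell contains the final address.
    return memory[memory[addr]]

print(direct(5))       # 42
print(indexed(2, 3))   # reads address 2 + 3 = 5 -> 42
print(indirect(9))     # address 9 points to address 5 -> 42
```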

CPU Modes and Protection Rings

Protection rings – Provide strict boundaries and definitions on what the processes that work within each ring can access and what commands they can successfully execute. The processes that operate within the inner rings have more privileges, privileged / supervisor mode, than the processes operating in the outer rings, user mode.

Operating states:
Ready state – An application is ready to resume processing.
Supervisory state – The system is executing a system, or highly privileged, routine.
Problem state – The system is executing an application.
Wait state – An application is waiting for a specific event to complete, like the user finishing typing in characters or waiting for a print job to finish.

Multi-threading, -tasking, -processing:
Multithreading – One application can perform several activities at one time by using different threads.
Multitasking – The CPU can process more than one process or task at one time.
Multiprocessing – If a computer has more than one CPU and can use them in parallel to execute instructions.

Input/Output Device Management: Deadlock situation – Can occur if structures are not torn down and released after use, leaving resources unavailable to other programs and processes.

System architecture

TCB – Trusted Computing Base: Is defined as the total combination of protection mechanisms within a computer system. Includes hardware, software and firmware. Originated from the Orange Book. The Orange Book defines a trusted system as hardware and software that utilize measures to protect the integrity of unclassified or classified data for a range of users without violating access rights and the security policy. It looks at all protection mechanisms within a system to enforce the security policy and provide an environment that will behave in a manner expected of it.

Security perimeter: Defined as the boundary that separates the TCB from the resources that fall outside of it. Communication between trusted components and untrusted components needs to be controlled to ensure that confidential information does not flow in an unintended way.

Reference monitor: Is an abstract machine, which mediates all access subjects have to objects to ensure that the subjects have the necessary access rights and to protect the objects from unauthorized access and destructive modification. Is an access control concept, not an actual physical component.
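A minimal Python sketch of the reference monitor concept (the subjects, objects and matrix entries are made up for illustration): every access attempt is mediated against an access control matrix, and the default is to deny.

```python
# Illustrative access control matrix: (subject, object) -> allowed operations.
ACCESS_MATRIX = {
    ("alice", "payroll.db"): {"read"},
    ("bob",   "payroll.db"): {"read", "write"},
}

def reference_monitor(subject, obj, operation):
    # Mediates ALL access attempts; anything not explicitly granted is denied.
    return operation in ACCESS_MATRIX.get((subject, obj), set())

assert reference_monitor("bob", "payroll.db", "write")
assert not reference_monitor("alice", "payroll.db", "write")
```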

Security kernel: Is made up of mechanisms that fall under the TCB and implements and enforces the reference monitor concept. Is the core of the TCB and is the most commonly used approach to building trusted computing systems. Three requirements:
– It must provide isolation for the processes carrying out the reference monitor concept and they must be tamperproof.
– The reference monitor must be invoked for every access attempt and must be impossible to circumvent. Thus, the reference monitor must be implemented in a complete and foolproof way.
– It must be small enough to be able to be tested and verified in a complete and comprehensive manner.

Domains: Defined as a set of objects that a subject is able to access.

Execution Domain – A program that resides in a privileged domain needs to be able to execute its instructions and process its data with the assurance that programs in a different domain cannot negatively affect its environment.

Security Domain – Has a direct correlation to the protection ring that a subject or object is assigned to. The lower the protection ring number, the higher the privilege and the larger the security domain.

Resource isolation: Hardware segmentation – Memory is separated physically instead of just logically.

Security policy: Is a set of rules, practices and procedures dictating how sensitive information is managed, protected and distributed.

Multilevel security policy – Security policies that prevent information from flowing from a high security level to a lower security level.

Least privilege: Means that a process or resource has no more privileges than necessary to be able to fulfill its functions.

Layering: A structured and hierarchical architecture that has the basic functionality taking place at lower layers and more complex functions at the higher layers.

Data hiding: When it is required that processes in different layers do not communicate, therefore, they are not supplied with interfaces to interact with each other.

Abstraction: When a class of objects is assigned specific permissions and acceptable activities are defined. This makes management of different objects easier because classes can be dealt with instead of each and every individual object.

Security Models

A security model maps the abstract goals of the policy to information system terms by specifying explicit data structures and techniques necessary to enforce the security policy.

State machine model: To verify the security of a system, the state is used, which means all current permissions and all current instances of subjects accessing objects must be captured.

State transitions – Activities that can alter a state.

A system that has employed a state machine model will be in a secure state in each and every instance of its existence. It will boot up into a secure state, execute commands and transactions securely, and will allow subjects to access resources only in secure states.
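A toy Python sketch of this idea (the invariant and state layout are invented for illustration): every transition is checked, and a transition that would leave the system in an insecure state is rejected, so the system remains in a secure state.

```python
# Toy state machine: a state is secure if no subject holds a level above
# its clearance (an invented invariant, for illustration only).
def is_secure(state):
    return all(level <= state["clearance"][s]
               for s, level in state["held"].items())

def transition(state, subject, new_level):
    # Build the candidate next state, then verify it preserves the policy.
    candidate = dict(state, held={**state["held"], subject: new_level})
    # Reject any transition that would leave the system insecure.
    return candidate if is_secure(candidate) else state

start = {"clearance": {"alice": 2}, "held": {}}
s1 = transition(start, "alice", 1)   # allowed: 1 <= clearance 2
s2 = transition(s1, "alice", 3)      # rejected: 3 > clearance 2
```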

Bell-LaPadula model: Addresses concerns about system security and leakage of classified information.

Multilevel security system – A system that employs the Bell-LaPadula model, where users with different clearances use the system and the system processes data with different classifications. The level at which information is classified determines the handling procedures that should be used -> forms a lattice.

Lattice – Defines an upper bound and a lower bound of authorized access. Is a state machine model enforcing the confidentiality aspects of access control. An access control matrix and security levels are used to determine if subjects can access different objects. The model uses subjects, objects, access operations (read, write and read/write) and security levels.

Bell-LaPadula: Is an information flow security model, which means that information does not flow to an object of lesser or noncomparable classification.

Two main rules:
– The simple security rule – A subject at a given security level cannot read data that resides at a higher security level. Referred to as the ”no read up” rule.
– *(star)-property – States that a subject at a given security level cannot write information to a lower security level. Referred to as the ”no write down” rule.

Defines a secure state as a secure computing environment and the allowed actions which are security-preserving operations.

Basic Security Theorem – If a system initializes in a secure state and all state transitions are secure, then every subsequent state will be secure no matter what inputs occur. The model provides confidentiality, but does not address the integrity of the data the system maintains.
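When security levels are modeled as integers (higher number = more sensitive), the two Bell-LaPadula rules reduce to two comparisons – a sketch, not a full implementation:

```python
# Levels as integers: 0 = unclassified ... 3 = top secret (illustrative scale).
def can_read(subject_level, object_level):
    # Simple security rule: no read up.
    return subject_level >= object_level

def can_write(subject_level, object_level):
    # *-property: no write down.
    return subject_level <= object_level
```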

Biba model: Is an information flow model, concerned with data flowing from one security level to another. Uses a state machine model. Addresses the integrity of data being threatened when subjects can read data at lower levels. Prevents data at a lower integrity level from flowing to a higher integrity level. Two main rules:
– ”No write up” – A subject cannot write data to an object at a higher integrity level.
– ”No read down” – A subject cannot read data from a lower integrity level.
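Modeling integrity levels as integers (higher number = more trustworthy), the two Biba rules invert the Bell-LaPadula comparisons – again a sketch only:

```python
# Integrity levels as integers: higher number = more trustworthy (illustrative).
def biba_can_read(subject_integrity, object_integrity):
    # No read down: a subject may not read lower-integrity data.
    return subject_integrity <= object_integrity

def biba_can_write(subject_integrity, object_integrity):
    # No write up: a subject may not write to higher-integrity data.
    return subject_integrity >= object_integrity
```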

Clark-Wilson model: Protects the integrity of information by focusing on preventing authorized users from making unauthorized modifications of data, fraud, and errors within commercial applications. Users cannot access and manipulate objects directly, but must access each object through a program. Also uses separation of duties, which divides an operation into different parts and requires different users to perform each part. This prevents an authorized user from making unauthorized modifications to data, which in turn protects its integrity. Auditing is also required to track the information coming in from outside the system.
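A toy Python sketch of the Clark-Wilson idea (ledger, transaction and integrity rule are all invented for illustration): the transaction program is the only path to the data, it enforces an integrity rule, and every action is audited.

```python
# Constrained data item and audit log (illustrative names).
LEDGER = {"balance": 100}
AUDIT_LOG = []

def transfer(user, amount):
    # The well-formed transaction is the ONLY way to modify the ledger;
    # it enforces the integrity rule (no negative balance) and logs everything.
    if LEDGER["balance"] - amount < 0:
        AUDIT_LOG.append((user, amount, "rejected"))
        return False
    LEDGER["balance"] -= amount
    AUDIT_LOG.append((user, amount, "committed"))
    return True
```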

Information flow model: Can deal with any kind of information flow, not only the direction of the flow. Looks at insecure information flow that can happen at the same level and between objects, along with the flow between different levels. A system is secure if no illegal information flow is permitted.

Noninterference model: Ensures that any actions that take place at a higher security level do not affect, or interfere with, actions that take place at a lower level.

Security Modes of Operation

Dedicated Security Mode: If all users have the clearance or authorization and need-to-know to all data processed within the system. All users have been given formal access approval for all information on the system and have signed nondisclosure agreements pertaining to this information. The system can handle a single classification level of information.

System-High Security Mode: All users have a security clearance or authorization to access the information, but not necessarily a need-to-know for all the information processed on the system (only some of the data). Requires all users to have the highest level of clearance, but a user is restricted via the access control matrix.

Compartmented Security Mode: All users have the clearance to access all the information processed by the system, but might not have the need-to-know and formal access approval. Users are restricted to being able to access some information because they do not need to access it to perform the functions of their jobs and they have not been given formal approval to access this data. Compartments are security levels with limited number of subjects cleared to access data at each level.

CMW / Compartmented Mode Workstations – Enable users to process multiple compartments of data at the same time, if they have the necessary clearance.

Multilevel Security Mode: Permits two or more classification levels of information to be processed at the same time when all the users do not have the clearance or formal approval to access all the information being processed by the system.

Trust and Assurance:

Trust – Tells the customer how much can be expected of the system and what level of security it will provide.
Assurance – The system will act in a correct and predictable manner in each and every computing situation.

System Evaluation Methods

Examines the security-relevant parts of a system, meaning the TCB, access control mechanisms, reference monitor, kernel, protection mechanisms.

The Orange Book / TCSEC: TCSEC – Trusted Computer System Evaluation Criteria. Evaluates products to assess if they contain the security properties they claim and evaluate if the product is appropriate for a specific application or function. Looks at the functionality, effectiveness and assurance of a system during its evaluation and it uses classes that were devised to address typical patterns of security requirements. Focuses on the operating system. Hierarchical division of security levels –
A – Verified protection
B – Mandatory protection
C – Discretionary protection
D – Minimal security

Topics – Security policy, accountability, assurance and documentation


Security policy – Must be explicit and well defined and enforced by the mechanisms within the system.

Identification – Individual subjects must be uniquely identified.

Labels – Access control labels must be associated properly with objects.

Documentation – Includes test, design, specification documents, user guides and manuals.

Accountability – Audit data must be captured and protected to enforce accountability.

Life cycle assurance – Software, hardware and firmware must be able to be tested individually to ensure that each enforces the security policy in an effective manner throughout its lifetime.

Continuous protection – The security mechanisms and the system as a whole must perform predictably and acceptably in different situations continuously.

Evaluation levels –
D – Minimal Protection
C1 – Discretionary Security Protection
C2 – Controlled Access Protection
B1 – Labeled Security
B2 – Structured Protection
B3 – Security Domains
A1 – Verified Design

The Red Book / TNI: TNI – Trusted Network Interpretation. Addresses security evaluation topics for networks and network components. It addresses isolated local area networks and wide area internetwork systems.

Security items addressed:
* Communication integrity
— Authentication
— Message integrity
— Nonrepudiation
* Denial of service prevention
— Continuity of operations
— Network management
* Compromise protection
— Data confidentiality
— Traffic flow confidentiality
— Selective routing

Ratings –
– None
– C1 – Minimum
– C2 – Fair
– B2 – Good

ITSEC: ITSEC – Information Technology Security Evaluation Criteria. Used only in Europe. Two main attributes – functionality and assurance. Is a set of criteria for both security products and security systems, and refers to both as the target of evaluation (TOE).

Common Criteria: Is an international evaluation standard.

EAL – Evaluation assurance level.

Protection profile – The set of security requirements, their meaning and reasoning and the corresponding EAL rating.

Two main attributes – Functionality and Assurance. Five sections of the protection profile:

 – Descriptive elements

 – Rationale

 – Functional requirements

 – Development assurance requirements

 – Evaluation assurance requirements

Certification <-> Accreditation
Certification: Is the technical evaluation of the security components and their compliance with the security requirements, performed for the purpose of accreditation. Is the process of assessing the security mechanisms and controls and evaluating their effectiveness.

Accreditation: Is the formal acceptance of the adequacy of a system’s overall security by the management. Is management’s official acceptance of the information in the certification process findings.

Open Systems <-> Closed Systems

Open Systems: Have an architecture that has published specifications, which enables third-party vendors to develop add-on components and devices. Provides interoperability between products by different vendors of different operating systems, applications and hardware devices.

Closed Systems: Use an architecture that does not follow industry standards. Interoperability and standard interfaces are not employed to enable easy communication between different types of systems and add-on features. Are proprietary, meaning that the system can only communicate with like systems.

Threats to Security Models and Architectures

Covert Channels: Is a way for an entity to receive information in an unauthorized manner. It is an information flow that is not controlled by a security mechanism.

Covert timing channel – One process relays information to another by modulating its use of system resources.

Covert storage channel – When a process writes data to a storage location and another process directly or indirectly reads it. The problem occurs when the processes are at different security levels, and therefore not supposed to be sharing sensitive data.
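A toy Python sketch of a covert storage channel (a shared flag stands in for the storage location): the high-level process modulates state that the low-level process can observe, leaking one bit at a time outside any security mechanism.

```python
# A shared storage location both processes can see (simulated with a dict;
# in practice this might be a file's existence, a lock, or disk quota).
shared_flag = {"exists": False}

def high_send(bit):
    # High-level process encodes one bit by setting observable state.
    shared_flag["exists"] = bool(bit)

def low_receive():
    # Low-level process reads the bit back by observing that state.
    return 1 if shared_flag["exists"] else 0
```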

 – Countermeasures: There is not much a user can do to counter these channels. For Trojan horses that use HTTP, intrusion detection and auditing may detect a covert channel.

Back Doors: Also called maintenance hooks. Are instructions within software that only the developer knows about and can invoke.

 – Countermeasures: Code reviews and unit and integration testing should always be looking out for back doors.

Preventative measures against back doors:
– Host intrusion detection system
– File system permissions to protect configuration files and sensitive information from being modified
– Strict access control
– File system encryption


Timing Issues: Also called an asynchronous attack. Deals with the timing differences in the sequence of steps a system uses to complete a task. A time-of-check versus time-of-use (TOC/TOU) attack, also called a race condition, exploits the gap between when a system checks a resource and when it uses it – for example, replacing a file such as autoexec.bat between the check and the use.
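A Python sketch of the TOC/TOU gap (function names are illustrative): the unsafe pattern checks and then uses the file in two steps, leaving a window for an attacker to swap the file, while the safer pattern collapses check and use into a single operation.

```python
import os

def unsafe_read(path):
    # Time of check: test readability first...
    if os.access(path, os.R_OK):
        # ...an attacker could replace `path` in this window...
        with open(path) as f:   # time of use
            return f.read()
    return None

def safer_read(path):
    # Check and use collapse into one step: just open, and handle failure.
    try:
        with open(path) as f:
            return f.read()
    except OSError:
        return None
```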

 – Countermeasures:
– Host intrusion detection system
– File system permissions and encryption
– Strict access control measures


Buffer Overflows: Sometimes referred to as ”smashing the stack”. Occur when programs do not check the length of data that is input into a program before it is processed by the CPU.
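The missing length check can be sketched in Python (a language that, unlike C, cannot actually overflow a stack buffer – the point here is the check itself, with an invented buffer size):

```python
# Fixed-size buffer: input is rejected if it does not fit (illustrative).
BUFFER_SIZE = 16

def copy_to_buffer(data: bytes) -> bytes:
    # The bounds check that vulnerable C code omits: reject oversized input
    # instead of letting it spill past the end of the buffer.
    if len(data) > BUFFER_SIZE:
        raise ValueError("input exceeds buffer size")
    return data.ljust(BUFFER_SIZE, b"\x00")   # pad the rest with zeros
```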

 – Countermeasures:
– Proper programming and good coding practices
– Host intrusion detection system
– File system permissions and encryption
– Strict access control