Certificates that provide SSL/TLS encryption capability:
are similar to the unencrypted data.
can be purchased from certificate authorities.
are for data located on thumb drives.
can provide authorization of data access.
SSL/TLS relies on digital certificates to support encrypted communications and to help users trust that they are connecting to the correct server. A TLS certificate is typically an X.509 certificate that binds a public key to an identity, such as a domain name, and is digitally signed by a trusted issuer. In most public internet use cases, these certificates are issued by Certificate Authorities that browsers and operating systems already trust through pre-installed root certificates. Because of that trust chain, organizations commonly purchase or otherwise obtain certificates from certificate authorities, which is why option B is correct.
During the TLS handshake, the server presents its certificate to the client. The client validates the certificate’s signature chain and validity period, and confirms that the certificate matches the domain being accessed. Once validation succeeds, TLS establishes session keys used to encrypt data in transit and protect it from eavesdropping and tampering. Certificates themselves are not “similar to unencrypted data,” and they are not specific to thumb-drive storage; they are used to secure network communications. Certificates also do not primarily provide “authorization” to access data. Authorization is typically enforced by application and access control mechanisms after authentication. Certificates support authentication of endpoints and enable secure key exchange, which are prerequisites for secure transport encryption and trustworthy connections.
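Two of the client-side checks described above, the validity period and the domain match, can be sketched locally. This is a minimal illustration under assumptions, not a full implementation of chain verification; the sample certificate dict is hypothetical and merely mimics the shape Python's ssl module returns from getpeercert().

```python
# Sketch: two client-side certificate checks (validity window, domain
# match). SAMPLE_CERT is a hypothetical stand-in for a parsed certificate.
from datetime import datetime, timezone

SAMPLE_CERT = {
    "subject": ((("commonName", "example.com"),),),
    "subjectAltName": (("DNS", "example.com"), ("DNS", "www.example.com")),
    "notBefore": "Jan  1 00:00:00 2024 GMT",
    "notAfter": "Jan  1 00:00:00 2030 GMT",
}

def in_validity_period(cert: dict, now: datetime) -> bool:
    """Check that 'now' falls inside the certificate's validity window."""
    fmt = "%b %d %H:%M:%S %Y %Z"
    not_before = datetime.strptime(cert["notBefore"], fmt).replace(tzinfo=timezone.utc)
    not_after = datetime.strptime(cert["notAfter"], fmt).replace(tzinfo=timezone.utc)
    return not_before <= now <= not_after

def matches_domain(cert: dict, hostname: str) -> bool:
    """Check that the requested hostname appears in the subjectAltName list."""
    names = [value for kind, value in cert.get("subjectAltName", ()) if kind == "DNS"]
    return hostname in names

now = datetime(2026, 2, 22, tzinfo=timezone.utc)
print(in_validity_period(SAMPLE_CERT, now))        # True
print(matches_domain(SAMPLE_CERT, "example.com"))  # True
```

In real deployments these checks, plus signature-chain verification against trusted roots, happen automatically inside the TLS library during the handshake.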
Which of the following should be addressed by functional security requirements?
System reliability
User privileges
Identified vulnerabilities
Performance and stability
Functional security requirements define what security capabilities a system must provide to protect information and enforce policy. They describe required security functions such as identification and authentication, authorization, role-based access control, privilege management, session handling, auditing/logging, segregation of duties, and account lifecycle processes. Because of this, user privileges are a direct and core concern of functional security requirements: the system must support controlling who can access what, under which conditions, and with what level of permission.
In cybersecurity requirement documentation, “privileges” include permission assignment (roles, groups, entitlements), enforcement of least privilege, privileged access restrictions, elevation workflows, administrative boundaries, and the ability to review and revoke permissions. These are functional because they require specific system behaviors and features: for example, the ability to define roles, prevent unauthorized actions, log privileged activities, and enforce timeouts or re-authentication for sensitive operations.
The other options are typically classified differently. System reliability and performance/stability are generally non-functional requirements (quality attributes) describing service levels, resilience, and operational characteristics rather than security functions. Identified vulnerabilities are findings from assessments that drive remediation work and risk treatment; they inform security improvements but are not themselves functional requirements. Therefore, the option best aligned with functional security requirements is user privileges.
How is a risk score calculated?
Based on the confidentiality, integrity, and availability characteristics of the system
Based on the combination of probability and impact
Based on past experience regarding the risk
Based on an assessment of threats by the cyber security team
A risk score is commonly calculated by combining two core factors: how likely a risk scenario is to occur and how severe the consequences would be if it did occur. This is often described in cybersecurity risk documentation as likelihood times impact, or as a structured mapping using a risk matrix. Probability or likelihood reflects the chance that a threat event will exploit a vulnerability under current conditions. It may consider elements such as threat activity, exposure, ease of exploitation, control strength, and historical incident patterns. Impact reflects the magnitude of harm to the organization, usually measured across business disruption, financial loss, legal or regulatory exposure, reputational damage, and harm to confidentiality, integrity, or availability.
While confidentiality, integrity, and availability are essential for understanding what matters and can influence impact ratings, they are typically inputs into impact determination rather than the full scoring method by themselves. Past experience and expert threat assessment can inform likelihood estimates, but they are not the standard calculation model on their own. The key concept is that risk must reflect both chance and consequence; a highly impactful event with very low likelihood may be scored similarly to a moderate-impact event with high likelihood, depending on the organization’s methodology.
Therefore, the most accurate description of how a risk score is calculated is the combination of probability and impact, enabling prioritization and consistent risk treatment decisions.
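The likelihood-times-impact calculation can be sketched as a simple qualitative risk matrix. The 1–5 scales and the rating band thresholds below are illustrative assumptions, not values from the original text.

```python
# Minimal sketch of a qualitative risk matrix: score = likelihood x impact
# on 1-5 scales, then mapped to an illustrative rating band.
def risk_score(likelihood: int, impact: int) -> int:
    """Combine probability and impact into a single score (1-25)."""
    assert 1 <= likelihood <= 5 and 1 <= impact <= 5
    return likelihood * impact

def risk_rating(score: int) -> str:
    """Map a score to a band; thresholds are assumed for illustration."""
    if score >= 15:
        return "High"
    if score >= 8:
        return "Medium"
    return "Low"

# A low-likelihood, high-impact event can land in a similar band to a
# high-likelihood, moderate-impact one, as the text notes.
print(risk_rating(risk_score(1, 5)))  # Low (score 5)
print(risk_rating(risk_score(4, 3)))  # Medium (score 12)
print(risk_rating(risk_score(5, 4)))  # High (score 20)
```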
ITIL (Information Technology Infrastructure Library) defines:
a standard of best practices for IT Service Management.
how technology and hardware systems interface securely with one another.
the standard set of components used in every business technology system.
a set of security requirements that every business technology system must meet.
ITIL is a widely adopted framework that defines best-practice guidance for IT Service Management. Its focus is on how organizations design, deliver, operate, and continually improve IT services so they reliably support business outcomes. In cybersecurity and service delivery documentation, ITIL is often referenced because strong service management processes are foundational to secure operations. For example, ITIL practices such as incident management, problem management, change enablement, configuration management, and service continuity help ensure security controls are implemented consistently and that deviations are identified, tracked, and corrected.
ITIL does not define how hardware systems interface securely with one another; that is more aligned with architecture standards, security engineering, and network or platform design frameworks. It also does not prescribe a universal set of components for every technology system; that belongs to reference architectures and enterprise architecture standards. Likewise, ITIL is not primarily a security requirements standard. While ITIL supports security governance through practices like risk management, access management, and information security management integration, it does not itself serve as a mandatory security control catalog.
From a cybersecurity perspective, ITIL contributes by promoting repeatable processes, clear roles and responsibilities, measurable service levels, and continual improvement. These elements reduce operational risk, improve response effectiveness, and strengthen accountability—key requirements for maintaining confidentiality, integrity, and availability in production environments.
What common mitigation tool is used for directly handling or treating cyber risks?
Exit Strategy
Standards
Control
Business Continuity Plan
In cybersecurity risk management, risk treatment is the set of actions used to reduce risk to an acceptable level. The most common tool used to directly treat or mitigate cyber risk is a control, because controls are the specific safeguards that prevent, detect, or correct adverse events. Cybersecurity frameworks describe controls as measures implemented to reduce either the likelihood of a threat event occurring or the impact if it does occur. Controls can be technical (such as multifactor authentication, encryption, endpoint protection, network segmentation, logging and monitoring), administrative (policies, standards, training, access approvals, change management), or physical (badges, locks, facility protections). Regardless of type, controls are the direct mechanism used to mitigate identified risks.
An exit strategy is typically a vendor or outsourcing risk management concept focused on how to transition away from a provider or system; it supports resilience but is not the primary tool for directly mitigating a specific cyber risk. Standards guide consistency by defining required practices and configurations, but the standard itself is not the mitigation; the controls implemented to meet the standard are. A business continuity plan supports availability and recovery after disruption, which is important, but it primarily addresses continuity and recovery rather than directly reducing the underlying cybersecurity risk in normal operations. Therefore, the best answer is the one that represents the direct implementation of safeguards: controls.
Separation of duties, as a security principle, is intended to:
optimize security application performance.
ensure that all security systems are integrated.
balance user workload.
prevent fraud and error.
Separation of duties is a foundational access-control and governance principle designed to reduce the likelihood of misuse, fraud, and significant mistakes by ensuring that no single individual can complete a critical process end-to-end without independent oversight. Cybersecurity and audit frameworks describe this as splitting high-risk activities into distinct roles so that one person’s actions are checked or complemented by another person’s authority. This limits both intentional abuse, such as unauthorized payments or data manipulation, and unintentional errors, such as misconfigurations or accidental deletion of important records.
In practice, separation of duties is implemented by defining roles and permissions so that incompatible functions are not assigned to the same account. Common examples include separating the ability to create a vendor from the ability to approve payments, separating software development from production deployment, and separating system administration from security monitoring or audit log management. This is reinforced through role-based access control, approval workflows, privileged access management, and periodic access reviews that detect conflicting entitlements and privilege creep.
The value of separation of duties is risk reduction through accountability and control. When actions require multiple parties or independent review, it becomes harder for a single compromised account or malicious insider to cause large harm without detection. It also improves reliability by introducing checkpoints that catch mistakes earlier. Therefore, the correct purpose is to prevent fraud and error.
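The access-review step described above, detecting conflicting entitlements on a single account, can be sketched as a simple conflict check. The conflicting permission pairs mirror the examples in the text (vendor creation vs. payment approval, development vs. production deployment); the permission names themselves are hypothetical.

```python
# Sketch: detecting separation-of-duties conflicts in one account's
# permission set. Conflict pairs and permission names are illustrative.
SOD_CONFLICTS = [
    ("vendor:create", "payment:approve"),
    ("code:develop", "prod:deploy"),
]

def sod_violations(user_permissions: set[str]) -> list[tuple[str, str]]:
    """Return every conflicting pair held by the same account."""
    return [pair for pair in SOD_CONFLICTS
            if pair[0] in user_permissions and pair[1] in user_permissions]

# One account holding both sides of a conflict is flagged for review.
perms = {"vendor:create", "payment:approve", "invoice:enter"}
print(sod_violations(perms))  # [('vendor:create', 'payment:approve')]
```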
Information classification of data is a level of protection that is based on an organization's:
retention for auditing purposes.
need for access by employees.
timing of availability for automated systems.
risk to loss or harm from disclosure.
Information classification is the practice of assigning data a sensitivity level so the organization can apply protections that match the business impact if the information is exposed, altered, or becomes unavailable. The core driver for classification is the risk of harm, especially harm caused by unauthorized disclosure. If disclosure would result in regulatory penalties, reputational damage, competitive disadvantage, contractual breach, or harm to customers and employees, the data is classified at a higher level and requires stronger controls. These controls commonly include tighter access restrictions (least privilege and role-based access), stronger authentication, encryption at rest and in transit, stricter handling and sharing rules, audit logging, monitoring, and secure disposal requirements.
While retention can be influenced by compliance obligations, it is not what determines the classification level; retention policies typically reference classification but do not define it. “Need for access” is managed through access control decisions, which are applied after the data’s sensitivity is understood; classification informs who should have access, not the other way around. “Timing of availability” relates to availability requirements and service resilience, which are important, but classification schemes primarily focus on sensitivity and potential damage from inappropriate exposure, with integrity and availability considerations often handled as additional impact dimensions.
Therefore, the basis for information classification is the organization’s assessment of the risk of loss or harm from disclosure.
Why would a Business Analyst include current technology when documenting the current state business processes surrounding a solution being replaced?
To ensure the future state business processes are included in user training
To identify potential security impacts to integrated systems within the value chain
To identify and meet internal security governance requirements
To classify the data elements so that information confidentiality, integrity, and availability are protected
A Business Analyst documents current technology in the “as-is” state because business processes are rarely isolated; they depend on applications, interfaces, data exchanges, identity services, and shared infrastructure. From a cybersecurity perspective, replacing one solution can unintentionally change trust boundaries, authentication flows, authorization decisions, logging coverage, and data movement across integrated systems. Option B is correct because understanding the current technology landscape helps identify where security impacts may occur across the value chain, including upstream data providers, downstream consumers, third-party services, and internal platforms that rely on the existing system.
Cybersecurity documents emphasize that integration points are common attack surfaces. APIs, file transfers, message queues, single sign-on, batch jobs, and shared databases can introduce risks such as broken access control, insecure data transmission, data leakage, privilege escalation, and gaps in monitoring. If the BA captures current integrations, dependencies, and data flows, the delivery team can properly perform threat modeling, define security requirements, and avoid breaking compensating controls that other systems depend on. This also supports planning for secure decommissioning, migration, and cutover, ensuring credentials, keys, service accounts, and network paths are rotated or removed appropriately.
The other options are less precise for the question. Training is not the core driver for documenting current technology. Governance requirements apply broadly but do not explain why current tech must be included. Data classification is important, but it is a separate activity from capturing technology dependencies needed to assess integration security impacts.
What business analysis deliverable would be an essential input when designing an audit log report?
Access Control Requirements
Risk Log
Future State Business Process
Internal Audit Report
Designing an audit log report requires clarity on who is allowed to do what, which actions are considered security-relevant, and what evidence must be captured to demonstrate accountability. Access Control Requirements are the essential business analysis deliverable because they define roles, permissions, segregation of duties, privileged functions, approval workflows, and the conditions under which access is granted or denied. From these requirements, the logging design can specify exactly which events must be recorded, such as authentication attempts, authorization decisions, privilege elevation, administrative changes, access to sensitive records, data exports, configuration changes, and failed access attempts. They also help determine how logs should attribute actions to unique identities, including service accounts and delegated administration, which is critical for auditability and non-repudiation.
Access control requirements also drive necessary log fields and report structure: user or role, timestamp, source, target object, action, outcome, and reason codes for denials or policy exceptions. Without these requirements, an audit log report can become either too sparse to support investigations and compliance, or too noisy to be operationally useful.
A risk log can influence priorities, but it does not define the authoritative set of access events and entitlements that must be auditable. A future state process can provide context, yet it is not as precise as access rules for determining what to log. An internal audit report may highlight gaps, but it is not the primary design input compared to formal access control requirements.
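The log fields that access control requirements drive (user or role, timestamp, source, target object, action, outcome, reason code) can be sketched as a record structure. All field names and the sample values below are illustrative assumptions.

```python
# Sketch: an audit log record shaped by the fields access control
# requirements call for. Field names and sample values are hypothetical.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditEvent:
    timestamp: str      # ISO 8601, UTC
    actor: str          # unique user or service identity
    role: str           # role under which the action was attempted
    source: str         # originating host or IP
    target: str         # object acted upon
    action: str         # e.g. "read", "update", "export"
    outcome: str        # "allowed" or "denied"
    reason: str = ""    # reason code for denials or policy exceptions

event = AuditEvent(
    timestamp=datetime(2026, 2, 22, 12, 0, tzinfo=timezone.utc).isoformat(),
    actor="svc-reporting",
    role="ReportViewer",
    source="10.0.4.17",
    target="customer_records",
    action="export",
    outcome="denied",
    reason="EXPORT_NOT_PERMITTED_FOR_ROLE",
)
print(asdict(event)["outcome"])  # denied
```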
Controls that are put in place to address specific risks may include:
only initial reviews.
technology or process solutions.
partial coverage of one or more risks.
coverage for partial extent and scope of the risk.
Cybersecurity controls are the safeguards an organization implements to reduce risk to an acceptable level. In standard risk-management language, a control is not limited to a one-time review; it is an ongoing capability that is designed, implemented, and operated to prevent, detect, or correct unwanted events. That capability is typically delivered through technology solutions (technical controls) and process solutions (administrative or procedural controls), which is why option B is correct.
Technology controls include items like firewalls, endpoint protection, encryption, multifactor authentication, logging and monitoring, vulnerability scanning, secure configuration baselines, and data-loss prevention. These controls directly enforce security requirements through system behavior and automation, helping reduce the likelihood or impact of threats.
Process controls include policies, standards, access approval workflows, segregation of duties, change management, secure development practices, incident response playbooks, training, and periodic access recertification. These ensure people consistently perform security-critical tasks correctly and create accountability and repeatability.
Options C and D describe possible outcomes or limitations (controls may not fully eliminate risk and may only mitigate part of it), but they are not what controls include. Option A is incorrect because “only initial reviews” are insufficient; reviews can be a component of a control, but effective controls require sustained operation, evidence, and reassessment as systems, threats, and business needs change.
Which of the following should be addressed in the organization's risk management strategy?
Acceptable risk management methodologies
Controls for each IT asset
Processes for responding to a security breach
Assignment of an executive responsible for risk management across the organization
An organization’s risk management strategy is a governance-level artifact that sets direction for how risk is managed across the enterprise. A core requirement in cybersecurity governance frameworks is clear accountability, including executive ownership for risk decisions that affect the whole organization. Assigning an executive responsible for risk management establishes authority to set risk appetite and tolerance, coordinate risk activities across business units, resolve conflicts between competing priorities, and ensure risk decisions are made consistently rather than in isolated silos. This executive role also supports oversight of risk reporting to senior leadership, ensures resources are allocated to address material risks, and drives integration between cybersecurity, privacy, compliance, and operational resilience programs. Without an accountable executive function, risk management often becomes fragmented, with inconsistent scoring, uneven control implementation, and unclear decision rights for accepting or treating risk.
Option A can be part of a strategy, but the question asks what should be addressed, and the most critical foundational element is enterprise accountability and governance. Option B is too granular for a strategy; selecting controls for each IT asset belongs in security architecture, control baselines, and system-level risk assessments. Option C is typically handled in incident response and breach management plans and procedures, which are operational documents derived from strategy but not the strategy itself. Therefore, the best answer is the assignment of an executive responsible for risk management across the organization.
A significant benefit of role-based access is that it:
simplifies the assignment of correct access levels to a user based on the work they will perform.
makes it easier to audit and verify data access.
ensures that employee accounts will be shut down on departure or role change.
ensures that tasks and associated privileges for a specific business process are disseminated among multiple users.
Role-based access control assigns permissions to defined roles that reflect job functions, and users receive access by being placed into the appropriate role. The major operational and security benefit is that it simplifies and standardizes access provisioning. Instead of granting permissions individually to each user, administrators manage a smaller, controlled set of roles such as Accounts Payable Clerk, HR Specialist, or Application Administrator. When a new employee joins or changes responsibilities, access can be adjusted quickly and consistently by changing role membership. This reduces manual errors, limits over-provisioning, and helps enforce least privilege because each role is designed to include only the permissions required for that function.
RBAC also improves governance by making access decisions more repeatable and policy-driven. Security and compliance teams can review roles, validate that each role’s permissions match business needs, and require approvals for changes to role definitions. This approach supports segregation of duties by separating conflicting capabilities into different roles, which lowers fraud and misuse risk.
Option B is a real advantage of RBAC, but it is typically a secondary outcome of having structured roles rather than the primary “significant benefit” emphasized in access-control design. Option C relates to identity lifecycle processes such as deprovisioning, which can be integrated with RBAC but is not guaranteed by RBAC alone. Option D describes distributing tasks among multiple users, which is more aligned with segregation of duties design, not the core benefit of RBAC.
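The permissions-attach-to-roles, users-attach-to-roles model described above can be sketched in a few lines. The role and permission names are hypothetical examples in the spirit of the text.

```python
# Minimal RBAC sketch: permissions belong to roles, users get roles,
# and an access check resolves through role membership.
ROLE_PERMISSIONS = {
    "ap_clerk": {"vendor:create", "invoice:enter"},
    "ap_approver": {"payment:approve"},
    "app_admin": {"user:manage", "config:edit"},
}
USER_ROLES = {
    "alice": {"ap_clerk"},
    "bob": {"ap_approver"},
}

def is_allowed(user: str, permission: str) -> bool:
    """Grant access only if some role held by the user carries the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

# Role design also supports segregation of duties: alice can create
# vendors but cannot approve payments.
print(is_allowed("alice", "vendor:create"))    # True
print(is_allowed("alice", "payment:approve"))  # False
```

Changing a user's access is then a role-membership change rather than per-permission edits, which is the provisioning simplification the answer highlights.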
Which of the following control methods is used to protect integrity?
Principle of Least Privilege
Biometric Verification
Anti-Malicious Code Detection
Backups and Redundancy
Integrity means information and systems remain accurate, complete, and protected from unauthorized or improper modification. The Principle of Least Privilege is a direct integrity protection control because it limits who can change data and what changes they are allowed to make. Under least privilege, users, applications, and service accounts receive only the minimum permissions needed to perform approved tasks, and nothing more. This reduces the chance that an attacker using a compromised account can alter records, manipulate transactions, or change configurations, and it also reduces accidental changes by well-meaning users who do not need write or administrative rights.
Least privilege is commonly enforced through role-based access control, separation of duties, restricted administrative roles, just-in-time elevation for privileged tasks, and periodic access reviews to remove excess permissions. These practices are emphasized in cybersecurity frameworks because integrity failures often occur when excessive access allows unauthorized edits to sensitive data, logs, security settings, or application code.
The other options relate to security but are less directly tied to integrity as the primary objective. Biometric verification is an authentication method that helps confirm identity; it supports access control broadly, but it does not by itself limit modification capability once access is granted. Anti-malicious code detection helps prevent malware that could corrupt data, but it is primarily a detection/prevention tool rather than the foundational control for authorized modification. Backups and redundancy primarily support availability and recovery after corruption, not the prevention of unauthorized changes.
Which organizational resource category is known as "the first and last line of defense" from an attack?
Firewalls
Employees
Endpoint Devices
Classified Data
In cybersecurity guidance, employees are often described as the first and last line of defense because human actions influence nearly every stage of an attack. They are the first line since many threats begin with user interaction: phishing emails, malicious links, social engineering calls, unsafe file handling, weak passwords, and accidental disclosure of sensitive information. A well-trained user who recognizes suspicious requests, verifies identities, and reports anomalies can stop an incident before any technical control is even engaged.
Employees are also the last line because technical protections such as firewalls, filters, and endpoint tools are not perfect. Attackers routinely bypass or evade automated defenses using stolen credentials, living-off-the-land techniques, misconfigurations, or novel malware. When those controls fail, the organization still depends on people to apply secure behaviors: following least privilege, protecting credentials, using multifactor authentication correctly, confirming out-of-band requests for payments or data, and escalating unusual activity quickly. Incident response, containment, and recovery also depend on humans making correct decisions under pressure, following documented procedures, and communicating accurately.
Cybersecurity documents emphasize that a strong security culture, regular awareness training, role-based education, clear reporting channels, and consistent policy enforcement reduce human-enabled risk and turn employees into an effective security control rather than a vulnerability.
What privacy legislation governs the use of healthcare data in the United States?
Privacy Act
PIPEDA
HIPAA
PCI-DSS
In the United States, HIPAA, the Health Insurance Portability and Accountability Act, is the primary federal framework that governs how certain healthcare information must be protected and used. In cybersecurity and compliance documentation, HIPAA is most often discussed through its implementing rules, especially the Privacy Rule and the Security Rule. The Privacy Rule establishes when protected health information may be used or disclosed and grants individuals rights over their health information. The Security Rule focuses specifically on safeguarding electronic protected health information by requiring administrative, physical, and technical safeguards.
From a security controls perspective, HIPAA-driven programs typically include risk analysis and risk management, policies and workforce training, access controls based on least privilege, unique user identification, authentication controls, audit logging, integrity protections, transmission security such as encryption for data in transit, and contingency planning such as backups and disaster recovery. HIPAA also expects organizations to manage third-party risk through appropriate agreements and oversight when vendors handle protected health information.
The other options do not fit the question. The Privacy Act generally applies to U.S. federal agencies’ handling of personal records, PIPEDA is a Canadian privacy law, and PCI-DSS is an industry security standard focused on payment card data rather than healthcare data. Therefore, HIPAA is the correct legislation for U.S. healthcare data protection requirements.
What is the "impact" in the context of cybersecurity risk?
The potential for violation of privacy laws and regulations from a cybersecurity breach
The financial costs to the organization resulting from a breach
The probability that a breach will occur within a given period of time
The magnitude of harm that can be expected from unauthorized information use
In cybersecurity risk management, impact refers to the severity of adverse consequences if a threat event occurs and successfully affects information or systems. It is the “so what” of a risk scenario: how much damage the organization, its customers, or other stakeholders could experience when confidentiality, integrity, or availability is compromised. Impact commonly includes multiple dimensions such as operational disruption, loss of critical services, harm to customers, legal or regulatory exposure, reputational damage, and direct and indirect financial loss. Because these consequences can extend beyond money, impact is broader than just costs and also includes mission failure, safety implications, loss of competitive advantage, and degradation of trust.
Option D captures this correctly by describing impact as the magnitude of harm expected from unauthorized use of information. Option C describes likelihood, not impact, because it focuses on probability over time. Option B is only one component of impact, since financial cost is important but does not fully represent business, legal, and operational consequences. Option A is also a possible consequence but is narrower than the full impact concept. Cybersecurity risk scoring typically combines likelihood and impact to prioritize treatment, ensuring high-impact scenarios receive attention even when probabilities vary.
What is a Recovery Point Objective (RPO)?
The point in time prior to the outage to which business and process data must be recovered
The maximum time a system may be out of service before a significant business impact occurs
The target time to restore a system without experiencing any significant business impact
The target time to restore systems to operational status following an outage
A Recovery Point Objective defines the acceptable amount of data loss, measured in time. It answers the question: “After an outage or disruptive event, how far back in time can we restore data and still meet business needs?” If the RPO is 4 hours, the organization is stating it can tolerate losing up to 4 hours of data changes, meaning backups, replication, journaling, or snapshots must be frequent enough to restore to a point no older than 4 hours before the incident. That is exactly what option A describes: the specific point in time prior to the outage to which data must be recovered.
RPO is often paired with Recovery Time Objective, but they are not the same. RTO focuses on how quickly service must be restored, while RPO focuses on how much data the organization can afford to lose. Options B, C, and D all describe time-to-restore concepts, which align with RTO or related recovery targets rather than RPO.
In operational resilience and disaster recovery planning, RPO drives technical design choices: backup frequency, replication methods, storage and retention strategies, and validation testing. Lower RPO values generally require more robust and often more expensive solutions, such as near-real-time replication and strong change capture controls. RPO also influences incident response and recovery procedures to ensure restoration steps reliably meet the agreed data-loss tolerance.
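The 4-hour RPO example can be expressed as a simple check: the gap between the last restorable copy and the moment of the incident must not exceed the RPO. The timestamps below are illustrative.

```python
# Sketch: does the most recent restorable copy satisfy a 4-hour RPO?
# RPO bounds data loss (time gap), not time-to-restore (that is RTO).
from datetime import datetime, timedelta, timezone

RPO = timedelta(hours=4)

def meets_rpo(last_backup: datetime, incident_time: datetime) -> bool:
    """True if restoring to last_backup loses no more data than RPO allows."""
    return incident_time - last_backup <= RPO

incident = datetime(2026, 2, 22, 14, 0, tzinfo=timezone.utc)
print(meets_rpo(incident - timedelta(hours=3), incident))  # True
print(meets_rpo(incident - timedelta(hours=6), incident))  # False: up to 6h of changes lost
```

Lowering the RPO tightens this window, which is why lower RPO values demand more frequent backups or near-real-time replication.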
Analyst B has discovered multiple attempts from unauthorized users to access confidential data. Who is most likely responsible?
Admin
Hacker
User
IT Support
Multiple attempts by unauthorized users to access confidential data most closely align with activity from a hacker, meaning an unauthorized actor attempting to gain access to systems or information. Cybersecurity operations commonly observe this pattern as repeated login failures, password spraying, credential stuffing, brute-force attempts, repeated probing of restricted endpoints, or abnormal access requests against protected repositories.

While “user” is too generic and could include authorized individuals, the question explicitly states “unauthorized users,” pointing to malicious or illegitimate actors. “Admin” and “IT Support” are roles typically associated with legitimate privileged access and operational troubleshooting; repeated unauthorized access attempts from those roles would be atypical and would still represent compromise or misuse rather than normal operations.

Cybersecurity documentation often classifies these attempts as indicators of malicious intent and potential precursor events to a breach. Controls recommended to counter such activity include strong authentication (multifactor authentication), account lockout and throttling policies, anomaly detection, IP reputation filtering, conditional access, least privilege, and monitoring of authentication logs for patterns across accounts and geographies. The key distinction is that repeated unauthorized attempts represent hostile behavior by an external or rogue actor, which is best described as a hacker among the provided options.
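The monitoring control mentioned above, watching authentication logs for repeated failures from one source, can be sketched as a simple counter. The sample log entries, IP addresses, and the alert threshold are illustrative assumptions.

```python
# Sketch: flagging repeated failed authentication attempts per source,
# a common indicator of the unauthorized-access pattern described above.
from collections import Counter

THRESHOLD = 5  # assumed number of failures before an alert

# Hypothetical (source_ip, attempted_account) pairs from an auth log.
failed_logins = [
    ("203.0.113.9", "admin"), ("203.0.113.9", "root"),
    ("203.0.113.9", "admin"), ("203.0.113.9", "alice"),
    ("203.0.113.9", "admin"), ("198.51.100.7", "bob"),
]

attempts_by_source = Counter(src for src, _ in failed_logins)
alerts = [src for src, count in attempts_by_source.items() if count >= THRESHOLD]
print(alerts)  # ['203.0.113.9']
```

Real deployments extend this with time windows, lockout policies, and cross-account pattern detection (e.g., password spraying hits many accounts once each).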
The hash function supports data in transit by ensuring:
validation that a message originated from a particular user.
a message was modified in transit.
a public key is transitioned into a private key.
encrypted messages are not shared with another party.
A cryptographic hash function supports data in transit primarily by providing integrity assurance. When a sender computes a hash (digest) of a message and the receiver recomputes the hash after receipt, the two digests should match if the message arrived unchanged. If the message is altered in any way while traveling across the network, whether by an attacker, a faulty intermediary device, or transmission errors, the recomputed digest will differ from the original. This difference is the key signal that the message was modified in transit, which is what option B expresses. In practical secure-transport designs, hashes are typically combined with a secret key or digital signature so an attacker cannot simply modify the message and generate a new valid digest. Examples include HMAC for message authentication and digital signatures that hash the content and then sign the hash with a private key. These mechanisms provide integrity and, when keyed or signed, also provide authentication and non-repudiation properties.
Option A is more specifically about authentication of origin, which requires a keyed construction such as HMAC or a signature scheme; a plain hash alone cannot prove who sent the message. Option C is incorrect because keys are not “converted” from public to private. Option D relates to confidentiality, which is provided by encryption, not hashing. Therefore, the best answer is B because hashing enables detection of message modification during transit.
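The keyed-hash mechanism the explanation names (HMAC) can be sketched with Python's standard library. The shared key and messages are illustrative; real systems would negotiate or provision the key securely.

```python
# Sketch: detecting in-transit modification with a keyed hash (HMAC),
# per the explanation above. Key and messages are hypothetical.
import hashlib
import hmac

KEY = b"shared-secret-key"  # assumed pre-shared key, for illustration only

def tag(message: bytes) -> bytes:
    """Sender computes an HMAC-SHA256 tag over the message."""
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, received_tag: bytes) -> bool:
    """Receiver recomputes the tag; compare_digest resists timing attacks."""
    return hmac.compare_digest(tag(message), received_tag)

msg = b"transfer $100 to account 42"
t = tag(msg)
print(verify(msg, t))                              # True: arrived unchanged
print(verify(b"transfer $9999 to account 42", t))  # False: modified in transit
```

Because the attacker lacks the key, they cannot recompute a valid tag after tampering, which is exactly what a plain (unkeyed) hash cannot guarantee.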
TESTED 22 Feb 2026