Why the Spiral Model Puts Risk at the Center — Not the Margins
Most software development methodologies treat risk as something you manage alongside the project. The spiral model treats risk as something you build the project around. That’s not a subtle distinction. It’s the fundamental design choice that makes the spiral model different from every other lifecycle model in common use.
Barry Boehm introduced the spiral model in a 1986 paper (expanded in IEEE Computer in 1988) that remains one of the most-cited works in software engineering. His core insight was straightforward: software projects fail not because developers can’t code, but because teams make commitments before they understand the risks they’re taking on.
The spiral model’s answer is to make risk identification and resolution the mandatory first step of every development cycle, before a single line of production code is written.
If you’re here because you’re studying for an exam, working on a project proposal, or trying to explain to a client why you’re recommending the spiral model over Agile or Waterfall, this guide gives you the full picture: what the model does, how risk management works within it, worked examples, templates, and the KRIs you should be tracking.
The Spiral Model in One Sentence: The spiral model is an iterative software development lifecycle in which each cycle (spiral) begins with risk identification and analysis, uses prototyping or other techniques to resolve the most significant risks, and only then proceeds to development — repeating this cycle until the system is complete or the project is cancelled on risk grounds.
The Four Quadrants of the Spiral Model
The spiral model is typically depicted as a diagram where the radial dimension represents cumulative project cost and the angular dimension represents progress through each cycle. Each revolution of the spiral passes through four quadrants. Understanding what happens in each quadrant is the foundation for understanding where and how risk management operates.
| Quadrant | Phase Name | Primary Activities | Risk Management Focus | ISO 31000 Alignment |
|---|---|---|---|---|
| Q1 | Objectives, Alternatives & Constraints | Define goals; identify constraints; clarify stakeholder requirements | Establish risk context; identify high-level risks to objectives | Clause 6.3 — Establishing the context |
| Q2 | Risk Identification & Resolution | Identify risks; evaluate alternatives; prototype/simulate to resolve uncertainty | Risk identification, analysis, and evaluation; prototype as risk treatment | Clauses 6.4–6.6 — Identify, Analyse, Evaluate |
| Q3 | Development & Test | Design, code, integrate, and test the increment | Execute risk treatment plans; monitor residual risks in build | Clause 6.7 — Risk treatment |
| Q4 | Planning the Next Spiral | Review, plan, and commit to the next cycle; stakeholder sign-off | Update risk register; carry forward open risks; replan based on outcomes | Clauses 6.8–6.9 — Monitor, review, communicate |
Table 1: Spiral model quadrant overview with risk management activities and ISO 31000 alignment.
The key thing to understand about this structure is that Q2 is the decision gate. After you’ve identified and analyzed your risks, you have three possible paths: resolve the risk and proceed, accept the risk and proceed with a mitigation plan, or determine the risk is unacceptable and terminate the project.
That third option is what makes the spiral model genuinely risk-driven rather than just risk-aware. A project that can’t survive its own Q2 analysis shouldn’t survive at all.
Six Risk Categories the Spiral Model Addresses
Not all risks are equal, and the spiral model doesn’t treat them that way. Boehm’s framework identifies several distinct risk categories, each of which requires a different resolution approach. Here’s how they map to modern engineering practice:
| Risk Category | Description | Typical Indicators in Software Projects | Primary Resolution Technique |
|---|---|---|---|
| Technical Risk | Uncertainty about whether the technology, architecture, or design will work as intended | Novel algorithm, unproven integration, performance under load | Proof-of-concept prototype; technical spike; benchmark test |
| Requirements Risk | Uncertainty about what stakeholders actually need; incomplete or volatile requirements | Conflicting stakeholder input; vague acceptance criteria; scope creep | Requirements prototype; structured workshops; formal sign-off gate |
| Schedule Risk | Uncertainty about whether the project can be completed within the planned timeline | Task dependency chains; resource constraints; unrealistic estimates | Monte Carlo schedule simulation; critical path compression; buffer management |
| Cost Risk | Uncertainty about whether the project will stay within budget | Scope instability; vendor dependency; estimate accuracy | Earned Value Management (EVM); contingency reserve; phased commitment |
| Process Risk | Uncertainty about whether the development process itself will support quality and predictability | Immature process; team capability gaps; tool chain instability | Process maturity assessment (CMMI); training; tool validation |
| Business Risk | Uncertainty about whether the delivered system will deliver business value or face adoption barriers | Changing market conditions; regulatory shifts; organizational change | Business case review gates; stakeholder alignment checkpoints; pilot deployment |
Table 2: Risk categories in the spiral model and their resolution techniques.
In practice, most spiral projects encounter risks from multiple categories simultaneously. The discipline is in prioritizing them correctly.
Boehm’s advice — which holds up as well today as it did in 1986 — is to rank risks by their risk exposure (probability multiplied by loss) and attack the highest-exposure items first. The spiral structure gives you a built-in forcing function to do exactly that.
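Boehm’s exposure ranking can be sketched in a few lines. The risk items and dollar figures below are hypothetical, invented for illustration rather than drawn from a real project:

```python
# Sketch of Boehm's risk-exposure ranking: RE = probability of loss x size
# of loss. All risk items and figures below are illustrative only.

def risk_exposure(probability: float, loss: float) -> float:
    """Risk exposure = probability of an unsatisfactory outcome times its loss."""
    return probability * loss

# Hypothetical risk items (probability of loss, loss magnitude in dollars)
risks = [
    {"id": "TECH-01", "p": 0.4, "loss": 500_000},
    {"id": "REQ-01", "p": 0.6, "loss": 300_000},
    {"id": "SCHED-01", "p": 0.3, "loss": 150_000},
]

# Attack the highest-exposure items first
ranked = sorted(risks, key=lambda r: risk_exposure(r["p"], r["loss"]), reverse=True)
print([r["id"] for r in ranked])  # highest exposure first
```

Note that the ranking can differ from a ranking by probability alone: REQ-01 is the most likely item here, but TECH-01 carries the larger exposure.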
Boehm’s Top 10 Software Risk Items — Mapped to the Spiral
In his 1991 paper, Boehm published a ranked list of the ten most common software project risk items, derived from an analysis of large defense and commercial projects. These items have remained remarkably stable across decades of software engineering practice. Here’s how each maps to the spiral model’s quadrants and what you should do about them:
| # | Boehm’s Risk Item | Risk Category | Spiral Quadrant to Address | Typical Resolution Technique |
|---|---|---|---|---|
| 1 | Personnel shortfalls | Process | Q1 — Objectives setting | Staff augmentation plan; cross-training; contractor contingency |
| 2 | Unrealistic schedules and budgets | Schedule / Cost | Q1 — Objectives; Q4 — Planning | Parametric estimation; Monte Carlo schedule simulation; phased commitment |
| 3 | Developing the wrong software functions | Requirements | Q2 — Risk resolution via prototype | Requirements prototype; joint application design (JAD); acceptance criteria sign-off |
| 4 | Developing the wrong user interface | Requirements | Q2 — UI/UX prototype | UI wireframe prototype; usability testing with end users before commit |
| 5 | Gold plating | Process / Business | Q1 — Constraints definition; Q4 — Scope review | MoSCoW prioritization; change control gate; scope freeze discipline |
| 6 | Continuing stream of requirements changes | Requirements | Q1 and Q4 — At every spiral boundary | Spiral commitment partition; change impact analysis; version-controlled requirements baseline |
| 7 | Shortfalls in externally supplied components | Technical / Process | Q2 — Evaluate alternatives | Vendor due diligence; build vs. buy analysis; contractual SLA with acceptance tests |
| 8 | Shortfalls in externally performed tasks | Process | Q2 and Q3 | Third-party audit; milestone-based payment; escrow arrangements |
| 9 | Real-time performance shortfalls | Technical | Q2 — Performance prototype | Load testing prototype; performance modelling; architecture optimization prior to full build |
| 10 | Straining computer science capabilities | Technical | Q2 — Technology evaluation | Technology feasibility study; academic/industry consultation; phased R&D spiral |
Table 3: Boehm’s Top 10 software risk items mapped to spiral model quadrants and resolution techniques.
What’s striking about this list is how few of the items are purely technical. Six of the ten are fundamentally about people, process, and requirements, not about whether the code works. That matches what experienced project managers observe: technical risks are usually the ones you can resolve with a prototype.
Process and requirements risks are the ones that kill projects slowly and expensively if you don’t address them early. For a structured approach to quantifying these risks, see our guide on Monte Carlo Simulation for Risk Analysis.
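As a rough illustration of the Monte Carlo technique mentioned above, the sketch below samples task durations from three-point (optimistic, most likely, pessimistic) estimates and examines the spread of total duration. The tasks and estimates are invented for the example:

```python
# Minimal Monte Carlo schedule-risk sketch: sample each task's duration from
# a triangular distribution over its three-point estimate and look at the
# distribution of the total. All numbers are illustrative.
import random

random.seed(42)  # reproducible runs

# (optimistic, most_likely, pessimistic) durations in weeks -- hypothetical tasks
tasks = [(2, 3, 6), (4, 5, 9), (1, 2, 4)]

def simulate_total(tasks) -> float:
    # random.triangular takes (low, high, mode)
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)

totals = sorted(simulate_total(tasks) for _ in range(10_000))
p50 = totals[len(totals) // 2]
p90 = totals[int(len(totals) * 0.9)]
print(f"P50 ~ {p50:.1f} weeks, P90 ~ {p90:.1f} weeks")
```

The gap between the P50 and P90 totals is the quantitative version of schedule risk exposure: the wider it is, the more buffer (or risk treatment) the plan needs.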
Prototyping as Risk Resolution: How It Actually Works
The most distinctive feature of the spiral model is its use of prototyping as a primary risk resolution technique. This is worth understanding in detail because “build a prototype” is advice that sounds obvious but is badly applied more often than not.
What a Risk-Resolution Prototype Is — And Isn’t
A risk-resolution prototype in the spiral model is a targeted, throwaway piece of work designed to answer a specific risk question. It is not:
- A proof of concept that you’ll clean up and ship
- An early version of the production system
- A demo built to impress stakeholders
It is specifically designed to answer the question: “Does this risk exist at the magnitude we think it does, or can we reduce our uncertainty about it enough to proceed safely?” Once it answers that question, its job is done. The prototype may be discarded entirely.
Types of Prototypes Used in Spiral Projects
Throwaway prototypes are built quickly to explore a specific technical or requirements question, then discarded. Common in Q2 to resolve TECH or REQ risks.
Evolutionary prototypes are refined across multiple spirals and eventually become the production system. More common in requirements-heavy domains where stakeholder feedback drives design.
Architectural prototypes test the viability of a proposed technical architecture, particularly around performance, scalability, or integration. Used to resolve high-severity technical risks before committing to a full design.
UI/UX prototypes address requirements risk around user interfaces. They range from paper wireframes (appropriate in Spiral 1) to interactive mockups (Spiral 2 or 3). Usability testing with real users is the resolution test.
The Prototype Decision Rule
Boehm’s rule of thumb for when to prototype: if a risk item’s risk exposure (probability × cost of loss) exceeds the cost of building a prototype to resolve it, build the prototype.
This sounds obvious, but it requires actually estimating risk exposure in quantitative terms rather than relying on gut feel. That discipline separates well-run spiral projects from ones that prototype everything regardless of cost-benefit logic.
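The decision rule reduces to a single comparison. The probabilities and costs below are illustrative:

```python
# Boehm's prototype decision rule as a sketch: build the prototype when
# risk exposure (probability x cost of loss) exceeds the prototype's cost.
# All figures are illustrative.

def should_prototype(probability: float, loss: float, prototype_cost: float) -> bool:
    exposure = probability * loss
    return exposure > prototype_cost

# A 40% chance of a $500k loss (exposure $200k) vs. a $60k prototype:
print(should_prototype(0.4, 500_000, 60_000))  # worth prototyping
# A 10% chance of a $100k loss (exposure $10k) vs. the same prototype:
print(should_prototype(0.1, 100_000, 60_000))  # not worth it
```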
Worked Example: Spiral Risk Management for a Hospital EHR Integration
The best way to see how risk management works in the spiral model is through a concrete example. Let’s walk through a three-spiral project to integrate a clinical decision support module with an existing Electronic Health Records (EHR) system at a regional hospital network.
This is the kind of project where the spiral model earns its keep: significant technical uncertainty around EHR API behavior, complex and conflicting stakeholder requirements from clinical, administrative, and IT departments, and regulatory constraints from HIPAA and ONC certification requirements.
| Spiral | Quadrant | Risk ID | Risk Description | Likelihood (1-5) | Severity (1-5) | Score | Resolution Action |
|---|---|---|---|---|---|---|---|
| 1 | Q2 | TECH-01 | EHR API integration fails to meet response time SLA under peak load | 4 | 4 | 16 — Critical | Build performance prototype before Spiral 1 design commit; load test at 200% of peak volume |
| 1 | Q2 | REQ-01 | Clinician workflow requirements conflict across three hospital departments | 5 | 3 | 15 — High | Facilitated workflow mapping workshop with all three departments before requirements sign-off |
| 2 | Q2 | TECH-02 | HL7 FHIR R4 message parsing errors in edge cases not covered by prototype | 3 | 4 | 12 — High | Extend prototype with edge-case test suite; engage HL7 vendor support |
| 2 | Q3 | SCHED-01 | Third-party security audit timeline slips by 3 weeks, blocking go-live | 4 | 3 | 12 — High | Pre-book audit slot at Spiral 2 start; maintain parallel audit track; define minimum viable security scope |
| 3 | Q2 | BUS-01 | Hospital procurement freeze delays hardware provisioning for go-live environment | 3 | 3 | 9 — Medium | Cloud-based staging environment as fallback; escalation path to hospital CIO agreed at Q4 review |
| 3 | Q4 | PROC-01 | Clinical staff training completion below 80% threshold 2 weeks before go-live | 4 | 4 | 16 — Critical | Mandatory training milestone added to Spiral 3 Q4 gate criteria; go/no-go decision at gate |
Table 4: Spiral risk management worksheet for a hospital EHR integration project (three-spiral example).
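The Score column above is simply likelihood times severity. The band boundaries in the sketch below are an assumption inferred from the example scores (16 = Critical, 15 and 12 = High, 9 = Medium), not a standard:

```python
# Likelihood x Severity scoring as used in Table 4. The band boundaries
# are assumed from the example scores, not taken from any standard.

def risk_score(likelihood: int, severity: int) -> int:
    """Both inputs are on a 1-5 scale, as in the worksheet."""
    if not (1 <= likelihood <= 5 and 1 <= severity <= 5):
        raise ValueError("likelihood and severity must be 1-5")
    return likelihood * severity

def band(score: int) -> str:
    if score >= 16:
        return "Critical"
    if score >= 10:
        return "High"
    if score >= 5:
        return "Medium"
    return "Low"

print(risk_score(4, 4), band(risk_score(4, 4)))  # TECH-01 in Spiral 1
```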
What the Example Demonstrates
A few things worth noting in this worked example. First, risk items evolve across spirals. TECH-01 (API performance) is a critical risk in Spiral 1 that gets resolved by a targeted prototype, so it doesn’t appear in Spiral 2.
But TECH-02 emerges in Spiral 2 as a new technical risk surfaced by the prototype work, which is exactly how it should work. The spiral model expects new risks to appear as understanding deepens.
Second, PROC-01 in Spiral 3 is a business and process risk, not a technical one. By Spiral 3 the technology is well understood; what’s uncertain is whether the organization can absorb the change. This is a classic late-spiral risk pattern, and the response — making training completion a gate criterion — reflects good risk management practice. Unresolved risks don’t get carried silently into go-live.
Third, every action has an owner and is tied to a specific quadrant. Risk management in the spiral model isn’t a separate workstream that runs in parallel with development. It’s embedded in the project structure itself.
Spiral vs. Waterfall vs. Agile: The Risk Management Difference
Teams choosing a development lifecycle often frame the decision as Waterfall vs. Agile, with the spiral model as a historical footnote. That framing misses the point. The spiral model occupies a distinct and still-relevant position in the model space, particularly for high-stakes, high-uncertainty projects. Here’s the comparison:
| Risk Dimension | Waterfall | Spiral | Agile (Scrum) |
|---|---|---|---|
| When risks are identified | Primarily upfront in planning; limited reassessment | Explicitly in every cycle before development begins | Continuously in sprint planning and retrospectives |
| Risk documentation | Risk register at project start; infrequent updates | Risk register updated at each spiral cycle; resolution documented | Lightweight — risk items in backlog; minimal formal documentation |
| Prototyping for risk resolution | Rare; prototype not part of standard lifecycle | Core mechanism — prototype used to resolve high-priority risks before committing | Incremental delivery acts as de facto prototype; no dedicated risk prototype step |
| Ability to change requirements | Low — change control heavy; late changes are expensive | Moderate — requirements refined at each spiral cycle | High — sprint-by-sprint reprioritization |
| Best suited for | Well-defined, stable requirements; low-uncertainty projects | Large, complex, high-risk projects with significant technical or requirements uncertainty | Smaller teams; frequent delivery; customer collaboration; lower formal risk requirements |
| Risk of over-engineering | Low — scope fixed upfront | Moderate — each spiral can add scope if risk analysis not disciplined | Low — backlog prioritization limits scope |
| Standards fit | DO-178C, ISO 26262 (with tailoring) | NASA NPR 7150.2, DoD acquisition frameworks, safety-critical systems | SAFe, DSDM, ISO/IEC 29110 (small entities) |
Table 5: Risk management comparison across Waterfall, Spiral, and Agile lifecycle models.
The practical implication: if you’re working on a project where the cost of getting it wrong is very high (defense systems, medical devices, critical infrastructure, large-scale financial systems), the spiral model’s explicit risk resolution gates are worth the overhead.
If you’re building a web application with a small team and frequent customer access, Agile’s continuous delivery model handles risk differently but adequately.
For organizations operating under formal risk management frameworks, it’s worth noting that the spiral model’s Q2 quadrant maps directly to the risk identification and analysis phases in ISO 31000:2018 and the COSO ERM framework’s risk assessment component.
If your organization already has an enterprise risk register, spiral project risks should feed into it, not sit in a separate project-only document. See our guide on ISO 31000 Risk Assessment Framework for how to structure that integration.
KRIs for Spiral Model Projects: What to Monitor Between Spirals
One of the weaknesses of the spiral model in practice is that risk management can become a point-in-time activity — something that happens intensively in Q2 and then gets forgotten until the next spiral starts. Key risk indicators (KRIs) fix this by creating continuous early-warning signals that bridge the gap between formal Q2 risk sessions.
Here are the six KRIs we recommend tracking for any spiral project, with threshold definitions:
| KRI | What It Measures | Green Threshold | Amber Threshold | Red Threshold | Escalation Action |
|---|---|---|---|---|---|
| Risk Resolution Rate | % of Q2 risks resolved before Q3 development begins | 90-100% | 75-89% | < 75% | Hold Q3 start; escalate to sponsor |
| Prototype Defect Density | Defects per KLOC in risk-resolution prototype | < 2.0 | 2.0–4.0 | > 4.0 | Architecture review; extend prototype cycle |
| Schedule Variance (SV) | Earned value minus planned value at spiral midpoint | SV >= 0 | SV -5% to 0 | SV < -5% | Re-baseline; adjust scope or resource |
| Open Critical Risks | Count of risk score 15+ items not yet resolved | 0 | 1–2 | > 2 | Project board escalation; resource reallocation |
| Stakeholder Approval Rate | % of Q4 deliverables approved on first review | > 85% | 70–85% | < 70% | Requirements re-validation; extend Q4 review cycle |
| Risk Register Age (unreviewed) | Days since last risk register update | 0–7 days | 8–14 days | > 14 days | Mandatory risk review session within 48 hours |
Table 6: KRI dashboard for spiral model risk monitoring. Review weekly; escalate on amber or red.
These KRIs should be reviewed at a weekly project status meeting and reported at each Q4 spiral review. If you’re building a broader KRI framework for your project or program portfolio, see our article on Key Risk Indicators: Design and Implementation Guide for guidance on threshold calibration and escalation design.
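The threshold logic for one of these KRIs, the Risk Resolution Rate, can be sketched as a simple red/amber/green check; `rag_status` is a hypothetical helper name, not part of any library:

```python
# Sketch of KRI threshold evaluation for the Risk Resolution Rate KRI from
# Table 6: >= 90% green, 75-89% amber, < 75% red. Figures are illustrative.

def rag_status(resolution_rate_pct: float) -> str:
    """Return Red/Amber/Green status for the Risk Resolution Rate KRI."""
    if resolution_rate_pct >= 90:
        return "Green"
    if resolution_rate_pct >= 75:
        return "Amber"
    return "Red"  # per Table 6: hold Q3 start; escalate to sponsor

resolved, identified = 7, 10  # illustrative Q2 figures
rate = 100 * resolved / identified
print(f"{rate:.0f}% -> {rag_status(rate)}")
```

The same pattern extends to the other five KRIs; each just needs its own threshold pair and escalation action.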
Spiral Risk Register Template
Every spiral project should maintain a single risk register that is updated at every quadrant transition. The register isn’t a static document — it’s a living log of risk identification, scoring, resolution status, and carry-forward decisions. Here’s a template structured for spiral projects:
| Risk ID | Spiral # | Quadrant | Risk Description | Category | L (1-5) | S (1-5) | Score | Resolution / Owner / Due |
|---|---|---|---|---|---|---|---|---|
| TECH-01 | 1 | Q2 | [Description of technical risk] | Technical | 3 | 4 | 12 — High | [Prototype / analysis action / owner / date] |
| REQ-01 | 1 | Q2 | [Requirements uncertainty description] | Requirements | 4 | 3 | 12 — High | [Workshop / sign-off / owner / date] |
| SCHED-01 | 2 | Q2 | [Schedule dependency or constraint risk] | Schedule | 3 | 3 | 9 — Medium | [Buffer / replanning / owner / date] |
| BUS-01 | 2 | Q4 | [Business or stakeholder risk] | Business | 2 | 4 | 8 — Medium | [Stakeholder engagement / owner / date] |
Table 7: Spiral project risk register template. Add rows for each identified risk; update at each quadrant transition.
A few structural notes on this template. The Spiral # column is important because it creates a time-stamped record of when a risk was first identified and whether it has been re-scored in subsequent spirals.
Risk items that persist across multiple spirals without resolution are a red flag that deserves specific attention in Q4 planning. The Resolution field should include not just the action but the evidence that the risk was actually resolved — prototype results, test outcomes, stakeholder sign-off, or other closure criteria.
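The carry-forward check described above is easy to automate as a filter over the register. The entries and the `persistent_open_risks` helper below are illustrative:

```python
# Sketch of the carry-forward red-flag check: surface any open risk that
# has persisted across one or more spiral boundaries without resolution.
# Register entries are illustrative.

register = [
    {"id": "TECH-01", "first_spiral": 1, "current_spiral": 1, "status": "Resolved"},
    {"id": "REQ-01", "first_spiral": 1, "current_spiral": 3, "status": "Open"},
    {"id": "BUS-01", "first_spiral": 2, "current_spiral": 3, "status": "Open"},
]

def persistent_open_risks(register, max_spirals_open: int = 1):
    """Risks still open after `max_spirals_open` or more spiral boundaries."""
    return [
        r["id"] for r in register
        if r["status"] == "Open"
        and (r["current_spiral"] - r["first_spiral"]) >= max_spirals_open
    ]

print(persistent_open_risks(register))  # candidates for Q4 attention
```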
Where the Spiral Model Is Used Today
The spiral model is most commonly used in contexts where the cost of failure is high and the level of technical or requirements uncertainty is significant. In practice, that means:
US Department of Defense and Federal IT
The DoD’s acquisition frameworks have drawn heavily on Boehm’s work. NASA’s software engineering requirements (NPR 7150.2) explicitly reference risk-driven lifecycle approaches aligned with spiral principles. The Software Engineering Institute (SEI) at Carnegie Mellon has published extensive guidance on spiral model implementation in defense acquisition contexts.
Safety-Critical and Regulated Systems
Medical device software under FDA 21 CFR Part 820 (Quality System Regulation) and IEC 62304 (medical device software lifecycle), avionics software under DO-178C, and automotive software under ISO 26262 all operate in environments where iterative risk resolution before commitment is a regulatory expectation, even if the specific lifecycle model isn’t mandated. The spiral model’s gate structure aligns well with these regulatory frameworks.
Large-Scale Enterprise Systems
Enterprise resource planning (ERP) implementations, financial systems migrations, and infrastructure modernization programs often use spiral-influenced approaches even if they don’t label them as such. The pattern of running a limited-scope pilot (analogous to an early spiral), identifying the highest-risk items, resolving them before the next phase, and then scaling is spiral logic applied without the formal label.
Research and Development Projects
R&D projects by definition operate in high-uncertainty environments where the outcome of one phase determines whether and how to proceed with the next. The spiral model’s willingness to terminate a project at a Q2 gate — rather than carrying it forward out of sunk-cost inertia — makes it well-suited to R&D contexts.
This connects to the broader principle of Business Continuity Planning for Technology Teams where managing project failure scenarios is as important as planning for success.
The 5 Most Common Spiral Model Risk Management Mistakes
1. Treating Q2 as Documentation Rather Than Decision
The risk identification and resolution quadrant exists to make a decision: do we have enough confidence in this approach to commit resources to building it? Teams that treat Q2 as a documentation exercise — filling out a risk register because the process requires it — miss the point entirely. Every Q2 should end with an explicit go/no-go decision backed by documented risk analysis.
2. Never Exercising the Termination Option
If a project has never been at risk of termination at a Q2 gate, your risk analysis isn’t credible. Boehm’s model explicitly includes “terminate the project” as a valid Q2 outcome. Organizations that always find reasons to proceed regardless of risk analysis outcomes create a culture where the spiral structure becomes theater rather than governance. Build a real termination threshold into your Q2 criteria.
3. Prototyping Without a Specific Risk Question
“Let’s build a prototype to see what we’re dealing with” is not a risk-resolution strategy. Every prototype in a spiral project should have a documented risk question it’s designed to answer and a defined test for whether the risk has been resolved. Without those anchors, prototype work expands indefinitely and the cost-benefit logic breaks down.
4. Risk Register as a Project Artifact, Not a Living Document
Risk registers on spiral projects sometimes get created in Q2 of Spiral 1 and then filed away until someone asks for them. A spiral risk register should be updated at every quadrant transition, reviewed at every Q4 planning session, and actively used to drive prototype decisions in Q2. If your risk register hasn’t been touched since the last spiral started, it’s not doing its job.
5. Ignoring Process and Business Risks in Favour of Technical Ones
Technical risks are visible and tractable. Process and business risks are diffuse and uncomfortable. Teams running spiral projects tend to load their Q2 risk sessions with technical items because those are the risks engineers feel equipped to analyze. But as Boehm’s Top 10 list shows, six of the ten most common project killers are non-technical. Budget explicit time in Q2 for requirements, process, schedule, and business risk analysis, not just technical deep-dives.
Standards and Frameworks That Reference the Spiral Model
The spiral model doesn’t have its own dedicated ISO standard, but it is referenced in or aligned with several frameworks that matter in US engineering practice:
ISO/IEC/IEEE 12207:2017 — Software Lifecycle Processes: The international standard for software lifecycle processes provides a framework of processes that can be instantiated under various lifecycle models, including spiral. The risk management processes in 12207 (Section 6.3.4) align directly with Q2 risk identification and resolution activities.
IEEE Std 1540-2001 — Software Risk Management: This standard, now withdrawn but still referenced in practice, provided detailed guidance on software risk management activities that map closely to spiral Q2 processes. The SEI’s Continuous Risk Management Guidebook remains the most comprehensive practitioner reference for spiral risk management.
CMMI for Development (CMMI-DEV): The Risk Management (RSKM) process area in CMMI-DEV is essentially a formalization of spiral risk management principles. Achieving Maturity Level 3 on the RSKM process area requires exactly the kind of systematic risk identification, analysis, and resolution tracking that the spiral model mandates structurally.
For organizations aligning project risk management with enterprise risk, the connection to ISO 31000:2018 Risk Management Guidelines is direct: the spiral’s Q2 phase implements Clauses 6.4 (risk identification), 6.5 (risk analysis), and 6.6 (risk evaluation); Q3 implements Clause 6.7 (risk treatment); and Q4 implements Clauses 6.8-6.9 (monitoring, review, and communication).
Related Guides on riskpublishing.com
If you’re building out a broader risk management capability beyond the spiral model, these guides will help:
• ISO 31000 Risk Assessment Framework Explained — How to structure risk identification, analysis, and evaluation under the international standard.
• Monte Carlo Simulation for Risk Analysis: A Practical Tutorial — Quantify schedule and cost risk exposure using probabilistic simulation, directly applicable to spiral Q2 analysis.
• Key Risk Indicators: Design and Implementation Guide — Build a KRI dashboard for ongoing risk monitoring between spiral cycles.
• NUDD Analysis Explained: Meaning, Engineering Applications, and Examples — A complementary hazard identification technique for engineering system risk in spiral projects.
• Business Continuity Planning for Technology Teams — What happens when a spiral project’s highest-risk scenario actually materializes?
Download the Free Spiral Model Risk Register Template
The risk register template, KRI dashboard, and spiral risk worksheet from this article are available as a free downloadable Excel file at riskpublishing.com/spiral-model-risk-template. The file includes the complete risk register with embedded formulas for automatic risk scoring and color-coded heat mapping, the KRI tracking sheet with threshold indicators, and the worked hospital EHR example pre-populated as a reference.
If you’re working on a specific spiral project — whether it’s a defense acquisition, a healthcare system, or a large enterprise modernization — and want to think through the risk management structure, the contact page is where to start.
Sources & Further Reading
1. Boehm, B.W. (1988). A Spiral Model of Software Development and Enhancement. IEEE Computer, 21(5), 61-72. — The original paper.
2. Boehm, B.W. (1991). Software Risk Management: Principles and Practices. IEEE Software, 8(1), 32-41. — Source of the Top 10 risk items.
3. ISO/IEC/IEEE 12207:2017 — Systems and Software Engineering — Software Life Cycle Processes — ISO
4. ISO 31000:2018 Risk Management Guidelines — International Organization for Standardization
5. NASA NPR 7150.2 — NASA Software Engineering Requirements — NASA
6. SEI Continuous Risk Management Guidebook — Software Engineering Institute, Carnegie Mellon University
7. CMMI for Development v2.0 — Risk Management Process Area — CMMI Institute
8. IEC 62304:2006+AMD1:2015 — Medical Device Software Lifecycle Processes — International Electrotechnical Commission

Chris Ekai is a Risk Management expert with over 10 years of experience in the field. He has a Master’s (MSc) degree in Risk Management from the University of Portsmouth and is a CPA and finance professional. He currently works as a Content Manager at Risk Publishing, writing about Enterprise Risk Management, Business Continuity Management, and Project Management.
