Key Takeaways
| # | Takeaway |
|---|---|
| 1 | Monte Carlo simulation generates thousands of random scenarios from probability distributions to produce a range of possible outcomes, replacing single-point estimates with probability-weighted forecasts. |
| 2 | The free Excel template includes five sheets: Project Cost Model (10 PERT-distributed cost items), Simulation (1,000 iterations with triangular random sampling), Simulation Statistics (17 output metrics including P50/P80/P90), Sensitivity Analysis (tornado chart data), and a step-by-step How-To Guide. |
| 3 | The core Excel formula samples the triangular distribution from a single uniform draw R: =LET(R, RAND(), IF(R<(ML-Min)/(Max-Min), Min+SQRT(R*(Max-Min)*(ML-Min)), Max-SQRT((1-R)*(Max-Min)*(Max-ML)))). Store R once (via LET in Excel 365/2021, or in a helper cell in older versions) so the same random value feeds both branches; calling RAND() twice in one formula produces two different values and breaks the distribution. |
| 4 | Monte Carlo answers the question deterministic estimates cannot: “What is the probability that the actual outcome will exceed our budget/target/threshold?” The answer is usually 50–70%, which is why deterministic estimates consistently underfund projects. |
| 5 | Sensitivity analysis (tornado charts) identifies the 2–3 variables that drive the majority of outcome uncertainty. Focus mitigation effort and management attention on those variables. |
| 6 | Present Monte Carlo results to the board as: Deterministic vs. P50 vs. P80 vs. P90, contingency = P80 minus Deterministic, and the probability of exceeding the approved budget. |
| 7 | Monte Carlo applies beyond project cost: use the same technique to model revenue uncertainty, operational loss distributions, funded-ratio projections, cyber loss scenarios, and any risk that can be expressed as a probability distribution. |
What Is Monte Carlo Simulation?
Monte Carlo simulation is a quantitative technique that uses repeated random sampling from probability distributions to model the range of possible outcomes and their likelihood, enabling risk-informed decision-making under uncertainty.
Named after the Monte Carlo Casino in Monaco, the method leverages the law of large numbers: run enough random trials, and the distribution of results converges toward the true probability distribution of the outcome.
ISO 31010:2019 (Risk Assessment Techniques) includes Monte Carlo simulation among its quantitative risk analysis techniques, applicable across all risk domains.
The FAIR Institute uses Monte Carlo as the computational engine behind the Factor Analysis of Information Risk model. The PMI PMBOK Guide recommends Monte Carlo as the primary method for quantitative schedule and cost risk analysis on capital projects.
Deterministic estimates give you a single number. Monte Carlo gives you a probability distribution: a P50 (50% confidence), a P80 (80% confidence), a P90 (90% confidence), and the probability that the actual outcome will exceed your approved budget.
That distribution is worth more to a decision-maker than any single-point estimate.
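The mechanics are compact enough to sketch in a few lines. The snippet below is a minimal illustration with made-up three-point estimates (not the template's data): sample each uncertain item from its triangular distribution, sum, repeat, and read the percentiles off the sorted totals.

```python
import random

# Three hypothetical cost items, each as (Min, Most Likely, Max): the
# right-skewed three-point estimates a risk workshop would elicit.
items = [(900_000, 1_000_000, 1_400_000),
         (450_000,   500_000,   700_000),
         (1_800_000, 2_000_000, 2_900_000)]

random.seed(42)                        # reproducible demo
N = 10_000
totals = sorted(
    sum(random.triangular(lo, hi, ml) for lo, ml, hi in items)
    for _ in range(N)
)

deterministic = sum(ml for _, ml, _ in items)  # sum of Most Likely values
p50 = totals[N // 2]                   # median of simulated totals
p80 = totals[int(N * 0.80)]            # 80th percentile
p90 = totals[int(N * 0.90)]            # 90th percentile
```

Because each item is right-skewed, the simulated P50 lands above the deterministic sum of Most Likely values, which is exactly why single-point estimates tend to underfund.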
Why Single-Point Estimates Fail: The Case Against Deterministic Budgeting
Most organizations still budget and plan using single-point deterministic estimates. Here is why that approach systematically underestimates risk.
| Problem | What Happens | Monte Carlo Solution |
|---|---|---|
| Asymmetric risk distributions | Most real-world uncertainties are not symmetric. Construction costs, project durations, and operational losses skew right: the upside is capped, but the downside tail is long. A single “most likely” estimate ignores the tail. | Monte Carlo samples from the full distribution (including the tail), producing a realistic picture of downside exposure. |
| Compounding uncertainty | A project with 10 uncertain cost items does not produce 10 independent risks. The combined uncertainty is larger than any individual item. Deterministic estimates add “most likely” values, which understates the combined uncertainty. | Monte Carlo simultaneously randomizes all variables and sums them, capturing the compounding effect. The total distribution is wider than the sum of individual “most likely” values. |
| No probability information | A deterministic estimate says “the project will cost $10.6M.” Is that a 30% probability? 50%? 80%? Nobody knows. The board approves a number without understanding how likely the outcome actually is. | Monte Carlo produces: “$10.6M at P50 (50% confidence); $11.8M at P80; $12.4M at P90.” Decision-makers choose the confidence level that matches their risk appetite. |
| False precision | A single number creates an illusion of certainty. Stakeholders anchor to the estimate and treat any overrun as a failure, even when the overrun was always probable. | Monte Carlo explicitly communicates uncertainty as a range. The board sees the distribution and understands that outcomes below P50 are as likely as outcomes above P50. |
| Contingency guessing | Contingencies are often set as arbitrary percentages (10%, 15%) unrelated to the actual risk profile. Too low = underfunded; too high = capital locked up unnecessarily. | Monte Carlo calculates the evidence-based contingency: Contingency = P80 (or P90) minus the Deterministic estimate. The contingency is calibrated to the project’s actual risk profile. |
The Society of Actuaries (SOA) and the CFA Institute both advocate Monte Carlo simulation as the standard approach to quantifying pension fund investment risk, asset-liability mismatches, and contribution-rate uncertainty.
Our guide on risk management for pension funds shows how ALM models use Monte Carlo to project funded-ratio distributions.
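The compounding effect described above can be demonstrated directly. This sketch (hypothetical numbers) sums ten right-skewed items and measures how often the simulated total exceeds the sum of the Most Likely values:

```python
import random

# Ten identical right-skewed cost items (figures in $k): the downside
# tail (Max - ML = 30) is six times the upside (ML - Min = 5). Hypothetical.
items = [(95, 100, 130)] * 10
random.seed(7)

N = 10_000
deterministic = sum(ml for _, ml, _ in items)          # sum of Most Likely
totals = [sum(random.triangular(lo, hi, ml) for lo, ml, hi in items)
          for _ in range(N)]

mean = sum(totals) / N                                 # pulled above the mode sum
prob_exceed = sum(t > deterministic for t in totals) / N
```

Here the items are identical and the skew is strong, so the exceedance probability approaches certainty; real portfolios with partly offsetting estimates land lower, but typically well above 50%.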
How Monte Carlo Simulation Works: The Six-Step Process
| Step | Action | Excel Implementation | Output |
|---|---|---|---|
| 1. Define the model | Identify all uncertain input variables and the mathematical relationship that produces the output | Sheet 1 (Project Cost Model): 10 cost line items with Min, Most Likely, Max estimates; output = sum of all items | Model structure with PERT-distributed inputs |
| 2. Assign probability distributions | Select the distribution type per variable: Triangular (Min/ML/Max), PERT, Normal (Mean/StdDev), Uniform (Min/Max), or Lognormal | Each input row defines Min (Col C), Most Likely (Col D), and Max (Col E). The PERT Mean and StdDev auto-calculate. | Distribution parameters per variable |
| 3. Generate random samples | Run N iterations (1,000–10,000). Each iteration randomly samples a value per variable from its assigned distribution. | Sheet 2: 1,000 rows × 10 columns. Each cell uses RAND() inside a triangular distribution formula to generate a random sample. | 1,000 random samples per variable |
| 4. Calculate the output per iteration | Apply the model logic (sum, product, or formula chain) to each set of random samples to produce one output value per iteration | Column L sums Columns B through K per row, producing 1,000 total-cost values | 1,000 simulated total-cost outcomes |
| 5. Analyze the output distribution | Compute summary statistics on the 1,000 output values: Mean, Median (P50), StdDev, P80, P90, P95, Min, Max | Sheet 3 (Simulation Statistics): 17 formulas compute all key metrics using AVERAGE, PERCENTILE, STDEV, etc. | Probability distribution of the output; percentile estimates; contingency calculations |
| 6. Run sensitivity analysis | Determine which input variables contribute the most to output uncertainty (tornado chart) | Sheet 4: Holds all variables at Most Likely except one; swings that variable from Min to Max; ranks by impact | Tornado chart data showing the top risk drivers |
The Excel template automates all six steps. Open the workbook, enter your own Min/ML/Max estimates on Sheet 1, press F9 to regenerate all random values, and read the results on Sheet 3. No VBA macros, no add-ins, no coding required.
Probability Distributions Used in Monte Carlo Simulation
Choosing the right distribution is the most important modeling decision. The distribution shapes the random samples and directly determines the simulation output.
| Distribution | Parameters | Shape | When to Use | Excel Formula |
|---|---|---|---|---|
| Triangular | Min, Most Likely (Mode), Max | Three-point triangle; peak at Mode; bounded on both sides | Project costs, durations, and operational estimates where you can specify a best case, most likely, and worst case | =LET(R, RAND(), IF(R<(ML-Min)/(Max-Min), Min+SQRT(R*(Max-Min)*(ML-Min)), Max-SQRT((1-R)*(Max-Min)*(Max-ML)))); the single draw R must feed both branches |
| PERT (Beta-PERT) | Min, Most Likely, Max | Bell-shaped curve weighted toward the Most Likely value; bounded | Refined version of triangular; preferred when the Most Likely estimate should carry more weight than the tails | Mean = (Min+4*ML+Max)/6; StdDev = (Max-Min)/6; then sample using the normal approximation capped at [Min, Max] |
| Normal (Gaussian) | Mean, Standard Deviation | Symmetric bell curve; unbounded | Variables with roughly symmetric uncertainty: measurement errors, market returns (short-horizon), natural phenomena | NORM.INV(RAND(), Mean, StdDev) |
| Lognormal | Mean, Standard Deviation (of the log) | Right-skewed; bounded below at zero; long right tail | Financial losses, claim amounts, project delays, and any variable that cannot go negative and has a long upside tail | LOGNORM.INV(RAND(), LN_Mean, LN_StdDev) |
| Uniform | Min, Max | Flat; equal probability across the entire range | Maximum-uncertainty scenarios where you know the bounds but have no information about which value is more likely | Min + RAND()*(Max-Min) |
| Discrete / Binary | Probability of occurrence (p) | Point probability; event either occurs or does not | Risk events (cyber attack, regulatory fine, vendor failure) that have a probability of occurring and a loss magnitude if they do | IF(RAND()<p, LossAmount, 0) |
The Excel template uses the triangular distribution because the inputs (Min, Most Likely, Max) are the most intuitive to elicit from subject-matter experts in a risk assessment workshop.
PERT and lognormal are superior when historical data is available to calibrate the shape parameters.
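Each formula in the table is a one-uniform-draw inverse-CDF transform. Here is a Python sketch of the same transforms (the helper names are ours, for illustration only):

```python
import math
from statistics import NormalDist

# One uniform draw r in (0, 1) per sample, pushed through each
# distribution's inverse CDF: the same pattern as the RAND()-based
# Excel formulas in the table above.
def tri(r, lo, ml, hi):
    """Triangular(lo, ml, hi) sample from a single uniform draw r."""
    if r < (ml - lo) / (hi - lo):
        return lo + math.sqrt(r * (hi - lo) * (ml - lo))
    return hi - math.sqrt((1 - r) * (hi - lo) * (hi - ml))

def normal(r, mean, sd):            # mirrors NORM.INV(RAND(), Mean, StdDev)
    return NormalDist(mean, sd).inv_cdf(r)

def lognormal(r, ln_mean, ln_sd):   # mirrors LOGNORM.INV(RAND(), ...)
    return math.exp(NormalDist(ln_mean, ln_sd).inv_cdf(r))

def uniform(r, lo, hi):             # mirrors Min + RAND()*(Max-Min)
    return lo + r * (hi - lo)

def binary(r, p, loss):             # mirrors IF(RAND()<p, LossAmount, 0)
    return loss if r < p else 0.0
```

Feeding one uniform value r through each transform mirrors how a single RAND() evaluation drives one sample in the workbook.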
How to Read Monte Carlo Results: The Key Statistics Explained
| Statistic | What the Number Tells You | Decision Implication |
|---|---|---|
| Mean (Expected Value) | The average outcome across all simulations; the statistical center of gravity | Use as the baseline estimate. The mean is often higher than the deterministic “most likely” sum because right-skewed distributions pull the mean above the mode. |
| Median (P50) | The 50th percentile: half of all simulated outcomes fall below this value | There is a 50/50 chance the actual outcome will exceed this number. Funding at P50 means accepting a 50% chance of overrun. |
| P80 (80th Percentile) | 80% of all simulated outcomes fall at or below this value | The most common basis to set project contingency. Funding at P80 means accepting a 20% chance of overrun. Appropriate when risk appetite is moderate. |
| P90 (90th Percentile) | 90% of all simulated outcomes fall at or below this value | Conservative estimate used when the consequences of overrun are severe (board-level capital projects, regulatory commitments, pension fund projections). |
| P95 (95th Percentile) | 95% of all simulated outcomes fall at or below this value | Near-worst-case scenario; used in stress testing and capital-adequacy calculations. |
| Standard Deviation | Measures the spread of the distribution; higher StdDev = more uncertainty | Quantifies the overall uncertainty. Useful to compare the risk profiles of alternative strategies. |
| Coefficient of Variation (CV) | StdDev divided by the Mean; normalized measure of relative uncertainty | CV > 0.20 signals high uncertainty; CV < 0.10 signals low uncertainty. Useful to compare risk across projects of different sizes. |
| Probability of Exceeding Budget | Count of simulations above the approved budget divided by total simulations | If the probability exceeds 50%, the budget is underfunded. Present this number to the board as a funding-risk metric. |
| Contingency (P80 – Deterministic) | The dollar difference between the P80 estimate and the single-point deterministic estimate | Evidence-based contingency: the amount above the deterministic estimate required to achieve 80% confidence. Replaces arbitrary contingency percentages. |
Present these statistics using the “What, So What, Now What” framework from our risk quantification for boards guide: “The P90 project cost is $12.4M (What). There is a 65% probability the project will exceed the approved $10.6M budget (So What). We recommend funding at P80 ($11.8M), which requires $1.2M additional contingency (Now What).”
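The same statistics are easy to reproduce outside Excel. A sketch with a single hypothetical input, triangular(Min 9, ML 10, Max 14), figures in $M, and an assumed $10.0M deterministic estimate:

```python
import random

# Key output statistics computed from simulated totals. The input model
# and the deterministic budget are hypothetical illustration values.
random.seed(1)
totals = sorted(random.triangular(9.0, 14.0, 10.0) for _ in range(10_000))

def pctl(xs, p):
    """Percentile by linear interpolation on a pre-sorted list."""
    k = (len(xs) - 1) * p
    f = int(k)
    return xs[f] + (xs[min(f + 1, len(xs) - 1)] - xs[f]) * (k - f)

n = len(totals)
mean = sum(totals) / n
sd = (sum((x - mean) ** 2 for x in totals) / (n - 1)) ** 0.5
p50, p80, p90 = (pctl(totals, q) for q in (0.50, 0.80, 0.90))

deterministic = 10.0                      # single-point estimate ($M)
cv = sd / mean                            # coefficient of variation
contingency = p80 - deterministic         # evidence-based contingency
prob_exceed = sum(t > deterministic for t in totals) / n
```

With this skew, roughly 80% of simulated outcomes exceed the deterministic estimate, and the P80 contingency comes out near $2M: the kind of evidence-based number the table above describes.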
Sensitivity Analysis: Identifying the Biggest Risk Drivers
Monte Carlo generates the output distribution. Sensitivity analysis identifies which input variables contribute the most to that distribution. The tornado chart is the standard visualization.
How to Build a Tornado Chart in Excel
The Sensitivity Analysis sheet (Sheet 4) in the template automates the data preparation. Here is the method:
| Step | Action | Result |
|---|---|---|
| 1. Baseline | Set all variables to their Most Likely values. Calculate the total (Deterministic estimate). | Baseline total: $10,600,000 |
| 2. Swing one variable | Hold all other variables at Most Likely. Move variable #1 from its Min to its Max. Record the total at each extreme. | Total at Min = $10,200,000; Total at Max = $11,400,000 |
| 3. Calculate the swing | Swing = Total at Max – Total at Min. This is the sensitivity of the output to that single variable. | Swing = $1,200,000 |
| 4. Repeat across all variables | Perform Step 2–3 across all 10 input variables. | 10 swing values |
| 5. Rank and chart | Sort variables by swing size (largest at top). Insert a horizontal bar chart. | Tornado chart: visual ranking of risk drivers. The top 2–3 bars typically drive 60–80% of total uncertainty. |
| 6. Focus mitigation | Direct risk mitigation resources to the top 2–3 variables. Reducing uncertainty on the biggest drivers has the largest impact on the overall distribution. | Management attention and contingency allocation are evidence-based, not intuition-based. |
The tornado chart answers the board’s natural follow-up question after seeing the probability distribution: “What can we do to narrow the range?”
The answer is always: reduce uncertainty on the variables at the top of the tornado. This links Monte Carlo directly to risk treatment actions.
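For a purely additive cost model, the one-at-a-time swing reduces to Max minus Min per item, which makes the tornado data trivial to compute. A sketch with three hypothetical items (the template's generic recompute handles non-additive models too):

```python
# One-at-a-time sensitivity for a tornado chart. Item names and
# three-point estimates are hypothetical (figures in $k).
items = {
    "Structural steel": (1_800, 2_000, 3_200),
    "Labour":           (2_700, 3_000, 3_600),
    "MEP systems":      (1_400, 1_500, 1_800),
}

baseline = sum(ml for _, ml, _ in items.values())   # all at Most Likely

swings = []
for name, (lo, ml, hi) in items.items():
    total_at_min = baseline - ml + lo    # swing this item to its Min
    total_at_max = baseline - ml + hi    # swing this item to its Max
    swings.append((name, total_at_max - total_at_min))

swings.sort(key=lambda s: s[1], reverse=True)       # biggest driver first
```

Sorting the swings in descending order gives the tornado's bar order directly.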
Seven Applications of Monte Carlo Simulation in Risk Management
| Application | Uncertain Variables | Output Distribution | Standards / Frameworks |
|---|---|---|---|
| 1. Project Cost Risk Analysis | Line-item costs (Min/ML/Max per work package) | Total project cost distribution; P50/P80/P90 estimates; evidence-based contingency | PMI PMBOK; AACE International; ISO 31010 |
| 2. Project Schedule Risk Analysis | Activity durations (Min/ML/Max per task); dependency logic | Project completion date distribution; probability of meeting the deadline; critical-path risk | PMI PMBOK; AACE RP 57R-09 |
| 3. Cyber Risk Quantification (FAIR Model) | Threat event frequency; vulnerability; loss magnitude components (primary/secondary) | Annualized loss expectancy distribution; VaR at 95th percentile; expected loss | FAIR Institute; NIST CSF 2.0; ISO 27005 |
| 4. Operational Loss Modeling | Loss frequency (Poisson); loss severity (lognormal) | Aggregate operational loss distribution; expected loss; unexpected loss (VaR/CVaR) | Basel II/III; COSO ERM; ISO 31000 |
| 5. Pension Fund ALM Projections | Asset returns (by class); discount rates; inflation; mortality/longevity | Funded-ratio distribution over 10–30 year horizons; contribution-rate sensitivity; probability of underfunding | SOA; CFA Institute; OECD pension guidelines |
| 6. Revenue / Cash-Flow Forecasting | Sales volume; unit pricing; customer churn; market growth | Revenue distribution; cash-flow-at-risk; probability of missing revenue target | COSO ERM; ISO 31000 |
| 7. Insurance / Catastrophe Modeling | Event frequency; event severity; geographic correlation | Aggregate annual loss distribution; probable maximum loss (PML); reinsurance attachment point | Solvency II; NAIC; ISO 31000 |
The Excel template demonstrates Application #1 (project cost), but the identical six-step process applies to every application above.
Replace the cost line items with the relevant uncertain variables, assign distributions, run iterations, and analyze the output.
Our scenario analysis guide complements Monte Carlo by providing the narrative stress-test scenarios that frame the quantitative results.
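Application #4 (and the FAIR-style model in #3) uses a frequency-times-severity structure rather than a sum of line items. A sketch with hypothetical calibration values: Poisson event counts (sampled with Knuth's method) and lognormal severities:

```python
import math
import random

# Frequency-severity sketch. lam, mu, sigma are hypothetical calibration
# values for illustration, not real loss data.
def poisson(lam):
    """Knuth's method: multiply uniforms until the product drops below e^-lam."""
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= limit:
            return k
        k += 1

random.seed(3)
lam, mu, sigma = 2.0, 11.0, 1.0      # ~2 events/yr; median severity e^11
N = 10_000
annual = sorted(sum(random.lognormvariate(mu, sigma)
                    for _ in range(poisson(lam)))
                for _ in range(N))

expected_loss = sum(annual) / N      # analytic value: lam * e^(mu + sigma^2/2)
var95 = annual[int(N * 0.95)]        # 95th-percentile annual loss
```

The 95th-percentile annual loss sits far above the expected loss because the severity tail is long; that gap is the "unexpected loss" the table refers to.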
What’s Inside the Free Monte Carlo Excel Template
| Sheet | Purpose | Key Features |
|---|---|---|
| 1. Project Cost Model | Input model: define 10 cost line items with three-point estimates (Min, Most Likely, Max) | PERT Mean and StdDev auto-calculate; total row sums all items; summary statistics section computes P50/P80/P90 using the normal approximation; blue-font inputs, black-font formulas |
| 2. Simulation (1,000 Runs) | The Monte Carlo engine: 1,000 iterations × 10 variables = 10,000 random samples | Each cell uses RAND() inside a triangular distribution formula; Column L sums each iteration; color-scale conditional formatting on totals (green→amber→red); press F9 to regenerate |
| 3. Simulation Statistics | Output analysis: 17 statistical measures computed from the 1,000 simulated totals | Mean, Median (P50), StdDev, Min, Max, Range, P10/P25/P75/P80/P90/P95, CV, Contingency at P80 and P90, Probability of Exceeding Deterministic; each metric includes the Excel formula and plain-language interpretation |
| 4. Sensitivity Analysis | Tornado chart data: one-at-a-time sensitivity showing each variable’s impact on the total | Total at Min and Max per variable; swing calculation; swing as % of deterministic; rank column; color-scale on swing; ready to paste into a bar chart |
| 5. How-To Guide | Step-by-step instructions embedded in the workbook | Six-step process from model definition through board reporting; formula explanations; tips on iteration count, correlation, and add-in alternatives |
11,102 formulas. Zero errors. Open the workbook, replace the sample cost items with your own data, press F9, and read the results.
Eight Pitfalls in Monte Carlo Simulation
| # | Pitfall | Consequence | Fix |
|---|---|---|---|
| 1 | Garbage in, garbage out: poorly estimated distributions | The simulation produces a mathematically precise but factually wrong distribution | Elicit Min/ML/Max from subject-matter experts in structured workshops. Cross-validate with historical data. Document assumptions. |
| 2 | Ignoring correlation between variables | Independent sampling underestimates combined risk when variables are positively correlated (e.g., steel and labor costs both rise in a hot market) | Model correlation using Iman-Conover rank correlation or Cholesky decomposition. Use @RISK or Crystal Ball add-ins. |
| 3 | Too few iterations | Percentile estimates are unstable with fewer than 500 iterations; P95 requires 5,000+ iterations to stabilize | Run at least 1,000 iterations. Increase to 10,000 when P90/P95 precision matters. Check convergence by comparing results at 1,000 vs. 5,000. |
| 4 | Wrong distribution choice | Using a normal distribution when losses are right-skewed (lognormal) underestimates the tail; using triangular when data supports a better fit wastes available information | Match the distribution to the data shape. Use lognormal if losses cannot go negative and are right-skewed. Use triangular/PERT when only three-point estimates are available. |
| 5 | Presenting raw statistics without interpretation | The board sees P50, P80, P90 numbers but does not understand what to do with them | Present using “What, So What, Now What”: the number, the implication, the recommended action. Always include the probability of exceeding the approved budget. |
| 6 | Treating Monte Carlo as a black box | Stakeholders distrust results they cannot understand; the simulation becomes a credibility risk | Walk stakeholders through the six-step process. Show the input distributions. Explain that the method is repeated random sampling, not artificial intelligence. |
| 7 | Running Monte Carlo without sensitivity analysis | The board knows the range but not which variables drive the range; mitigation effort is unfocused | Always pair Monte Carlo with a tornado chart. The tornado identifies where to spend mitigation resources. |
| 8 | Simulating once and never updating | The distribution reflects assumptions at a point in time; as the project progresses and uncertainty resolves, the model must update | Re-run the simulation at each project phase gate or risk-review cycle. Narrow the distributions as actual data replaces estimates. |
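Pitfall #2 is worth seeing numerically. The sketch below (hypothetical figures) compares the P80 of a two-item total when the items are sampled independently versus with 0.8 Gaussian correlation via a 2x2 Cholesky factor:

```python
import random

# Two cost items, each Normal(1000, 100), hypothetical. Positive
# correlation widens the total's distribution, so independent sampling
# understates the upper percentiles.
random.seed(5)
N = 20_000

def simulated_p80(rho):
    """P80 of the two-item total under correlation rho (2x2 Cholesky)."""
    out = []
    for _ in range(N):
        z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
        x1 = 1000 + 100 * z1
        x2 = 1000 + 100 * (rho * z1 + (1 - rho ** 2) ** 0.5 * z2)
        out.append(x1 + x2)
    out.sort()
    return out[int(N * 0.80)]

p80_indep = simulated_p80(0.0)
p80_corr = simulated_p80(0.8)        # correlated case is materially wider
```

The correlated P80 is materially higher; an independent-sampling model would report the lower number and understate the required contingency.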
Deploying Monte Carlo in Your Risk Program
| Phase | Timeline | Actions | Owner | Deliverable |
|---|---|---|---|---|
| Phase 1: Pilot Model | Days 1–25 | Select one high-value project or risk scenario to pilot; download the Excel template; customize the input variables and distributions; run the first simulation; compare P50/P80/P90 to the existing deterministic estimate | Risk Manager / CRO | Pilot Monte Carlo model; comparison of probabilistic vs. deterministic estimates; gap analysis |
| Phase 2: Sensitivity & Validation | Days 26–50 | Run sensitivity analysis (tornado chart); validate input distributions with subject-matter experts; back-test the model against historical project outcomes if data is available; refine distributions based on validation feedback | Risk Manager / Project Manager / SMEs | Validated model; tornado chart; documented assumptions; back-test results |
| Phase 3: Board Presentation | Days 51–75 | Prepare the first Monte Carlo board report: deterministic vs. P50/P80/P90, contingency calculation, tornado chart (top 3 risk drivers), probability of exceeding budget; present to the Risk Committee and Board | CRO / CFO | First Monte Carlo board report; Board discussion and decision on contingency funding level |
| Phase 4: Scale & Embed | Days 76–90 | Extend Monte Carlo to additional projects, business plans, or risk scenarios; develop a standard Monte Carlo methodology guide; train risk managers and project managers; integrate simulation outputs into the enterprise risk register and KRI dashboard | CRO / Risk Manager / PMO | Enterprise Monte Carlo methodology guide; trained analysts; simulation outputs integrated into the risk register |
The Future of Monte Carlo in Risk Management
AI-Augmented Distribution Fitting. Machine learning models are improving the accuracy of distribution selection by analyzing historical data patterns and recommending the best-fit distribution automatically.
Rather than assuming triangular or normal, the model suggests a lognormal with specific parameters calibrated to your organization’s loss history. See our AI risk assessment framework guide.
Real-Time Monte Carlo. Cloud computing enables Monte Carlo simulations with 100,000+ iterations running in seconds and updating in real time as input data changes.
Pension fund ALM models, trading-desk VaR calculations, and supply-chain risk models are already running on this infrastructure.
The same capability is becoming accessible through browser-based tools and GRC platforms.
Integrated Simulation Across Risk Types. Advanced organizations are combining Monte Carlo across risk types: market risk, credit risk, operational risk, and strategic risk into a single enterprise-wide simulation that produces an aggregate loss distribution.
This enterprise Monte Carlo replaces siloed risk models with a correlated portfolio view. The COSO ERM Framework envisions this integrated approach under the Performance and Review & Revision components.
Download Your Free Monte Carlo Simulation Template
Five sheets, 11,102 formulas, zero errors. The template demonstrates the complete Monte Carlo process: input model, 1,000-iteration simulation, statistical analysis, sensitivity analysis, and a step-by-step guide.
Download, replace the sample data with your own, press F9, and present the results to your board. Explore these riskpublishing.com resources: Risk Quantification for Boards • Scenario Analysis • Risk Assessment Matrix • Enterprise Risk Management Framework.
More resources: Risk Register Template • KRI Dashboard Guide • Risk Appetite vs. Tolerance • Three Lines Model • Cyber Risk Assessment Framework • Risk Management for Pension Funds • Third-Party Risk Management • Business Continuity Plan • How to Mitigate Risk • Project Risk Assessment • AI Risk Assessment Framework.
Frequently Asked Questions
How many iterations should a Monte Carlo simulation run?
A minimum of 1,000 iterations produces stable mean and P50 estimates. Increase to 5,000–10,000 iterations when P90 and P95 precision matters (the tails require more data to stabilize).
Check convergence by comparing results at 1,000 vs. 5,000 iterations; if the P90 shifts by more than 2%, run more iterations. The Excel template runs 1,000 by default; expand the simulation table to 10,000 rows when needed.
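The convergence check is mechanical: estimate P90 at two iteration counts and compare the relative shift. A sketch with an illustrative triangular input:

```python
import random

# Convergence check: estimate P90 at 1,000 vs 10,000 iterations for an
# illustrative triangular(9, 10, 14) input and compare the shift.
def p90_of(n, seed):
    random.seed(seed)
    xs = sorted(random.triangular(9.0, 14.0, 10.0) for _ in range(n))
    return xs[int(n * 0.90)]

small = p90_of(1_000, 0)
large = p90_of(10_000, 0)
shift = abs(small - large) / large      # relative P90 movement
converged = shift <= 0.02               # the 2% rule of thumb above
```

If `converged` comes back False, keep increasing the iteration count until the shift settles under the 2% threshold.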
Can Monte Carlo simulation be done in Excel without add-ins?
Yes. The accompanying template demonstrates a fully functional Monte Carlo simulation using native Excel functions (RAND, IF, SQRT, AVERAGE, PERCENTILE, STDEV, COUNTIF). No VBA, no macros, no add-ins required. Press F9 to regenerate all random values.
Commercial add-ins like @RISK (Palisade) and Crystal Ball (Oracle) add convenience features: correlation modeling, distribution fitting, charting, and iteration management. Start with native Excel; graduate to add-ins as your simulation needs grow.
How does Monte Carlo relate to scenario analysis?
Monte Carlo and scenario analysis are complementary, not competing, techniques. Monte Carlo generates thousands of random scenarios to produce a probability distribution. Scenario analysis constructs a small number of deliberate, narrative-driven scenarios (e.g., “stagflation,” “cyber catastrophe,” “pandemic”) to explore specific risk pathways.
Use Monte Carlo to quantify the probability range; use scenario analysis to stress-test specific tail events. Present both to the board: the distribution from Monte Carlo and the narrative impact assessment from scenarios.
What is the PERT distribution?
PERT (Program Evaluation and Review Technique) is a modified beta distribution that uses three-point estimates (Min, Most Likely, Max) to produce a bell-shaped curve weighted toward the Most Likely value. The PERT Mean = (Min + 4×ML + Max) / 6. The PERT StdDev = (Max – Min) / 6.
PERT is preferred over the triangular distribution when the Most Likely estimate should carry more weight than the tails. The Excel template calculates PERT Mean and StdDev on Sheet 1 alongside the triangular simulation on Sheet 2.
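The PERT arithmetic, plus the capped normal-approximation sampling the template describes, in a short sketch (the three-point estimate is illustrative):

```python
import random
from statistics import NormalDist

# PERT summary statistics and the normal-approximation sampling capped
# to [Min, Max], as described above. Figures are hypothetical.
lo, ml, hi = 900.0, 1_000.0, 1_400.0
pert_mean = (lo + 4 * ml + hi) / 6        # (Min + 4*ML + Max) / 6
pert_sd = (hi - lo) / 6                   # (Max - Min) / 6

def pert_sample(r):
    """Normal(PERT mean, PERT sd) inverse CDF, clamped to [Min, Max]."""
    x = NormalDist(pert_mean, pert_sd).inv_cdf(r)
    return min(max(x, lo), hi)

random.seed(9)
samples = [pert_sample(random.random()) for _ in range(10_000)]
```

A full Beta-PERT sample would draw from a beta distribution with shape parameters derived from Min/ML/Max; the capped normal approximation is the simpler scheme the workbook uses.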
How should Monte Carlo results be presented to the board?
Use the “What, So What, Now What” framework. What: “The Monte Carlo simulation produces a P50 project cost of $10.8M, a P80 of $11.8M, and a P90 of $12.4M.” So What: “There is a 65% probability the project will exceed the approved $10.6M deterministic budget. The top risk driver is structural steel pricing, which accounts for 35% of total uncertainty.” Now What: “We recommend funding at P80 ($11.8M), requiring $1.2M additional contingency. The project team will focus mitigation on steel-price hedging and MEP design finalization.” See our risk quantification for boards guide.
References
1. ISO 31010:2019 – Risk Assessment Techniques
2. ISO 31000:2018 – Risk Management Guidelines
3. PMI PMBOK Guide – Quantitative Risk Analysis
4. AACE International – Cost Estimate Classification and Risk Analysis
5. FAIR Institute – Factor Analysis of Information Risk
6. COSO Enterprise Risk Management (2017)
7. Society of Actuaries – Model Risk Management (2025)
8. NIST Cybersecurity Framework 2.0
9. IIA Three Lines Model (2020)
10. Palisade (@RISK)
11. IRM – Institute of Risk Management
12. OECD Pension Funds’ Risk-Management Framework

Chris Ekai is a risk management expert with over 10 years of experience in the field. He holds a Master's (MSc) degree in Risk Management from the University of Portsmouth and is a CPA and finance professional. He currently works as a Content Manager at Risk Publishing, writing about Enterprise Risk Management, Business Continuity Management, and Project Management.
