Platform engineering forms the backbone of today's businesses, making digital services reliable and scalable. To steer this ship well, having clear performance indicators is key. Think of Key Performance Indicators (KPIs) as your team's friendly guide, helping focus on what the business truly needs. They're like signposts, showing how well your systems are doing and pointing out ways to make them even better. 

In this blog, we'll explore how KPIs play a crucial role in shaping a more efficient and successful way of working for platform engineering teams.

What Are KPIs?

Key Performance Indicators (KPIs) are measurable metrics that help organizations gauge their performance against specific goals and objectives. 

In the context of platform engineering, KPIs can provide valuable insights into the health, efficiency, and effectiveness of your engineering efforts. By tracking and analyzing KPIs, you can make data-driven decisions to improve your platform's performance and functionality.

Choosing the Right KPIs:

Selecting the right KPIs is crucial to a successful platform engineering strategy. Here are some essential KPIs that can help you optimize your efforts:

1) Uptime and Availability:

Importance: The availability of the platform is crucial as downtime can lead to revenue loss and a poor user experience. High uptime ensures that the platform is accessible to users when they need it.

Calculating Uptime:

24 hours/day x 365 days/year = 8,760 hours/year.

Number of hours your platform is up and running per year ÷ 8,760 hours x 100 = Yearly uptime percentage.
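
If you prefer to compute this programmatically, here's a minimal Python sketch of the same arithmetic, using a made-up downtime figure:

```python
# Minimal sketch: yearly uptime percentage from recorded downtime.
# The downtime figure below is illustrative, not real data.

HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def yearly_uptime_percentage(downtime_hours: float) -> float:
    """Return uptime as a percentage of the whole year."""
    hours_up = HOURS_PER_YEAR - downtime_hours
    return hours_up / HOURS_PER_YEAR * 100

# Example: 4.5 hours of downtime over the year -> ~99.95% uptime
print(f"{yearly_uptime_percentage(4.5):.2f}%")
```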

Uptime signals a system's binary state of operation, whereas Availability delves deeper, encompassing performance and functionality. It acknowledges that a system, while operational, may run below par, impacting user service. 

Thus, evaluating both uptime and availability offers a comprehensive insight into a system's operational capacity and user satisfaction.

2) Error Rate:

Importance: Monitoring error rates helps identify issues and bugs within the platform. A high error rate indicates potential problems that need to be addressed to maintain reliability.

The Error Rate is a crucial Key Performance Indicator (KPI) that measures the frequency of errors or mistakes made in a specific process or system. To calculate the Error Rate for a platform engineering team, you can use the following formula:

Error Rate = (Number of Errors / Total Number of Actions) × 100

Where:

  • Number of Errors: This represents the total count of errors encountered within a given period.

  • Total Number of Actions: This refers to the total count of actions or transactions performed within the same period.

The result is typically multiplied by 100 to express the Error Rate as a percentage, providing a clearer representation of the error frequency in relation to the total actions taken.

For instance, let's say the platform engineering team performed 10,000 actions within a month and encountered 100 errors during that time. Using the formula:

Error Rate = (100 / 10,000) × 100

Error Rate = 1%

This means that the Error Rate for that period is 1%, indicating that 1% of the actions performed resulted in errors. Adjust the time frame and the counts of actions and errors to match the specific period you're evaluating so you can calculate the Error Rate accurately for your platform engineering team.
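
If you're pulling these counts from logs or a metrics store, a minimal Python sketch of the same calculation (using the illustrative figures from the example above) might look like this:

```python
def error_rate(num_errors: int, total_actions: int) -> float:
    """Error Rate = (Number of Errors / Total Number of Actions) x 100."""
    if total_actions == 0:
        return 0.0  # avoid division by zero when no actions were recorded
    return num_errors / total_actions * 100

# The month from the example: 100 errors across 10,000 actions -> 1.0%
print(f"{error_rate(100, 10_000):.1f}%")
```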

What are actions and errors?

Actions: Any meaningful user interaction or system operation within the platform, such as clicking buttons, processing transactions, or executing commands.

Errors: Unexpected or undesired outcomes resulting from actions, encompassing system malfunctions, failed transactions, error messages, or inaccurate data output.

Why is it important for the platform engineering team?

Measuring the error rate involves tallying the count of errors against the total number of actions within a specified timeframe. 

This clarity ensures a precise evaluation of the platform's performance, aiding engineers in identifying and addressing issues effectively.

3) Scalability Metrics:

Importance: Tracking the platform's ability to scale horizontally or vertically is crucial as it ensures the system can handle increased load without performance degradation. Scalability is vital for growth and accommodating more users.

The Scalability Index formula is: 

Scalability Index = (Performance of Larger System / Performance of Smaller System) × 100%

Performance of Larger System: Refers to the system's performance when it's scaled up or expanded, handling higher loads or resources.

Performance of Smaller System: Denotes the system's performance in its initial or smaller state, typically handling lower loads or resources.

Evaluating performance often encompasses factors like response time (in milliseconds), throughput (in transactions per second), resource utilization (in percentage), or any specific metric relevant to the platform's functionality.

For instance, measuring the system's response time under increasing loads, say from 100 to 1000 concurrent users, allows engineers to assess how quickly the system processes requests as user numbers rise. 

These metrics aid in understanding performance changes as the system transitions from its smaller state to a larger, more heavily loaded environment.

Suggested Approach:

For instance, in measuring the Scalability Index, consider using response time as a performance metric. Measure the response time of critical operations or user interactions under varying loads, comparing performance between the smaller and larger system configurations.

Scalability Index Formula using Response Time:

Scalability Index = (Response Time of Larger System / Response Time of Smaller System) × 100%

By assessing how response times scale as the system expands, this approach offers insights into how well the system handles increased loads or resources. Keep in mind that lower response times are better, so an index close to (or below) 100% means the system scales well, while a much higher index signals that responses slow down as load grows.

Engineers can adapt this method by replacing "response time" with other relevant performance metrics to comprehensively evaluate scalability based on the platform's unique requirements and functionalities.
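
To make the response-time variant concrete, here's a minimal Python sketch; the load levels and latency figures are purely hypothetical:

```python
def scalability_index(larger_ms: float, smaller_ms: float) -> float:
    """Scalability Index = (Response Time of Larger System /
    Response Time of Smaller System) x 100%."""
    return larger_ms / smaller_ms * 100

# Hypothetical measurements: average response time of a critical operation
# at 100 concurrent users vs. 1,000 concurrent users.
baseline_ms = 120.0  # smaller configuration
scaled_ms = 150.0    # larger configuration

index = scalability_index(scaled_ms, baseline_ms)
print(f"Scalability Index: {index:.0f}%")  # 125% -> responses are 25% slower at 10x load
```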

4) Throughput:

Importance: Throughput, which quantifies successful requests or transactions within a timeframe, is critical for evaluating system efficiency and capacity. It guides capacity planning, ensures the platform can handle expected user loads, and showcases its scalability and performance.

Formula: Throughput = Total processed requests or transactions/Time period
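
As a quick illustration, here's a minimal Python sketch of the calculation, assuming you have a count of successful requests and the length of the measurement window:

```python
def throughput(successful_requests: int, period_seconds: float) -> float:
    """Throughput = total processed requests / time period (here: requests per second)."""
    return successful_requests / period_seconds

# Hypothetical window: 180,000 successful requests in one hour -> 50 req/s
print(f"{throughput(180_000, 3600):.1f} req/s")
```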

5) Resource Utilization:

Importance: Monitoring resource usage helps optimize infrastructure, ensuring efficient resource allocation and cost-effectiveness while preventing resource bottlenecks.

Utilization Rate = (Resource Used / Total Available Resource) × 100%

Examples of Resources Used can be:

CPU: This refers to the amount of processing power utilized by the system.

RAM (Memory): It indicates the extent of memory resources in use by the system.

Disk Space: Represents the amount of storage space occupied by data, applications, or system files.

Network Bandwidth: Denotes the level of data transfer or network usage.

Examples of Total Available Resources can be:

CPU: The total processing capacity of the system's CPU.

RAM (Memory): The total amount of available memory in the system.

Disk Space: Total storage capacity or free disk space.

Network Bandwidth: The maximum data transfer capacity or available network bandwidth.

Utilization Rate Calculation:

Utilization Rate expresses the percentage of a resource in use relative to its total available capacity. It's computed for each resource by dividing the used amount by the total available amount and multiplying by 100.

For example, for CPU utilization rate:

CPU Utilization Rate = (CPU Used / Total CPU Capacity) × 100

Similarly, this formula applies to other resources like RAM, disk space, or network bandwidth to determine their utilization rates.
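
As one illustration, here's a minimal Python sketch that reads these utilization rates on a single host using the third-party psutil library; in practice, many teams pull the same numbers from their monitoring stack (Prometheus, CloudWatch, and the like) instead:

```python
# Minimal sketch using the psutil library (pip install psutil) to read
# utilization rates on a single host.
import psutil

cpu_pct = psutil.cpu_percent(interval=1)   # CPU Used / Total CPU Capacity x 100
mem_pct = psutil.virtual_memory().percent  # RAM used / total RAM x 100
disk_pct = psutil.disk_usage("/").percent  # disk space used / total capacity x 100

print(f"CPU utilization:    {cpu_pct:.1f}%")
print(f"Memory utilization: {mem_pct:.1f}%")
print(f"Disk utilization:   {disk_pct:.1f}%")
```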

When assessing resource utilization across different components like CPU, memory (RAM), disk space, and network bandwidth, it's crucial to understand acceptable utilization rates. 

Here's guidance on evaluating utilization rates for various resources:

Acceptable Utilization Rates for Different Resources:

  1. CPU Utilization Rate:

Guideline: Generally, average CPU utilization rates around 50% are considered optimum. Beyond this, sustained high usage may indicate the need for additional processing power to maintain optimal performance.

  2. RAM (Memory) Utilization Rate:

Guideline: Memory utilization rates up to 70% are often acceptable. Higher utilization might impact system responsiveness or lead to swapping, affecting performance.

For workloads whose memory use is not volatile, utilization rates of 70%-80% can often be acceptable. For volatile workloads, provision generously and, of course, take steps as a team to reduce the volatility.

  3. Disk Space Utilization Rate:

Guideline: Aim to keep disk space utilization within the 50%-80% range at most to prevent performance degradation or potential data loss due to insufficient space. Monitor this closely: once utilization reaches 70%-80%, provision additional disk space immediately.

  4. Network Bandwidth Utilization Rate:

Guideline: Network utilization rates vary based on the network's capacity and usage patterns. Avoid consistently saturating the network's bandwidth so that enough headroom remains for smooth data transfer and congestion is prevented.

Importance of Monitoring Utilization Rates:

Regularly monitoring these utilization rates allows engineers to:

  • Identify potential bottlenecks or resource constraints.

  • Proactively allocate resources or scale infrastructure as needed.

  • Maintain system performance within baselines.

  • Prevent service disruptions due to resource exhaustion.

Having different resources, each with its own utilization rate, can be overwhelming to calculate and consolidate. It can be equally difficult to view all resources through a single lens.

So, here are a few suggestions to overcome this:

Dashboard Display: Create a centralized view to show CPU, RAM, disk space, and network metrics together.

Resource Patterns Analysis: Study usage trends to find connections between different resources.

Consistent Metrics: Normalize metrics for easy comparison across resources.

Alert Systems: Set up alerts for resource usage thresholds to take proactive measures.

Optimization Strategies: Develop plans based on combined resource analysis for efficient system management.

Reporting Tools: Generate comprehensive reports merging diverse resource data for informed decision-making.

This approach offers a streamlined view of system health, reveals resource correlations, and enables proactive resource management for optimal system performance.
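
To make the alerting suggestion concrete, here's a minimal Python sketch that checks a set of utilization readings against the guideline thresholds discussed above; both the thresholds and the readings are illustrative:

```python
# Minimal sketch: compare current utilization readings against alert thresholds.
# The thresholds roughly mirror the guidelines above (the network value is illustrative).
THRESHOLDS = {"cpu": 50.0, "memory": 70.0, "disk": 80.0, "network": 90.0}

def check_utilization(readings: dict[str, float]) -> list[str]:
    """Return alert messages for every resource above its threshold."""
    alerts = []
    for resource, used_pct in readings.items():
        limit = THRESHOLDS.get(resource)
        if limit is not None and used_pct > limit:
            alerts.append(f"{resource} utilization {used_pct:.0f}% exceeds {limit:.0f}% threshold")
    return alerts

print(check_utilization({"cpu": 42.0, "memory": 76.5, "disk": 83.0, "network": 30.0}))
# -> memory and disk alerts
```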

Adjustments Based on Specific Needs:

Resource utilization thresholds may vary based on the platform's unique requirements, application characteristics, and industry standards. 

Therefore, adjusting these thresholds to suit specific operational demands ensures an effective and comprehensive understanding of resource utilization.

6) Incident Response and Resolution Time:

Incident Response Time:- 

Importance: Incident Response Time measures the duration taken to acknowledge and start addressing an incident after it occurs. A shorter response time indicates a more proactive and efficient response, reducing the impact of incidents on system availability and user experience.

Formula: 

Incident Response Time = Time of Incident Acknowledgement - Time of Incident Occurrence

Incident Resolution Time:- 

Importance: Incident Resolution Time measures the duration taken to resolve an incident fully. A shorter resolution time indicates a faster restoration of service, minimizing downtime, and ensuring a quicker return to normal system functionality.

Formula: 

Incident Resolution Time = Time of Incident Resolution - Time of Incident Occurrence
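
Here's a minimal Python sketch of both calculations from incident timestamps (the timestamps are made up):

```python
from datetime import datetime

# Illustrative incident timestamps
occurred = datetime(2024, 1, 10, 9, 0)       # incident occurs
acknowledged = datetime(2024, 1, 10, 9, 12)  # on-call engineer acknowledges
resolved = datetime(2024, 1, 10, 10, 45)     # service fully restored

response_time = acknowledged - occurred  # Incident Response Time
resolution_time = resolved - occurred    # Incident Resolution Time

print(f"Response time:   {response_time}")    # 0:12:00
print(f"Resolution time: {resolution_time}")  # 1:45:00
```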

7) Security Metrics:

Importance: Security metrics track the system's resilience against cyber threats and its compliance with standards. They help engineers identify vulnerabilities and maintain data integrity and user trust.

Examples of Security Metrics:

Number of Security Incidents: Quantifies the total count of security breaches, incidents, or unauthorized access attempts within a defined period. Each instance of unauthorized access or breach is documented, aiding in understanding the frequency and severity of security issues.

Formula: Count the total incidents detected within a specific period.

Example: In a month, the system encountered 15 unauthorized access attempts.

Vulnerability Rate: Measures the total count of identified vulnerabilities or weaknesses within the system's infrastructure, applications, or networks. It assesses the system's susceptibility to potential threats and highlights areas requiring immediate attention or patching.

Formula: Divide the number of identified vulnerabilities by the total systems or applications assessed.

Example: 20 vulnerabilities found in an assessment of 100 systems.

Compliance Adherence Rate: Evaluates the system's compliance with industry-specific or regulatory standards such as GDPR, HIPAA, ISO, etc. It quantifies the system's alignment with required security protocols, policies, and guidelines.

Formula: Divide compliant systems by the total number of systems assessed, multiplied by 100 for a percentage.

Example: 80 out of 100 systems meet GDPR standards.

Patch Management Effectiveness: Assesses the efficiency and timeliness of applying security patches or updates to address known vulnerabilities. It tracks the system's vulnerability exposure duration and the speed of implementing necessary security fixes.

Formula: Calculate the time taken to apply patches after their release.

Example: On average, patches are applied within 3 days of release.

Time to Detect and Time to Respond: Measures the duration taken to detect security incidents or breaches and the subsequent response time to mitigate or resolve these issues. A shorter detection and response time indicates a more proactive and efficient security posture.

Formula: Measure the time taken from incident detection to resolution.

Example: Incidents are detected within 1 hour and resolved in 4 hours.

These metrics collectively provide a comprehensive overview of a system's security posture, aiding engineers in identifying potential weaknesses, implementing necessary safeguards, and ensuring ongoing compliance with security standards and regulations. 

Adjust and prioritize these metrics based on the system's specific security needs and potential threats it faces.
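
To illustrate two of the rate-style metrics above, here's a minimal Python sketch using the example figures (20 vulnerabilities across 100 systems, and 80 of 100 systems meeting GDPR requirements):

```python
def vulnerability_rate(vulnerabilities: int, assets_assessed: int) -> float:
    """Vulnerabilities found per system/application assessed."""
    return vulnerabilities / assets_assessed

def compliance_adherence_rate(compliant: int, assessed: int) -> float:
    """Percentage of assessed systems that meet the required standard."""
    return compliant / assessed * 100

print(f"Vulnerability rate:   {vulnerability_rate(20, 100):.2f} per system")
print(f"Compliance adherence: {compliance_adherence_rate(80, 100):.0f}%")
```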

8) Deployment Frequency and Lead Time:

Importance: These metrics indicate the team's agility and efficiency in delivering new features or updates. Faster deployment frequencies and shorter lead times contribute to faster innovation and adaptability.

Formulas:

  • Deployment Frequency = Number of Deployments / Time Period

  • Lead Time = Time from Idea to Deployment
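
A minimal Python sketch of both metrics, assuming your delivery pipeline can give you deployment timestamps and, for lead time, when work on a change began; the figures below are illustrative:

```python
from datetime import datetime

# Illustrative deployment timestamps pulled from a delivery pipeline.
deployments = [
    datetime(2024, 3, 4), datetime(2024, 3, 8), datetime(2024, 3, 15),
    datetime(2024, 3, 21), datetime(2024, 3, 28),
]
period_days = 30

deployment_frequency = len(deployments) / period_days  # deployments per day
print(f"Deployment frequency: {deployment_frequency:.2f} per day")

# Lead time for one change: from when work on the idea began to its deployment.
work_started = datetime(2024, 3, 1, 9, 0)
deployed = datetime(2024, 3, 4, 17, 0)
print(f"Lead time: {deployed - work_started}")  # 3 days, 8:00:00
```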

9) Developer Satisfaction:

Importance: Gathering user feedback and measuring developer satisfaction can help gauge the platform's developer-friendliness and identify areas for improvement.

Formula: Net Promoter Score (NPS) = Percentage of Promoters - Percentage of Detractors
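
As a quick illustration, here's a minimal Python sketch of the NPS calculation from hypothetical survey responses (scores of 9-10 count as promoters, 0-6 as detractors):

```python
def net_promoter_score(promoters: int, passives: int, detractors: int) -> float:
    """NPS = % promoters - % detractors."""
    total = promoters + passives + detractors
    return (promoters - detractors) / total * 100

# Hypothetical internal developer survey: 60 promoters, 25 passives, 15 detractors
print(f"NPS: {net_promoter_score(60, 25, 15):.0f}")  # 45
```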

10) Capacity Planning and Forecasting:

Importance: Predicting future resource needs and capacity requirements is essential for scaling the platform cost-effectively. This is also an area where over-provisioning can creep in, needlessly inflating costs.

"Current Usage" and "Forecasted Growth" can encompass various resources and cannot typically be represented by a single number.

Capacity Needed: The anticipated resources required in the future to support the platform's demand.

Current Usage: Represents the present utilization of various resources like CPU, memory, storage, network bandwidth, etc.

Forecasted Growth: Refers to the expected percentage increase in demand or workload across these resources over a specific period.

Considerations for Calculation:

Rather than relying on a single number for "Current Usage," capacity planning involves assessing multiple resource parameters and their respective utilization levels.

For instance:

  • CPU usage percentage

  • Memory utilization in GB

  • Storage space used in TB

  • Network bandwidth in Mbps

Similarly, "Forecasted Growth" estimates the expected percentage increase across these resources based on business projections, user demand trends, or application usage forecasts.

Capacity Needed Formula:

Capacity Needed = Current Usage × (1 + Forecasted Growth), where Forecasted Growth is expressed as a fraction (for example, 0.20 for an expected 20% increase)

This formula estimates the required resources for future scalability based on the present resource utilization and anticipated growth. 

However, actual resource planning involves analyzing various resource parameters individually and projecting their respective growth for accurate capacity planning. 

Adjust the parameters according to specific resource types and anticipated growth rates to derive more precise estimations.                                                                   
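
To illustrate, here's a minimal Python sketch that projects per-resource capacity using the formula above; all figures are illustrative:

```python
# Minimal sketch: project per-resource capacity from current usage and a
# forecasted growth rate. All figures are illustrative.
current_usage = {"cpu_cores": 48, "memory_gb": 192, "storage_tb": 12, "bandwidth_mbps": 800}
forecasted_growth = {"cpu_cores": 0.30, "memory_gb": 0.25, "storage_tb": 0.50, "bandwidth_mbps": 0.20}

for resource, used in current_usage.items():
    needed = used * (1 + forecasted_growth[resource])  # Capacity Needed = Current Usage x (1 + growth)
    print(f"{resource}: {used} now -> ~{needed:.0f} needed")
```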

Implementing a KPI Strategy for Your Platform Engineering Team:

Define Clear Objectives: Begin by setting clear and specific goals for your platform engineering team. What do you want to achieve? Your KPIs should align with these objectives.

Collect and Analyze Data: Implement tools and processes to collect relevant data for your chosen KPIs. Use data analytics and visualization tools to gain insights from the data.

Regular Monitoring: Establish a routine for monitoring KPIs. Regularly check and update your KPI data to ensure you have a real-time understanding of your platform's performance.

Benchmarking: Compare your KPIs with industry standards and competitors to understand where you stand and identify areas for improvement.

Actionable Insights: Use the insights gained from KPI analysis to make informed decisions and optimize your platform engineering team. If you notice a KPI is consistently falling short of its target, take corrective actions.

Continuous Improvement: Your platform engineering strategy should be an ongoing process. Continuously review and refine your KPIs as your platform evolves and new challenges emerge.

Benefits of KPI-driven Platform Engineering

Data-Driven Decision Making: KPIs provide objective data, enabling informed decisions rather than relying on assumptions or intuition.

Proactive Issue Identification: Monitoring KPIs can help in the early detection of potential issues or bottlenecks, allowing for timely interventions.

Performance Optimization: Since this is an emerging role, clear KPIs can help platform engineers direct their efforts towards areas that need improvement, optimizing the platform's overall performance.

Enhanced User Experience: Prioritizing KPIs related to user satisfaction directly translates into a better user experience and retention.

Conclusion:

In the realm of platform engineering, Key Performance Indicators (KPIs) are our guiding stars, steering us toward efficiency, reliability, and scalability.

Harnessing KPI insights empowers teams to make informed decisions, drive continuous improvement, and align strategies with organizational goals.

As we navigate the complex terrains of technological innovation, the strategic utilization of KPIs by platform engineering teams remains not just a tool, but a guiding principle, ensuring that every step taken is purposeful, measured, and in pursuit of excellence. 

With KPIs as our guiding light, the journey toward optimizing the platform engineering team becomes an ongoing commitment to elevate performance, enhance user experiences, and pave the way for a more agile and successful future.

