Performance Bottleneck Analysis: Applicable Theories and Methodologies


Profiling the Workflow Processes

In planning a process improvement project, the first step is to analyze the current processes and how they are set up. The primary objective is to determine how the procedures flow and how that flow affects business performance. Several aspects therefore come into focus:

  • The volume of production output at each process stage.
  • How inputs and outputs are organized.
  • Inventory levels, i.e. raw materials, work-in-process, and finished goods.
  • Lead time and cycle time.
  • Labor cost relative to the number of outputs, as a measure of efficiency.
  • The efficiency of the machines and equipment used at each process stage.
  • The method of managing quality.

Profiling the workflow processes gives a clearer view of how materials and workers move from entry points to their end stations. This makes it possible for the analyst to identify the areas where production or operational build-ups take place. Since these constraints pose obstructions that result in poor business performance, a bottleneck analysis applied with an in-depth understanding of the theory of constraints (TOC) is the best approach to consider.

Understanding the Theory of Constraints


Comprehending TOC means focusing on throughput: the rate at which material or work moves through the production or operational system. Recognizing and managing the constraint points at which throughput converges can clear up the supply chain.

The point of convergence is often referred to as the bottleneck: the resource (a raw material or an activity) that requires the longest time to transform or perform. As a result, the volume of work waiting to be processed builds up before the convergence point, while the volume of completed output falls after it. This translates into low productivity and, ultimately, into low profitability.

In identifying the process at fault, scrutinize the entire workflow, not just the points before and after the convergence. Processes are linked together, and the objective at each stage is to keep work flowing from the entry point to the final phase. The stage at which inconsistency is detected therefore deserves the most attention.

Bear in mind that a flaw is not always visible; the system may simply pass it along to the next stages. The problem becomes evident only when the quality or volume of output is no longer consistent or no longer proportionate to the objective of the entire production system.

In analyzing the flaws or inconsistencies, consider the underlying reasons why the workforce or the machinery performs in a manner contrary to the objectives of the production or operational system. Recommending solutions that correct the flaw at its root cause avoids creating another convergence point. Superficial solutions act only on what is apparent and visible; solutions that expose the real issue can de-clog the entire system of its inconsistencies and variables.

A method of analysis often used is queuing simulation, a technique that analyzes how methods and procedures distribute throughput across the different stages.

What Do Queuing Simulations Tell Us?


Queuing theory examines not only the flow itself but also the people involved and the communications that take place while a steady stream of work is in progress.

It is governed by “Little’s Law,” which states that the average number of units waiting to be processed equals the average arrival rate of materials or work, multiplied by the average time each unit spends at a particular stage. Expressed as an equation:

Work-in-Process = Average Arrival Rate x Time Spent
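As a quick sanity check, Little's Law can be applied directly. The figures below are hypothetical, not taken from any particular workflow:

```python
# Little's Law: Work-in-Process = Average Arrival Rate x Time Spent.
# Hypothetical numbers for a single workstation.
arrival_rate = 12.0   # units arriving per hour
time_in_system = 0.5  # hours each unit spends at the stage (waiting + processing)

work_in_process = arrival_rate * time_in_system
print(work_in_process)  # 6.0 units, on average, sitting at this stage
```

Read the other way around, measuring the work-in-process pile-up and the arrival rate at a stage lets the analyst infer how long units are actually spending there, which is often the easiest way to spot the bottleneck.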

The key characteristic of this theory is the element of “waiting,” hence the term queuing. A queue indicates that demand on the system exceeds its normal capacity, or threshold. Manufacturing lines are an example of a system in which processes are organized as a network of stages through which throughput should flow smoothly instead of waiting to be served.

The simulation models are therefore structured around three major areas:

  • the arrival process,
  • the service process, and
  • the number of service performers.

Exploring these three areas leads to the bottlenecks, as replications under different resource allocations reveal how the queue lines behave. Doing so allows waiting times to be estimated as a basis for process optimization and improvement.
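These three areas map directly onto the ingredients of a simple queuing simulation. The sketch below uses made-up arrival and service rates and shows how replicating the model with a different number of service performers changes the estimated waiting time:

```python
import random

def simulate_queue(num_jobs, mean_interarrival, mean_service, num_servers, seed=1):
    """Estimate the average waiting time in a first-come-first-served queue
    with random (exponential) arrivals and service times and a fixed number
    of servers."""
    rng = random.Random(seed)
    server_free_at = [0.0] * num_servers  # when each server next becomes idle
    arrival_time = 0.0
    total_wait = 0.0
    for _ in range(num_jobs):
        # Arrival process: jobs arrive at random intervals.
        arrival_time += rng.expovariate(1.0 / mean_interarrival)
        # The job goes to whichever server frees up soonest.
        i = min(range(num_servers), key=lambda k: server_free_at[k])
        start = max(arrival_time, server_free_at[i])
        total_wait += start - arrival_time
        # Service process: random service duration.
        server_free_at[i] = start + rng.expovariate(1.0 / mean_service)
    return total_wait / num_jobs

# Replicate the model with one server, then two, at the same workload.
wait_one = simulate_queue(5000, mean_interarrival=1.0, mean_service=0.8, num_servers=1)
wait_two = simulate_queue(5000, mean_interarrival=1.0, mean_service=0.8, num_servers=2)
print(wait_one, wait_two)  # the second figure is far smaller
```

The exponential interarrival and service times are an assumption (the classic M/M/c model); a real study would fit these distributions to measured data from the workflow profile.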


Case Study Sample:

Consider a garment manufacturer that employs ten stitchers, each with her own sewing machine. Two of these machines break down frequently and can be attended to by only one repairman. The service providers under study are the sewing machines, and the complication to consider is the two units that break down at some point during production.

The owner is considering hiring a full-time repairman to minimize the time it takes to get the machines fully operational again. Meanwhile, the machines of the workers next in the processing line sit idle until the stalled stitchers’ work is completed.

Once the impaired machines become operational again, the sudden arrival of throughput at the subsequent stages creates congestion and increased work-in-process.

Hiring an in-house repairman may seem like the appropriate solution on the surface, but there are other factors to consider:

  • What type of stitch work is demanded from the frequently impaired sewing machines? Do they face greater demand in terms of hours worked or the number of machine stitches applied?

  • Are there stages in the processes that require reworking once the fabrics undergo quality inspection?

  • Are workers compensated per piece or hourly rate? Are the stitchers turning in outputs beyond the machine’s threshold capacity?

  • What are the traits of the workers as machine operators? Do they clean and lubricate their machines regularly? Do they try to locate the causes of needle breaks even if relatively thin materials are being worked on?

  • Are the workers trained to do only the work that is assigned to them?

  • What are the consequences to the business owner if the finished goods are not delivered on time?

The main goal of these lines of questioning is to establish the users’ understanding of the machines’ serviceability before a breakdown point is reached. Traditional sewing machine problems, in fact, stem from simple jams due to stuck material or improper tension settings. Often they are the result of desynchronization in the moving parts caused by a lack of lubrication, while the lubricant itself attracts dust and debris that produce further jams.

A queuing simulation that properly balances the arrival processes, the service processes, and the number of service providers gives a basis for determining proper resource allocation. This is on the assumption that the amount of work allocated is more or less the same.
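The repairman question in the case study can be sketched in the same way. In the toy model below, breakdowns are treated as a random stream served first-come-first-served by a single repairman; all the figures (an eight-hour mean time between failures, four-hour versus one-hour mean repairs) are hypothetical:

```python
import random

def downtime_per_breakdown(num_events, mean_time_between_failures,
                           mean_repair_time, seed=7):
    """Average machine downtime per breakdown when one repairman
    handles failures first-come-first-served."""
    rng = random.Random(seed)
    clock = 0.0
    repairman_free_at = 0.0
    total_down = 0.0
    for _ in range(num_events):
        # A machine fails after a random interval.
        clock += rng.expovariate(1.0 / mean_time_between_failures)
        # Repair starts once the repairman is free.
        start = max(clock, repairman_free_at)
        repairman_free_at = start + rng.expovariate(1.0 / mean_repair_time)
        total_down += repairman_free_at - clock  # waiting for + undergoing repair
    return total_down / num_events

# An on-call repairman averaging 4 hours per repair vs. an in-house one
# averaging 1 hour (both figures hypothetical).
on_call = downtime_per_breakdown(2000, mean_time_between_failures=8.0,
                                 mean_repair_time=4.0)
in_house = downtime_per_breakdown(2000, mean_time_between_failures=8.0,
                                  mean_repair_time=1.0)
print(on_call, in_house)
```

Under these assumptions, total downtime falls by more than the raw repair speed-up, because the queue of machines waiting for the busy repairman shrinks as well. That, rather than the repair time alone, is the kind of effect the simulation is meant to expose before the hiring decision is made.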

Using a sample lot size for the queuing simulations is likewise recommended in order to establish standards for best performance. Afterward, another bottleneck analysis should be performed to ensure that no further constraints affect the flow of the processes.

Going Over the Main Points

If your project aims to improve a production or operational process, begin your bottleneck analysis by profiling the present set-up of the system. Study the flow of resources, the time it takes for each resource to be processed, and the service providers’ overall understanding of what transpires in their line of work. Identify the points at which output is constrained and determine the inconsistencies or variables at work in the entire system. Create queuing simulations against which comparisons can be made and which serve as an executable model for setting standards.
