Prevention of Nozzle Clogging in Continuous Casting of Steel

There are four documented causes of nozzle clogging in the continuous casting of steel: the buildup of deoxidation products such as Al2O3 (1), the buildup of solid steel, the buildup of complex oxides such as spinels, and the buildup of reaction products such as CaS (4). While some causes are more detrimental than others, all are a problem. Different steels present different potential clogging causes (3); for example, a resulfurized free-machining steel will have far more of an issue with the formation of calcium sulfides than with spinels. Whatever the cause, all nozzle clogging is detrimental to a continuous casting process. Looking at Figure 1, it is easy to see how deposits of clogging material on the side walls of the nozzle can cause irregular flow from the tundish into the mold. Irregular flow through a tundish nozzle increases the probability of quality defects such as re-oxidation of the steel and slag entrapment (4). Nozzle clogging also hurts productivity, because less steel can be cast through a blocked nozzle; in simple business terms, less steel equals less profit. Another consideration is that the life of the tundish is often limited to the life of the nozzle because of clogging. If clogging can be controlled well enough to extend nozzle life by even one or two heats, the result is substantial process cost savings.


The most effective way to prevent, or at least lessen, nozzle clogging in the continuous casting of steel is to modify the inclusions in the steel so that they are liquid rather than solid at casting temperatures (2). This is typically done by adding calcium to the steel at the end of the refining process. As Figure 2 shows, the liquidus temperature of a pure Al2O3 inclusion is considerably higher than steel casting temperatures, but by adding the right amount of calcium to the inclusions their liquidus temperature can potentially be lowered below casting temperatures (toward the 12CaO·7Al2O3 composition).
Calcium is typically added to the melt in one of three ways: as CaSi powder, as CaSi wire, or by calcium injection with argon. CaSi powder has the poorest recovery because calcium's vaporization temperature is lower than steelmaking temperatures (5). If calcium powder is simply thrown on top of the melt, most of the calcium vaporizes and leaves the system without being absorbed into the steel. Figure 3 shows the vaporization temperature of calcium as a function of depth in the steel melt: the deeper into the melt the calcium gets (i.e. the greater the pressure), the higher its vaporization temperature (5). This is the basis on which CaSi wire is used. CaSi wire is a steel shell packed with calcium as the core. As the wire is fed into the melt, the steel shell protects the calcium from the high melt temperatures until the wire is deep enough that the pressure keeps the calcium from vaporizing. Calcium injection uses the same principle: a lance is immersed deep enough into the melt to avoid vaporization, and calcium is blown into the melt with inert argon.
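As a rough illustration of the depth-pressure relationship behind this practice (a hedged sketch, not taken from the cited sources; the liquid steel density is a typical textbook assumption), the ferrostatic pressure acting on calcium at a given immersion depth can be estimated as atmospheric pressure plus rho·g·h:

```python
# Hypothetical sketch: ferrostatic pressure seen by calcium at a given depth in the melt.
# Deeper injection -> higher pressure -> higher calcium vaporization temperature (Figure 3).
g = 9.81             # gravitational acceleration, m/s^2
rho_steel = 7000.0   # assumed density of liquid steel, kg/m^3
p_atm = 101_325.0    # atmospheric pressure, Pa

for depth_m in (0.5, 1.0, 2.0, 3.0):
    p_total = p_atm + rho_steel * g * depth_m
    print(f"depth {depth_m:.1f} m -> total pressure ~{p_total / 101_325:.2f} atm")
```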
It is one thing to make the inclusions liquid and a completely different challenge to keep them liquid throughout the entire casting process. This is often the difficult aspect of clogging prevention, given that all of the inclusion modification is performed at the LMF or degasser and not at the caster. One thing many steel producers try to do is reduce the number of inclusions present in the steel during casting (2). The simplest way to lower the number of inclusions in the steel is to increase the size of the inclusions. By Stokes' law, larger inclusions have a greater upward velocity out of the steel and into the slag and are therefore not cast through the nozzle. Another practice producers use to reduce inclusion counts is proper tundish geometry at the caster. By adding tundish components such as dams and weirs (shown in Figure 4), the flow can be directed to give the inclusions optimum exposure to the slag (4). Weirs are used to direct steel flow downward, whereas dams direct flow upward. With two weir-dam combinations between the ladle shroud and the nozzle, the inclusions in the steel are exposed to the tundish slag while minimum turbulence is maintained (5).
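A minimal sketch of the Stokes relationship referred to above (the density and viscosity values are typical textbook assumptions, not figures from the cited references) shows why the rising velocity grows with the square of the inclusion diameter:

```python
# Hypothetical illustration of Stokes' law flotation of alumina inclusions in liquid steel.
# Property values are assumed textbook figures, not data from the cited sources.
g = 9.81                 # gravitational acceleration, m/s^2
rho_steel = 7000.0       # density of liquid steel, kg/m^3 (assumed)
rho_inclusion = 3950.0   # density of an Al2O3 inclusion, kg/m^3 (assumed)
mu_steel = 0.006         # dynamic viscosity of liquid steel, Pa*s (assumed)

def stokes_rising_velocity(diameter_m: float) -> float:
    """Terminal upward velocity of a spherical inclusion in the Stokes regime."""
    return g * diameter_m ** 2 * (rho_steel - rho_inclusion) / (18.0 * mu_steel)

for d_um in (5, 20, 100):                      # inclusion diameters in micrometres
    v = stokes_rising_velocity(d_um * 1e-6)
    print(f"{d_um:>3} um inclusion rises at ~{v * 1000:.3f} mm/s")
```

Because velocity scales with diameter squared, coarser inclusions float out to the slag far faster, which is the argument the paragraph makes for growing inclusion size.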
Unfortunately, not all inclusions can be removed, so the remaining inclusions must stay liquid through the nozzle to prevent clogging. To achieve this, it is crucial that the steel is protected from re-oxidation by atmospheric oxygen (2). Many tools are used to ensure this. Starting at the ladle, a ladle shroud funnels the liquid steel from the ladle to below the slag layer in the tundish (Figure 4). An impact pad is often used, as shown in Figure 4, to reduce turbulence in the tundish (5). Increased turbulence can disrupt the slag surface in the tundish and expose the liquid steel to the atmosphere, causing re-oxidation and possibly slag entrapment. To help prevent steel-slag interaction, baffles are often used (Figure 5); they slow the steel flow while still allowing steel to pass through their holes. To keep the steel exposed at the surface from re-oxidizing, tundish fluxes are used as a protective barrier between the steel and the atmosphere, as shown in Figure 6 (2). Tundish refractories must also be chosen so that little or no reaction occurs between the steel and the refractory (2). If such a reaction were to occur and solid inclusions were to precipitate in the steel, all the effort put into the steel up to that point could be useless.
Once the steel is secure in the tundish, one more step is required: getting the steel through the nozzle and into the mold. Just as in the tundish, re-oxidation of the steel and any adverse reaction between the nozzle refractory and the steel must be avoided. To ensure this, submerged entry nozzles or submerged entry shrouds are typically used, providing a barrier between the steel and the atmosphere all the way into the mold. These are usually made of alumina-graphite; the added graphite prevents wetting of the inclusions onto the nozzle walls (4). Argon purging through various parts of the nozzle side walls is also often used to keep any would-be oxygen away from the steel.
In conclusion, preventing nozzle clogging is not accomplished by one simple action but by many actions working together: inclusion count reduction, inclusion modification with calcium, protection of the steel from re-oxidation, proper tundish geometry, and proper tundish and nozzle refractories (2). While the concept of making only liquid inclusions appears simple, in practice it can be rather difficult to keep these inclusions liquid throughout the entire casting process.
Sources Cited
1. Zhang, Lifeng; Thomas, Brian G. Inclusions in Continuous Casting of Steel. National Steelmaking Symposium, Mich., Mexico, November 2003, pp. 138-183.
2. Alekseenko, A. A. Problems of Nozzle Clogging during Continuous Casting of an Aluminum-Killed Low-Carbon Low-Silicon Steel. Russian Metallurgy, Vol. 2007, pp. 634-637.
3. Girase, N. U. Development of Indices for Quantification of Nozzle Clogging during Continuous Slab Casting. Ironmaking and Steelmaking, Vol. 34, No. 6, 2007, pp. 506-512.
4. Zhang, Lifeng; Wang, Yufeng; Zuo, Xianjun. Flow Transport and Inclusion Motion in Steel Continuous-Casting Mold under Submerged Entry Nozzle Clogging Condition. Metallurgical and Materials Transactions B, Vol. 39B, August 2008, pp. 534-550.
5. The Making, Shaping and Treating of Steel, 11th Edition, Casting Volume. AISE Steel Foundation, Pittsburgh, PA, 2003.
 

Quality Management Tools and Techniques for Continuous Improvement


Synthesis of Literature

Literature Review

Introduction

This literature review illustrates the basic quality management tools and techniques used in many organizations for continuous improvement. Several influential contributors to quality focused on improving processes and producing continuous quality results at highly productive levels. These leaders are:

Walter Shewhart;

W. Edwards Deming;

Joseph M. Juran;

Taiichi Ohno;

Kaoru Ishikawa;

Armand V. Feigenbaum and

Philip B. Crosby.

Walter Shewhart (1891 – 1967)

The concepts of common cause and special cause variation and of statistical control were introduced by Walter Shewhart in 1924 in order to reduce the frequency of defects and improve reliability. Shewhart introduced statistical process control (SPC) in the book “Economic Control of Quality of Manufactured Product”, and it has become a vital element of process control in industry.

W. Edwards Deming (1900 – 1993)

Deming developed the “14 Principles for Western Management”, combined them with Shewhart’s concepts, and taught that by adopting these points organizations can improve quality and customer loyalty and reduce costs by avoiding rework, waste and employee attrition. From his book on the system of profound knowledge (2000) he promoted the view that “85% of poor quality was due to bad management, poor process and improper systems and the remaining 15% was because of workers”. He explained the PDSA cycle (Plan-Do-Study-Act) as a process for improvement and learning.

Joseph M. Juran (1904 – 2008)

Juran specialized in quality management; he created the Pareto principle (80/20 rule) while building on the ideas of Deming. The “Quality Control Handbook” (1951), which Juran co-authored, explained quality in two different senses: 1) higher quality costs more, and 2) higher quality usually costs less.

“Quality planning, quality control and quality improvement” are the three interrelated processes known as the “Juran Trilogy”, one of Juran’s most important contributions to quality improvement.

Taiichi Ohno (1912 – 1990)

The concept of the continuous flow (one-piece) process was developed by Ohno in 1948 to avoid the “batch and queue” process, and in analysing it he identified waste, called “muda” in Japanese. He classified waste (activities which do not add value to the process) into 7 types:

Overproduction

Transporting

Rejects

Motion

Waiting

Inventory

Over-processing

This one-piece flow process allows one product to be completed at a time, which results in higher output and greater efficiency and helps to avoid the above 7 wastes.

Kaoru Ishikawa (1915 – 1989)

The Ishikawa diagram (fishbone diagram) was created by Kaoru Ishikawa, who concentrated on participation at all levels of an organization in quality improvement actions and in decision making through the use of statistical measurements. Ishikawa’s book Guide to Quality Control, published in 1968, explained this concept of participation and of understanding quality control at all levels of an organization.

Armand V. Feigenbaum (1922 – 2014)

Feigenbaum authored the book Total Quality Control in 1951, which made clear that “TQC is excellence driven rather than defect driven” and which combines quality development, quality improvement and quality maintenance. He defined quality costs as the costs of prevention, appraisal, and internal and external failures.

Philip B. Crosby (1926 – 2001)

The concept of zero defects was introduced by Philip B. Crosby in 1961. He defined quality as “conformance to requirements” and the measurement of quality as the “price of nonconformance”. He taught quality improvement as a process rather than a temporary project; Crosby’s principle is DIRFT (Doing It Right the First Time). He also introduced 4 principles:

Quality is defined as conformance to requirements.

The system of quality is prevention.

The performance standard is zero defects, not “close enough”.

The measurement of quality is the price of nonconformance, not an index.

He believed that companies should rely on perseverance, education and implementation to avoid nonconformance.

Tools and Techniques

A tool is a device commonly used on its own.

A technique is a set of tools and has broader application.

An organization can be made better by applying proper quality management and quality management tools and techniques. Dale and McQuater (1998) identified the basic quality tools and techniques most commonly used by organizations, listed below.

Tools: Check sheet, Pareto diagram, Histogram, Control chart, Scatter diagram, Flowchart, Cause and effect diagram

Techniques: Departmental purpose analysis, Poka-yoke, Fault tree analysis, Design of experiments, Quality function deployment, Statistical process control, Failure mode and effects analysis, Benchmarking

Quality management tools

The above basic tools are classified under 2 categories, “Data Acquisition” and “Data Analysis”. The check sheet, histogram and control chart are data acquisition tools, while the cause and effect diagram (fishbone diagram), Pareto diagram, flowchart and scatter diagram come under data analysis.

Data Acquisition

Checksheet

Check sheets are the simplest form used for recording data in an organization in an orderly way; the data can be either qualitative or quantitative. A check sheet is constructed for quick, easy and effective documentation, and the data collected in it give clear information about the frequency of a particular event in a process. The benefit of the check sheet is that it is easily understood and gives a clear description of the condition of the firm; it does not have the capacity to analyse the problem, but it allows the problem to be identified.

Histogram

The histogram is similar to a bar chart; it pictures both attribute (quality) and variable data of a process and illustrates the frequency distribution. This chart is most useful when the data collected are numerical, and it should be made in such a manner that it is easily understood by those engaged in the operational process. It helps to examine and reveal hidden issues in the variable being explored.
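A minimal sketch of the frequency distribution behind a histogram (the measurements and bin width below are made-up example values, not from any cited source):

```python
# Hypothetical sketch: grouping measurements into fixed-width bins to form
# the frequency distribution that a histogram displays.
measurements = [4.2, 4.8, 5.1, 5.3, 5.5, 5.6, 5.9, 6.1, 6.4, 7.0, 7.2, 7.8]
bin_width = 1.0

bins = {}
for value in measurements:
    lower = bin_width * int(value // bin_width)      # lower edge of the bin
    bins[lower] = bins.get(lower, 0) + 1

for lower in sorted(bins):
    print(f"{lower:.1f} - {lower + bin_width:.1f}: {'#' * bins[lower]} ({bins[lower]})")
```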

Control chart

This statistical tool, also called a run chart, helps to differentiate whether variation in a process over a period of time is due to a common cause or a special cause. The chart helps to determine whether the process is within “statistical control” (i.e., within the UCL and LCL); if the process moves outside these limits, it is out of control and there is a quality issue. The advantage of this chart is that it helps to reduce variation and judge the parameters of a process. It is also known as the Shewhart control chart.
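As a simplified sketch of how the UCL and LCL mentioned above are commonly computed (a basic 3-sigma individuals-chart calculation; the sample data are hypothetical, and a full Shewhart chart would typically use moving ranges):

```python
# Hypothetical sketch: centre line and 3-sigma control limits (UCL/LCL) for an
# individuals control chart, using made-up example measurements.
import statistics

measurements = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7, 10.1, 10.2]

centre = statistics.mean(measurements)
sigma = statistics.stdev(measurements)   # sample standard deviation
ucl = centre + 3 * sigma                 # upper control limit
lcl = centre - 3 * sigma                 # lower control limit

out_of_control = [x for x in measurements if not (lcl <= x <= ucl)]
print(f"CL={centre:.2f}, UCL={ucl:.2f}, LCL={lcl:.2f}, out-of-control points: {out_of_control}")
```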

Data analysis

Cause and effect diagram

This problem-solving tool helps to identify and sort the real causes of a particular problem; it graphically shows the relation between a given outcome and the factors influencing it. The potential causes are classified under main categories such as man, machine, material, method, measurement and environment, with detailed causes indicated under each main category. This diagram is also called the Ishikawa diagram or fishbone diagram.

Pareto diagram

The Pareto chart is otherwise known as the 80/20 rule, meaning that 80% of problems are due to 20% of the causes. It blends a bar and a line graph, where individual values are presented in descending order by the bars and cumulative values are represented by the line. The primary purpose of the Pareto chart is to identify the different forms of nonconformity from the data and to provide a means of prioritizing quality improvement efforts.
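A minimal sketch of the arithmetic behind a Pareto chart (the defect categories and counts are hypothetical): sort nonconformities in descending order and accumulate percentages to see which few causes account for most of the problems.

```python
# Hypothetical sketch of a Pareto analysis: sort nonconformity counts in descending
# order and compute the cumulative percentage plotted as the line on a Pareto chart.
defects = {"scratches": 120, "dents": 45, "misalignment": 20, "paint runs": 10, "other": 5}

total = sum(defects.values())
cumulative = 0
for cause, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{cause:<14} {count:>4}  cumulative {100 * cumulative / total:5.1f}%")
```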

Flowchart

The flowchart is a diagrammatic representation that uses symbols to explain the series of steps involved in an operation to complete a process. This problem-solving tool is used to identify and analyse the process methodically and to improve the quality of the process.

Scatter diagram

This powerful tool is used to determine and analyse the correlation between two variables (i.e., whether the two variables are related to each other or not). The scatter diagram helps to show whether the relationship between the variables is weak or strong and positive or negative, and the shape of the diagram illustrates whether the correlation is positive, negative or absent. It is useful in regression modelling.
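As a small illustration of summarising what a scatter diagram shows (hypothetical data; Pearson's correlation coefficient is one common numerical summary of the strength and direction of the relationship):

```python
# Hypothetical sketch: Pearson correlation coefficient for two variables,
# summarising the strength/direction a scatter diagram displays visually.
import statistics

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]       # e.g. machine speed (made-up data)
y = [2.1, 4.3, 5.9, 8.2, 9.8, 12.1]      # e.g. defect count (made-up data)

mean_x, mean_y = statistics.mean(x), statistics.mean(y)
cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
r = cov / (sum((a - mean_x) ** 2 for a in x) ** 0.5 * sum((b - mean_y) ** 2 for b in y) ** 0.5)
print(f"Pearson r = {r:.3f}")            # close to +1: strong positive correlation
```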

Figure 1 7QC Tools

7QC Tools through PDCA Cycle

To achieve continuous improvement and customer satisfaction, the quality management principles are the base to start from. Every organization executes a quality management system for analysing its processes. Continuous improvement cannot be achieved without quality tools, which are grouped into Deming’s cycle (PDCA). The PDCA cycle is a dynamic model, because one turn of the cycle performs one complete step of improvement, and it is a necessary part of process management. It is a never-ending process: an improvement programme starts with careful planning, results in effective action, and returns again to careful planning (i.e., the completion of one cycle continues with the start of another). It has four steps:

Plan – Determination of what should be changed.

Do – Execution of the changes which is determined in plan step.

Check – Measurement of the process according to the changes made in the previous step, and reporting on the results.

Act – Keeping improvement ongoing.

The main function of the PDCA cycle is process improvement, which is achieved through proper planning; it results in corrective and preventive actions backed by applicable quality assurance tools.

Figure 2 Deming’s Cycle

7QC Tools in Six Sigma

The Six Sigma technique requires a creative use of data and emphasizes statistical analysis and designed experiments. The methodology goes beyond process improvement tools; it defines process improvement through the DMAIC methodology:

Define – Generate project ideas, Select project and finalise project charter.

Measure – Finalise performance standards for the project, validate the measurement system for project and measure current performance and gap.

Analyse – List all probable root causes, Identify critical root causes and Verify sufficiency of critical root causes for the project.

Improve – Generate and evaluate the alternate solution, Select and optimize best solution and pilot, implement and validate the solution.

Control – Implement control system for the critical root cause, Document solution and benefits, and transfer to process owner, project closure.

Each of the above steps can be fulfilled with different tools and techniques. An altered version of Six Sigma known as DFSS (Design for Six Sigma) is used for the development of a new process and targets “problem prevention”. The method used in DFSS is DMADV (Define, Measure, Analyse, Design and Verify) or IDOV (Identify, Design, Optimize and Validate). Depending on the process, either DMAIC or DMADV is used.

Figure 3 6 Sigma

Quality management techniques

Techniques are collections of tools and have broader usage. Some of the basic techniques used in an organization are:

 

 

 

Departmental purpose analysis

DPA is a practical way of applying concepts and principles; it is constructed so that the team members achieve the goal of adding value to the company’s strategy. The primary target of DPA is measuring and meeting customer requirements.

Poka-yoke

Poka-yoke means mistake-proofing (foolproofing); it arrests defects. The operator is alerted by a warning signal, or the process is paused, in order to avoid producing defective goods.

Fault tree analysis

FTA is widely used in safety and reliability engineering field to analyse the probability of an undesirable event using Boolean algebra. It is a top-down logical failure analysis.

Design of experiments

DOE is a systematic analysis of a process. The process is tested in a sequential manner in which changes are made to the input variables and their effects on a pre-defined output are determined.

Quality function deployment

QFD techniques are used for converting customer needs into design features at every stage of product development. It is a systematic way of designing to the customer’s requirements through the combined effort of corporate functional groups.

Statistical process control

SPC is an essential technique for continuous improvement; special cause variations are removed by using this scientific, graphical approach to refine the process.

Failure mode and effects analysis

FMEA is a proactive approach to determine the potential causes of failure and to measure the depth of different failures.

Benchmarking

Benchmarking is a technique used for adopting best practices; it is a self-improvement tool which allows organizations to enhance their comparative skills.

5S

A Japanese tool which helps workers to organize their working area so that they can work comfortably and easily. Each S in “5S” has a different meaning:

Seiri (Sort) – Keep things which are necessary.

Seiton (Set) – Arrange and identify the things.

Seiso (Shine) – To keep working area and things clean.

Seiketsu (Standardize) – Use best practices frequently.

Shitsuke (Sustain) – To ensure the above four S’s are followed.

Conclusion

Tools and techniques can only be used effectively when proper training is provided to the people concerned, so that it is easy for them to understand their effectiveness. They are essential components for improving processes and quality, and can be used in any process development where data collection, analysis and visualization have a vital place. Not all of these tools and techniques can be applied to every issue; their application varies according to the problem. These basic quality tools can be practised in day-to-day life to gain a better understanding of where and for which problems they should be applied. Managerial encouragement and commitment are required for full use of these basic tools and practices in teams, as they cannot be performed by individuals alone.

References

Darrell, K. R. (2007). Management Tools 2007. BAIN & COMPANY.

David, R. B., & Richard, W. G. (2005). The use of Quality Management Tools & Techniques: A Study of Application in everyday situations. International Journal of Quality & Reliability Management, 376 – 391.

Dusko, P., Mirko, S., & Glorija, P. (2008). Practical Application of Quality Tools. 2nd International Quality Conference. Kragujevac.

H.S. Bunney, B. D. (1997). The Implementation of Quality Management Tools and Techniques: A Study. The TQM Magazine, 183 – 189.

Kevin, W. (2008). Quality Improvement: The Foundation, Process, Tools, and Knowledge Transfer Techniques. In The Healthcare Quality Book (pp. 63 – 69). Washington DC: AUPHA Press.

M, S., R, M., K, S., B, D., & J, B. (1998). The use of quality tools and techniques in product introduction: an assessment methodology. The TQM Magazine, 45 – 50.


Mirko, S., Jelena, J., Zdravko, K., & Aleksandar, V. (2009). Basic Quality Tools in Continuous Improvement Process. Journal of Mechanical Engineering.

Mohit, S., LA, K., & Sandeep, G. (2012). Tools and Techniques for quality management in manufacturing industries. Trends and Advances in Mechanical Engineering, (pp. 853 – 858). Faridabad.

Muhammad, H. K. (2013). Quality Improvement for Manufacturing Process by Using 7 QC Tools in SME.

Neyestani, B. (2017). The Appropriate Techniques for Solving Quality Problem in the Organizations. Seven Basic Tools of Quality Control.

Oakland, J. S. (2003). Total Quality Management.

Rami, H. F., & Adnan, M. (2010). Statistical Process Control Tools: A Practical guide for Jordanian Industrial Organizations. Jordan Journal of mechanical and industrial engineering, 694 – 699.

Rhys, R. J., Paul, T. T., & Kelly, P. T. (n.d.). Quality Management Tools & Techniques: Profiling SME Use & Customer Expectations. 2 – 13.

Varsha, M. M., & Vilas, B. S. (2014). Application of 7 Quality Control (7 QC) Tools for Continuous Improvement of Manufacturing Process. International Journal of Engineering Research and General Science Volume 2, 364 – 370.

Comparison of Continuous Review and Periodic Review Systems

Compare and contrast the continuous review system with the periodic review system. Is the continuous review or periodic review inventory system more likely to result in higher safety stock? Which is likely to require more time and effort to administer and why?
The continuous review system requires knowing the physical inventory at all times, for example by using a barcode scanner so that every time a cashier scans a product purchased by a customer the inventory is updated. This method is more expensive to administer because the inventory needs to be updated each time something leaves or enters the shelf. It requires a lower level of safety stock, since the only uncertainty during the delivery lead time is the magnitude of demand; therefore, safety stock is needed only to cover potential stockouts during that time (Wisner, Tan, Leong, & Stanley, 2012, p. 90).
The advantage of the continuous review system:

Provides real-time updates of inventory counts
Makes it easier to know when to order
Provides accurate accounting

The disadvantage of the continuous system:

The periodic review system evaluates inventory at specific times, such as counting inventory at the end of each month. It is inexpensive to administer since counting takes place only at particular times, but a higher level of safety stock is required to buffer against uncertainty in demand over the longer planning horizon (Wisner et al., 2012, p. 90).
The advantage of the periodic review system:

Reduces the time the business owner spends analyzing inventory counts
Allows the business owner more time to run other aspects of the business
Simple to administer
Saves labor cost for counting

The disadvantage of the periodic review system:

It may not provide an accurate inventory count when there is a high volume of sales
It may also make accounting inaccurate
There is little control over inventory movement
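A hedged sketch of why the periodic review system tends to carry more safety stock: under common textbook assumptions (normally distributed demand and a chosen service-level factor z, which are not figures taken from Wisner et al.), continuous review protects only over the lead time, while periodic review must protect over the lead time plus the review interval. All numbers below are hypothetical.

```python
# Hypothetical sketch comparing safety stock under the two review policies.
# Assumes normally distributed daily demand; the values are made-up examples.
z = 1.65                 # service-level factor (~95% cycle service level, assumed)
sigma_daily = 20.0       # standard deviation of daily demand (assumed)
lead_time = 5            # delivery lead time, days (assumed)
review_period = 30       # periodic review interval, days (assumed)

ss_continuous = z * sigma_daily * lead_time ** 0.5
ss_periodic = z * sigma_daily * (lead_time + review_period) ** 0.5

print(f"Continuous review safety stock: ~{ss_continuous:.0f} units")
print(f"Periodic review safety stock:   ~{ss_periodic:.0f} units")
```

Because the periodic policy must cover demand uncertainty over a longer horizon, its safety stock is noticeably larger, which matches the comparison made above.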

References:
Periodic and Perpetual Systems of Inventory Accounting. (n.d.). Retrieved March 01, 2017, from http://www.financialaccountancy.org/inventory-valuation/periodic-and-perpetual-systems-of-inventory-accounting/
Wisner, J. D., Tan, K., Keong Leong, G., & Stanley. (2012). Demand Forecasting and Inventory Management (pp. 89-91). Mason, OH: Cengage Learning.
Writer, L. G. (2011, October 10). What Is the Difference Between a Periodic and Continuous Inventory Review Policy? Retrieved March 01, 2017, from http://smallbusiness.chron.com/difference-between-periodic-continuous-inventory-review-policy-30967.html
Stephane Berube posted Feb 28, 2017 1:02 PM
The continuous review system entails that the real inventory is known in real time, so it will be costlier to implement than the periodic review system, but it ensures that your physical inventory will match what is on your screen. The only concern with this system is that we are not sure how much demand we will get during the “delivery lead time” (Wisner, Tan, Leong, & Stanley, 2012, p. 90) for the goods on order.
The periodic review system checks the physical stock at distinct intervals of time. This system is more economical to use compared to the continuous review system; however, it requires a higher than normal level of safety stock to cover uncertainty in demand over an extended planning horizon.
The continuous review system, in my opinion, would be the one requiring more time and effort to implement. The reason I have chosen this method is all the equipment and material that need to be involved in providing real-time data on the physical inventory. Most companies will have to invest in new computers and inventory software, a new operating system if the old one can’t support the new software, and barcode scanners to be able to read what goes in and out of the warehouse. One more thing that is important for all businesses is all the training that would need to be done with their employees, so that everyone is on board with the new equipment.

Dorothea Stach posted Feb 28, 2017 7:58 PM
Comparing the continuous review system with the periodic review system as described on pages 89 – 91 of the textbook, Demand Forecasting and Inventory Management, Applications to Supply Chain Management by Wisner, Tan, Leong and Stanley, I would certainly give my preference to the continuous review system, with the physical inventory known at any given time. Is it more expensive? Of course, but it also saves time and effort in terms of fewer physical inventory checks, minimized lead times, not having to physically adjust re-order points over and over again, and fewer time-consuming negotiations with suppliers for better pricing on larger quantities. Time and effort are measurable, and they are convertible into money, or rather cost, operational cost, or margin, for that matter. The only uncertain variable in this equation is the demand versus the time span until the next delivery arrives, which can be covered by safety stock.


I just find the periodic review system very rigid. Why, and who, determines what length the period should be? Are the periods flexible? Can they be shortened or lengthened? How does the periodic review system deal with sudden changes in demand? This kind of periodic review might work for very small companies, but for mid to large size companies it can turn into a roadblock, or bottleneck.
In a previous company I worked on SAP, a great live system! I would open my Material Master in the morning, check the quantities required, check my next re-order point against my inventory in stock, and voila, convert my demand into purchase orders to send to my supplier. With SAP being live (we always joked “alive”), all inventory is current: stock, WIP, finished goods, raw material. Now, imagine my shock when I started at another company and my “ERP system” is pretty much a spreadsheet (which I created in Excel when I started at this company).
I am really trying to convince my boss to join us in the 21st century and to invest in a better system.

The Effects Of Interval And Continuous Training Physical Education Essay

In today’s society, where appearance and health are a major part of modern life, there is a growing awareness of overweight and obesity in the world. For many reasons, such as appearance and health, many overweight and obese people undertake some form of diet or exercise program to address this. In many grocery stores, fitness magazines can be found describing new fad diets (“shed 2 stone in 4 weeks”). Weight-loss drinks have become more and more popular as they may aid in weight loss, although most people favour eating actual food rather than drinking a shake every couple of hours every day.


More health clubs have become available all around the country, being easy to access and offering guaranteed weight loss. These clubs help people to lose weight, but they usually do little to encourage them to stay, as they have already received their sign-up fee. Success in weight loss programs comes from adherence to exercise, but for the majority of people this is the major issue. These health and fitness clubs thrive on selling memberships to the general public, and most do not encourage people to stay.
More exercise and changes in diet are the key factors in weight loss. A change in diet helps weight loss by restricting total caloric as well as fat intake (C. Curioni & P. Lourenco, 2005). A change in exercise patterns also aids weight loss by increasing caloric and fat expenditure (N. Keim et al., 1990; V. Mougios et al., 2006).
Many individuals attempt to lose weight without ever meeting their goals. This is usually because a person attempts a diet and exercise program for only a brief time; a lack of change in appearance or weight makes the participant want to quit (A. Grediagin et al., 1995), and lack of time and interest (Willis & Campbell, 1992) has also been shown to prevent adherence to a weight loss program (Kempen et al., 1995). Body composition is one of the most frequently studied subjects (R. Bryner et al., 1997). To date, a number of studies have reported the efficacy of high intensity exercise on various physiological parameters related to weight loss (R. Bryner et al., 1997; J. Jakicic et al., 2004; V. Mougios et al., 2006).
Both men and women begin dieting and exercise programs in an attempt to lose weight, but many fail to continue to either exercise or diet, usually because of a decrease in results. Women, however, tend to struggle more than men when losing weight (Gleim, 1993). Factors include smaller body sizes (Gleim, 1993), less fat-free mass (Pollock et al., 1998; Westerterp, 1998), and lower resting metabolic rates (RMR) (Westerterp, 1998) than men. Men’s testosterone levels are higher than women’s, which gives males a greater muscle mass and a higher absolute RMR (D. W. McArdle et al., 1996). These factors cause females’ energy expenditure to be less than that of males, so it is critical to find an exercise or diet program for females that will create the results needed to ensure that participants continue with that program.
Conventionally, low intensity exercise was considered a more beneficial way to reduce weight than high intensity exercise, because a greater percentage of fat calories is burned during low intensity exercise (McArdle et al., 1996). Previous research has shown that higher intensity exercise is associated with greater improvements in cardiovascular fitness and greater caloric expenditure, which in turn can assist in improving health as well as weight loss (Perna et al., 1999; O’Donovan et al., 2005).
It has also been shown that the total number of calories (kcal) expended during and after high intensity exercise is often greater than that of lower intensity exercise (O’Donovan et al., 2005). Following exercise, fat metabolism and RMR have been shown to be substantially elevated for up to 24 hours (Bielinski, Schultz, & Jéquier, 1985; Treuth, Hunter, & Williams, 1996).
It has been highly debated whether high intensity interval training can be used as a possible treatment intervention for promoting weight loss.
In continuous steady state training, the speed at which the participant exercises stays at the same intensity throughout the duration of the protocol, whereas in high intensity interval training the participant exercises intermittently at a high intensity, alternating with a lower intensity every few moments.
Comparing the two training protocols over the same number of calories expended, substrate utilization during a high intensity interval training program will differ from that during a more moderate intensity steady state training program (K. Wallman et al., 2009).
Higher intensity exercise primarily uses glycogen during exercise, whereas a more moderate intensity program primarily uses fat (K. Wallman et al., 2009). A typical individual would interpret this information as an argument that low intensity steady state exercise is better for burning fat.
However, this does not take into account the fact that fat metabolism is increased after high intensity interval exercise, serving as the body’s fuel source for any post-exercise activity and for replenishing the glycogen stores depleted by the high intensity exercise (K. Wallman et al., 2009).
It is very difficult for many individuals to maintain an extremely high intensity for an extended period of time, which is why near-maximal exercise is completed in a high intensity interval training program rather than in a continuous steady state program (W. Schmidt et al., 2001).
Aims
Compare the effects of high intensity interval training versus low intensity continuous steady state training on VO2 max in overweight women.
Compare the effects of high intensity interval training versus low intensity continuous steady state training on body composition in overweight women.
There is a lack of studies dealing with high intensity interval training programs as a potential means of weight loss over a short intervention; this indicates that such research is necessary to determine whether high intensity interval training is a worthwhile means of reducing total body weight and fat mass over a shorter period.
1.3 Hypotheses / Research Questions
The two research hypotheses of this study were: (null) 1) there would be no significant differences between high intensity interval training and low intensity continuous steady state training in VO2 max and in body fat percentage; (alternative) 2) there would be a significant difference between high intensity interval training and low intensity continuous steady state training in VO2 max and body fat percentage.
Delimitations
Subjects were limited to 18 to 34 year-old female non-smokers, not pregnant, not lactating, and not taking any medications that could inhibit metabolism, with a body fat percentage of 25% – 30%.
Four subjects were assigned to the interval training group and four to the continuous steady state training group.
Body composition was measured using bioelectrical impedance analysis (BIA).
Maximal aerobic capacity was measured using the multi-stage fitness test (MSFT). An equation was then used to calculate VO2max (AD Flouris et al., 2004; L. Léger & C. Gadoury, 1989).
VO2 max was used to determine the level at which a subject needed to exercise for a given exercise intensity.
Limitations
A small group, thus limiting the generalizability of the findings.
Work environment could not be controlled for.
Bioelectrical impedance could only be used for body fat percentage.
The multi-stage fitness test (MSFT) was used to assess VO2 max rather than a graded exercise test (GXT).
Definition of Terms
Aerobic: exercising which requires the use of oxygen.
Anaerobic: exercising without the presence of oxygen as the work intensity is greater than the rate the body can transport oxygen to be used.
Body mass index (BMI): describes relative weight for height. Calculated as weight (kg) divided by height squared (m2), or as weight (lb) divided by height squared (in2) multiplied by 704.5. A BMI of > 25 is considered overweight and a BMI > 30 is considered obese in women.
Calorie: energy unit, also known as the kilocalorie (kcal). It takes approximately 3,500 kcal of energy expenditure to burn one pound of fat.
Continuous training: steady-paced, prolonged exercise (McArdle et al. 1996)
Interval training: a form of training that involves high-intensity exercise for a brief period of time with brief periods of rest or low intensity exercise (McArdle et al., 1996)
Maximal oxygen uptake (VO2max): is used to measure cardiovascular fitness
Obesity: unhealthy high body fat percentages, generally considered >30% for women (McArdle et al., 1996)
Overweight: unhealthily high body fat percentage, generally considered 25% to 30% body fat.
Steady-state: the point that is reached in continuous exercise where workload and heart rate become constant.
2.0 CHAPTER TWO – LITERATURE REVIEW
2.1 Literature Review
Obesity is a worldwide issue associated with serious health, social, and economic problems (Brisbon et al., 2005). The World Health Organisation (2005) defines overweight and obesity as “abnormal or excessive fat accumulation that presents a risk to health”. Obesity has been associated with diseases such as diabetes, hypertension, and cardiovascular disease, which have been shown to result in serious health issues and even death (C. Stein and C. Colditz, 2004). Obesity can be classified into two different types: android obesity, where the main proportion of fat mass is situated around the abdomen and waist, and gynoid obesity, where a large proportion of fat mass is located in the gluteal and femoral areas (A. Kissebah and G. Krakower, 1994). Obesity usually occurs as the result of an imbalance between calories consumed and calories expended. An increased consumption of highly calorific foods, without an equal increase in physical activity, will lead to an unhealthy increase in weight. Likewise, decreased levels of physical activity will result in an energy imbalance and lead to weight gain.
It is estimated that one billion adults are overweight and more than 300 million are obese (World Health Organisation 2008). At least 2.6 million people each year die as a result of being overweight or obese (World Health Organisation 2008). Once associated with higher income countries, obesity is now also widespread in lower and middle income countries, as over “65% of the world’s population live in a country where overweight and obesity kills more people than underweight. This includes all high-income and middle-income countries. Globally, 44% of diabetes, 23% of ischaemic heart disease and 7-41% of certain cancers are attributable to overweight and obesity” (World Health Organisation 2008).
2.2 Body Mass Index
The most commonly used measure for identifying whether an individual is overweight or obese is the Body Mass Index (BMI), a simple index used to classify overweight and obesity in adult populations and individuals. The World Health Organisation defines BMI as the weight in kilograms divided by the square of the height in metres (kg/m2). The BMI classifications are underweight (<18.5), normal range (18.5 – 24.9), overweight (>=25.0), pre-obese (25.0 – 29.9), obese (>=30.0), obese class 1 (30.0 – 34.9), obese class 2 (35.0 – 39.9) and obese class 3 (>=40.0).
The body mass index (BMI) is the same for both sexes and for all ages of adults. However, the BMI should be considered a rough guideline, as it may not correspond to the same body fat percentage in different individuals. The BMI classification system is not usable for children, as their bodies undergo a number of physiological changes as they grow.
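A minimal sketch of the BMI calculation and the WHO cut-offs quoted above (the classify function and example subject are hypothetical helpers for illustration):

```python
# Sketch of the BMI calculation (kg / m^2) and the WHO adult classification bands cited above.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def classify(bmi_value: float) -> str:
    """WHO adult cut-offs as listed in the text."""
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value < 25.0:
        return "normal range"
    if bmi_value < 30.0:
        return "overweight (pre-obese)"
    if bmi_value < 35.0:
        return "obese class 1"
    if bmi_value < 40.0:
        return "obese class 2"
    return "obese class 3"

value = bmi(85.0, 1.65)   # hypothetical subject: 85 kg, 1.65 m
print(f"BMI = {value:.1f} -> {classify(value)}")
```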
2.3 Bioelectrical Impedance Analysis (BIA)
Bioelectrical impedance analysis (BIA) is a commonly used method for estimating body composition (Maughan, 1993). BIA first became available in the mid-1980s, and the method has become very popular due to its ease of use, the portability of the equipment, and its relatively low cost compared to some of the other major methods of body composition analysis (Maughan, 1993). In spite of the perception that BIA measures “body fat,” the device actually determines the electrical impedance of body tissues, which in turn provides an estimate of total body water (TBW) (Maughan, 1993). From the TBW result, BIA can then estimate fat-free mass (FFM) and body fat (adiposity) (Maughan, 1993).
2.4 Exercise Regimes
Changes in diet and/or exercise patterns are the primary ways to lose weight, but a combination of caloric restriction and exercise has been shown to be the most effective nonsurgical intervention (C. Curioni and P. Lourenco, 2005). Recent research (V. Mougios et al., 2006) has shown that a combination of exercise and dieting is most effective for optimizing fat loss. Past research by N. Keim et al. (1990) agrees, stating that a change in diet eases weight loss by limiting the total caloric intake for the day, while caloric and fat expenditure is increased by a change in exercise patterns (N. Keim et al., 1990).
Of relevance, the exercise regime typically employed in an overweight or obese population involves steady aerobic exercise performed at a continuous low to moderately low intensity (Jacobsen et al., 2003). It is unclear whether this form of (continuous) exercise, in combination with dieting, is the most effective way to lose fat or to improve general health. Alternatively, high intensity exercise burns a larger number of calories than low to low-moderate intensity exercise performed over the same period of time, and may therefore be a more effective option for fat loss (L. Campbell et al., 2010). Additionally, past research by J. MacDougall et al. (1998) has shown that high intensity exercise places a larger physiological load on the cardiovascular system than lower intensity exercise and may therefore lead to greater improvements in aerobic fitness. On the other hand, L. Campbell et al. (2010) note that many overweight and obese individuals have low levels of fitness, and the stress placed upon their bodies by bouts of high intensity exercise may be difficult for them, if not impossible. This is supported by Jakicic et al. (2004), who reported the need for obese/overweight participants to divide their exercise sessions into smaller sections because of their inability to perform a single continuous session of moderate to high intensity exercise.
To date, certain studies have reported the efficacy of high intensity exercise on various physiological parameters related to weight loss (Jakicic, Marcus, Gallagher, et al., 2004; Mougios, Kazaki, Christoulas, et al., 2006). In addition, O’Donovan et al. (2005) reported superior improvements in cholesterol, low density lipoprotein (LDL-C) and high density lipoprotein (HDL-C) after a 24-week period of high intensity exercise, compared with moderate intensity exercise. As interval training includes bouts of high intensity exercise with periods of rest or lower intensity exercise that allow for partial recovery (McArdle et al., 2001), it can be used with most individuals: depending on their fitness level, the intensity and duration of the interval bouts can be adjusted to match the individual, making this form of training a suitable option for most people. The studies that have compared high intensity interval training and continuous aerobic exercise in the obese and overweight population have reported that high intensity interval training resulted in greater fat loss (J. King et al., 2001; E. Trapp et al., 2008).
High intensity interval training can be conducted in many forms of exercise, from cycling to walking. Research by L. Campbell et al. (2010) looked at the physiological effects of interval exercise, more specifically continuous versus interval walking, in an obese population, whereas the exercise intervention of K. Wallman et al. (2009) required the participants to exercise on a cycle ergometer (Monark 828e, Sweden) because of the reduced strain cycling places on the body when used as an exercise intervention in an overweight population.
The research by L. Campbell et al. (2010) stated that potential participants were excluded if they had participated in more than 30 minutes of exercise on 3 or more occasions per week over the previous 6 months; K. Wallman et al. (2009) applied the same criterion. L. Campbell et al. (2010) also excluded participants if they were pregnant or taking beta blockers, blood pressure medication or thyroid medication, whereas other research (K. Wallman et al., 2009; J. King et al., 2001; K. Hansen et al., 2005) did not exclude participants on these grounds. Participants were also excluded if they had diabetes, had a blood pressure (BP) greater than 160/90, had lost more than five kg in the previous three months, or had musculoskeletal problems that prevented them from walking (L. Campbell et al., 2010; K. Wallman et al., 2009). In the research by L. Campbell et al. (2010), daily activity data for a week (i.e. the number of steps per day) were assessed during weeks 1 and 12 of the intervention using a pedometer (Yamax Digi-walker SW-700, Tokyo, Japan). The Yamax Digi-walker pedometer has been reported to accurately and reliably measure steps during walking and running in overweight and obese individuals (Swartz et al., 2003). Other studies, however, did not take daily activity into consideration, which is also what I did.
How the different studies measured their results varied: the intervention of H. Mohebbi (2011) used body mass index, whole body fat mass and fat-free mass (FFM) to obtain results, whereas K. Wallman et al. (2009) used a stadiometer to measure height and Sauter scales to determine body mass. In comparison, the measures I used to obtain my results were body fat percentage from bioelectrical impedance and physiological adaptations assessed by the MSFT.
2.5 Interval Versus Continuous
The research reported by L. Campbell et al. (2010) used individuals for whom there were no significant differences in age, body mass, height or BMI between the two groups prior to the intervention, and no significant differences between the two groups for VO2peak (ml·kg-1·min-1); K. Wallman et al. (2009) approached their investigation in the same way. The results of L. Campbell et al. (2010), K. Wallman et al. (2009) and K. Hansen et al. (2005) showed that there were no significant differences between the groups for body mass, fat mass or lean mass at baseline, but there were significant main effects of time for body mass and fat mass.
Further, while L. Campbell et al. (2010) found no significant differences between groups for gynoid and android fat mass at baseline or at the conclusion of the intervention, there was a significant main effect of time for gynoid fat mass, with reductions in this measure reflected by large effect sizes in both the interval and continuous groups. K. Wallman et al. (2009) likewise found no significant differences, although there was a slight change in both variables; their results revealed that while there were no significant changes in body mass or in android and gynoid fat mass between groups, there was a trend for a decline in android fat mass in the interval group, as established by a large effect size in that group only.
“Declines in total fat and gynoid fat mass were reflected by significant main effects for time, as well as moderate and large effect in both groups” (L. Campbell, 2010). L. Campbell et al. (2010) and K. Wallman et al. (2009) found that, in addition, the decrease in overall body mass over time was reflected by a moderate effect size in the interval group only. These results suggest that the interval group achieved the greater total fat, android fat and body mass loss in the interventions, with losses of ~22.5% and 28.5% in the interval group compared to ~17% and 19.2% in the continuous group (L. Campbell, 2010). These results are also supported by other, similar studies that reported body mass loss (W. Schmidt et al., 2001; J. Volek et al., 2005) and fat mass loss (J. King et al., 2001) after exercise interventions (J. King et al., 2001; W. Schmidt et al., 2001) and after a diet and exercise intervention (J. Volek et al., 2005).
The research of K. Wallman et al. (2009) looked into the use of a calorie-restricted diet when comparing interval versus continuous training, while other studies (J. King et al., 2001; K. Hansen et al., 2005; L. Campbell et al., 2010) did not directly examine a calorie-restricted diet.
Consequently, research suggests that a combination of high intensity interval training and a calorie-restricted diet produces beneficial improvement in VLDL-C (L. Campbell, 2010). The studies that have compared high intensity interval training with continuous aerobic exercise in the obese and overweight population have reported that high intensity interval training resulted in greater fat loss (J. King et al., 2001; E. Trapp et al., 2008).
Nonetheless, the results of the study conducted by C. Perry (2008) suggest that further investigation is necessary into the use of interval training for cardiovascular fitness and fat loss in an overweight or obese population: “In particular, a longer intervention period, as well as a higher work to relief ratio associated with the interval exercise may result in greater improvements in cardiovascular fitness and fat loss” (C. Perry, 2008). As the results have shown that interval training appears to be an effective form of exercise for improving aerobic performance and fat loss, C. Perry (2008) states that future studies should examine the adoptability and sustainability of a cycling interval training regimen in the overweight and obese population. Furthermore, J. King et al. (2001) indicated that there is a lack of studies investigating high intensity interval training programs as a potential means of weight loss compared with low intensity continuous training. This shows that such research must be conducted to determine whether high intensity interval training is a viable means of reducing total body weight and fat mass compared with low intensity continuous training.
2.6 Summary
Although previous studies exist concerning the effect of high intensity interval training on performance, interval training has yet to be assessed over a shorter duration than in those studies. Most studies examine the effects of high intensity interval training over a period of 8 weeks or longer (Jakicic, Marcus, Gallagher, et al., 2004; Mougios, Kazaki, Christoulas, et al., 2006). These studies have found significant differences when comparing body fat percentage/body composition and performance (J. King et al., 2001; E. Trapp et al., 2008). However, if a high intensity interval training program can be shown to produce changes in body weight and body composition in less than 8 weeks, that type of program might be more appealing to those who have difficulty adhering to longer continuous steady state exercise programs.
3.0 CHAPTER THREE – METHOD
The purpose of this study was to compare the effects of high intensity interval versus low intensity steady state continuous training on weight loss and body composition in an overweight population. This section will discuss the subjects, instrumentation, research protocol, and the design and analyses that were used in comparing the effects of the two training methods.
3.1 Subjects
The primary criterion for subject selection was that all subjects be clinically overweight, classified as having a body fat percentage of 25% to 30%. During the testing, the subjects were not allowed to make any conscious changes in their eating habits, because the purpose of the study was to determine the effects of high intensity interval and low intensity steady state training protocols on weight loss and physiological adaptations, and any change in energy consumption would have affected these data. For this study there were 8 subjects/volunteers, who were randomly assigned to two groups (high intensity interval training and continuous steady state training) using randomization software, a procedure agreed to by the participants and the university.
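A small sketch of the kind of random group assignment described above (the subject IDs and seed are hypothetical; any seeded shuffle performed by randomization software would serve the same purpose):

```python
# Hypothetical sketch of randomly assigning 8 subjects to two equal training groups.
import random

subjects = [f"S{i:02d}" for i in range(1, 9)]   # placeholder subject IDs
random.seed(42)                                  # fixed seed so the allocation is reproducible
random.shuffle(subjects)

interval_group = subjects[:4]
continuous_group = subjects[4:]
print("Interval training group:  ", interval_group)
print("Continuous training group:", continuous_group)
```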
Prior to the study, all subjects were asked to sign an informed consent form (Appendix A) and a PAR-Q (Appendix B). The informed consent notified subjects of all potential risks involved, including the possibility of musculoskeletal injury and myocardial infarction (J. King et al., 2001), while the PAR-Q gathered detailed information about each participant's health. The experimental protocol and associated risks were explained orally and in writing to all subjects before written consent was obtained. The subjects were told that they would be free to leave the study at any time and that their personal records would be kept confidential.
3.2 Tests and Equipment
Each subject completed a 4 week training program. Prior to the study the subjects/volunteers were asked whether they were already involved in a structured training program and were excluded from the study if they did not meet the selection criteria.
As this study dealt with the effect of high intensity interval and low intensity steady state training protocols on weight loss, body composition and physiological adaptations, several measures were taken. A VO2 max test was conducted prior to the study in order to determine appropriate absolute intensity levels for the subjects. The dependent variables, weight and body composition, were measured at both the beginning and end of the study. Body fat percentage was also recorded at the beginning and end of the study, at similar times in the afternoon.
The most precise way to assess aerobic capacity is the direct measurement of maximal oxygen uptake (VO2max) during a graded exercise test (GXT). However, the direct measurement of VO2max is often limited to laboratory, clinical, and research settings. The need to assess aerobic capacity in the general public has led to the development of various field based tests. These include the multistage fitness test and the 1 mile walk test; previous research (D J. George et al., 1997; P D. Heil et al., 1995; H M. Malek et al., 2005) reported valid estimates of aerobic capacity when using field based VO2 max testing. The multistage fitness test was used in this study because, owing to other commitments, not all participants could attend the laboratory to complete a GXT.
The 20-m multi-stage shuttle run test (MSFT) is also known as the Leger test, the beep test or the bleep test (Leger, Mercier, Gadoury, & Lambert, 1988). The MSFT (20-m MSFT; Leger et al., 1988; Leger et al., 1989) is the most commonly used field based test of a person's aerobic capacity, and several studies (Wong et al., 2001; Mota et al., 2002; Guerra et al., 2002; Vicente-Rodriguez et al., 2003; Vicente-Rodriguez et al., 2004) have used the 20-m multistage fitness test for the measurement of aerobic capacity.
The MSFT required the subjects to run continuously between two lines situated 20 metres apart, in time with recorded beeps. As the subjects reach a marked line they stop, turn 180° and run in the opposite direction towards the other marked line. The subjects were told they must stop when instructed by a beep from the CD. The starting speed of the MSFT is 8.5 km/h, and after about a minute a sound indicates an increase in speed of 0.5 km/h per minute (Leger, A L & Lambert, J. 1982). As the levels increase, the time between beeps decreases. The test was stopped when a subject was unable to keep up with the pace dictated by the beeps, and their score was taken. Throughout the test, the participants had to cover the set distance and touch each line with their foot before proceeding towards the next line.
In order to calculate the predicted VO2 max (pred VO2max) for the MSFT, the following equation was used (A D. Flouris et al., 2004; L. Léger & C. Gadoury, 1989):
• MSFT: pred VO2max = MAS × 6.592 − 32.678
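As a worked illustration of the equation above, the short sketch below converts a final MSFT level into a maximal aerobic speed (MAS) and then into a predicted VO2max. The mapping from level to speed simply follows the progression described earlier (8.5 km/h at level 1, increasing by 0.5 km/h per level) and is an illustrative assumption rather than part of the published equation.

```python
def maximal_aerobic_speed(final_level):
    """Running speed (km/h) at the last completed MSFT level, assuming the
    progression described above: 8.5 km/h at level 1, +0.5 km/h per level."""
    return 8.5 + 0.5 * (final_level - 1)

def predicted_vo2max(mas_kmh):
    """Predicted VO2max (ml/kg/min): pred VO2max = MAS x 6.592 - 32.678."""
    return mas_kmh * 6.592 - 32.678

final_level = 9                               # hypothetical test result
mas = maximal_aerobic_speed(final_level)      # 12.5 km/h
print(f"MAS = {mas} km/h -> predicted VO2max = {predicted_vo2max(mas):.1f} ml/kg/min")
```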
Each subject was required to attend the exercise physiology laboratory at Wolverhampton University, where their height could be assessed using a stadiometer and their mass and body fat percentage assessed using bioelectrical impedance. However, due to time constraints the participants could not attend the physiology laboratories for testing; the testing was therefore brought to the participants using the portable bioelectrical impedance unit, a portable stadiometer to assess height, and scales (Seca 769 Upright Scales) to assess weight.
Bioelectrical impedance analysis (BIA) has emerged as one of the most popular methods for estimating relative body fat (National Institutes of Health Technology 1996; V H. Heyward et al., 1996). First developed in the 1960s, BIA is relatively simple, quick and portable, and it is used in diverse settings, including private clinics and hospitals.
The National Institutes of Health Technology (1996) has shown the BIA method to have approximately the same accuracy as the skinfold method in a diverse group, as also found by V H. Heyward et al. (1996) and D W. Lockner et al. (1999). Before testing, subjects could consume drinks or food in moderation; as long as the fluid or food remains within the stomach and is not absorbed by body tissue, test results will not be influenced (V H. Heyward et al., 1996). To conduct the BIA, all subjects were asked to lie in a supine position on a non-conducting surface with the arms slightly abducted from the trunk and the legs slightly separated; the particular model used was the Bodystat 1500 (Bodystat Ltd, Douglas, UK). The electrodes were placed on the hand and foot of the right side of the body, repeat tests were applied to the same side of the body, and new electrodes were used for each subject. A small, imperceptible current then entered the body through the first pair of hand-foot electrodes, and the second electrode pair was used to determine the voltage drop caused by the body-water-dependent impedance, or total resistance, from which body fat percentage is determined (V H. Heyward et al., 1996); the test was completed a few seconds later.
 

Continuous Improvement in Software Development

The above principle concerns the close, daily collaboration between business people and developers. It is an important one for Agile as it ensures the usability of the product and consequently the quality of work, fulfilling the customer's requirements in the best way possible (Cohn, 2005). The principle reflects the agile value of customer collaboration over contract negotiation. Schwaber (2004) highlights the importance of this principle: during the last decades, with the increasing complexity of IT projects, developers and customers have been drifting apart due to unsuitable methodologies that obstruct effective customer collaboration.


Requirement collection following this agile principle goes beyond the requirement collection of traditional project management methodologies (Cobb, 2011). Beck (2000) suggests that when using XP, there should always be a customer on site to answer all arising questions instantaneously. Customers often have different or no expectations of a project, which emphasizes the need for close collaboration to detect any discrepancies (Cohn, 2005). Cohn (2005) further argues that, through daily meetings, changing requirements originating in a rapidly evolving business environment can be addressed immediately, and realignment of the strategy and deliverables is possible.
However, the practice of daily customer meetings was not achievable during the wiki project; nonetheless, the team was able to consult the customer frequently through email, and very short response times allowed areas of ambiguity to be resolved promptly. This close collaboration was often used to clarify small details in the requirements and to increase customer satisfaction by implementing change requests without delay. When this principle is applied cautiously and thoroughly, a high level of trust can be developed between the two parties involved (Schwaber, 2004). Highsmith (2009) further argues that trust is a very important issue to be valued as it enhances team cohesion and the quality of collaborative work.
This is supported by the experience Group Green gained during the wiki project. During iterations 1 and 2, all requirements were comprehensively discussed and clarified within the team and with the customer during iteration planning and the initial customer consultation. After the team had started developing the iteration's product, the customer was consulted again to resolve any remaining ambiguities. Through this practice of close collaboration the quality of the product was very high, which was reflected in the outstanding feedback from the customer. However, during iteration 3 this high level of cooperation with the customer was neglected by the team, which was reflected in the iteration review meeting. The customer was not as satisfied with the product as in the previous two iterations, because the team had failed to fulfil the customer's requirements and specifications.
In the subsequent iteration it was the Scrum Master's top priority to involve the customer again in more detail to enhance communication and idea exchange, removing impediments between the customer and the development team as suggested by Schwaber (2004). Adhering to and applying this principle may be one of the most valuable lessons learned in this project, as close collaboration ensures a high quality of work and subsequently high customer satisfaction.
The principle of sustainable development relates to the aim of developing the product at a constant pace without any spikes in development velocity. Sustainability has great significance, as the whole process of agile development is intended to be a sustainable approach (Augustine, 2005). Poppendieck and Poppendieck (2003) note that companies which have adopted lean thinking have achieved significant, sustainable performance improvements. Stellman and Greene (2014) highlight that breaking the whole project down into smaller, more manageable chunks facilitates the process of determining realistic durations for every story point or piece of work to be developed. The ability to estimate realistic durations enables the project team to give accurate predictions of the development time of the whole product. This supports a very steady flow of product development, and the team can work at a constant and sustainable pace (Cohn, 2005).
In software development, this constant flow leads to a higher quality of code and fewer inconsistencies in the source code. In consequence, less time is needed for bug fixing, which makes the whole concept more sustainable and viable (Cohn, 2005). Bug fixing, improving flaws and making corrections often lead to a higher workload for the project team and consequently lower motivation and increase the stress the team experiences. The stress primarily results from the deadline at the end of the short iteration, which still needs to be met despite the amount of required re-work. Cohn (2005) further stipulates that over time the customer realises and acknowledges the high quality, which subsequently enables trust to be developed between the customer and the project team. Cobb (2011) further points out that all team members, not just developers, need to keep pace with each other throughout the whole duration of the project.
In agile development, the iterations prevent team members from stepping in or out of the project in different phases. As a result, the development of the product is much more fluent, as all team members can build up trust and develop high team cohesion (Cobb, 2011). Cohn (2005) further argues that this can lead to higher motivation for the project team as they feel empowered and are more willing to achieve better results. Whitworth and Biddle (2007) conclude that agile planning reduces tensions and conflicts and that the consecutive development of small tasks promotes motivation in the team, which altogether leads to an overall quality improvement.
In practice, Group Green experienced the value of this principle, although not in as much detail as in real-life practice. The project was already divided into weekly iterations, which established the grounds for sustainable development. The team also experienced the value of dividing the whole project's deliverables into smaller parts, as this practice greatly improves the transparency and clarity of what requirements need to be fulfilled and how this can be achieved. The internally agreed deadlines did not change drastically during the whole project. In this way the team was able to establish a routine of weekly development, which greatly supported the development of a high-quality product. Trust among the team was developed at the same time, which facilitated the sustainable development.
An important lesson learned in this regard is the necessity of splitting the workload and thoroughly planning durations of the single pieces of work. This greatly benefits a sustainable, constant pace of development and consequently increases the product quality and customer satisfaction.
The last agile principle states that the team should regularly reflect on how to become more effective and adapt its work processes accordingly. Through the alignment of the overall approach and the development strategy, the project team aims to raise the quality baseline of the developed work. Stellman and Greene (2014) note that it is important to include retrospectives to evaluate and assess performance and to identify ways of becoming more effective in future projects. This retrospection should not be limited to one meeting at the end of a project but should take place immediately whenever possible improvements are recognised. According to Beck (2000), the project team should use daily stand-up meetings to discuss any areas of general development improvement. If this is not possible, the team should try to incorporate a retrospective at least after finishing every iteration (Smith and Upton, 2015). Cobb (2011) elaborates on this, saying that sprints in agile are generally much shorter than the development durations of traditional approaches, which facilitates the reflection process.


The concept of continuous improvement is linked to lean software development and is based on the Kaizen philosophy and the re-engineering approach of raising the standard of the status quo to achieve better quality products (Bond, 1999). The Kaizen and re-engineering philosophies were originally derived from operations management in logistics, but they can be applied to other improvement processes such as Agile product development. Typically, the improvement process can be divided into four consecutive stages:
1. maintaining process status quo
2. process improvement
3. process re-engineering
4. achieving process stability.
Group Green applied this principle during most of the wiki project. In the first two iterations, the team held one retrospective at the end of each iteration to identify areas of improvement and ways to implement more agile principles than those already in use. This practice led to a high-quality product and high customer satisfaction. However, during iteration 3 this principle was neglected and the team did not strive for further improvement. This was reflected in reduced customer satisfaction in comparison to the previous iterations. In response, the team decided to add an additional retrospective to reflect on how to further improve its development process and recover the higher quality standard and customer satisfaction of the previous iterations. Based on this positive experience of reinforcing the principle, it was agreed that an additional retrospective would be held at the end of the wiki project to ensure a high quality final assignment report. Reflecting on the whole development process, it can be said with certainty that the lessons learned include the necessity of consistently applying this principle. Only by doing so is the prerequisite fulfilled for continuously delivering high quality products and achieving customer satisfaction.

Implementing continuous improvement in hospitality sector

1. Title:
The research title of this proposal is “Implementing Continuous Improvement In Hospitality Sector”.
2. Introduction
Organizations today operate in an extremely competitive environment where service quality and customer satisfaction are paramount. If organizations are to continually improve and meet higher standards in future they must be prepared for continuous and sustainable change. Organizations will need to continually identify where they are and where they need to be in terms of performance, if changing customer needs and requirements are to be successfully met.
When discussing CI (continuous improvement), many writers seem to focus on quality. Although quality is an important aspect of CI, the topic is much more complex and interesting than merely developing quality within products and services.
The first theory to be considered relevant to the development of CI was Scientific Management, as introduced by the American engineer and manager Frederick Winslow Taylor (1911). Taylor was the first person to actually measure work methods with the view of increasing productivity by finding his “one best way” to perform a given task. Appalled by what he regarded as the inefficiencies of industrial practice, Taylor essentially introduced what we know today as Performance Measurement and Performance Management to all tasks.
The idea of ‘quality’ was developed by two Americans associated with the post-war renaissance of Japanese industry, namely Dr. W. Edwards Deming and Dr. Joseph Juran.
According to Deming (1982)
“Quality should be aimed at the needs of the consumer, present and future”
“Quality is consistent conformance to customers’ expectations” (Slack et al., 2006)
Dr Joseph M Juran states
“Quality is Fitness for purpose” (Juran,1988)
Taguchi says
“Quality should primarily be customer-driven” (Taguchi, 1985). Together, these definitions confirm a customer-focused approach to quality.
3. Research Aims and Objectives
My aim in this project is to identify appropriate applications for, and uses of, Continuous Improvement tools and techniques for quality improvement in providing goods and services in the hospitality sector. Some of the tools, techniques, theories and philosophies I will be using are benchmarking, check sheets, histograms, performance management planning (philosophy), Imai's Kaizen/CI umbrella, Taguchi's theory, the EFQM excellence model and Carlisle's CI framework.


My aim in this research is to raise awareness of CI within the hospitality sector and to start building the foundation for an organisation to design, implement and sustain a CI programme that creates improved performance and helps meet the requirements of competition; to evaluate the type of products and services customers of the hospitality sector expect; and to assess whether the current products and services provided meet these expectations.
4. Research Methodology:
The research methodology used in my research is based on the conceptual model proposed by Howard and Sharp (1983) which offers seven steps as a guide to the research:

Identify the broad area of case study
Select the research topic
Decide the approach
Establish the plan
Collect the data or information
Analyse and interpret the data
Present the findings

1. Identifying the broad area of case study:
There are four steps in the case study methodology:

Designing the case study
Conducting the case study
Analysing the case study with appropriate evidence
Developing conclusions, recommendations and implications

A case study is a methodology; a particular procedure must be followed to achieve the expected results. Yin (1993) identified different types of case study:

Exploratory
Explanatory
Descriptive

This typology was later extended with three further concepts:

Intrinsic – used when the researcher is interested in a particular case in its own right.
Instrumental – a specific case is used to gain understanding beyond that particular case.
Collective – a group of cases is studied, and the researcher identifies which cases are useful and which are not.

2. Select the research topic:
To select a particular topic we have to identify which area we are interested in and whether that topic is suitable for research. After shortlisting, we have to justify which topic provides the strongest basis for the research, and finally fix on a specific topic and proceed further.
3. Decide the approach :
This procedure of gaining knowledge of, and understanding, the problem and the development of the selected case study can offer insight into managerial culture, current trends and future possibilities. The historical method of research applies to all fields of study because it encompasses their origins, growth, theories, presentation, concepts, crises, etc. Both quantitative and qualitative variables can be used in the gathering of historical information.

The collection of the most relevant information about the topic.
Forming the appropriate information into case studies.
Specific and relevant collection and organisation of evidence, and identification of the authenticity of the information and its sources.
Selecting, organising and analysing the most relevant collected evidence, and the representation of solutions.
Recording of clear and accurate conclusions in a meaningful sequence of events.

4. Establish the plan:
A research plan helps to develop a particular plan for addressing the topic.
We have to create and answer some questions to improve the research, such as:

Who can help me to learn more about this particular topic?
What type of questions should I ask people in the survey, according to the checklist?
What modifications should I make to learn more about the topic?
What resources can I refer to, and how should I search to learn more about the particular topic?
How can I organise the information I have collected?

5. Collect the data or information:
We have to examine many technical surveys, research studies and journals to collect the required data or information. Collecting and organising the data is more important than anything else.
6. Analyse and interpret the data:
We have to analyse and interpret the data collected from surveys, research studies, etc., to produce successful research outputs.
7. Present the findings:
The most important thing is presenting the ideas and findings gathered from the many surveys and studies.
The way the report is presented plays a major role in deciding whether the research is a success or a failure.
5. Research Approach:
The term ‘paradigm’ has become popularized over the last decade, and it therefore tends to be used in many different ways. Mintzberg (1978) described the term as a convenient ‘buzzword’ for social scientists. In response, Morgan (1979) proposed a way of tidying up its usage. He distinguished between three levels of use:
* The philosophical level – basic beliefs about the world.
* The social level – guidelines about how the researcher should conduct their endeavour.
* The technical level – methods and techniques that should ideally be adopted in conducting research.
There are two paradigms or approaches to research Positivism and Phenomenology.
5.1 Positivism:
Easterby-Smith et al. (1991:22) define the positivism paradigm as
“that the social world exists externally, and that its properties should be measured through objective methods, rather than being inferred subjectively through sensation, reflection, or intuition”
This involves using a quantitative/deductive research approach with measurement based on hard data and both statistical and logical information. Research methods for this paradigm include surveys, experimentation and observation (audits). The method adopted in this research was a survey, which produced hard statistical data. As with most methods of data collection, the positivist paradigm has its strengths and weaknesses. These attributes are outlined in the table below.

STRENGTHS
* Provide a wide coverage of a range of situations.
* Can be fast and economical.
* May be of considerable relevance to policy decisions, particularly when statistics are aggregated.

WEAKNESSES
* Methods tend to be inflexible and artificial.
* Ineffective for understanding process or the significance that people attach to actions.
* Due to the focus on recent or current events it can be difficult for policy makers to infer what actions to take.

5.2 Phenomenology:
Saunders et al. (1997:72) define the phenomenology paradigm in the following way:
“Characterized by a focus on the meanings that research subjects attach to social phenomena; an attempt by the researcher to understand what is happening and why it is happening”
This approach will allow me to gather data in greater depth on how subjects perceive the topic under investigation. It involves using a qualitative/inductive research approach with measurement based on soft, meaningful and naturalistic data.
Research methods for this type of paradigm include personal interviews, group interviews and observation of group or individual behavior. I will adopt the personal interview approach as it is most suited to the research topic. The phenomenological paradigm also has its strengths and weaknesses. These are shown below in the table.

STRENGTHS
* Ability to look at change processes over time.
* To understand people's meanings.
* To adjust to new issues and ideas as they emerge.
* Provide a way of gathering data that is seen as natural.

WEAKNESSES
* Data collection can be time and resource consuming.
* Analysis and interpretation of data can be difficult.
* Qualitative studies may appear disorganized because it is harder to control their pace, process and end-points.
* Policy makers may give less credibility to studies rooted in a phenomenological approach.

Source: Easterby-Smith et al. (1991)
The theoretical approach to the research determines which methods will yield the information required for the study.

Positivist Paradigm

Basic beliefs:
– The world is external and objective.
– The observer is independent.
– Science is value-free.

Researcher should:
– Focus on facts.
– Look at causality and fundamental laws.
– Reduce phenomena to their simplest elements.
– Formulate hypotheses and then test them.

Preferred methods include:
– Operationalising concepts so that they can be measured.
– Taking large samples.

Phenomenological Paradigm

Basic beliefs:
– The world is socially constructed and subjective.
– The observer is part of what is observed.
– Science is driven by human interests.

Researcher should:
– Focus on meanings.
– Try to understand what is happening.
– Look at the totality of each situation.
– Develop ideas through induction from data.

Preferred methods include:
– Using multiple methods to establish different views of the phenomena.
– Small samples investigated in depth or over time.

Source: Easterby-Smith et al. (1991)
5.3 Research Overview:
Primary data collection for this research involved both quantitative and qualitative information. These two types of information have to be recorded very carefully. If the information contains a calculated measurement of any kind, it is considered quantitative information. There are particular rules for keeping track of this information, but the main thing to remember is that any value recorded directly from an instrument is considered quantitative data. It should always be recorded immediately, along with its explanation and the units of measure, and care must be taken to maintain accuracy.
Sometimes we can observe something happening using the senses, as a replacement for a tool like a measuring stick. This qualitative information is frequently just as useful as numerical data. It includes descriptions such as colour, as well as observations about changes in consistency and anything else that is essentially an opinion.
In other words Quantitative information refers to:
“The application of a measurement of numerical approach to the nature of the issue under scrutiny as well as the gathering and analysis of data. It is the concepts and categories, not their incidence and frequency that are said to matter.” (Brannan, 1992:5)
Qualitative methods are concerned with acquiring data through investigative means of a descriptive nature. However, Burgess (1982) suggests that researchers ought to be flexible and select a range of methods that are appropriate to the research problem under investigation.
The characteristics of both quantitative and qualitative methods are illustrated in the table below.

QUANTITATIVE DATA
* Based on meanings derived from numbers.
* Collection results in numerical and standardized data.
* Analysis conducted through the use of diagrams and statistics.

QUALITATIVE DATA
* Based on meanings expressed through words.
* Collection results in non-standardized data requiring classification into categories.
* Analysis conducted through the use of conceptualization.

Source: Adapted from Saunders et al. (1997)
6. Research Design:
Easterby-Smith et al. (1991) advocate that, to reduce the possibility of the research producing data and results of questionable reliability, a sound research design should be adopted.
6.1 Secondary Data:
Data that has already been collected for some other purpose, perhaps processed and subsequently stored, are termed secondary data. There are three main types of secondary data:
Documentary:
The nature and ways of classifying documents vary conceptually and practically. Documentary research offers many ways of analysing documents.
The documentary research method has had greater importance than other methods of research because of the influence of positivism and empiricism, under which information and quantification are the most popular forms of collecting and analysing data. Documentary research is connected with historical research, and history sits closely alongside the social science disciplines. The documentary research method is sometimes criticised as unclear, lacking a particular procedure and offering little guidance on how a researcher should use it; however, these criticisms are largely unfounded. History as a discipline provides us with a sense of our past, and with it the ways in which our present came about, by employing a range of documentary sources. It enables researchers to reflect on current issues.
Documentary research methods are classified into three different types. They are

Primary, secondary and tertiary documents: primary documents are sources produced or collected by those who actually witnessed the events described. These sources are considered reliable and accurate; where they are unavailable, the researcher may make use of secondary sources. These are produced after an event that the author did not directly witness, and the researcher must be conscious of the problems involved in the production of such data. Tertiary sources allow researchers to locate other references, such as indexes, abstracts and bibliographies.
Public and private documents: documents can be divided into categories according to access, such as restricted and open-archival documents.
Solicited and unsolicited documents: some documents, such as government surveys and research projects, are produced with research in mind, whereas others, such as diaries, are produced for personal use.

Survey:

A survey is a technique used for obtaining accurate results through a detailed and systematic inspection.

Multiple source:
There are several other sources for conducting a successful survey.
Five principal types of secondary data were utilised to provide background information surrounding the area of research:
* Staff surveys.
* Organizational reports on subject matter.
* Organizational assessment and evaluation of subject matter in operation.
* Findings of previous studies into subject.
* Literature including books, academic reports, and journals from several authors.
As with all data collection, secondary data has its own advantages and disadvantages, as shown in the table below.

ADVANTAGES
* May have fewer resource implications.
* Unobtrusive.
* Longitudinal studies may be feasible.
* Can provide comparative and contextual data.
* Can result in unforeseen discoveries.

DISADVANTAGES
* May have been collected for a purpose that does not match your need.
* Access may be difficult or costly.
* Aggregations and definitions may be unsuitable.

Source: Adapted from Kidder & Judd (1986)
6.2 Primary Research:
The aim of the primary research is to obtain information that is not provided in the secondary data and to investigate its validity by comparing and contrasting the findings. The following research methods will be examined to validate the research aim:

Postal survey
Personal survey
Focus Group interview / Discussion
In depth interviews

Smith, Thorpe and Lowe (1991) define these methods as an array of interpretative techniques which seek to describe, decode, translate and otherwise come to terms with the meaning, not the frequency, of certain more or less naturally occurring phenomena in the social world. The choice of method for collecting the data depends on the information needs and values as well as, particularly in this study, the budget and resources available.
7. Questionnaires:
“Survey research can be obtained from a relatively small sample of people and can be generalized to large numbers of the population”
(Alreck & Settle, 1995:6)
Self-administered questionnaires will be used in this research so that responses can be obtained from a sample of potential respondents and generalized to a larger population.
Kidder and Judd (1986:222) summarized the advantages and disadvantages of using questionnaires illustrated in the table below.

ADVANTAGES
* Low cost
* Ease of completion
* Immediate response
* Feeling of anonymity

DISADVANTAGES
* Accuracy and completeness of responses
* Context of question answering
* Misunderstanding of questions
* Response rate

Source: Kidder & Judd (1986:222)
The questionnaire will be constructed using a combination of multiple-choice closed questions, open questions and scaling questions.
8. Work Plan:

Research Time Frame

Month 1:
– Meet the initial requirements.
– Get the initial plans for the research approved.
– Start work on the research topic area, research questions and literature review.
– Complete the writing of the literature review.

Month 2:
– Submit a draft of the literature review and start on the research design.
– Complete the writing on research methods and the gathering of the data collected.

Month 3:
– Submit a draft of the research design, research methods and data collected.
– Meet the supervisor, agree the work so far and move to the next steps.

Month 4:
– Work on the research implementation methods.
– Submit the research implementation methods.

Month 5:
– Work on the data analysis and conclusion.
– Submit a draft of the data analysis and conclusion.

Month 6:
– Complete a draft of the full research.
– Submit the draft of the complete research to the supervisor and work on the final conclusions.
– Submission of the research.

9. Conclusion:
To research “Implementing Continuous Improvement In Hospitality Sector” we have used many methods, implementations, surveys, questionnaires, etc. Each concept of the research is useful and very important if the research is to be successful.
Another key element of the research is the work plan. Planning the work in the right manner will make the research successful, even though planning according to the situation and implementing particular plans can be complicated and time consuming. We have to organise the plan carefully and step by step, and finish the tasks within specific time periods to achieve success.
Surveys have to be conducted very carefully, because there are many complications in completing them; we have to justify which type of survey to use and how to implement it.
There are many categories of survey that could be applied to the chosen topic, and we have to be careful not to deviate from the primary research. Some of the survey types are:

Surveying certain age group.
Surveying by gender.
Surveying by profession.
Surveying by mental condition
Postal survey
Personal survey
Focus Group interview / Discussion
In depth interviews and so on.

Another important aspect of the research is the questionnaire. This type of instrument has to be used cautiously, because there are many ways in which questionnaires can be applied, and every possibility should be considered to get the best results. Using all these methods and concepts, a successful research outcome can be achieved.
 

Continuous Personal Development in Hospitality

INTRODUCTION
In the following report, to be submitted to the restaurant manager, I was instructed to find the gap between the staff's current capabilities and the skills that need to be developed in order to meet the restaurant's future plans. I was also to determine the training objectives for the staff to increase their current set of skills.
For the current report I had to carry out a skills audit. I also had to do secondary and primary research into the internal and external environment. For this purpose I have used a number of websites and books.
About the restaurant: Chutney Mary is one of London's most fashionable and highly acclaimed Indian restaurants and has twice received the award of Best Indian Restaurant in the UK from the authoritative Good Curry Guide. The restaurant opened its doors to customers in 1990 in Chelsea, London. The new romantic interior combines Indian richness and sparkle with sepia etchings of Indian life, reflecting a mix of the finest Indian craftsmanship in a stylish setting. Chutney Mary welcomes guests with a comfortable lounge area leading to a dramatic stairway featuring an enormous Mughal-style mirror-work mural.
Impact Analysis
Skill Areas: Communication skills
Do you have the skills? (yes/no): No
Are the skills at a satisfactory level?(yes/no): No
Skill level 1-5 (1 is highest, 5 is lowest): 3
Skill level desired: 2

Skill Areas: Knowledge of the product
Do you have the skills? (yes/no): Yes
Are the skills at a satisfactory level?(yes/no): No
Skill level 1-5 (1 is highest, 5 is lowest): 4
Skill level desired: 2

Skill Areas: Up selling skills
Do you have the skills? (yes/no): Yes
Are the skills at a satisfactory level?(yes/no): No
Skill level 1-5 (1 is highest, 5 is lowest): 3
Skill level desired: 2

Skill Areas: New Technology
Do you have the skills? (yes/no): Yes
Are the skills at a satisfactory level?(yes/no): Yes
Skill level 1-5 (1 is highest, 5 is lowest): 4
Skill level desired: 1

The above table shows the current set of skills, with the skill levels required by the staff to accomplish the company's future plans shown as the desired skill level in each entry. The chart gives the manager a synopsis of the areas that need to be developed in order to reach the company's goal. One way to approach the problem is through gap analysis, or impact analysis, a technique used to pinpoint the gaps between the expected level of service and the actual level of service provided. Gap analysis has played a crucial role in asset planning within both the public and private sectors. The analysis also gives management a summary of the impact of the identified skill gaps, and it helps the management to recruit new staff in the future: management can cross-reference it to check whether a candidate meets the requirements needed to achieve the future plans.
The analysis is generally used at the macro level and identifies the key performance areas of the organisation. Every organisation needs to close the gaps between current and desired skill levels in order to be successful in the market.
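As a minimal illustration of the gap analysis described above, the sketch below tabulates the difference between current and desired levels from the skills audit; the figures simply mirror the audit entries, and the function name is illustrative. On the 1-5 scale used in the audit, 1 is the highest level, so a positive difference (current minus desired) indicates a gap still to be closed.

```python
# Skill levels follow the audit's 1-5 scale, where 1 is the highest level,
# so a positive (current - desired) difference indicates a gap to close.
skill_audit = {
    # skill area: (current level, desired level) -- values mirror the audit above
    "Communication skills":     (3, 2),
    "Knowledge of the product": (4, 2),
    "Up-selling skills":        (3, 2),
    "New technology":           (4, 1),
}

def gap_report(audit):
    """Return skill areas ordered by the size of the gap between current and desired levels."""
    gaps = {area: current - desired for area, (current, desired) in audit.items()}
    return sorted(gaps.items(), key=lambda item: item[1], reverse=True)

for area, gap in gap_report(skill_audit):
    print(f"{area}: gap of {gap} level(s)")
```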
Identified Gaps in Chutney Mary
Up-selling: Up-selling guides the restaurant towards the targets set by management. In Chutney Mary the management has always set a target revenue for every month, so employees are aware of the target market for the product and of what exactly they have to up-sell. There is quite a big gap in up-selling, which can be closed by giving training to the employees; this gives them good knowledge of the product and will also help in up-selling the brand image of the restaurant.


Time management: This is an important tool in the hospitality sector; time is a resource that cannot be wasted by the management. The skills audit shows the employees are lacking in time management. As it is a new restaurant, managers must plan and manage time carefully; inability to do this will lead to a greater loss of revenue and damage to the restaurant's reputation. To close this gap, special training must be given to the employees regarding time management.
Communication skills: In the hospitality sector, communication skills play a vital role. In Chutney Mary many of the employees are Asian and European and lack fluency in English, so management must be well aware of this. As Chutney Mary is a new restaurant serving dietary food, there will be plenty of reviews and questions from customers, and staff should be able to explain the concept. To close this gap, management should arrange special English classes for those employees who are not fluent in English.
Product knowledge: Product knowledge is one of the gaps identified in Chutney Mary. Every member of staff must be aware of the product and the brand standards. The gap can be closed through proper staff training, which enables employees to explain the product to customers with ease and confidence, resulting in better up-selling.
The restaurant must focus on the shortage of skills and its impact on Chutney Mary. From the skills audit we can identify which skills the employees are lacking:

Efficient and effective communication skills.
Product knowledge.
Time management.
Up-selling skills and techniques, which are not up to the mark.

Impact of the gaps to the business and achievement of future plans and objectives of the organisation
Due to the lack of skills within the staff, the restaurant will not be able to achieve its future targets. This may affect productivity and can lead to negative comments and perceptions from customers.
Poor product knowledge may create lower revenue even when the staff attempt up-selling. This can lead to a fall in gross revenue and thus a lower profit on the sales volume.
Limited knowledge of the product prevents the staff from highlighting its strengths. It may lead to tension and lower self-confidence while they are dealing with customers.
Lack of skills development results in lower productivity for the organisation and poor, unacceptable performance.
Lack of communication skills results in more guest complaints. It increases the number of unsatisfied guests, and thus bad word-of-mouth publicity; because of this the management fails to achieve its future goals.
Lack of up-selling technique will result in the loss of potential revenue that the company could make. The organisation may lose business in its market sector and risks losing its market position.
Explanation of staff development plan
The staff development plan above is summarised and explained in detail below:

Up-selling skills training: Up-selling plays an important role in the organisation, since it helps to convey the value of the product rather than simply the price charged for it. An up-selling sales report is put on the board every month to show whether the restaurant has gained more revenue. The restaurant has a target of up-selling £20,000 or more within a month, which will help to increase sales by 30%. Various up-selling techniques are taught by a trainer invited from outside. From the calculation in Appendix 3, there was an average increase in the productivity rate of £206 per hour.
Menu knowledge: Having accurate menu knowledge has a positive impact on the employees. By giving training on the menu, the restaurant is hoping to achieve a target of 45% of sales within 3-4 months; from the above calculation it was observed that there was a 10% increase in sales.
Time management: Time management is practised in order to maintain the standards and the consistency of the quality of work. Employees must be aware of the importance of time, and the organisation has to keep in prospect events such as the 2012 Olympics.
Communication skills: Communication skills are important in any service industry. Customers should understand the concept, which is easier when the staff have good communication skills. From the above chart, 70% of the employees who lacked communication skills left the job; the restaurant is therefore improving staff training in the English language.
Increase in staff retention: In Chutney Mary the staff retention programme is not effective; from the above plan it was found that 70% of staff have left the job. By creating a friendly atmosphere and providing regular staff training, the restaurant is seeking to cut employment costs by 34% by the year end.

Summary of this plan

What has to be done: From the above development plan the restaurant is planning to train all the employees within the department. To achieve this, the restaurant must follow the objectives prioritised in the staff development plan. Special training must be given on the following issues: menu knowledge, time management, communication skills and staff retention. Employees should be trained on the brand standards so as to achieve the objectives. The restaurant must arrange a staff meeting once a week, gather feedback from the employees and discuss what has to be done in order to achieve the goals.
Who is going to do it: The objectives apply to all levels of staff within the organisation. It is the management's responsibility to arrange the training and evaluate the staff's progress; performance appraisals or staff performance checks can be used for this purpose. If the employees are successfully trained and are able to achieve the objectives, they should be rewarded by the manager of the restaurant.
When it is going to be done: In November 2009 the restaurant starts evaluating employee recruitment and whether the employees are happy with their jobs, through employee feedback forms. In December 2009 the restaurant will start its up-selling training programme, which will finish by the end of 3 months; these sessions will be held on Fridays and Saturdays for 1 hour. Brand standards training will be held from November 2009 on every weekend for 1 hour. Time management training will be held from January 2010, and communication skills training will be held from February 2010. Each training programme lasts 3-4 months, making employees aware of the time period. The employees who lack communication skills in English will be trained throughout the duration of the programme. All this training will be done at management level and at the lower levels as well.
How it is going to be done: In Chutney Mary, management posts the training sessions on the notice board and sends a reminder of each session to the employee's contact number. Website resources on brand awareness and customer recommendations help the employees to grow.
Why it is going to be done: It is done to develop the personal skills of the employees. The restaurant will have efficient staff providing quality service after completion of the training sessions. This will result in increased productivity for the restaurant, and customer satisfaction will improve. The restaurant will have a new face in the market sector and among its competitors, helping it to achieve a competitive edge.
How much will it cost: A quotation of approximately £200-300 has been allocated to each of these objectives. The total expenditure for the training sessions, consisting of staff expenses, trainer expenses, training room charges, etc., will be approximately £1,000 to £1,500.

How will the plan benefit the business?
Training on up-selling helps to improve the skills of the staff in up-selling the product to customers and gaining more profit.
Knowledge of the menu is a key tool for the staff, enabling them to make recommendations that meet guests' needs; this will help the staff to improve their skills and knowledge.
Through time management, staff will become more efficient and productive and can become more focused in their work.
By using communication skills, the staff are expected to interact effectively with guests and with top-level management.
The staff retention measures will help to identify whether the employees are satisfied with their jobs. They also create a friendly atmosphere, resulting in the smooth running of the operation, and ensure that employment costs will be reduced.
This plan will help the business to attract more customers. It will also help to achieve a competitive edge over competitors and to create a reputation within the market sector.
How staff development will be measured
Recommendation by the customers: Chutney Mary follows guest recommendations and operates a comment card policy. Comment cards are presented with the bill folder, and through them customer feedback is received against ratings from 1 to 5. If the ratings are high, staff are rewarded. This also helps in checking the performance of the restaurant.
Sales tracking system: Management keeps track of sales through this system, comparing present and past sales and even carrying out break-even analysis.
Employee feedback and reviews: In Chutney Mary, performance appraisals are given to the employees every month.
Risk assessment
This section introduces the risk assessment, listing each plan, the factors which can affect Chutney Mary, and the plans to reduce the risk.

Plan: High profits
Factors which can affect Chutney Mary: customers' wants; wastage; product knowledge.
Plans to reduce this risk: Staff must be aware of the product, wastage must be reduced, and management must research customers' likes and dislikes.

Plan: Knowledge of the menu
Factors which can affect Chutney Mary: various styles and trends; seasonal and climatic conditions; allergies.
Plans to reduce this risk: Staff must know the trends and styles needed to secure the restaurant's place in the market; they must be aware of seasonal foods so that they can explain them to customers; and they must be able to deal with allergic guests, for which menu knowledge is important.

Plan: Time management
Factors which can affect Chutney Mary: staff who lack this skill will not be able to meet and keep to the times required for future events.
Plans to reduce this risk: Staff need to focus and must always be on time so as to achieve these targets and avoid conflicts within the department.

Plan: Communication skills
Factors which can affect Chutney Mary: many staff who do not know English will not be able to interact with customers and colleagues.
Plans to reduce this risk: English classes must be arranged for the staff and made compulsory, and other languages must be banned within the restaurant.

Plan: Increase in staff retention
Factors which can affect Chutney Mary: a high staff turnover rate will lead to high employment costs.
Plans to reduce this risk: Provide a friendly working environment for the employees.
Executive summary
In this assignment we were asked to find out the medium-term plans of Chutney Mary and its objectives, and what changes must be undertaken to achieve the objectives.
We then had to carry out a skills audit, analysing the skills and goals of staff within the organisation.
On the basis of the skills audit we had to identify the gaps between current capabilities and the capabilities required for the future plans.
We then had to determine the impact of these gaps on Chutney Mary's plans and objectives.
In the second task we produced a staff development plan so that the organisation can achieve certain targets, and identified the risks that were inhibiting the organisational goals.
Appendix 2
From the above development plan the restaurant is striving to achieve its medium-term objectives. The staff development plan is measured every month, and the resulting figures are shown in the report on the restaurant's success.
Name of the staff: Mr Gaurav Raje (manager) and Mr Sachin Malhotra (trainer)
Training programme: Communication skills
Training hours: 2 hrs
Training period: Monday to Wednesday
Cost: approximately £250

Name of the staff: Mr Oliver (wine trainer and supplier)
Training programme: Up-selling
Training hours: 1 hr
Training period: Friday to Saturday
Cost: approximately £200

Name of the staff: Mr Gaurav Raje (manager)
Training programme: Time management
Training hours: 1 hr
Training period: Thursday to Friday
Cost: approximately £250

Name of the staff: Mr Sachin Malhotra (trainer)
Training programme: Brand awareness
Training hours: 1-2 hrs
Training period: every weekend or during briefing
Cost: approximately £150-200

Name of the staff: Mr Rohit Shelatkar (director)
Training programme: Staff retention
Training hours: 10-15 minutes
Training period: during staff meetings
Cost: approximately £300
Appendix 3
SALES REPORT OF CHUTNEY MARY, 30TH OCTOBER 2009 (FRIDAY):

Covers: current 121; last year 103; last week 110
Food per head: current 35.47; last year 31.45; last week 33.78
Beverage per head: current 21.01; last year 23.36; last week 17.13
Total per head: current 62.78; last year 54.76; last week 58.99
Revenue net of VAT: current 6,789; last year 4,943; last week 5,767

Dinner analysis:
Covers: before 7pm 05; main service 181; after 10pm 15
Revenue net of VAT: before 7pm 0; main service 6,389; after 10pm 0
Calculation of average sales per hour:
Average sales per labour hour = (average number of covers in a day × average price per cover) / (number of people working on the day × number of hours worked)
= (120 × 55) / (4 × 8 hrs)
= 6,600 / 32
= £206.

Average turnover rate of the employees = (number of people leaving the premises / number of people working in the premises) × 100
= (7 / 10) × 100 = 70%.

Menu knowledge and cost: cost per day = food cost + maintenance + employee cost
= 1,500 + 1,000 + (7 employees in a shift × £8 per hour)
= 2,500 + 56 = 2,556.
Cost per cover = cost per day / number of covers per day = 2,556 / 120 = £21.3.

Sales per labour hour = total sales / labour hours
= 6,389 / 40
= £160 sales per labour hour.
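The sketch below simply reproduces the Appendix 3 arithmetic using the figures quoted above; the function names and parameter names are illustrative only.

```python
def sales_per_labour_hour(covers_per_day, price_per_cover, staff_on_duty, hours_worked):
    """Average sales per labour hour: (covers x price per cover) / (staff x hours)."""
    return (covers_per_day * price_per_cover) / (staff_on_duty * hours_worked)

def staff_turnover_rate(leavers, employed):
    """Staff turnover as a percentage of the number employed."""
    return leavers / employed * 100

def cost_per_cover(food_cost, maintenance, staff_on_shift, hourly_wage, covers_per_day):
    """Daily cost (food + maintenance + staff) divided by daily covers."""
    daily_cost = food_cost + maintenance + staff_on_shift * hourly_wage
    return daily_cost / covers_per_day

print(sales_per_labour_hour(120, 55, 4, 8))    # ~206  (GBP per labour hour)
print(staff_turnover_rate(7, 10))              # 70.0  (%)
print(cost_per_cover(1500, 1000, 7, 8, 120))   # ~21.3 (GBP per cover)
print(6389 / 40)                               # ~160  (GBP sales per labour hour)
```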
Appendix 4
Continuous professional development keeps the organisation abreast of the latest trends so as to raise its capability to deliver a professional service. It helps to maintain high standards and quality within the organisation, promotes the general welfare of the public, and helps to increase competitiveness within the organisation.
Appendix 5
What is a skill audit?
A skills audit is a performance indicator that helps to identify employees' performance and skills; it is carried out from a management perspective.
The purpose of arranging a skills audit is to find out the skills and knowledge that the organisation requires, as well as those it presently possesses. It helps in understanding employees' strengths and weaknesses, and it identifies and measures the functional skills of the organisation. The gaps identified are then addressed through training, job design, outsourcing, etc.
Appendix 6
The steps for performing staff development plan:
Determine the needs and development priorities of the staff. This can be done with the managers of the various departments that interact with the staff members.
To improve the staff development plan, the organisation must define a specific measure of improvement for each development area, such as a production rate, specific to the organisation.
To deliver the staff development plan, the organisation provides specific training courses and cross-training to the employees so that the development plan for the staff can be addressed and improved.
There are two types of staff development plan; generic plans are used for specific jobs and positions for every employee.
Source: Author unknown (2008)
Staff development plans identify the potential and resources of the employees needed to meet the organisation's needs.
Staff development plans help employees to update their skills and knowledge in various areas of the department.
These plans are generally made at management level.
The objectives of the plans are to make the employees aware of the brand standards, to sell the product, to get positive feedback from the customers and to improve performance.
 

Process Control Design Implementation For Continuous Manufacturing of Tablets

Introduction

The pharmaceutical industry has traditionally relied on batch processes for Active Pharmaceutical Ingredient (API) manufacturing. The industry has opted for batch processing because it has several advantages, such as flexibility in configuring the plant's unit operations and convenient tracking of different drugs at various stages of production, dispatch and product recall. However, there is high variability in product quality from batch to batch, as well as challenges in scaling up from laboratory to pilot to commercial manufacturing scale1. Due to these limitations, the pharmaceutical industry is enthusiastic about implementing continuous manufacturing1-2. Continuous Manufacturing (CM) has various advantages compared with batch processing, such as integrated processing with fewer steps, increased safety, no manual handling, a smaller carbon footprint and lower capital costs. Various organizations have now filed for approval to produce new drugs via continuous manufacturing, and hence the FDA is keen on developing new standards with respect to continuous manufacturing, serialization and traceability of the API and final dosage forms. However, there are several technical challenges involved in the development and implementation of continuous manufacturing of powder-based API processes, including material characterization and property prediction, online measurement, and modelling and simulation of unit operations and processes1. The goal of this project is to develop and implement a process automation and control system design to ensure robust operation of the tablet press2.

Literature Review

Continuous manufacturing is strongly aligned with the FDA's support of the quality-by-design (QbD) model for pharmaceutical development and manufacturing. QbD is a systematic, scientific and risk-based approach that demonstrates product and process understanding in order to implement effective quality control strategies and achieve the desired results. A robust process can be developed by identifying the sources of variation in product quality and designing appropriate control strategies to mitigate the associated risks3. Some of the main barriers to the implementation of closed-loop control and advanced strategies are:

Integration of hardware, software and process equipment sensors due to the lack of standard control systems

Challenges in real time/online/inline monitoring of process parameters

Control strategies appropriate for continuous tablet manufacturing processes are still under development

Lack of availability of a commercial control system4

Therefore, for the pharmaceutical industry to successfully transition to continuous manufacturing, a systematic framework for process control design and risk analysis is required5. Various control techniques, ranging from simple proportional-integral-derivative (PID) controllers to advanced model-based strategies such as model predictive control (MPC) and real-time optimization, have been evaluated for set-point tracking and for rejecting the disturbances observed during the process5. However, it has been identified that a resilient, fault-tolerant, plant-wide control system design plays a vital role in executing a safe continuous manufacturing process6. By implementing the proposed systematic framework integrated with hardware control and sensing technologies, more efficient manufacturing operations and QbD can be more readily facilitated in the industry4.
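As a rough illustration of the simplest of these techniques, and not an implementation from the cited studies, the sketch below simulates a discrete PID controller tracking a set point on an assumed first-order process while rejecting a step disturbance introduced halfway through the run; the process gain, time constant and tuning values are arbitrary assumptions chosen only for demonstration.

    # Minimal PID set-point-tracking sketch on an assumed first-order process.
    # Gains, time constant and tunings are illustrative assumptions only.
    def simulate_pid(setpoint=10.0, kp=2.0, ki=1.0, kd=0.05,
                     gain=1.5, tau=8.0, dt=0.5, steps=120):
        y = 0.0              # process output (e.g. an assumed CPP)
        integral = 0.0
        prev_error = setpoint - y
        history = []
        for k in range(steps):
            error = setpoint - y
            integral += error * dt
            derivative = (error - prev_error) / dt
            u = kp * error + ki * integral + kd * derivative   # controller action
            prev_error = error
            # First-order process response plus a step disturbance at mid-run
            disturbance = -2.0 if k > steps // 2 else 0.0
            y += dt / tau * (-y + gain * u + disturbance)
            history.append(y)
        return history

    if __name__ == "__main__":
        trajectory = simulate_pid()
        print(f"final value: {trajectory[-1]:.2f} (set point 10.0)")

With integral action the controller removes the steady-state error caused by the disturbance, which is the behaviour assessed in the set-point tracking and disturbance rejection studies cited above.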

2.1 Systematic framework for Process Control Design and Risk Analysis

In a typical batch manufacturing setup, the quality of the product is controlled by extensive testing of the final dosage form. In a continuous manufacturing setup, by contrast, the quality of the product and of the intermediate streams leading to it must be monitored and controlled in real time at specified points. The control framework should respond to all variations arising from disturbances in process variables, equipment conditions and incoming raw materials so that product quality is unaffected. This is usually referred to as real-time release. A systematic framework, consisting of various process systems engineering (PSE) and process analytical technology (PAT) tools, for developing and evaluating feasible advanced control strategies is shown in Fig. 1 (2, 5).

Fig.1 Systematic framework for process control design and risk analysis
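As a purely hypothetical illustration of the real-time release idea described above (the CQA names and specification limits are assumptions, not values from the cited framework), the snippet below checks a set of in-line CQA measurements against their specification ranges and returns a release or divert decision for the material at that control point.

    # Hypothetical real-time-release check; CQA names and limits are assumed.
    CQA_SPECS = {
        "tablet_weight_mg": (195.0, 205.0),
        "hardness_kp": (8.0, 14.0),
        "api_content_pct": (95.0, 105.0),
    }

    def release_decision(measurements: dict) -> bool:
        """Return True only if every monitored CQA lies inside its specification range."""
        for cqa, (low, high) in CQA_SPECS.items():
            value = measurements.get(cqa)
            if value is None or not (low <= value <= high):
                return False    # divert the material instead of releasing it
        return True

    # Example: one PAT measurement snapshot taken at a monitored point
    sample = {"tablet_weight_mg": 201.3, "hardness_kp": 10.2, "api_content_pct": 98.7}
    print("release" if release_decision(sample) else "divert")

In practice the framework in Fig. 1 would drive such decisions from validated PAT measurements and a formally defined control strategy rather than a simple table of limits.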

By integrating additional knowledge and supporting tools with the proposed systematic framework, the design and implementation of the control strategy can be incorporated readily into the software and hardware of the system2, 5.

2.2 Resilient fault-tolerant control design

In general, there are two main approaches to handling faults. The first is to respond to a failure by re-organizing the remaining system components so that the necessary control functions are still carried out. The second is to design a system that is failure-proof with respect to a well-defined set of faults and risks7. Fault-tolerant control systems primarily aim at preventing a simple fault from developing into a system-level failure, and they use information redundancy to detect faults6.
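As a minimal, assumed example of using information redundancy to detect a fault (not a reproduction of the schemes described in the cited work), the sketch below compares three redundant sensor readings of the same process variable and flags any reading that deviates from the median by more than a tolerance.

    # Fault detection through information redundancy: three redundant sensors
    # measure the same variable, and a reading far from the median is flagged.
    # Sensor names and the tolerance are illustrative assumptions.
    from statistics import median

    def detect_faulty_sensors(readings: dict, tolerance: float = 0.5):
        """Return (voted_value, list_of_suspected_faulty_sensors)."""
        voted = median(readings.values())
        faulty = [name for name, value in readings.items()
                  if abs(value - voted) > tolerance]
        return voted, faulty

    # Example: the third transducer has drifted away from the other two
    readings = {"pressure_a": 4.01, "pressure_b": 3.98, "pressure_c": 5.60}
    value, faulty = detect_faulty_sensors(readings)
    print(f"voted value = {value:.2f}, suspected faulty sensors = {faulty}")

Detecting the drifting sensor early and continuing with the remaining healthy readings is the kind of reconfiguration that prevents a simple sensor fault from escalating into a system-level failure.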

2.3 A hierarchical three-layer control design

The risk of producing out-of-specification products can be reduced by implementing advanced control strategies such as fault-tolerant control systems and predictive state-space models. Such models allow the application of control strategies that automatically adjust the critical process parameters (CPPs) in response to disturbances so that the critical quality attributes (CQAs) remain unaffected and within the desired specification range2, 5.

As shown in Fig. 2, a control strategy in pharmaceutical manufacturing systems can include three levels of control. This classification is general and can be applied to both batch and continuous manufacturing systems. Depending on the process requirements and the desired control performance, the complexity of the design can be adjusted5, 8.

Fig 2. General three-layer classification of control strategies

For example (see Fig 3), in a direct compaction process using a tablet press, Level 0 control includes single or multiple single-input single-output (SISO) control loops. This is executed using a programmable logic controller (PLC) panel built into the equipment. Usually, this level of control is designed by the vendor to hold multiple CPPs at the set points desired by the end user. Level 1 control also involves single or multiple SISO controllers, but these loops depend mainly on data measured by the PAT tools to control CQAs. Level 1 manages Level 0 through a cascaded loop to achieve the desired set points of the CQAs, measured in situ by the PAT sensors. Level 2 uses more advanced control strategies, such as mathematical models, to predict the effect of disturbances on the CPPs and CQAs. Level 2 can accommodate large multivariable systems and integrates multiple unit operations; hence, it can be used to execute plant-wide process control.


Fig 3. The hierarchical three-layer control design in direct compaction.
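The cascade relationship between Level 1 and Level 0 described above can be sketched as follows; the chosen variables (tablet weight as the CQA, fill depth as the CPP) and all gains and process relations are illustrative assumptions rather than parameters from the cited work.

    # Illustrative Level 1 / Level 0 cascade: an outer loop uses a PAT measurement
    # of an assumed CQA (tablet weight) to compute the set point handed to an inner
    # loop regulating an assumed CPP (fill depth on the tablet press).
    class PI:
        def __init__(self, kp, ki, dt):
            self.kp, self.ki, self.dt, self.integral = kp, ki, dt, 0.0

        def update(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt
            return self.kp * error + self.ki * self.integral

    level1 = PI(kp=0.02, ki=0.01, dt=1.0)   # CQA loop: tablet weight (mg)
    level0 = PI(kp=0.8,  ki=0.4,  dt=1.0)   # CPP loop: fill depth (mm)

    weight_target = 200.0                   # mg, assumed CQA set point
    fill_depth, weight = 8.0, 190.0

    for step in range(60):
        # Level 1: PAT weight measurement -> set point for the fill-depth loop
        depth_setpoint = 8.0 + level1.update(weight_target, weight)
        # Level 0: drive the actual fill depth toward that set point
        fill_depth += level0.update(depth_setpoint, fill_depth)
        # Assumed linear relation between fill depth and resulting tablet weight
        weight = 25.0 * fill_depth

    print(f"fill depth = {fill_depth:.2f} mm, tablet weight = {weight:.1f} mg")

A Level 2 layer would sit above both loops, using a process model to coordinate several such unit operations at once, which is what enables plant-wide control.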

Experiments and Results

Seventy-five percent of pharmaceutical products are manufactured as solids, produced mainly by direct compression, dry granulation or wet granulation to accommodate the formulation requirements.

References:

Ierapetritou, M.; Muzzio, F.; Reklaitis, G., Perspectives on the continuous manufacturing of powder‐based pharmaceutical processes. AIChE Journal 2016, 62 (6), 1846-1862.

Su, Q.; Moreno, M.; Ganesh, S.; Reklaitis, G. V.; Nagy, Z. K., Resilience and risk analysis of fault-tolerant process control design in continuous pharmaceutical manufacturing. Journal of Loss Prevention in the Process Industries 2018, 55, 411-422.

Lee, S.; O’Connor, T.; Yang, X.; Cruz, C.; Chatterjee, S.; Madurawe, R.; Moore, C.; Yu, L.; Woodcock, J., Modernizing Pharmaceutical Manufacturing: from Batch to Continuous Production. Journal of Pharmaceutical Innovation 2015, 10 (3), 191-199.

Singh, R.; Sahay, A.; Muzzio, F.; Ierapetritou, M.; Ramachandran, R., A systematic framework for onsite design and implementation of a control system in a continuous tablet manufacturing process. Computers and Chemical Engineering 2014, 66.

Su, Q.; Moreno, M.; Giridhar, A.; Reklaitis, G.; Nagy, Z., A Systematic Framework for Process Control Design and Risk Analysis in Continuous Pharmaceutical Solid-Dosage Manufacturing. Journal of Pharmaceutical Innovation 2017, 12 (4), 327-346.

Blanke, M.; Izadi-Zamanabadi, R.; Bøgh, S. A.; Lunau, C. P., Fault-tolerant control systems — A holistic view. Control Engineering Practice 1997, 5 (5), 693-702.

Jiang, J.; Yu, X., Fault-tolerant control systems: A comparative study between active and passive approaches. Annual Reviews in Control 2012, 36 (1), 60-72.

Yu, L.; Amidon, G.; Khan, M.; Hoag, S.; Polli, J.; Raju, G.; Woodcock, J., Understanding Pharmaceutical Quality by Design. The AAPS Journal 2014, 16 (4), 771-783.