Title: Aircraft Engineering (Morozova M. A.)
Genre: Aviation technologies and management
Use of digital design data for maintainability assessment
The use of digital data generated during design, together with complex digital assessment models, is becoming a real alternative to traditional methods. The nature of complex design is that it is continually changing. An assessment based on digital design data can be readily undertaken on the most recent version of the design, and can be conducted throughout the design process as the final design emerges.
These are issues facing all design organizations. Terex Compact Equipment, a tractor manufacturer, reported that it made extensive use of digital prototypes in the design of a recent product, saving £50,000 for each physical prototype bypassed. Engineering Manager Ian Davies stated, «Costly design mistakes don't happen with virtual reality, as they are spotted and rectified speedily. This is engineering design at its purest, enabling us to reduce time to market and unit cost» (Professional Engineering, 2003).
Digital data used for design assessment has been commonly termed the «virtual prototype», and the digital world where this prototype exists is known as the «virtual environment» (VE) or «virtual reality» (VR). A real environment is one in which the participant uses their senses to experience the environment, is immersed in the environment and can interact with it. A VE or VR is an environment presented to a participant who is given the means to interact with it, to allow some simulated experience of the real environment. There are five principal elements of a VE: environment fidelity, immersion, interactivity, presence and the HF of the interface.
1. Environment fidelity
In many VE applications, especially non-immersive ones, the visual sense is most dominant. Image fidelity therefore becomes a critical consideration. In the early days of VEs, when they were used primarily for visualisation purposes, developers aimed to recreate the real environment. The alternative approach, however, is to develop an image that the user can relate to reality, but without attempting to recreate it. Rinalducci (1996) lists the following cues for fidelity: perception of self-motion, colour, stereopsis (the perceptual transformation of the difference between the two monocular images seen by the eyes), depth cues (which allow users to sense depth in a VE), texture, luminance, field size and spatial resolution. When a VE is used for visualization purposes only, fidelity is therefore crucial. However, implementing a high level of fidelity is computationally expensive, reducing the performance of the VE and affecting the capability of the user to interact with the VE effectively. Performance is often measured by the speed with which the view presented to the user is updated. When the focus of the VE is on functionality rather than visualization, fidelity becomes less of a consideration, which enables developers to optimize the update speed. This trade-off between fidelity and speed continues despite continuing increases in computing speed and capacity (Burdea et al, 1996).
Immersion is an element of perception, but it is also fundamentally linked with the technology employed. There are numerous VE technologies that facilitate varying levels of user immersion. These range from desktop monitors or wall-mounted screens that present a two-dimensional image and are non-immersive, to Head Mounted Displays (HMD) utilising LCD screens mounted on a helmet to give a stereoscopic, biocular or monoscopic display, which are fully immersive.
The choice of the level of immersion required is dependent on the task that is to be performed. Immersive environments have been found to be effective for training prior to assembly tasks (Boud et al, 2000) and for navigation in large-scale environments (Ruddle et al, 1999).
Immersion cannot be considered in isolation from interactivity, as the degree to which a user can affect and be affected by a VE increases the level of immersion (Witmer and Singer, 1998). Immersive systems are often multi-sensory, in that more than one sensory modality is used to display the environment or to allow input into it. Interactivity is a two-way process: the output of the environment to the user, and facilitating mechanisms for the user to input commands.
Information can be provided to a user visually, via audio or via tactile interfaces. The user can input to the VE via desktop peripherals (such as the mouse and keyboard for two-dimensional applications, or CAD dials for three-dimensional applications), by voice commands, through motion (to position and move the body in three-dimensional space within the VE and to input commands to the VE by gesture recognition), by muscle activity (sensing the contraction of muscles), by eye movement, or through brain activity. It is crucial that the capabilities and limitations of the human participant are considered when determining the modes to be combined in a VE application. Stanney et al (1998) concluded that in most applications input to the VE should be primarily mono-modal, but output should be multi-modal.
One of the significant features of the VE is the psychological sense of 'presence' that the user feels. Witmer and Singer (1998) define presence as «...the subjective experience of being in one place or environment, even when one is physically situated in another». They state that the principal factors that influence presence are involvement (i.e. total focus on a set of stimuli that are interesting or significant to the participant) and immersion (i.e. a participant's perception of total inclusion in an environment with constant stimuli). The example of an arcade game can illustrate this point: participants can be very involved in the experience by receiving a set of stimuli, but have low presence due to the lack of immersive characteristics. They conclude that some isolation of participants from the stimuli of a real environment, as can be facilitated by HMDs in immersive VEs, improves the level of presence. Allowing the participant to perceive themselves affecting the VE and being affected by it, and minimizing awkward, slowly responding environments and uncomfortable peripherals, also helps the level of presence.
Cobb et al (1998) correlated measures of enjoyment and presence of participants in a VE and found that a sense of presence enhances the enjoyment of the VE. It was also found that when participants suffered a high level of Virtual Reality-Induced Symptoms and Effects (VRISE) they reported lower levels of presence. Although these references outline factors affecting presence, there is no direct evidence to support the contention that increased presence will improve learning and performance. A number of references, including Stanney et al (1998), report a belief that an increased level of performance is achieved by increasing the level of presence, but no evidence is supplied to support this. Witmer and Singer (1998) explain that if performance levels can be shown to vary in a predictable manner as the level of presence is manipulated, then a strong link between presence and performance can be demonstrated, but this has not yet been conclusively proven.
1.5. Human factors of virtual environments
There are a number of HF issues that result from the use of a VE. These include physical comfort and safety, simulator sickness (a type of motion sickness where the characteristics of a simulated experience, primarily visual, do not match the perceptual requirements of the human), the difficulties of stereoscopic image presentation, the lack of accommodation of user differences and the social impact of using VEs. Such issues are more prominent in the use of immersive VEs and must be considered carefully in a VE application.
The advantages of utilising digital design for maintainability assessment and demonstration are illustrated above. The challenge, however, is how to facilitate the interaction of maintainability engineers with the digital design to make these 'virtual' maintainability assessments, and which type of interaction is most suited to maintainability assessment.
Digital design can be viewed in a much more interactive manner on a computer screen than on a piece of paper. Rather than being simple two-dimensional projections of the design facilitated by paper drawings, digital data can be viewed as a three-dimensional object, and the viewing angle and direction can be manipulated with ease. Specific zones of the design can be focussed upon to enable accurate visualisation of designs and hence accurate assessment of maintainability.
Modern CAD tools also integrate additional modules developed to perform some maintainability assessment. These include modules allowing the user to assess the 'fit' of an assembly in relation to others around it, and verification of whether a component can be installed and removed for maintenance or assembly purposes. Although digital design tools facilitate enhanced visualisation and integrate assessment aids, much of the assessment of maintainability is dependent on the interaction of a human with the product. Maintainability assessment and demonstration can therefore be undertaken more efficiently and effectively by introduction of a representation of the human into the digital design environment. This is achieved by the insertion of virtual representations of humans into the VE, commonly termed virtual humans. There are two types of virtual humans: the agent and the avatar. An agent is a virtual human that is controlled by software driven by a user non-immersively, and an avatar is a virtual human controlled and driven by a live immersive participant, i.e. the user is 'being' the virtual human. These will be considered in greater detail below.
2. Non-immersive virtual humans
Agent virtual humans are biomechanical human models with representative anthropometry and joint constraints that are used to mimic human motion. Figure 2 shows Transom Jack, one such agent virtual human. The main advantage of agent virtual human systems is that they enable the user to perform detailed analyses of the ergonomics of tasks, either by visualisation of the task by a HF expert, as would be done in a traditional task analysis, or by using in-built ergonomic assessment algorithms. There are many commercially available agent virtual human software packages that include features to aid in maintainability assessment. The agent virtual human can be constructed to represent the size of a representative population via an anthropometric database. Anthropometry is the scientific measurement of the human body, and various databases have been developed to summarise the sizes and shapes of different populations. In order to conduct a valid maintainability assessment, the end user population of the system under development must be defined. For maintainability analysis, maintenance tasks are then trialled on the new design using a representative range of personnel from this population.
Agent virtual humans have in-built joint limits and constraints based on a biomechanical model, allowing the application of algorithms to define accurate human motion, called inverse kinematics. These models use the skeletal hierarchy to calculate relative motion of body parts, e.g. when an agent's left hand is raised, the model calculates that the left lower arm and upper arm must also move, within the limits of the joints.
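The idea of solving for joint angles subject to joint limits can be sketched for a single planar two-link limb. This is a minimal illustration only, not the algorithm used in any commercial package; the link lengths and the elbow limit below are invented for the example.

```python
import math

def two_link_ik(x, y, l1=0.3, l2=0.25, elbow_limits=(0.0, 2.6)):
    """Analytic inverse kinematics for a planar two-link 'arm'.

    Given a target (x, y) for the end effector (hand), return shoulder
    and elbow angles in radians, clamping the elbow to its joint limits.
    Link lengths and limits are illustrative, not real anthropometric data.
    """
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle from the target distance.
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    cos_elbow = max(-1.0, min(1.0, cos_elbow))  # numerical safety
    elbow = math.acos(cos_elbow)
    # Enforce the biomechanical joint limit, as agent models do.
    elbow = max(elbow_limits[0], min(elbow_limits[1], elbow))
    shoulder = math.atan2(y, x) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    return shoulder, elbow

shoulder, elbow = two_link_ik(0.4, 0.2)
```

Driving the whole skeletal hierarchy works the same way in principle, but over many more joints, usually with iterative rather than closed-form solvers.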
Collision detection is integrated to determine if there has been a clash between different objects within the virtual environment. This is particularly useful during maintainability assessment to illustrate when there is a collision between the virtual human and any other object.
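A broad-phase clash test of the kind described can be sketched with axis-aligned bounding boxes; real VE engines follow such a test with exact mesh-level checks. All coordinates below are invented for illustration.

```python
def aabb_collision(a_min, a_max, b_min, b_max):
    """Axis-aligned bounding-box overlap test in 3D.

    Two boxes overlap only if their extents overlap on every axis.
    This is the cheap broad-phase step; narrow-phase mesh tests follow.
    """
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

# Virtual human's hand box vs. a pipe in the engine bay (illustrative coords).
hand = ((0.10, 0.20, 0.30), (0.20, 0.30, 0.40))
pipe = ((0.15, 0.25, 0.35), (0.50, 0.28, 0.60))
clash = aabb_collision(hand[0], hand[1], pipe[0], pipe[1])  # True: boxes overlap
```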
Assessment of reach can be achieved by moving the end effectors (i.e. the hand or foot) to the desired location, either by manually dragging them or via automated means using reach algorithms, reach areas and reach volumes. Vision can be assessed by the display of windows showing the view of the virtual human, or vision cones can be generated to indicate what is within the vision of the virtual human and which objects are obscured.
A number of automated ergonomic analysis routines are available in agent virtual human software packages, including the National Institute for Occupational Safety and Health (NIOSH) lifting algorithm, the Snook lift/lower, push/pull and carry tables, energy expenditure evaluation, and the Rapid Upper Limb Assessment (RULA) posture analysis.
Motion synthesis is used to generate motion in an agent virtual human. The user positions the joints and end effectors of the agent manually, although inverse kinematics algorithms supplement this non-immersive manipulation of the virtual human. Such interaction requires some form of human-machine interaction device, and although the mouse and keyboard can be used, there are more intuitive alternatives. The «Monkey» is a desktop VE peripheral that can be used to directly manipulate virtual humans. This physical human model sits on the desktop allowing users to adjust its joints, and position sensors relay the corresponding joint angles to the virtual human. No calibration of the agent size is required, as agents can be scaled using percentile models or created with real human dimensions from an anthropometric database.
Non-immersive agent virtual humans are applied extensively in industry. Lockheed Martin used agent virtual humans to reduce maintenance costs for the F-16, F-22 and JSF aircraft (Albers and Abshire, 1998). The Boeing Company
introduced agent virtual humans of representative size (5th percentile female to
95th percentile male) to simulate tasks using operational equipment (such as NBC and Arctic clothing) in the development of the V-22 Osprey tiltrotor (Boeing, 1998). Airbus has used agent virtual humans to evaluate the emerging design of the A380 since 2002 (Krueger, 2003).
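Scaling an agent across the 5th-percentile-female to 95th-percentile-male range amounts to reading quantiles from an anthropometric distribution. The sketch below assumes normally distributed stature with invented mean and standard deviation values, not figures from any actual survey.

```python
from statistics import NormalDist

# Illustrative stature statistics in mm; these values are assumptions
# for the example, not data from any particular anthropometric database.
female_stature = NormalDist(mu=1625, sigma=64)
male_stature = NormalDist(mu=1755, sigma=70)

p5_female = female_stature.inv_cdf(0.05)   # 5th percentile female stature
p95_male = male_stature.inv_cdf(0.95)      # 95th percentile male stature
```

An agent built at each of these two statures brackets the bulk of the assumed end user population for a reach or access check.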
Non-immersive virtual environments and agent virtual humans support the simulation of tasks with end users of varying anthropometrics (wearing protective clothing if applicable), having in-built algorithms to analyse issues such as strength requirements, posture comfort and manual handling. However, non-immersive agent virtual humans do have weaknesses. These include the questionable validity of the anthropometric models and automated analysis algorithms, which must be considered if the results are to be fully accepted by the design organization. In addition, if such in-built mechanisms to assess maintainability and HF are implemented in normal design tools, there is a risk that designers will draw incorrect conclusions. This is considered by Porter et al (1995), who state that «The systems are designed to supplement an ergonomist's skills, not replace them». This risk must be carefully avoided in the short term, though designer-led assessment may be a longer-term goal. Another weakness of agent virtual humans is the means of generating motion. This is not very intuitive, forcing the user on some occasions to conduct detailed frame-by-frame editing of the postures of the agent, reducing the efficiency of the analysis.
3. Immersive virtual humans
Avatar virtual humans do not require complex representative geometry, because the live participant immersed in the VE provides the body size and constraints on the model. Ideally, sensors would track all joints and end effectors to produce realistic avatar motion, but usually the number of sensors is minimised and inverse kinematic algorithms (similar to those used in agent virtual humans) calculate intermediate joint positions and orientations. One such immersive software package is shown in Figure 3. Such systems implement very few (if any) in-built ergonomic assessment functions because they aim to enable the user to intuitively simulate tasks (by undertaking them physically). Hence a HF expert is expected to assess the task ergonomics by conducting the task themselves, or by visualizing others undertaking the task. Motion is generated in an avatar virtual human through motion capture, either off-line or on-line. On-line motion capture is achieved using positional sensors placed around the body to drive an avatar in real time. Such systems can be employed directly, i.e. with no intervention, but the end effector paths are often not accurately mimicked, resulting in a simulation lacking in realism, and it is difficult to model subtle human movement. Off-line motion capture is a two-stage process: sensors positioned on the body translate motion into data, which is post-processed using complex algorithms and human intervention to digitally recreate human motions in the VE. Although this develops a realistic simulation (it is used extensively in film making), significant human intervention is required to process the captured motion and it is still difficult to model subtle human movement.
Both on-line and off-line motion capture require a tracking system based on electromagnetic, infrared, laser, mechanical, inertial, acoustic or computer imaging technology. Whichever tracking system is employed, it is also important to determine the optimal number of sensors; generally it is best to minimise the number of sensors to reduce the encumbrance on the operator (Badler et al, 1993). The choice of tracking system and number of sensors should be based on the environment in which it is to be employed and the kind of applications it will be used for.
Examples of industrial application of immersive virtual humans are less common. The National Aeronautics and Space Administration (NASA) used very few physical prototypes in the design of the Space Station Freedom (SSF), favouring the utilization of high-fidelity, detailed digital prototypes and avatar virtual humans (Miller and Tanner, 1993). Lockheed Martin used both desktop virtual humans and immersive VEs to reduce maintenance costs for the F-16 and F-22 aircraft (Albers and Abshire, 1998). Immersive interaction with the VE using an avatar virtual human is an intuitive and efficient means of interacting with a virtual prototype. However, avatar virtual humans also suffer from a weakness common to traditional physical prototypes: it is very difficult to assess the suitability of a prototype for users with representative anthropometrics, or to simulate protective clothing. Immersive applications also possess few capabilities for automated analysis of HF or maintainability. These issues force a continued reliance on the expertise of the maintainability and HF personnel. In addition, it is not proven that having a sense of presence in the digital design adds any benefit to the assessment of maintainability. Immersive VE technologies driving avatar virtual humans are most suited to the communication of maintainability and HF challenges, visualization of the virtual prototype in the collaborative design review process, and demonstration of maintainability.
This paper outlines the basic requirements of a maintainability assessment process and how virtual humans embedded in the digital design can address these requirements. However, this can only be achieved by implementing a robust virtual maintainability concept: considering end users of varying anthropometrics and protective equipment, and evaluating issues such as posture, strength and fatigue to highlight where design changes are required due to weaknesses in physical HF or maintainability. The capability should be available to allow maintainability engineers or HF specialists to demonstrate such weaknesses to design teams, and to invoke a collaborative process to define solutions. This capability should also be applicable for the demonstration of maintainability to customers.
The effectiveness of VE technologies in delivering such a concept is dependent upon the level of environmental fidelity, the depth of immersion, the means of interaction and the HF of the VEs employed to conduct the analysis. Different VE technologies have varying strengths, but also significant weaknesses, so care must be taken in how they are deployed and which needs they are implemented to address. A robust virtual maintainability assessment and demonstration concept embraces the strengths of various VE technologies, considering where non-immersive and immersive VE technologies can play a part.
It is the challenge of the virtual maintainability concept to embrace these strengths and to understand and overcome the weaknesses of the immersive and non-immersive applications. An ideal virtual maintainability concept would capitalise on the efficiency of immersive motion capture and the effectiveness of assessment using agent virtual humans, using the intuitive interface of motion capture to drive an agent virtual human and allowing off-line automated analysis. Although some problems still exist in its implementation, the concept of this approach is extremely powerful, and one that has the potential to revolutionise maintainability assessment.
In order for such a virtual maintainability concept to become a successful reality, it is crucial for the design organisation to have a high-level strategy for effective implementation. Most importantly, there is a need to consider the effects of the concept on the organisation as a whole, and on the participants within the organisation. Although the intended net effect on the organisation is an improvement in efficiency, achieving optimal maintainability means that tasks will be redistributed between individuals or between teams, new tasks will be created for individuals, or entirely new roles will be required. These elements of the implementation mechanism must be well managed to ensure the aims are achieved.
The digital design represents a significant improvement over traditional paper drawings and physical prototypes. However, it also introduces new technological and training challenges to be overcome in order to make virtual maintainability assessment a reality.
Should aero-engines be recovered by module exchange?
Mirce Akademy, Woodbury Park, Exeter, EX5 1JJ, United Kingdom. E-mail: John.Crocker(a)DS-S.com. Accepted September 13, 2007.
This paper uses a simulation model to investigate the case for and against the policy of engine recovery by module exchange for a representative engine used on civil aircraft.
Key words: maintenance policy, helicopter engine, helicopter engine recovery, helicopter module exchange.
In 1969 Rolls-Royce designed a helicopter engine based on a modular construction. The engine could essentially be considered as an assembly of just seven modules, units or sub-assemblies. The basic philosophy behind this design was that if a module contained a part that required invasive maintenance, the engine could be recovered by removing this module and replacing it with one from the spares pool. The removed module would then be sent away to be recovered by repairing, reconditioning or replacing the errant part (or parts) and, having been recovered, would be added to the stock of [serviceable] spares.
The benefits of this philosophy are that the removed engine can be stripped (disassembled) down to the rejected module, the module replaced and the engine rebuilt in far less time than it would take to recover the module, and that this work can be done at an air force base (2nd line), whereas the module would normally be sent back to a depot (3rd line) or the contractor (4th line). Another advantage is that the base need only carry a relatively small number of spare engines and modules, and no parts.
Although most gas turbine engines since 1969 have also been designed to be modular, very few civil engines are actually recovered in the way described. Most are treated as non-modular, in so far as when they are rebuilt they use the same modules as the engine had when it entered the workshop. This ensures that the modules installed have always been operated by the same airline, so they have a certain pedigree. It also avoids difficulties of matching modules together. All components are manufactured to be within certain [engineering] tolerances. If one module is close to the lower end and the one it is being fitted to is close to the upper end, then it is possible they will not fit together or, if they do, they will not form a tight seal; the mismatch may also matter for other, more technical reasons.
Gas turbine engines used on aircraft have to meet stringent safety regulations. To minimise the probability of a catastrophic failure, such as an unconstrained disc burst, a number of parts are given a «hard life». This is an age measured in engine flying hours (EFH) or cycles, where for civil aircraft one «cycle» is typically clocked up in one flight. A life-limited component is required, as part of its airworthiness certificate, to be replaced on, or before, reaching its hard life. It is common practice to set the hard lives of all of the life-limited parts to exact multiples of the lowest. Because all of these parts are aging at the same rate, they will all reach their life limit at the same time as others with the same limit. Setting these limits as multiples tends to minimise the number of planned engine removals, albeit at the expense of the amount of life one might get from each part. If engines are recovered by module exchange, the ages of the life-limited parts are likely to get out of synchronisation quickly and hence lead to more planned arisings.
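The effect of setting hard lives as exact multiples of the lowest life can be illustrated numerically. The sketch below simply enumerates the planned removal ages over a horizon; the lives and horizon are invented for the example and parts are assumed to age at one EFH per EFH.

```python
def planned_removals(hard_lives, horizon):
    """Return the sorted EFH ages at which a planned removal occurs,
    assuming each life-limited part is replaced at every multiple of
    its own hard life and all parts accumulate age at the same rate."""
    times = set()
    for life in hard_lives:
        t = life
        while t <= horizon:
            times.add(t)  # removals at the same age coincide in one visit
            t += life
    return sorted(times)

# Lives set as exact multiples of the lowest (5000 EFH) share removal points:
synced = planned_removals([5000, 10000, 20000], 20000)   # 4 planned removals
# Non-multiple lives scatter extra removals across the same horizon:
unsynced = planned_removals([5000, 9000, 17000], 20000)  # 7 planned removals
```

The synchronised set needs four shop visits over the horizon, the unsynchronised set seven, which is the trade the text describes: fewer planned removals at the price of retiring some parts with life remaining.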
The main benefit of modularity is that it allows the system or sub-system to be recovered a great deal quicker. In the case of an aero-engine, it is generally possible to strip and rebuild it in less than ten days. By comparison, it usually takes anything from two months to nine or more to recover the modules by repairing, reconditioning or replacing the parts. Obviously, if the engine has to wait for its original modules, the rebuild cannot start until those modules have been recovered.
With the engine being out of service for so much less time, there will inevitably be a need for fewer spare engines. However, in order to turn the engine around in the minimum time, there will of course need to be spare modules on hand. Given that not all modules are likely to be rejected at every engine shop visit (recovery) and that the rejection rates for each of the modules will be different, there is likely to be a requirement for different numbers of spares of each type of module. It is also possible that the recovery times for each type of module may be different, which could introduce even more variation.
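A common simplified way to size such a module spares pool is to treat the number of modules in the repair pipeline as Poisson, with mean equal to the rejection rate times the recovery turnaround time. The rates, turnaround times and fill-rate target below are assumptions for illustration only, not data from any fleet.

```python
from math import exp, factorial

def spares_for_fill_rate(demand_rate, turnaround, target=0.95):
    """Smallest spares stock s with P(pipeline demand <= s) >= target,
    where pipeline demand is Poisson with mean demand_rate * turnaround.
    A deliberately simplified, steady-state spares model."""
    mean = demand_rate * turnaround
    s, term = 0, exp(-mean)
    cdf = term
    while cdf < target:
        s += 1
        term *= mean / s  # next Poisson probability, computed iteratively
        cdf += term
    return s

# Two module types with different rejection rates and recovery times
# (rates per day, turnarounds in days; illustrative figures):
fast_module = spares_for_fill_rate(demand_rate=0.02, turnaround=60)
slow_module = spares_for_fill_rate(demand_rate=0.01, turnaround=270)
```

The module with the slower recovery needs the larger stock even though its rejection rate is lower, which is the variation the text anticipates.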
With the aircraft industry's emphasis on safety, and the fact that engines are invariably run on a test bed before being passed as serviceable, most engine removals are due to external factors, typically foreign object damage, or age-related causes, but rarely poor design, poor quality control or maintenance-induced failures. Engine health monitoring (EHM) systems are able to prevent the actual failure of a component due to most age-related causes but are somewhat less successful when it comes to those resulting from external [non-age-related] factors. In this case, EHM will only work if the damage is not serious but merely starts off a failure mechanism, with failure occurring a significant time later (i.e. sufficient to identify a change in the monitored signals).
Within an engine there are components whose failure is classified as catastrophic, i.e. likely to cause the loss of the aircraft and/or life. In some cases, the expected life of these components will be less than that of the engine or aircraft. To reduce the risk of a catastrophe, these components may be given a «hard life» or «life limit», which means such a component has to be removed (and replaced) on or before achieving this age. The limit is set such that the probability of the component failing before this is «acceptably low». Before the limit is set, a sample of components is subjected to accelerated life testing to determine the population's expected time to failure and its variance. This policy, of course, can only work if the cause of failure is age-related; it has no place with externally caused failures or those resulting from inadequate design or quality control.
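If the accelerated life tests are summarised by a Weibull fit, the hard life corresponding to an «acceptably low» failure probability follows directly by inverting the Weibull distribution function. The characteristic life, shape parameter and target probability below are purely illustrative assumptions, not certification values.

```python
from math import exp, log

def weibull_life_limit(eta, beta, p_fail=1e-3):
    """Age at which the Weibull failure probability reaches p_fail.

    From F(t) = 1 - exp(-(t/eta)**beta), solving F(t) = p_fail gives
    t = eta * (-ln(1 - p_fail))**(1/beta).  eta (characteristic life)
    and beta (shape) would come from the accelerated life test fit.
    """
    return eta * (-log(1.0 - p_fail)) ** (1.0 / beta)

# Illustrative fit: characteristic life 30 000 EFH, shape 3.0,
# with a one-in-a-thousand acceptable failure probability.
limit = weibull_life_limit(eta=30000, beta=3.0, p_fail=1e-3)
```

With these invented numbers the hard life comes out near a tenth of the characteristic life, which shows how strongly a small acceptable probability discounts the demonstrated life.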
If an engine is taken off-wing (or out of service) because a component's age has reached its hard life, the event is generally referred to as a «planned arising». Clearly, the event can be planned because the age at which it is due to take place is known, the age of the component is known and the rate at which the component is aging is also known; it is therefore relatively easy to predict when the event will take place, given nothing happens to change it in the meantime.
Engines removed due to EHM are also sometimes referred to as planned arisings, although in this case the time between when the onset of failure is first detected and when the component is actually expected to fail will generally be very much shorter, and may indeed be a matter of only a few hours. At present, the times-to-failure distributions do not differentiate between actual failures and those removed just before they fail, the failure having been prevented by EHM, so for the purposes of this paper these will all be referred to as «unplanned arisings».
It should be noted that with aero-engines there are very few occasions when an engine is removed for maintenance and no fault is found. For this exercise, such events were ignored. There is, of course, a chance that the «fault» that has been found may not be the one causing the original symptoms which led to the engine being removed, but that is unlikely to stop it being recorded as the primary cause of rejection.
If a part does fail, it is quite common for it to cause damage to other parts, usually downstream of it, on its way back through the engine and out into the exhaust. A bird strike, for example, may cause damage to the LP Compressor (fan) and, if large enough, continue through to damage the IP Compressor and even the HP Compressor. With high by-pass engines, the probability of reaching the HP Compressor is likely to be very small (unless the bird ingested happened to be an emu, in which case the whole engine would likely be written off). Because of the serial nature of an engine, a failure of a component towards the front of the engine is much more likely to cause damage to those nearer the back. There is, however, a relatively low probability that if component j fails it will cause damage to component l without having first caused damage to component k (if j is forward of k, which is forward of l).
When an engine is opened up (or rather, stripped down) for invasive maintenance, the opportunity is taken to inspect all of those parts that have been exposed. Inspections can be done while the engine is on-wing using a borescope (like the endoscopes used in medicine), but these may not be 100% effective, and the damage found may not be sufficient to justify an engine removal. If, however, the engine has already been removed, then the damage found may be too much to allow the part to remain in the engine. The damage may be quite unrelated to the primary cause of the engine removal, so it is recorded as «found damage» rather than «caused damage».
During the engine recovery, life-limited parts will be checked to determine their life remaining. If this is less than the desired amount, usually referred to as the «Minimum Issue Life» or MISL, the part will be rejected (as a secondary time expiry, «2TX»). The actual value of the MISL may depend on where the engine and/or module containing the part is recovered. In the Defence sector, engines are typically recovered by module exchange at the base (2nd line), whereas a module would normally be sent to the depot/maintenance unit (3rd line) or to a contractor (4th line), where it would normally be subject to a higher MISL (say 400 hours rather than 250 at a base).
Parts with age-related failure modes may also be given «soft lives». In this case, if the part has achieved an age which exceeds its soft life at the time the engine has been removed for invasive maintenance, that part will be rejected and either reconditioned or replaced. The argument supporting a soft life is that there is a significant chance that this particular part will be the cause of the next engine removal if it is left in the engine and that the expected time to that removal is less than some arbitrarily defined limit.
An alternative policy, which is considered to be superior to the soft life one, is to use the «target build life» to determine which parts, if any, should be replaced during a given engine shop visit. The expected build life for a new engine is defined as the age (t) such that the sum of the cumulative hazard functions for all of the primary failure modes of all of the components in the engine is unity. Note that for the purposes of this exercise we can assume an engine is a series system; in reality this is very close to the truth.
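As a sketch of this calculation (a hypothetical illustration, not the PSALMS implementation), the build life can be found numerically by bisection, assuming each primary failure mode has a two-parameter Weibull distribution with cumulative hazard H(t) = (t/η)^β:

```python
def cumulative_hazard(t, eta, beta):
    # Weibull cumulative hazard: H(t) = (t / eta) ** beta
    return (t / eta) ** beta

def target_build_life(modes, upper=1_000_000.0, tol=1e-6):
    """Find the age t at which the summed cumulative hazards of all
    primary failure modes reach unity (series-system assumption).
    'modes' is a list of (eta, beta) pairs -- illustrative values only."""
    lo, hi = 0.0, upper
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if sum(cumulative_hazard(mid, eta, beta) for eta, beta in modes) < 1.0:
            lo = mid      # total hazard still below 1: build life lies later
        else:
            hi = mid      # total hazard at or above 1: build life lies earlier
    return (lo + hi) / 2.0

# Hypothetical failure modes (characteristic life in EFH, shape parameter):
modes = [(8000.0, 3.0), (12000.0, 2.0), (20000.0, 1.5)]
build_life = target_build_life(modes)
```

Because the summed cumulative hazard is monotonic in t, bisection always converges; any part whose remaining hazard over the target build life is too large would be a candidate for replacement at the shop visit.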
Clearly it is impractical to divide a fleet of aircraft into two sets and maintain one set using module exchange and the other refitting the same modules over the life of the fleet to determine which is the more cost effective. The alternative is to produce a mathematical model which can simulate the operation of a fleet and the maintenance and support thereof using different policies.
At around the same time as the Engineers were designing the first modular engine, the Operational Research Scientists at Rolls-Royce in Derby were developing a discrete event simulation model in Fortran to run on their IBM mainframe computer. Some six years later, the author took over the development of this model, called «ORACLE» (Operational Research AirCraft Logistics Evaluator), and has maintained this responsibility to this day, although the latest version is now called PSALMS (Platform Support, Arisings, Logistics and Maintenance Simulation), which is currently in the process of replacing the 1990 MEAROS (Modular Engine Arisings, Repair and Overhaul Simulation) model. Both are in Simscript II.5, with the earlier model having been written by CACI Products, jointly funded by the [UK] Ministry of Defence and Rolls-Royce.
PSALMS is a discrete event simulation which can model the operation, maintenance and support of any number of platforms or assets of any number of different types from the time they are delivered into service until the time they are decommissioned or scrapped. For the purposes of this exercise, only one type of platform will be considered: a relatively small fleet of 30 four-engined commercial airliners.
3.1 System Configuration
Each platform (in this case an aircraft) can be made up of a number of line replaceable units (LRU), e.g. engines. Each LRU may be broken down into a number of modules and each of these into a number of parts. There may be multiple occurrences of each type of component (LRU, module or part) within the system, some of which may be redundant.
3.2 Component Reliability
Every type of component can own a set of failure modes, i.e. can fail in a number of different ways. Each failure mode will be described by its own Weibull time-to-failure distribution. Each part may have a hard life, i.e. be life-limited. The model also allows components to have routine inspections which can be carried out on or off the platform but, for this exercise, this option was not used.
In addition, components can have secondary rejection causes including benign failure modes, caused damage, found damage, minimum issue lives and soft lives. These come into effect once the engine has been rejected for a primary cause and in ways that are, to a certain extent, dependent on that cause.
The age for each type of component rejection can be given in engine flying hours (EFH) or cycles. The average number of cycles per EFH is dependent on the type of component and the role the aircraft is flying at the time.
3.3 Aircraft Operation
When aircraft are delivered, they are normally assigned to a task which will be located at one of the operational sites. Each task is given a role which defines the length of each flight, mission or sortie, the minimum time between the end of one flight and the start of the next (referred to as the preparation time) and, amongst other things, the cyclic exchange rate. The task is given a start and end date, the number of aircraft to be assigned to it at the start and the target number of flying hours per aircraft per month (referred to as the forecast flying rate). The model selects the aircraft and schedules each flight based on the number of hours achieved to date against the target, taking into account the number of days to the end of the task. If there are no aircraft available at the time the flight is scheduled, the task simply waits until a suitable aircraft does become available. If several aircraft happen to be available at this time, the model effectively chooses the one which has been on the ground for the longest time.
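The selection and pacing rules above can be sketched as follows (a simplified illustration with hypothetical field names, not the model's actual data structures):

```python
def choose_aircraft(available):
    """Of the aircraft available for a flight, pick the one that has been
    on the ground longest, i.e. with the earliest ready timestamp."""
    return min(available, key=lambda ac: ac["ready_since"])

def hours_behind_target(aircraft, task, elapsed_days):
    """How far an aircraft is behind the forecast flying rate: the target
    hours it should have accrued by now minus the hours actually flown.
    A positive result means the aircraft should be prioritised."""
    target_to_date = task["hours_per_month"] * (elapsed_days / 30.0)
    return target_to_date - aircraft["hours_flown"]

fleet = [
    {"id": 1, "ready_since": 5.0, "hours_flown": 100.0},
    {"id": 2, "ready_since": 2.0, "hours_flown": 90.0},
]
next_up = choose_aircraft(fleet)   # aircraft 2: on the ground since day 2
```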
At the end of each flight, the age of the aircraft is incremented by the duration of the flight. This is then checked against the time to next «aircraft grounding» and the model either grounds the aircraft or starts preparing it for the next flight.
3.4 Aircraft Grounding
After the aircraft has been grounded, the cause is identified, which for this exercise will always be due to one of the engines having reached its time to next rejection. An aircraft could also be grounded because it has been scrapped, crashed, is due for a routine service or because one of the other LRU (line replaceable/repairable units) has reached its time to next rejection. At the time the aircraft goes into service (from new, at the start of a task or after having been recovered following a grounding), the model determines the next time it is due to be grounded for all of the possible causes, going down through each LRU to each module and to each part within them. For components with failure modes, the times to failure are sampled from the given failure mode distribution, taking into account the current age of the component (in the appropriate ageing units). These times to failure remain unchanged throughout the component's life until it either reaches this age, is rejected for some [other] reason or there is a change to one of the parameters affecting it.
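Taking the current age into account amounts to sampling from the Weibull distribution conditional on survival to that age. A minimal sketch using inverse-transform sampling (illustrative only; PSALMS itself is written in Simscript II.5):

```python
import math
import random

def conditional_weibull_age(eta, beta, current_age, rng):
    """Sample a total failure age from Weibull(eta, beta), conditional on
    the component having already survived to current_age.  Inverts the
    conditional survivor function S(t) / S(current_age)."""
    u = rng.random()
    h_now = (current_age / eta) ** beta   # cumulative hazard accrued so far
    # Solve H(t) = h_now - ln(1 - u) for t; always yields t >= current_age.
    return eta * (h_now - math.log(1.0 - u)) ** (1.0 / beta)

rng = random.Random(1)
failure_age = conditional_weibull_age(10000.0, 2.5, 4000.0, rng)
time_to_next_rejection = failure_age - 4000.0   # remaining hours from now
```

Because the sampled age is fixed once drawn, it matches the model's behaviour of holding a time to failure constant until the component reaches it, is rejected for another reason, or a parameter affecting it changes.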
3.5 LRU Recovery
Having identified all of the LRU that have exceeded their «next time to rejection», the model then decides what to do with them. In some cases, recovery of the LRU may be possible whilst it is still attached to the platform («on-wing») but in most cases it will be necessary to first remove it from the aircraft. Either way, the model will «inspect» any components which are defined as «visible» by the user. It will also check the hours/cycles remaining on life-limited parts to see if any are within the minimum issue life applicable to that component at that site. If this identifies additional rejections, then the model will check again to see whether it is now necessary to remove the LRU (if it has not already done so).
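The MISL check on life-limited parts can be sketched like this (the data layout and the part names are hypothetical):

```python
def misl_rejections(parts, misl_at_site):
    """Return the life-limited parts whose remaining life falls below the
    Minimum Issue Life for their type at this site; these become
    secondary time expiries (2TX)."""
    rejected = []
    for part in parts:
        remaining = part["hard_life"] - part["age"]
        if remaining < misl_at_site.get(part["type"], 0.0):
            rejected.append(part)
    return rejected

parts = [
    {"type": "hp_disc", "age": 8200.0, "hard_life": 8500.0},  # 300 h left
    {"type": "lp_disc", "age": 3000.0, "hard_life": 9000.0},  # 6000 h left
]
base_misl = {"hp_disc": 250.0, "lp_disc": 250.0}    # e.g. 2nd Line
depot_misl = {"hp_disc": 400.0, "lp_disc": 400.0}   # e.g. 3rd/4th Line
```

With these illustrative values, the hp_disc passes the base check (300 hours remaining against a 250-hour MISL) but would be rejected at the depot (against 400 hours), which is why the model must repeat the check whenever an engine moves to a deeper echelon.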
If the LRU has been removed from the aircraft, it will be moved to one of the maintenance sites. The maintenance sites are defined in a sort of hierarchical structure with the operational sites at «echelon» (or level) 1. In the case of military aircraft, this is the «O-Level», 1st Line or Squadron. These are typically supported by the intermediate (I-Level), 2nd Line or Base at «Echelon 2», with the «Depot», «D-Level», «MU» or 3rd Line at «Echelon 3» and the contractor or 4th Line at «Echelon 4».
Normally engines can be stripped, rebuilt and tested at Echelon 2 but, if the manufacturer has negotiated a «Total Care Package» with the operator, they may be sent straight to the contractor, with 2nd and 3rd Lines being no more than supply sites (storage areas for serviceable spares). All of this can be defined by the user. The model simply determines from the data given which is the nearest site capable of doing the work. It then checks this site to see if it has the capacity to handle this particular LRU and, if it has, the LRU will be moved straight to the stripping facility there. Otherwise the model checks to see if the LRU can be put into a holding queue until the required resource is available and, if so, adds it to that queue. If there is no room at the site, the model checks the capacities of the facilities at the next echelon and so on until it either reaches the contractor or has been accepted at some intermediate echelon.
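That capability-then-capacity walk up the echelons might be sketched as follows (hypothetical site structure; sites are assumed ordered from nearest to deepest echelon, with the contractor last):

```python
def route_lru(lru_type, sites):
    """Return (site, disposition) for the first site that can recover this
    LRU type: straight to the workshop if it has spare capacity, into the
    holding queue if there is room, otherwise try the next echelon.  The
    last entry is assumed to be the contractor, which always accepts."""
    for site in sites:
        if lru_type not in site["capabilities"]:
            continue                      # no capability: skip this echelon
        if site["in_work"] < site["work_capacity"]:
            return site, "workshop"       # capacity free: strip it here
        if site["queued"] < site["queue_capacity"]:
            return site, "queue"          # wait here for a resource
    return sites[-1], "queue"             # fell through to the contractor

sites = [
    {"name": "base", "capabilities": {"engine"}, "in_work": 2,
     "work_capacity": 2, "queued": 3, "queue_capacity": 3},
    {"name": "depot", "capabilities": {"engine"}, "in_work": 1,
     "work_capacity": 4, "queued": 0, "queue_capacity": 10},
]
```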
If the LRU has been moved to a deeper echelon, it is likely that a different MISL will apply and usually this will be higher, so the model needs to check again to see if any components are within this new MISL. If this causes additional components to be rejected, it might make new components «visible» for inspection and it might mean the engine has to go to a deeper echelon for recovery, where a new MISL may apply.
Having identified all of the components that need to be repaired or replaced and decided where this should be done, the model then starts the stripping process. It uses a strip sequence to determine which offspring have to be removed to gain access to those that are rejected. From this it also determines which is the deepest and uses this to decide how long it will take to complete the strip.
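A strip sequence can be sketched as an ordered list of module positions: everything up to the deepest rejected item must come off, and that depth drives the strip time (module names and the per-level time are hypothetical):

```python
def strip_plan(strip_sequence, rejected, time_per_level=4.0):
    """Modules that must be removed to expose every rejected one, assuming
    access strictly follows the strip order, plus the resulting strip
    time driven by the deepest position reached."""
    deepest = max(strip_sequence.index(name) for name in rejected)
    removed = strip_sequence[: deepest + 1]
    return removed, (deepest + 1) * time_per_level

sequence = ["lp_compressor", "ip_compressor", "hp_compressor",
            "combustor", "hp_turbine", "ip_turbine", "lp_turbine"]
removed, hours = strip_plan(sequence, ["hp_compressor"])
```

The same depth figure is reused for the rebuild, mirroring the model's behaviour of timing the rebuild by the deepest component replaced.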
Once an LRU has been stripped, it is then ready to be rebuilt. The model will try to refit the modules that came off the LRU but, if these are not available, then it must either find alternatives or wait until they are available. If LRU recovery by module exchange is permitted, then the model starts to search the supply environment for suitable replacement modules; otherwise it puts the rebuild on hold until the removed modules have been recovered. The rebuild process cannot start until all of the offspring have been amassed at the site of the parent. None of the modules that are not currently installed are allocated to the parent until the last one arrives, as they could otherwise be used by another LRU.
When a full complement of modules is available, the model requests a rebuild resource and waits until this too becomes available (in practice, this is instantly), then starts the rebuild, taking the time corresponding to the deepest component to be replaced (in the same way as it did for stripping).
3.6 Removed Components
Components removed purely for access will be added to the spares pool; however, they will keep the identity of their parents so that they can be refitted to the same engine if this is required. If the engine is to be recovered by module exchange, then there is no need to refit these components to the same parent but, in practice, they would normally be refitted. The exception would be if there was a shortage of a particular module and using one from this engine would allow another engine to be recovered sooner than waiting for a recovered module to become available. It is unlikely, however, that additional modules would be removed from an engine in order to meet a requirement elsewhere.
The rejected components go through a very similar process to the engine. The model has already identified which of its offspring have been rejected, so it can now decide where to send it for recovery. This will also be based firstly on capability, then on capacity. If it has been decided to move the component to another site/echelon, then the model will need to check the MISL and, if this adds new components to the rejected list, check the «visibility» and the capability again.
The parent is then stripped, using a time based on the depth of strip required. The serviceable components removed for access are added to the spares pool. The model now needs to decide what to do with the rejected components. If these contain any rejected offspring, then the process is repeated at this new level of indenture but eventually it will get to the lowest level component or to one which does not contain any rejected offspring. It is possible for a module to be rejected without any of its constituent parts being implicated, often because there is insufficient in-service failure data to allow a confident estimate of part-level time-to-failure distribution parameters, so the failure mode is applied at the module level.
3.7 Repair, Recondition or Replace
Rejected components with no rejected offspring may be either repaired or reconditioned (which is effectively the same as being scrapped and replaced with new). The decision will be based on the cause of rejection: if a life-limited part has reached its hard life or is within the MISL, or a component has exceeded its soft life, then it would be reconditioned. If the component is «beyond repair» it too would be reconditioned. The model uses a simple probability to decide if this is the case.
As with stripping, different sites can have different levels of capability, so one site may be able to repair some modules and parts but not, for example, recondition them. Each site may have a limited capacity in terms of how many repairs or reconditions it can handle simultaneously, or how many it can hold in a waiting queue if all of its resources are busy. If it is decided the given component has to be moved, then it will be sent to the first site with both capability and capacity, and it will take a given time to get there. Once there, it either goes into the holding queue or straight into the relevant workshop. Depending again on whether it is a repair or a recondition, the model will determine how long the recovery will take. In theory, it could sample this time from the relevant distribution but in practice recovery times are usually given as constants. In practice as well, it is unusual to set limits on capacity, as the recovery times are usually calculated from the total elapsed time components spend in a workshop and are rarely broken down into time spent waiting and «hands-on».
Once a component has been repaired or reconditioned, it is added to the spares pool at the recovery site. The model will then check to see, firstly, if a parent is waiting for a spare of this type or, if not, whether another site has recorded a shortage (against its reserve level) or, finally, whether adding this one to stock would cause this site to exceed its maximum holding level. If the spare is needed elsewhere, then it is transported to the demanding site; otherwise it is simply added to the spares pool at the current location.
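That priority order for a freshly recovered spare can be sketched as follows (field names and the overflow rule are hypothetical simplifications):

```python
def allocate_spare(part_type, here, waiting_parents, other_sites):
    """Decide where a freshly recovered spare goes, in priority order:
    a parent waiting for this type, then a site short against its
    reserve level, then away if the local pool would exceed its
    maximum holding, otherwise into the local pool."""
    for parent in waiting_parents:
        if parent["needs"] == part_type:
            return ("fit_to", parent["id"])
    for site in other_sites:
        if site["stock"].get(part_type, 0) < site["reserve"].get(part_type, 0):
            return ("ship_to", site["name"])
    max_hold = here["max_holding"].get(part_type, float("inf"))
    if here["stock"].get(part_type, 0) + 1 > max_hold and other_sites:
        # over the maximum holding here: push it to the first other site
        return ("ship_to", other_sites[0]["name"])
    return ("store", here["name"])

here = {"name": "depot", "stock": {"module_a": 2},
        "max_holding": {"module_a": 5}}
sites = [{"name": "base", "stock": {"module_a": 0},
          "reserve": {"module_a": 1}}]
```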