Thursday, 12 September 2013

TRANSMISSION (mechanics)

A machine consists of a power source and a power transmission system, which provides controlled application of the power. Merriam-Webster defines transmission as an assembly of parts, including the speed-changing gears and the propeller shaft, by which power is transmitted from an engine to a live axle. Often, transmission refers simply to the gearbox that uses gears and gear trains to provide speed and torque conversions from a rotating power source to another device.

In British English, the term transmission refers to the whole drive train, including clutch, gearbox, prop shaft (for rear-wheel drive), differential, and final drive shafts. In American English, however, a gearbox is any device that converts speed and torque, whereas a transmission is a type of gearbox that can be "shifted" to dynamically change the speed-torque ratio such as in a vehicle.

The most common use is in motor vehicles, where the transmission adapts the output of the internal combustion engine to the drive wheels. Such engines need to operate at a relatively high rotational speed, which is inappropriate for starting, stopping, and slower travel. The transmission reduces the higher engine speed to the slower wheel speed, increasing torque in the process. Transmissions are also used on pedal bicycles, fixed machines, and anywhere rotational speed and torque must be adapted.

Often, a transmission has multiple gear ratios (or simply "gears"), with the ability to switch between them as speed varies. This switching may be done manually (by the operator), or automatically. Directional (forward and reverse) control may also be provided. Single-ratio transmissions also exist, which simply change the speed and torque (and sometimes direction) of motor output.

In motor vehicles, the transmission is generally connected to the engine crankshaft via some combination of flywheel, clutch, and fluid coupling, partly because internal combustion engines cannot run below a particular speed. The output of the transmission is transmitted via driveshaft to one or more differentials, which in turn drive the wheels. While a differential may also provide gear reduction, its primary purpose is to permit the wheels at either end of an axle to rotate at different speeds (essential to avoid wheel slippage on turns) as it changes the axis of rotation.
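To see why the wheels must be allowed to rotate at different speeds, here is a minimal sketch (all figures hypothetical) of the inner- and outer-wheel speeds for one axle in a steady corner:

```python
import math

def wheel_speeds_in_turn(speed_mps, turn_radius_m, track_width_m, wheel_radius_m):
    """Rotational speed (rpm) of the inner and outer wheels of one axle
    while the vehicle corners at a steady speed.

    The axle midpoint follows the turn radius; each wheel follows its own
    arc, so the outer wheel must turn faster than the inner one. Without a
    differential, one tyre would have to slip to make up the difference.
    """
    yaw_rate = speed_mps / turn_radius_m            # rad/s about the turn centre
    inner_arc = turn_radius_m - track_width_m / 2   # radius of inner wheel's path
    outer_arc = turn_radius_m + track_width_m / 2
    to_rpm = lambda arc: (yaw_rate * arc / wheel_radius_m) * 60 / (2 * math.pi)
    return to_rpm(inner_arc), to_rpm(outer_arc)

# 36 km/h (10 m/s) around a 20 m radius turn; 1.5 m track, 0.3 m wheel radius
inner_rpm, outer_rpm = wheel_speeds_in_turn(10, 20, 1.5, 0.3)
```

The outer wheel ends up turning several percent faster than the inner one, which is exactly the difference the differential absorbs.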

Conventional gear/belt transmissions are not the only mechanism for speed/torque adaptation. Alternative mechanisms include torque converters and power transformation (for example, diesel-electric transmission and hydraulic drive system). Hybrid configurations also exist.

The simplest transmissions, often called gearboxes to reflect their simplicity (although complex systems are also called gearboxes in the vernacular), provide gear reduction (or, more rarely, an increase in speed), sometimes in conjunction with a right-angle change in the direction of the shaft (typical in helicopters). These are often used on PTO-powered agricultural equipment, since the axial PTO shaft is at odds with the usual need for the driven shaft, which is either vertical (as with rotary mowers) or extends horizontally from one side of the implement to the other (as with manure spreaders, flail mowers, and forage wagons). More complex equipment, such as silage choppers and snowblowers, has drives with outputs in more than one direction.

The gearbox in a wind turbine converts the slow, high-torque rotation of the turbine into much faster rotation of the electrical generator. These are much larger and more complicated than the PTO gearboxes in farm equipment. They weigh several tons and typically contain three stages to achieve an overall gear ratio from 40:1 to over 100:1, depending on the size of the turbine. (For aerodynamic and structural reasons, larger turbines have to turn more slowly, but the generators all have to rotate at similar speeds of several thousand rpm.) The first stage of the gearbox is usually a planetary gear, for compactness, and to distribute the enormous torque of the turbine over more teeth of the low-speed shaft. Durability of these gearboxes has been a serious problem for a long time.
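Since the ratios of the individual stages simply multiply, the overall step-up can be sketched as follows (the stage values are illustrative, not taken from any real turbine):

```python
def overall_ratio(stage_ratios):
    """Overall ratio of a multi-stage gearbox: individual stage ratios multiply."""
    result = 1.0
    for r in stage_ratios:
        result *= r
    return result

# Hypothetical three-stage layout: planetary first stage, then two helical stages.
stages = [6.0, 4.0, 3.75]
ratio = overall_ratio(stages)        # 90:1 overall
rotor_rpm = 15                       # slow, high-torque rotor
generator_rpm = rotor_rpm * ratio    # 1350 rpm at the generator
```

This is why even modest per-stage ratios reach the 40:1 to 100:1 range the text describes after only three stages.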

Regardless of where they are used, these simple transmissions all share an important feature: the gear ratio cannot be changed during use. It is fixed at the time the transmission is constructed.

Automotive basics

The need for a transmission in an automobile is a consequence of the characteristics of the internal combustion engine. Engines typically operate over a range of 600 to about 7000 revolutions per minute (though this varies, and is typically less for diesel engines), while the car's wheels rotate between 0 rpm and around 1800 rpm.
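The mismatch can be quantified with a short sketch; the tyre diameter and overall ratio used here are assumptions for illustration:

```python
import math

def wheel_rpm(speed_kmh, tyre_diameter_m):
    """Wheel rotational speed implied by a given road speed."""
    circumference_m = math.pi * tyre_diameter_m      # distance per revolution
    metres_per_min = speed_kmh * 1000 / 60
    return metres_per_min / circumference_m

def engine_rpm(speed_kmh, tyre_diameter_m, overall_ratio):
    """Engine speed for a given road speed and total (gearbox x final-drive) ratio."""
    return wheel_rpm(speed_kmh, tyre_diameter_m) * overall_ratio

# At 100 km/h on a 0.63 m tyre the wheels turn at roughly 840 rpm; an overall
# ratio near 3:1 keeps the engine in its comfortable mid-range.
cruise_engine_rpm = engine_rpm(100, 0.63, 3.0)
```

At low road speeds the same engine range demands much higher ratios, which is exactly the gap multiple gears (or a CVT) must cover.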

Furthermore, the engine provides its highest torque and power outputs unevenly across the rev range resulting in a torque band and a power band. Often the greatest torque is required when the vehicle is moving from rest or traveling slowly, while maximum power is needed at high speed. Therefore, a system that transforms the engine's output so that it can supply high torque at low speeds, but also operate at highway speeds with the motor still operating within its limits, is required. Transmissions perform this transformation.

The dynamics of a car vary with speed: at low speeds, acceleration is limited by the inertia of vehicular gross mass; while at cruising or maximum speeds wind resistance is the dominant barrier.

Many transmissions and gears used in automotive and truck applications are contained in a cast iron case, though aluminium is increasingly used for its lower weight, especially in cars. There are usually three shafts: a mainshaft, a countershaft, and an idler shaft.

The mainshaft extends outside the case in both directions: the input shaft towards the engine, and the output shaft towards the rear axle (on rear-wheel-drive cars; front-wheel-drive cars generally have the engine and transmission mounted transversely, with the differential forming part of the transmission assembly). The shaft is suspended by the main bearings, and is split towards the input end. At the point of the split, a pilot bearing holds the shafts together. The gears and clutches ride on the mainshaft, the gears being free to turn relative to the mainshaft except when engaged by the clutches.

Types of automobile transmissions include manual, automatic, and semi-automatic transmissions.
Manual

Manual transmissions come in two basic types:

    A simple but rugged sliding-mesh or unsynchronized/non-synchronous system, where straight-cut spur gear sets spin freely, and must be synchronized by the operator matching engine revs to road speed, to avoid noisy and damaging clashing of the gears
    The now common constant-mesh gearboxes, which can include non-synchronised, or synchronized/synchromesh systems, where typically diagonal cut helical (or sometimes either straight-cut, or double-helical) gear sets are constantly "meshed" together, and a dog clutch is used for changing gears. On synchromesh boxes, friction cones or "synchro-rings" are used in addition to the dog clutch to closely match the rotational speeds of the two sides of the (declutched) transmission before making a full mechanical engagement.

The former type was standard in many vintage cars (alongside e.g. epicyclic and multi-clutch systems) before the development of constant-mesh manuals and hydraulic-epicyclic automatics, as well as in older heavy-duty trucks, and can still be found in use in some agricultural equipment. The latter is the modern standard for on- and off-road manual and semi-automatic transmissions, although it may be found in many forms; e.g., non-synchronised straight-cut in racetrack or super-heavy-duty applications, non-synchro helical in the majority of heavy trucks and motorcycles and in certain classic cars (e.g. the Fiat 500), and partly or fully synchronised helical in almost all modern manual-shift passenger cars and light trucks.

Manual transmissions are the most common type outside North America and Australia. They are cheaper and lighter, and usually give better performance and fuel efficiency (although automatic transmissions with torque converter lockup and advanced electronic controls can provide similar results). It is customary for new drivers to learn, and be tested, on a car with a manual gear change. In Malaysia and Denmark all cars used for testing (and, because of that, virtually all those used for instruction as well) have a manual transmission. In Japan, the Philippines, Germany, Poland, Italy, Israel, the Netherlands, Belgium, New Zealand, Austria, Bulgaria, the UK, Ireland, Sweden, Norway, Estonia, France, Spain, Switzerland, the Australian states of Victoria, Western Australia and Queensland, Finland, Latvia, Lithuania and the Czech Republic, a test pass using an automatic car does not entitle the driver to use a manual car on the public road; a test with a manual car is required. Manual transmissions are much more common than automatic transmissions in Asia, Africa, South America and Europe.

Manual transmissions can include both synchronized and unsynchronized gearing. For example, reverse gear is usually unsynchronised, as the driver is only expected to engage it when the vehicle is at a standstill. Many older (up to 1970s) cars also lacked synchro on first gear (for various reasons: cost, typically "shorter" overall gearing, engines typically having more low-end torque, the extreme wear on a frequently used first-gear synchroniser), meaning first could likewise only be used for moving away from a stop unless the driver became adept at double-declutching and had a particular need to regularly downshift into the lowest gear.

Some manual transmissions have an extremely low ratio for first gear, called a creeper gear or granny gear. Such gears are usually not synchronized. This feature is common on pickup trucks tailored to trailer-towing, farming, or construction-site work. During normal on-road use, the truck is usually driven without using the creeper gear at all, and second gear is used from a standing start. Some off-road vehicles, most particularly the Willys Jeep and its descendants, also had transmissions with a "granny first" either as standard or as an option, but this function is now more often provided by a low-range transfer gearbox attached to a normal fully synchronised transmission.
Non-synchronous

Some commercial applications use non-synchronized manual transmissions that require a skilled operator. Depending on the country, many local, regional, and national laws govern operation of these types of vehicles (see Commercial Driver's License). This class may include commercial, military, agricultural, or engineering vehicles. Some of these may use combinations of types for multi-purpose functions. An example is a power take-off (PTO) gear. The non-synchronous transmission type requires an understanding of gear range, torque, engine power, and multi-functional clutch and shifter functions. Also see Double-clutching, and Clutch-brake sections of the main article.
Automatic

Most modern North American and Australian cars, and some European and Japanese cars, have an automatic transmission that selects an appropriate gear ratio without any operator intervention. They primarily use hydraulics to select gears, depending on pressure exerted by fluid within the transmission assembly. Rather than using a clutch to engage the transmission, a fluid coupling or torque converter is placed between the engine and transmission. It is possible for the driver to control the number of gears in use or to select reverse, though precise control of which gear is in use may or may not be possible.

Automatic transmissions are easy to use. However, in the past, automatic transmissions of this type had a number of problems: they were complex and expensive, sometimes had reliability problems (which sometimes led to costly repairs), were often less fuel-efficient than their manual counterparts (due to "slippage" in the torque converter), and their shift times were slower than a manual's, making them uncompetitive for racing. With the advancement of modern automatic transmissions this has changed.

Attempts to improve the fuel efficiency of automatic transmissions include the use of torque converters that lock up beyond a certain speed or in higher gear ratios, eliminating power loss, and overdrive gears that automatically actuate above certain speeds. In older transmissions, both technologies could be intrusive when conditions were such that they repeatedly cut in and out as speed and such load factors as grade or wind varied slightly. Current computerized transmissions possess complex programming that both maximizes fuel efficiency and eliminates intrusiveness. This is due mainly to electronic rather than mechanical advances, though improvements in CVT technology and the use of automatic clutches have also helped. A few cars, including the 2013 Subaru Impreza and the 2012 model of the Honda Jazz sold in the UK, actually claim marginally better fuel consumption for the CVT version than the manual version.

For certain applications, the slippage inherent in automatic transmissions can be advantageous. For instance, in drag racing, the automatic transmission allows the car to stop with the engine at a high rpm (the "stall speed") to allow for a very quick launch when the brakes are released. In fact, a common modification is to increase the stall speed of the transmission. This is even more advantageous for turbocharged engines, where the turbocharger must be kept spinning at high rpm by a large flow of exhaust to maintain the boost pressure and eliminate the turbo lag that occurs when the throttle suddenly opens on an idling engine.
Semi-automatic

A semi-automatic transmission is a hybrid form in which an integrated control system handles manipulation of the clutch automatically, but the driver can still—and may be required to—take manual control of gear selection. This is sometimes called a "clutchless manual" or "automated manual" transmission. Many of these transmissions allow the driver to fully delegate gear shifting to the control system, which then effectively acts as a regular automatic transmission. They are generally designed using manual transmission "internals", and when used in passenger cars have synchromesh-operated helical constant-mesh gear sets.

Early semi-automatic systems used a variety of mechanical and hydraulic systems—including centrifugal clutches, torque converters, electro-mechanical (and even electrostatic) and servo/solenoid controlled clutches—and control schemes—automatic declutching when moving the gearstick, pre-selector controls, centrifugal clutches with drum-sequential shift requiring the driver to lift the throttle for a successful shift, etc.—and some were little more than regular lock-up torque converter automatics with manual gear selection.

Most modern implementations, however, are standard or slightly modified manual transmissions (and very occasionally modified automatics—even including a few cases of CVTs with "fake" fixed gear ratios), with servo-controlled clutching and shifting under command of the central engine computer. These are intended as a combined replacement option both for more expensive and less efficient "normal" automatic systems, and for drivers who prefer manual shift but are no longer able to operate a clutch, and users are encouraged to leave the shift lever in fully automatic "drive" most of the time, only engaging manual-sequential mode for sporty driving or when otherwise strictly necessary.

Specific types of this transmission include: Easytronic, Tiptronic and Geartronic, as well as the systems used as standard in all ICE-powered Smart-MCC vehicles, and on geared step-through scooters such as the Honda Super Cub or Suzuki Address.

A dual-clutch transmission alternately uses two sets of internals, each with its own clutch, so that a "gearchange" actually only consists of one clutch engaging as the other disengages—providing a supposedly "seamless" shift with no break in (or jarring reuptake of) power transmission. Each clutch's attached shaft carries half of the total input gear complement (with a shared output shaft), including synchronised dog clutch systems that pre-select which of its set of ratios is most likely needed at the next shift, under command of a computerised control system. Specific types of this transmission include: Direct-Shift Gearbox.

There are also sequential transmissions that use the rotation of a drum to switch gears, much like those of a typical fully manual motorcycle. These can be designed with a manual or automatic clutch system, and may be found both in automobiles (particularly track and rally racing cars), motorcycles (typically light "step-thru" type city utility bikes, e.g., the Honda Super Cub) and quadbikes (often with a separately engaged reversing gear), the latter two normally using a scooter-style centrifugal clutch.
Bicycle gearing

Bicycles usually have a system for selecting different gear ratios. There are two main types: derailleur gears and hub gears. The derailleur type is the most common, and the most visible, using sprocket gears. Typically there are several gears available on the rear sprocket assembly, attached to the rear wheel. A few more sprockets are usually added to the front assembly as well. Multiplying the number of sprocket gears in front by the number to the rear gives the number of gear ratios, often called "speeds".
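The multiplication rule can be sketched directly; the tooth counts below are a hypothetical 2×8 drivetrain:

```python
def gear_ratios(chainrings, cogs):
    """All nominal ratios of a derailleur drivetrain.

    The advertised number of 'speeds' is len(chainrings) * len(cogs),
    although some front/rear combinations overlap in ratio or are
    impractical due to extreme chain angles ('cross-chaining').
    """
    return sorted(front / rear for front in chainrings for rear in cogs)

chainrings = [34, 50]                      # teeth on the front sprockets
cogs = [11, 13, 15, 17, 19, 21, 24, 28]   # teeth on the rear cassette
ratios = gear_ratios(chainrings, cogs)     # 2 x 8 = 16 "speeds"
```

The lowest ratio (small chainring, largest cog) is what matters for climbing; the highest (large chainring, smallest cog) sets the top pedalling speed.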

Hub gears use epicyclic gearing and are enclosed within the axle of the rear wheel. Because of the small space, they typically offer fewer different speeds, although at least one has reached 14 gear ratios and Fallbrook Technologies manufactures a transmission with technically infinite ratios.

Several attempts have been made to fit bicycles with an enclosed gearbox, giving obvious advantages in lubrication, dirt-sealing and shifting. These have usually been in conjunction with a shaft drive, as a gearbox with a traditional chain would (like the hub gear) still have many of the derailleur's disadvantages for an exposed chain. Bicycle gearboxes are enclosed in a box replacing the traditional bottom bracket. The requirement for a modified frame has been a serious drawback to their adoption. One of the most recent attempts to provide a gearbox for bicycles is the 18-speed Pinion P1.18. This gives an enclosed gearbox but retains a traditional chain. When fitted to a rear-suspension bike, it also retains a derailleur-like jockey-cage chain tensioner, although without the derailleur's low ground clearance.

Causes for failure of bicycle gearing include: worn teeth, damage caused by a faulty chain, damage due to thermal expansion, broken teeth due to excessive pedaling force, interference by foreign objects, and loss of lubrication due to negligence.
Uncommon types
Dual clutch transmission

This arrangement is also sometimes known as a direct shift gearbox or powershift gearbox. It seeks to combine the advantages of a conventional manual shift with the qualities of a modern automatic transmission by providing different clutches for odd and even speed selector gears. When changing gear, the engine torque is transferred from one gear to the other continuously, so providing gentle, smooth gear changes without either losing power or jerking the vehicle. Gear selection may be manual, automatic (depending on throttle/speed sensors), or a 'sports' version combining both options.
Continuously variable

The continuously variable transmission (CVT) is a transmission in which the ratio of the rotational speeds of two shafts, such as the input and output shafts of a vehicle or other machine, can be varied continuously within a given range, providing an infinite number of possible ratios. The CVT allows the driver or a computer to select the relationship between the speed of the engine and the speed of the wheels within a continuous range. This can provide even better fuel economy by letting the engine run constantly at a single efficient speed. The transmission is, in theory, capable of a better user experience, without the rise and fall in speed of an engine and the jerk felt when changing gears poorly.

CVTs are increasingly found on small cars, and especially high-gas-mileage or hybrid vehicles. On hybrid platforms in particular, the demands on the transmission are eased because the electric motor can provide torque without changing the speed of the engine. By leaving the engine running at the rate that generates the best gas mileage for the given operating conditions, overall mileage can be improved over a system with a smaller number of fixed gears, where the system may be operating at peak efficiency only for a small range of speeds. CVTs are also found in agricultural equipment; due to the high-torque nature of these vehicles, mechanical gears are integrated to provide tractive force at high speeds.
Infinitely variable

The IVT is a specific type of CVT that includes not only an infinite number of gear ratios, but an infinite range as well. This is a turn of phrase: it actually refers to CVTs that can include a "zero ratio", where the input shaft can turn without any motion of the output shaft while remaining in gear. Zero output implies an infinite ratio, as any "high gear" ratio is an infinite number of times higher than the zero "low gear".

Most (if not all) IVTs result from the combination of a CVT with an epicyclic gear system with a fixed ratio. The combination of the fixed ratio of the epicyclic gear with a specific matching ratio in the CVT side results in zero output. For instance, consider a transmission with an epicyclic gear set to 1:−1 gear ratio; a 1:1 reverse gear. When the CVT side is set to 1:1 the two ratios add up to zero output. The IVT is always engaged, even during its zero output. When the CVT is set to higher values it operates conventionally, with increasing forward ratios.
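That cancellation can be illustrated with a deliberately simplified model; real epicyclic kinematics are more involved, and the additive combination below is only a toy version of the description above:

```python
def ivt_output_rpm(input_rpm, cvt_ratio, epicyclic_ratio=-1.0):
    """Toy model of the IVT described above.

    The output combines a CVT path with a fixed epicyclic path, and in
    this simplified model the two contributions add. With the epicyclic
    set to -1 (a 1:1 reverse gear), a CVT setting of exactly 1:1 cancels
    the paths: the output shaft stands still while everything upstream
    keeps turning ("geared neutral").
    """
    return input_rpm * (cvt_ratio + epicyclic_ratio)

assert ivt_output_rpm(2000, cvt_ratio=1.0) == 0    # geared neutral, still in gear
assert ivt_output_rpm(2000, cvt_ratio=1.5) > 0     # higher CVT settings drive forward
assert ivt_output_rpm(2000, cvt_ratio=0.8) < 0     # lower settings reverse
```

The reverse case in the last line corresponds to the scheme described below, where the epicyclic ratio is set above the lowest CVT ratio.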

In practice, the epicyclic gear may be set to the lowest possible ratio of the CVT, if reversing is not needed or is handled through other means. Reversing can be incorporated by setting the epicyclic gear ratio somewhat higher than the lowest ratio of the CVT, providing a range of reverse ratios.
Electric variable

The Electric Variable Transmission (EVT) combines a transmission with an electric motor to provide the illusion of a single CVT. In the common implementation, a gasoline engine is connected to a traditional transmission, which is in turn connected to an epicyclic gear system's planet carrier. An electric motor/generator is connected to the central "sun" gear, which is normally un-driven in typical epicyclic systems. Both sources of power can be fed into the transmission's output at the same time, splitting power between them. In common examples, between one-quarter and half of the engine's power can be fed into the sun gear. Depending on the implementation, the transmission in front of the epicyclic system may be greatly simplified, or eliminated completely. EVTs are capable of continuously modulating output/input speed ratios like mechanical CVTs, but offer the distinct benefit of being able to also apply power from two different sources to one output, as well as potentially reducing overall complexity dramatically.

In typical implementations, the gear ratios of the transmission and epicyclic system are set to suit the most common driving conditions, say highway speed for a car or city speeds for a bus. When the driver presses the accelerator, the associated electronics interpret the pedal position and immediately set the gasoline engine to the RPM that provides the best gas mileage for that setting. As the gear ratio is normally set far from the maximum-torque point, this set-up would normally result in very poor acceleration. Unlike gasoline engines, electric motors offer efficient torque across a wide range of RPM, and are especially effective at low speeds where the gasoline engine is inefficient. By varying the electrical load or supply on the motor attached to the sun gear, additional torque can be provided to make up for the low torque output from the engine. As the vehicle accelerates, the power to the motor is reduced and eventually cut off, providing the illusion of a CVT.

The canonical example of the EVT is Toyota's Hybrid Synergy Drive. This implementation has no conventional transmission, and the sun gear always receives 28% of the torque from the engine. This power can be used to operate any electrical loads in the vehicle: recharging the batteries, powering the entertainment system, or running the air conditioning. Any residual power is then fed back into a second motor that powers the output of the drivetrain directly. At highway speeds this additional generator/motor pathway is less efficient than simply powering the wheels directly. During acceleration, however, the electrical path is much more efficient than running the engine so far from its peak-torque point. GM uses a similar system in the Allison Bus hybrid powertrains and in the Tahoe and Yukon hybrids, but these use a two-speed transmission in front of the epicyclic system, and the sun gear receives close to half the total power.
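As a rough numeric illustration of that split, here is a toy model; for simplicity it treats the 28% torque share as a power share (true only when the shaft speeds line up), and all figures are illustrative:

```python
def hsd_power_split(engine_kw, electrical_loads_kw, sun_fraction=0.28):
    """Toy power-split model in the spirit of the description above.

    A fixed fraction of engine output goes via the sun-gear motor/generator;
    whatever that path does not consume as electrical load is fed back to
    the second traction motor. (The real system splits torque, not power,
    and the split in power terms varies with shaft speeds.)
    """
    sun_path_kw = engine_kw * sun_fraction
    ring_path_kw = engine_kw - sun_path_kw                     # mechanical path to wheels
    feedback_kw = max(sun_path_kw - electrical_loads_kw, 0.0)  # re-applied electrically
    return ring_path_kw, feedback_kw

# 50 kW from the engine with 2 kW of accessory load: about 36 kW goes to the
# wheels mechanically and about 12 kW returns through the second motor.
ring_kw, feedback_kw = hsd_power_split(engine_kw=50, electrical_loads_kw=2)
```

The point of the sketch is that no power is "lost" to the split as such; the electrical detour only costs the conversion losses of the generator/motor pair.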
Non-direct
Electric

Electric transmissions convert the mechanical power of the engine(s) to electricity with electric generators and convert it back to mechanical power with electric motors. Electrical or electronic adjustable-speed drive control systems are used to control the speed and torque of the motors. If the generators are driven by turbines, such arrangements are called turbo-electric transmissions. Likewise, installations powered by diesel engines are called diesel-electric.

Diesel-electric arrangements are used on many railway locomotives, ships, large mining trucks, and some bulldozers. In these cases, each driven wheel is equipped with its own electric motor, which can be fed varying electrical power to provide any required torque or power output for each wheel independently. This produces a much simpler solution for multiple driven wheels in very large vehicles, where drive shafts would be much larger or heavier than the electrical cable that can provide the same amount of power. It also improves the ability to allow different wheels to run at different speeds, which is useful for steered wheels in large construction vehicles.
Hydrostatic

Hydrostatic transmissions transmit all power hydraulically, using the components of hydraulic machinery. They are similar to electrical transmissions, but use hydraulic fluid rather than electricity as the power distribution medium.

The transmission input drive is a central hydraulic pump, and the final drive unit(s) are hydraulic motors or hydraulic cylinders (see: swashplate). Both components can be placed physically far apart on the machine, being connected only by flexible hoses. Hydrostatic drive systems are used on excavators, lawn tractors, forklifts, winch drive systems, heavy lift equipment, agricultural machinery, earth-moving equipment, etc. An arrangement of this kind for motor-vehicle transmission was probably used on the Ferguson F-1 P99 racing car in about 1961.

The Human Friendly Transmission of the Honda DN-01 is hydrostatic.
Hydrodynamic

A hydrodynamic transmission results when the hydraulic pump and/or hydraulic motor make use of the hydrodynamic effects of the fluid flow, i.e. pressure due to a change in the fluid's momentum as it flows through vanes in a turbine. The pump and motor usually consist of rotating vanes without seals and are typically placed in close proximity. The transmission ratio can be made to vary by means of additional rotating vanes, an effect similar to varying the pitch of an airplane propeller.

The torque converter in most automotive automatic transmissions is, in itself, a hydrodynamic transmission. Hydrodynamic transmissions are also used in many passenger rail vehicles that do not use electrical transmissions. In this application the advantage of smooth power delivery may outweigh the reduced efficiency caused by turbulence energy losses in the fluid.

Wednesday, 11 September 2013

SPOILER

A spoiler is an automotive aerodynamic device whose intended design function is to 'spoil' unfavorable air movement across a body of a vehicle in motion, usually described as turbulence or drag. Spoilers on the front of a vehicle are often called air dams, because in addition to directing air flow they also reduce the amount of air flowing underneath the vehicle which generally reduces aerodynamic lift and drag. Spoilers are often fitted to race and high-performance sports cars, although they have become common on passenger vehicles as well. Some spoilers are added to cars primarily for styling purposes and have either little aerodynamic benefit or even make the aerodynamics worse.

Spoilers for cars are often confused with, or the term used interchangeably with, wings. Automotive wings are devices whose intended design is to generate downforce as air passes around them, not simply to disrupt existing airflow patterns.

Operation

Since spoiler is a term describing an application, the operation of a spoiler varies depending on the particular effect it's trying to spoil. Most common spoiler functions include disrupting some type of airflow passing over and around a moving vehicle. A common spoiler diffuses air by increasing amounts of turbulence flowing over the shape, "spoiling" the laminar flow and providing a cushion for the laminar boundary layer. However, other types of airflow may require the spoiler to operate differently and take on vastly different physical characteristics.

Passenger vehicles

The goal of many spoilers used in passenger vehicles is to reduce drag and increase fuel efficiency. Passenger vehicles can be equipped with front and rear spoilers. Front spoilers, found beneath the bumper, are mainly used to decrease the amount of air going underneath the vehicle to reduce the drag coefficient and lift.

Sports cars are most commonly seen with front and rear spoilers. Even though these vehicles typically have a more rigid chassis and a stiffer suspension to aid in high-speed maneuverability, a spoiler can still be beneficial. This is because many vehicles have a fairly steep downward angle going from the rear edge of the roof down to the trunk or tail of the car, which may cause airflow separation. The flow of air becomes turbulent and a low-pressure zone is created, increasing drag and instability (see Bernoulli effect). Adding a rear spoiler makes the air "see" a longer, gentler slope from the roof to the spoiler, which helps to delay flow separation; the higher pressure in front of the spoiler can also help reduce lift on the car by creating downforce. This may reduce drag in certain instances and will generally increase high-speed stability due to the reduced rear lift.

Due to their association with racing, spoilers are often viewed as "sporty" by consumers.

Material types

Spoilers are usually made of:

    ABS plastic: Most original equipment manufacturers create spoilers by casting ABS plastic with various admixtures, which add plasticity to this inexpensive but fragile material. Brittleness is the main disadvantage of the plastic; it increases with product age and is caused by the evaporation of volatile phenols.
    Fiberglass: Used in car parts production due to the low cost of the materials. Fiberglass spoilers consist of fiberglass cloth infilled with a thermosetting resin. Fiberglass is sufficiently durable and workable, but has become unprofitable for large-scale production due to the amount of labor involved.
    Silicone: More recently, many auto accessory manufacturers have been using silicone-organic polymers. The main benefit of this material is its phenomenal plasticity. Silicone possesses extra-high thermal resistance and provides a longer product lifetime.
    Carbon fiber: Carbon fiber is lightweight and durable, but also a very expensive material. Due to the large amount of manual labor required, carbon fiber cannot currently be used widely in large-scale automobile parts production.

Other common spoiler types

    Front spoilers: A front spoiler (air dam) is positioned under or integrated with the front bumper. In racing, this spoiler is used to control the dynamics of handling related to the air in front of the vehicle. This can be to improve the drag coefficient of the body of the vehicle at speed, or to generate downforce. In passenger vehicles, the focus shifts more to directing the airflow into the engine bay for cooling purposes.
    Truck bed spoiler: This attaches only to the top of the truck bed rails near the rear. Used with a bed cover, this spoiler is intended to reduce the air profile of the steep drop-off from the tailgate.
    Truck cab spoiler: This serves the same purpose as the bed spoiler, but addresses the drop-off from the cab of the truck.

Other vehicles

Heavy trucks, such as long-haul tractors, may also have a spoiler on top of the cab to lessen the drag caused by the trailer they tow, which may be taller than the cab and would otherwise dramatically worsen the vehicle's aerodynamics. The trailers can also be fitted with under-side spoilers that angle outward to deflect passing air away from the rear axle's wheels.

Trains may use spoilers to induce drag (like an air brake). A prototype Japanese high-speed train, the Fastech 360, is designed to reach speeds of 400 kilometres per hour (250 mph). Its nose is specifically shaped to spoil a wind effect associated with passing through tunnels, and it can deploy 'ears' which act to slow the train in case of emergency by increasing its drag.

Some modern race cars employ a situational spoiler called a roof flap. The body of the car is designed to generate downforce while driving forward; when the car is rotated so that it travels backward, the body instead generates lift. The flaps are recessed into pockets in the roof, and the low pressure above these pockets causes the flaps to deploy, counteracting some of the lift and making the car more resistant to leaving the ground. These devices were introduced in NASCAR in 1994 following Rusty Wallace's crash at Talladega.

Whale tail

When the Porsche 911 Turbo debuted in August 1974 with large, flared rear spoilers, they were immediately dubbed whale tails. Designed to reduce rear-end lift and so keep the car from oversteering at high speeds, the rubber edges of the whale tail spoilers were thought to be "pedestrian friendly". The Turbo, with its whale tail, became an instant hit. It also became one of the world's most recognizable sports cars, remaining in production for the next two decades in one form or another, with more than 23,000 sold by 1989; from 1978, however, the rear spoiler was redesigned and dubbed the 'teatray' on account of its raised sides. The Porsche 911 whale tails were used in conjunction with a chin spoiler attached to the front valence panel, which, according to some sources, did not enhance aerodynamic stability. The whale tail has been found less effective at generating downforce than newer technologies such as an airfoil, a "rear wing running across the base of the tailgate window", or "an electronically controlled wing that deploys at about 50 mph" (80 km/h).

FLYWHEEL

A flywheel is a rotating mechanical device that is used to store rotational energy. Flywheels have a significant moment of inertia and thus resist changes in rotational speed. The amount of energy stored in a flywheel is proportional to the square of its rotational speed. Energy is transferred to a flywheel by applying torque to it, thereby increasing its rotational speed, and hence its stored energy. Conversely, a flywheel releases stored energy by applying torque to a mechanical load, thereby decreasing its rotational speed.

Common uses of a flywheel include:

    Providing continuous energy when the energy source is discontinuous. For example, flywheels are used in reciprocating engines because the energy source, torque from the engine, is intermittent.
    Delivering energy at rates beyond the ability of a continuous energy source. This is achieved by collecting energy in the flywheel over time and then releasing the energy quickly, at rates that exceed the abilities of the energy source.
    Controlling the orientation of a mechanical system. In such applications, the angular momentum of a flywheel is purposely transferred to a load when energy is transferred to or from the flywheel.

Flywheels are typically made of steel and rotate on conventional bearings; these are generally limited to a revolution rate of a few thousand RPM. Some modern flywheels are made of carbon fiber materials and employ magnetic bearings, enabling them to revolve at speeds up to 60,000 RPM.

Applications

Flywheels are often used to provide continuous energy in systems where the energy source is not continuous. In such cases, the flywheel stores energy when torque is applied by the energy source, and it releases stored energy when the energy source is not applying torque to it. For example, a flywheel is used to maintain constant angular velocity of the crankshaft in a reciprocating engine. In this case, the flywheel—which is mounted on the crankshaft—stores energy when torque is exerted on it by a firing piston, and it releases energy to its mechanical loads when no piston is exerting torque on it. Other examples of this are friction motors, which use flywheel energy to power devices such as toy cars.

A flywheel may also be used to supply intermittent pulses of energy at transfer rates that exceed the abilities of its energy source, or when such pulses would disrupt the energy supply (e.g., public electric network). This is achieved by accumulating stored energy in the flywheel over a period of time, at a rate that is compatible with the energy source, and then releasing that energy at a much higher rate over a relatively short time. For example, flywheels are used in riveting machines to store energy from the motor and release it during the riveting operation.

The phenomenon of precession has to be considered when using flywheels in vehicles. A rotating flywheel responds to any torque that tends to change the direction of its axis of rotation with a resulting precession rotation. A vehicle with a vertical-axis flywheel would experience a lateral torque when passing the top of a hill or the bottom of a valley (a roll torque in response to a pitch change). Two counter-rotating flywheels may be needed to eliminate this effect. This effect is leveraged in reaction wheels, a type of flywheel employed in satellites in which the flywheel is used to orient the satellite's instruments without thruster rockets.

Physics

A flywheel is a spinning wheel or disc with a fixed axle so that rotation is only about one axis. Energy is stored in the rotor as kinetic energy, or more specifically, rotational energy:

    E_k=\frac{1}{2} I \omega^2

Where:

    ω is the angular velocity, and
    I is the moment of inertia of the mass about the center of rotation. The moment of inertia is the measure of resistance to torque applied on a spinning object (i.e. the higher the moment of inertia, the slower it will accelerate when a given torque is applied).

    The moment of inertia for a solid cylinder is I = \frac{1}{2} mr^2,
    for a thin-walled hollow cylinder is I = m r^2,
    and for a thick-walled hollow cylinder is I = \frac{1}{2} m({r_{external}}^2 + {r_{internal}}^2).

Where m denotes mass, and r denotes the radius.

When calculating with SI units, the standards would be for mass, kilograms; for radius, meters; and for angular velocity, radians per second. The resulting answer would be in joules.
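To make the units concrete, the stored energy of a disc-shaped rotor can be sketched as follows. This is a minimal illustration of the formulas above; the 100 kg, 0.5 m, 3000 RPM figures are arbitrary example values, not taken from the text.

```python
import math

def solid_cylinder_inertia(mass_kg, radius_m):
    """Moment of inertia of a solid cylinder about its axis: I = 1/2 m r^2."""
    return 0.5 * mass_kg * radius_m ** 2

def flywheel_energy(inertia_kg_m2, omega_rad_s):
    """Rotational kinetic energy: E_k = 1/2 I omega^2, in joules."""
    return 0.5 * inertia_kg_m2 * omega_rad_s ** 2

# Example values (arbitrary): a 100 kg steel disc of 0.5 m radius at 3000 RPM.
I = solid_cylinder_inertia(100.0, 0.5)   # 12.5 kg*m^2
omega = 3000 * 2 * math.pi / 60          # RPM converted to rad/s
energy_j = flywheel_energy(I, omega)     # roughly 0.6 MJ
print(round(energy_j))
```

Because the energy grows with the square of omega, doubling the rotational speed quadruples the stored energy, which is why flywheel energy storage designs push speed rather than mass.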

The amount of energy that can safely be stored in the rotor depends on the point at which the rotor will warp or shatter. The hoop stress on the rotor is a major consideration in the design of a flywheel energy storage system.

    \sigma_t = \rho r^2 \omega^2

Where:

    \sigma_t is the tensile stress on the rim of the cylinder
    \rho is the density of the cylinder
    r is the radius of the cylinder, and
    \omega is the angular velocity of the cylinder.

This formula can also be simplified using the specific tensile strength and the tangential velocity of the rim:

    \frac{\sigma_t}{\rho} = v^2

Where:

    \frac{\sigma_t}{\rho} is the specific tensile strength of the material
    v is the tangential velocity of the rim.
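As a rough illustration of the simplified formula, the limiting rim speed follows directly from the specific tensile strength: v = \sqrt{\sigma_t / \rho}. The material numbers below (an 800 MPa, 7800 kg/m³ steel) are assumed purely for the sake of the example.

```python
import math

def max_rim_speed(tensile_strength_pa, density_kg_m3):
    """Limiting tangential rim speed, from sigma_t / rho = v^2."""
    return math.sqrt(tensile_strength_pa / density_kg_m3)

def max_angular_velocity(tensile_strength_pa, density_kg_m3, radius_m):
    """Limiting angular velocity, from sigma_t = rho * r^2 * omega^2."""
    return max_rim_speed(tensile_strength_pa, density_kg_m3) / radius_m

# Assumed example material: high-strength steel, sigma_t ~ 800 MPa, rho ~ 7800 kg/m^3.
v_limit = max_rim_speed(800e6, 7800)              # rim speed limit in m/s
w_limit = max_angular_velocity(800e6, 7800, 0.5)  # rad/s for a 0.5 m radius rotor
```

Note that the limit depends only on the strength-to-density ratio of the material, which is why carbon fiber rotors can safely spin far faster than steel rotors of the same geometry.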

CHASSIS

A chassis consists of an internal framework that supports a man-made object in its construction and use. It is analogous to an animal's skeleton. An example of a chassis is the underpart of a motor vehicle, consisting of the frame (on which the body is mounted) with the wheels and machinery.

Examples of use
Vehicles

1950s Jeep FC cowl and chassis for others to convert into finished vehicles

In the case of vehicles, the term rolling chassis means the frame plus the "running gear" like engine, transmission, driveshaft, differential, and suspension.

A body (sometimes referred to as "coachwork"), which is usually not necessary for integrity of the structure, is built on the chassis to complete the vehicle.

For commercial vehicles, a rolling chassis consists of an assembly of all the essential parts of a truck (without the body), ready for operation on the road. The design of a passenger car chassis differs from that of a commercial vehicle because of the latter's heavier loads and constant work use. Commercial vehicle manufacturers sell "chassis only", "cowl and chassis", as well as "chassis cab" versions that can be outfitted with specialized bodies. These include motor homes, fire engines, ambulances, box trucks, etc.

In particular applications, such as school buses, a government agency like National Highway Traffic Safety Administration (NHTSA) in the U.S. defines the design standards of chassis and body conversions.

An armoured fighting vehicle's hull serves as the chassis and comprises the bottom part of the AFV, including the tracks, engine, driver's seat, and crew compartment. Strictly this describes the lower hull, although common usage might include the upper hull, meaning the AFV without the turret. The hull serves as a basis for platforms on tanks, armoured personnel carriers, combat engineering vehicles, etc.

Design

The backbone chassis is almost a trademark design feature of Czech Tatra heavy trucks (cross-country, military, etc.). Hans Ledwinka developed this style of chassis in 1923 with the Tatra 11, and further enhanced the design with the 6x4 Tatra 26, which had excellent off-road ability.

This type of chassis is also often found on some sports cars. However, it does not provide protection against side collisions, and it has to be combined with a body that compensates for this shortcoming.

Examples of cars using a backbone chassis include DeLorean DMC-12, Lloyd 600, Lotus Elan, Lotus Esprit and Europa, Škoda 420 Popular, Tatra T-87, Tatra T111, Tatra T148, Tatra T815 etc., as well as TVR S1. Some cars also use a backbone as a part of the chassis to strengthen it; examples include the Volkswagen Beetle and the Locost where the transmission tunnel forms a backbone.
Advantages

    In a conventional design, the truck's superstructure has to withstand torsional twist, and the resulting wear reduces the vehicle's lifespan; the backbone chassis absorbs this twist instead.
    The half-axles maintain better contact with the ground when operated off-road. This has little importance on roads.
    The vulnerable parts of the drive shaft are covered by the thick tube. The whole system is extremely reliable; however, if a problem does occur, repairs are more complicated.
    The modular system enables configurations of 2-, 3-, 4-, 5-, or 6-axle vehicles with various wheelbases.

Disadvantages

    Manufacturing the backbone chassis is more complicated and more costly. However, the more all-wheel-drive axles are needed, the more the cost comparison turns in favor of the backbone chassis.
    The backbone chassis is heavier for a given torsional stiffness than a uni-body.
    The chassis gives no protection for side impacts.

Space Frame

In architecture and structural engineering, a space frame or space structure is a truss-like, lightweight rigid structure constructed from interlocking struts in a geometric pattern. Space frames can be used to span large areas with few interior supports. Like the truss, a space frame is strong because of the inherent rigidity of the triangle; flexing loads (bending moments) are transmitted as tension and compression loads along the length of each strut.

Overview

The simplest form of space frame is a horizontal slab of interlocking square pyramids and tetrahedra built from aluminium or tubular steel struts. In many ways this looks like the horizontal jib of a tower crane repeated many times to make it wider. A stronger form is composed of interlocking tetrahedra in which all the struts have unit length. More technically this is referred to as an isotropic vector matrix or in a single unit width an octet truss. More complex variations change the lengths of the struts to curve the overall structure or may incorporate other geometrical shapes.

Vehicles
Cars

Spaceframes are sometimes used in the chassis designs of automobiles and motorcycles. In both a spaceframe and a tube-frame chassis, the suspension, engine, and body panels are attached to a skeletal frame of tubes, and the body panels have little or no structural function. By contrast, in a unibody or monocoque design, the body serves as part of the structure.

Tube-frame chassis pre-date spaceframe chassis and are a development of the earlier ladder chassis. The advantage of using tubes rather than the previous open channel sections is that they resist torsional forces better. Some tube chassis were little more than a ladder chassis made with two large diameter tubes, or even a single tube as a backbone chassis. Although many tubular chassis developed additional tubes and were even described as "spaceframes", their design was rarely correctly stressed as a spaceframe and they behaved mechanically as a tube ladder chassis, with additional brackets to support the attached components, suspension, engine etc. The distinction of the true spaceframe is that all the forces in each strut are either tensile or compression, never bending. Although these additional tubes did carry some extra load, they were rarely diagonalised into a rigid spaceframe.

The first true spaceframe chassis were produced in the 1930s by designers such as Buckminster Fuller and William Stout (the Dymaxion and the Stout Scarab) who understood the theory of the true spaceframe from either architecture or aircraft design.

The first racing car to attempt a spaceframe was the Cisitalia D46 of 1946. This used two small diameter tubes along each side, but they were spaced apart by vertical smaller tubes, and so were not diagonalised in any plane. A year later, Porsche designed their Type 360 for Cisitalia. As this included diagonal tubes, it can be considered the first true spaceframe.

The Maserati Tipo 61 of 1959 (Birdcage) is often thought of as the first, but in 1949 Dr. Robert Eberan-Eberhorst designed the Jowett Jupiter, exhibited at the London Motor Show that year and taking a class win at the 1950 Le Mans 24 Hours. Later, the small British car manufacturers developed the concept: TVR produced an alloy-bodied two-seater on a multi-tubular chassis, which appeared in 1949.

Colin Chapman of Lotus introduced his first 'production' car, the Mark VI, in 1952. This was influenced by the Jaguar C-Type chassis, another design with four tubes of two different diameters, separated by narrower tubes. Chapman reduced the main tube diameter for the lighter Lotus, but did not reduce the minor tubes any further, possibly because he considered that this would appear flimsy to buyers. Although widely described as a spaceframe, Lotus did not build a true spaceframe chassis until the Mark VIII, with the influence of other designers who had experience from the aircraft industry.

Other notable examples of tube-frame cars include the Audi R8, Ferrari 360, Lamborghini Gallardo, and Mercedes-Benz SLS AMG.

A drawback of the spaceframe chassis is that it encloses much of the working volume of the car and can make access difficult for both the driver and the engine. Some spaceframes have been designed with removable sections, joined by bolted pin joints. Such a structure had already been used around the engine of the Lotus Mark III. Although somewhat inconvenient, an advantage of the spaceframe is that the same lack of bending forces in the tubes that allows it to be modelled as a pin-jointed structure also means that such a removable section need not reduce the strength of the assembled frame.
Motorcycles

Italian motorbike manufacturer Ducati extensively uses tube frame chassis on its models.

Space frames have also been used in bicycles, such as those designed by Alex Moulton.
Design methods

Space frames are typically designed using a stiffness (rigidity) matrix. The special characteristic of the stiffness matrix in an architectural space frame is the independence of the angular factors. If the joints are sufficiently rigid, the angular deflections can be neglected, simplifying the calculations.
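A sketch of the pin-jointed assumption in code: because each strut carries only axial tension or compression, a 2D truss member's stiffness matrix depends only on its axial stiffness EA/L and its orientation. The function below is an illustrative hand-rolled example, not taken from any particular structural analysis library.

```python
import math

def bar_stiffness_2d(E, A, L, angle_rad):
    """4x4 stiffness matrix of a pin-jointed bar (axial forces only).

    Degrees of freedom are (x1, y1, x2, y2). With sufficiently rigid
    joints, angular deflections are neglected, as described above.
    """
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    k = E * A / L                                  # axial stiffness EA/L
    block = [[c * c, c * s], [c * s, s * s]]       # direction-cosine block
    top = [row + [-v for v in row] for row in block]
    bottom = [[-v for v in row] + row for row in block]
    return [[k * v for v in row] for row in top + bottom]
```

For a horizontal bar (angle 0) the matrix couples only the x degrees of freedom, reflecting that a pin-ended strut resists no bending.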

Tuesday, 10 September 2013

ENGINES

An engine or motor is a machine designed to convert energy into useful mechanical motion. Heat engines, including internal combustion engines and external combustion engines (such as steam engines), burn a fuel to create heat, which then creates motion. Electric motors convert electrical energy into mechanical motion, pneumatic motors use compressed air, and others—such as clockwork motors in wind-up toys—use elastic energy. In biological systems, molecular motors, like myosins in muscles, use chemical energy to create motion.

Terminology

"Engine" was originally a term for any mechanical device that converts force into motion. Hence, pre-industrial weapons such as catapults, trebuchets and battering rams were called "siege engines". The word "gin," as in "cotton gin", is short for "engine." The word derives from Old French engin, from the Latin ingenium, which is also the root of the word ingenious. Most mechanical devices invented during the industrial revolution were described as engines—the steam engine being a notable example

In modern usage, the term engine typically describes devices, like steam engines and internal combustion engines, that burn or otherwise consume fuel to perform mechanical work by exerting a torque or linear force to drive machinery that generates electricity, pumps water, or compresses gas. In the context of propulsion systems, an air-breathing engine is one that uses atmospheric air to oxidise the fuel rather than supplying an independent oxidizer, as in a rocket.

When the internal combustion engine was invented, the term "motor" was initially used to distinguish it from the steam engine—which was in wide use at the time, powering locomotives and other vehicles such as steam rollers. "Motor" and "engine" later came to be used interchangeably in casual discourse. However, technically, the two words have different meanings. An engine is a device that burns or otherwise consumes fuel, changing its chemical composition, whereas a motor is a device driven by electricity, which does not change the chemical composition of its energy source.

A heat engine may also serve as a prime mover—a component that transforms the flow or changes in pressure of a fluid into mechanical energy. An automobile powered by an internal combustion engine may make use of various motors and pumps, but ultimately all such devices derive their power from the engine. Another way of looking at it is that a motor receives power from an external source, and then converts it into mechanical energy, while an engine creates power from pressure (derived directly from the explosive force of combustion or other chemical reaction, or secondarily from the action of some such force on other substances such as air, water, or steam).

Devices converting heat energy into motion are commonly referred to simply as engines.

Engine configurations

Internal combustion engines can be classified by their configuration.

Common layouts of engines are:

Reciprocating:

    Two-stroke engine
    Four-stroke engine (Otto cycle)
    Six-stroke engine
    Diesel engine
    Atkinson cycle
    Miller cycle

Rotary:

    Wankel engine

Continuous combustion:

    Gas turbine
    Jet engine (including turbojet, turbofan, ramjet, rocket, etc.)

Operation

Four-stroke cycle (or Otto cycle)

1. Induction

2. Compression

3. Power

4. Exhaust

As their name implies, four-stroke internal combustion engines have four basic steps that repeat with every two revolutions of the engine:

(1) Intake/suction stroke (2) Compression stroke (3) Power/expansion stroke and (4) Exhaust stroke

1. Intake stroke: The first stroke of the internal combustion engine is also known as the suction stroke because the piston moves to the maximum volume position (downward in the cylinder), creating a partial vacuum. The inlet valve opens as a result of the cam lobe pressing down on the valve stem, and the vaporized fuel mixture is drawn into the combustion chamber. The inlet valve closes at the end of this stroke.

2. Compression stroke: In this stroke, both valves are closed and the piston moves toward the minimum volume position (upward in the cylinder), compressing the fuel mixture. During compression, the pressure, temperature, and density of the fuel mixture increase.

3. Power stroke: When the piston reaches a point just before top dead center, the spark plug ignites the fuel mixture. The point at which the fuel ignites varies by engine; typically it is about 10 degrees before top dead center. The expansion of gases caused by ignition of the fuel produces the power that is transmitted to the crankshaft.

4. Exhaust stroke: At the end of the power stroke, the exhaust valve opens. During this stroke, the piston moves from the maximum volume position back toward the minimum volume position. The open exhaust valve allows the exhaust gases to escape the cylinder. At the end of this stroke, the exhaust valve closes, the inlet valve opens, and the sequence repeats in the next cycle. A complete four-stroke cycle thus requires two crankshaft revolutions.
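The four steps above can be sketched as a simple lookup over crankshaft angle, with 180° per stroke across a 720° cycle. The function and names are illustrative, not from any engine-control API.

```python
# One full four-stroke cycle spans 720 degrees of crankshaft rotation,
# i.e. two revolutions, with 180 degrees per stroke.
FOUR_STROKE = ("intake", "compression", "power", "exhaust")

def stroke_at(crank_degrees):
    """Return the stroke underway at a given crankshaft angle (illustrative)."""
    return FOUR_STROKE[(crank_degrees % 720) // 180]
```

At 90° the piston is still descending on the intake stroke; at 400° it is partway through the power stroke; at 720° the cycle wraps around and intake begins again.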

Many engines overlap these steps in time; turbine engines perform all steps simultaneously at different parts of the engine.
Combustion

All internal combustion engines depend on the combustion of a chemical fuel, typically with oxygen from the air (though it is possible to inject nitrous oxide to supply additional oxidiser and gain a power boost). The combustion process typically produces a great quantity of heat, as well as steam, carbon dioxide and other chemicals at very high temperature; the temperature reached is determined by the chemical makeup of the fuel and oxidisers (see stoichiometry), as well as by the compression and other factors.

The most common modern fuels are hydrocarbons, derived mostly from fossil fuels (petroleum). Fossil fuels include diesel fuel, gasoline, petroleum gas, and, more rarely, propane. Except for the fuel delivery components, most internal combustion engines that are designed for gasoline use can run on natural gas or liquefied petroleum gases without major modifications. Large diesels can run with air mixed with gases and a pilot diesel fuel ignition injection. Liquid and gaseous biofuels, such as ethanol and biodiesel (a form of diesel fuel that is produced from crops that yield triglycerides such as soybean oil), can also be used. Engines with appropriate modifications can also run on hydrogen gas, wood gas, or charcoal gas, as well as so-called producer gas made from other convenient biomass. Recently, experiments have been made with powdered solid fuels, such as the magnesium injection cycle.

Internal combustion engines require ignition of the mixture, either by spark ignition (SI) or compression ignition (CI). Before the invention of reliable electrical methods, hot tube and flame methods were used. Experimental engines with laser ignition have been built.

Gasoline Ignition Process

Gasoline engine ignition systems generally rely on a combination of a lead–acid battery and an induction coil to provide a high-voltage electric spark to ignite the air-fuel mix in the engine's cylinders. This battery is recharged during operation using an electricity-generating device such as an alternator or generator driven by the engine. Gasoline engines take in a mixture of air and gasoline and compress it to not more than 12.8 bar (1.28 MPa), then use a spark plug to ignite the mixture when it is compressed by the piston head in each cylinder.

While gasoline internal combustion engines are much easier to start in cold weather than diesel engines, they can still have cold-weather starting problems under extreme conditions. For years the solution was to park the car in a heated area. In some parts of the world the oil was actually drained, heated overnight, and returned to the engine for cold starts. In the early 1950s the gasoline "Gasifier" unit was developed: on cold-weather starts, raw gasoline was diverted to the unit, where part of it was burned, causing the rest to become a hot vapor that was sent directly to the intake manifold. This unit was quite popular until electric engine block heaters became standard on gasoline engines sold in cold climates.

Diesel Ignition Process

Diesel engines and HCCI (homogeneous charge compression ignition) engines rely solely on the heat and pressure created by the engine's compression process for ignition. The compression ratio is usually twice that of a gasoline engine or more. Diesel engines take in air only, and shortly before peak compression spray a small quantity of diesel fuel into the cylinder via a fuel injector, which allows the fuel to ignite instantly. HCCI engines take in both air and fuel, but continue to rely on an unaided auto-combustion process, due to higher pressures and heat. This is also why diesel and HCCI engines are more susceptible to cold-starting issues, although they run just as well in cold weather once started. Light-duty diesel engines with indirect injection in automobiles and light trucks employ glowplugs that pre-heat the combustion chamber just before starting to reduce no-start conditions in cold weather. Most diesels also have a battery and charging system; nevertheless, this system is secondary and is added by manufacturers for ease of starting, turning fuel on and off (which can also be done via a switch or mechanical apparatus), and for running auxiliary electrical components and accessories. Most new engines rely on electrical and electronic engine control units (ECU) that also adjust the combustion process to increase efficiency and reduce emissions.
Two-stroke configuration

Main article: Two-stroke engine

Engines based on the two-stroke cycle use two strokes (one up, one down) for every power stroke. Since there are no dedicated intake or exhaust strokes, alternative methods must be used to scavenge the cylinders. The most common method in spark-ignition two-strokes is to use the downward motion of the piston to pressurize fresh charge in the crankcase, which is then blown through the cylinder through ports in the cylinder walls.

Spark-ignition two-strokes are small and light for their power output and mechanically very simple; however, they are also generally less efficient and more polluting than their four-stroke counterparts. In terms of power per cm³, a two-stroke engine produces comparable power to an equivalent four-stroke engine. The advantage of having one power stroke for every 360° of crankshaft rotation (compared to 720° in a 4-stroke motor) is balanced by the less complete intake and exhaust and the shorter effective compression and power strokes. It may be possible for a two-stroke to produce more power than an equivalent four-stroke, over a narrow range of engine speeds, at the expense of less power at other speeds.
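The 360° versus 720° point can be made concrete with a small, hypothetical helper: at the same crankshaft speed, a two-stroke fires twice as often as a four-stroke.

```python
def power_strokes_per_minute(rpm, cylinders, strokes_per_cycle):
    """Firing events per minute; strokes_per_cycle is 2 or 4 (illustrative).

    A two-stroke completes its cycle in one crankshaft revolution (360
    degrees), a four-stroke in two revolutions (720 degrees).
    """
    revolutions_per_cycle = strokes_per_cycle / 2  # 1 for two-stroke, 2 for four-stroke
    return rpm * cylinders / revolutions_per_cycle

# A single-cylinder two-stroke at 6000 RPM fires 6000 times a minute;
# the equivalent four-stroke fires only 3000 times.
```

This doubled firing rate is the source of the two-stroke's power density, which the less complete intake and exhaust then partly offset, as noted above.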

Small-displacement, crankcase-scavenged two-stroke engines have been less fuel-efficient than other types of engine because the fuel is mixed with the air prior to scavenging, allowing some of it to escape out of the exhaust port. Modern designs (Sarich and Piaggio) use air-assisted fuel injection, which avoids this loss and provides more efficiency than comparably sized four-stroke engines. Fuel injection is essential for a modern two-stroke engine to meet stringent emission standards. The problem of total-loss oil consumption, however, remains a cause of high hydrocarbon emissions. The low-pressure direct gasoline injection developed by R Sarich was tested by Ford in an automobile-sized two-stroke engine, and in 2012 Orbital won a contract from the Australian government for a two-stroke, direct-injection engine for airborne drones.

Research continues into improving many aspects of two-stroke motors, including direct fuel injection. The initial results have produced motors that are much cleaner-burning than their traditional counterparts. Two-stroke engines are widely used in snowmobiles, lawnmowers, string trimmers, chain saws, jet skis, mopeds, outboard motors, and many motorcycles. Two-stroke engines have the advantage of an increased specific power ratio (i.e. power-to-volume ratio), typically around 1.5 times that of a typical four-stroke engine.

The largest internal combustion engines in the world are two-stroke diesels, used in some locomotives and large ships. They use forced induction (similar to super-charging, or turbocharging) to scavenge the cylinders; an example of this type of motor is the Wärtsilä-Sulzer turbocharged two-stroke diesel as used in large container ships. It is the most efficient and powerful internal combustion engine in the world with over 50% thermal efficiency. For comparison, the most efficient small four-stroke motors are around 43% thermal efficiency (SAE 900648); size is an advantage for efficiency due to the increase in the ratio of volume to surface area.

Common cylinder configurations include the straight or inline configuration, the more compact V configuration, and the wider but smoother flat or boxer configuration. Aircraft engines can also adopt a radial configuration, which allows more effective cooling. More unusual configurations such as the H, U, X, and W have also been used.

Opposed-piston designs with multiple crankshafts do not need a cylinder head at all: instead, a piston sits at each end of the cylinder. Because the gas inlets and outlets are positioned at opposite ends of the cylinder, uniflow scavenging can be achieved, which, as in the four-stroke engine, is efficient over a wide range of engine speeds. Thermal efficiency is also improved by the absence of cylinder heads. This design was used in the Junkers Jumo 205 diesel aircraft engine, with two crankshafts at either end of a single bank of cylinders, and most remarkably in the Napier Deltic diesel engines, which used three crankshafts to serve three banks of double-ended cylinders arranged in an equilateral triangle with the crankshafts at the corners. It was also used in single-bank locomotive engines, and is still used for marine propulsion engines and marine auxiliary generators.
Wankel

The Wankel cycle. The shaft turns three times for each rotation of the rotor around the lobe and once for each orbital revolution around the eccentric shaft.

Main article: Wankel engine

The Wankel engine (rotary engine) has no piston strokes. It operates with the same separation of phases as the four-stroke engine, but with the phases taking place in separate locations in the engine. In thermodynamic terms it follows the Otto cycle, so it may be thought of as a "four-phase" engine. Because the eccentric shaft turns three times per rotor revolution, each rotor delivers three power pulses per rotor revolution, or one per shaft revolution, giving the engine a greater power-to-weight ratio than comparable piston engines. This type of engine was most notably used in the Mazda RX-8, the earlier RX-7, and other models.
Gas turbines

Main article: gas turbine

A gas turbine is a rotary machine similar in principle to a steam turbine, and it consists of three main components: a compressor, a combustion chamber, and a turbine. The air, after being compressed in the compressor, is heated by burning fuel in it. The heated air, combined with the products of combustion, expands in a turbine, producing work output; about two-thirds of that work drives the compressor, and the rest (about one-third) is available as useful output.
Jet engine

Main article: Jet engine

The jet engine takes a large volume of hot gas from a combustion process (typically a gas turbine, though rocket forms of jet propulsion often use solid or liquid propellants, and ramjets also dispense with the gas turbine) and feeds it through a nozzle that accelerates the jet to high speed. The reaction to accelerating the jet through the nozzle produces thrust, which in turn does useful work.
Engine cycle

Idealised P/V diagram for two-stroke Otto cycle
Two-stroke

Main article: Two-stroke cycle

This system manages to pack one power stroke into every two strokes of the piston (up-down). This is achieved by exhausting and recharging the cylinder simultaneously.

The steps involved here are:

    Intake and exhaust occur at bottom dead center. Some form of pressure is needed, either crankcase compression or super-charging.
    Compression stroke: The fuel-air mixture is compressed and ignited. In the case of diesel, air alone is compressed, and the injected fuel self-ignites.
    Power stroke: The piston is pushed downward by the hot combustion gases.

Two Stroke Spark Ignition (SI) engine:

In a two-stroke SI engine a cycle is completed in two strokes of the piston, or one complete revolution (360°) of the crankshaft. The intake and exhaust strokes are eliminated, and ports are used instead of valves. The gasoline is mixed with lubricating oil, resulting in a simpler but more environmentally damaging system, as the excess oil does not burn and is left as a residue. As the piston moves downward it uncovers another port, the fuel-air intake port. The air-fuel-oil mixture, prepared in the carburetor, rests in an adjacent chamber. As the piston moves further down and the cylinder empties of gases, the fresh mixture flows into the combustion chamber and the next compression begins. The design takes care that the fresh mixture does not mix with the exhaust: the intake and exhaust processes are timed to avoid this. The piston has three functions in its operation:

    The piston, with the cylinder, forms the combustion chamber: it compresses the air-fuel mixture, receives the liberated energy, and transfers it to the crankshaft.
    The piston motion creates a vacuum that draws the fuel-air mixture from the carburetor and then pushes it from the crankcase (adjacent chamber) to the combustion chamber.
    The sides of the piston act like the valves, covering and uncovering the intake and exhaust ports drilled into the side of the cylinder wall.

The major components of a two-stroke spark ignition engine are the:

    Cylinder: A cylindrical vessel in which a piston makes an up and down motion.
    Piston: A cylindrical component making an up and down movement in the cylinder
    Combustion chamber: A portion above the cylinder in which the combustion of the fuel-air mixture takes place
    Intake and exhaust ports: Ports that carry fresh fuel-air mixture into the combustion chamber and products of combustion away
    Crankshaft: A shaft that converts reciprocating motion of the piston into rotary motion
    Connecting rod: A rod that connects the piston to the crankshaft
    Spark plug: An ignition-source in the cylinder head that initiates the combustion process

Operation: When the piston moves from bottom dead center (BDC) to top dead center (TDC) the fresh air and fuel mixture enters the crank chamber through the intake port. The mixture enters due to the pressure difference between the crank chamber and the outer atmosphere while simultaneously the fuel-air mixture above the piston is compressed.

Ignition: With the help of a spark plug, ignition takes place at the top of the stroke. Due to the expansion of the gases the piston moves downwards covering the intake port and compressing the fuel-air mixture inside the crank chamber. When the piston is at bottom dead center, the burnt gases escape from the exhaust port.

When the transfer port is uncovered, the compressed charge from the crank chamber enters the combustion chamber through the transfer port. The fresh charge is deflected upwards by a hump on the top of the piston and sweeps the exhaust gases out of the combustion chamber. The piston again moves from bottom dead center to top dead center, and the fuel-air mixture is compressed once both the exhaust and transfer ports are covered. The cycle is repeated.

Advantages:

    It has no valve or camshaft mechanism, which simplifies its mechanism and construction.
    For one complete revolution of the crankshaft, the engine executes one cycle; the four-stroke executes one cycle per two crankshaft revolutions.
    It weighs less and is easier to manufacture.
    It has a high power-to-weight ratio.

Disadvantages:

    It lacks a lubrication system to protect the engine parts from wear, so two-stroke engines have a shorter life.
    Two-stroke engines do not consume fuel efficiently.
    Two-stroke engines produce a great deal of pollution.
    Part of the fuel sometimes leaks into the exhaust with the exhaust gases.

On balance, two-stroke engines suit applications where the weight of the engine must be small and the engine is not used continuously for long periods.
Four-stroke

Main article: Four-stroke cycle

Idealised Pressure/volume diagram of the Otto cycle showing combustion heat input Qp and waste exhaust output Qo, the power stroke is the top curved line, the bottom is the compression stroke

Engines based on the four-stroke cycle ("Otto cycle") have one power stroke for every four strokes (up-down-up-down) and employ spark-plug ignition. Combustion occurs rapidly, and during combustion the volume varies little ("constant volume"). They are used in cars, larger boats, some motorcycles, and many light aircraft. They are generally quieter, more efficient, and larger than their two-stroke counterparts.

The steps involved here are:

    Intake stroke: Air and vaporized fuel are drawn in.
    Compression stroke: Fuel vapor and air are compressed and ignited.
    Combustion stroke: Fuel combusts and piston is pushed downwards.
    Exhaust stroke: Exhaust is driven out. During the first, second, and fourth strokes the piston relies on momentum and on the power delivered by the other pistons; in this respect a four-cylinder engine delivers power less smoothly than a six- or eight-cylinder engine.

There are a number of variations of these cycles, most notably the Atkinson and Miller cycles. The diesel cycle is somewhat different.

Split-cycle engines separate the four strokes of intake, compression, combustion and exhaust into two separate but paired cylinders. The first cylinder is used for intake and compression. The compressed air is then transferred through a crossover passage from the compression cylinder into the second cylinder, where combustion and exhaust occur. A split-cycle engine is really an air compressor on one side with a combustion chamber on the other.

Previous split-cycle engines have had two major problems - poor breathing (volumetric efficiency) and low thermal efficiency. However, new designs are being introduced that seek to address these problems.

The Scuderi Engine addresses the breathing problem by reducing the clearance between the piston and the cylinder head through various turbocharging techniques. The Scuderi design requires the use of outwardly opening valves that enable the piston to move very close to the cylinder head without interference from the valves. Scuderi addresses the low thermal efficiency by firing after top dead center (ATDC).

Firing ATDC can be accomplished by using high-pressure air in the transfer passage to create sonic flow and high turbulence in the power cylinder.
Diesel cycle

Main article: Diesel cycle

P-v Diagram for the Ideal Diesel cycle. The cycle follows the numbers 1-4 in clockwise direction.

Most truck and automotive diesel engines use a cycle reminiscent of a four-stroke cycle, but with a compression heating ignition system, rather than needing a separate ignition system. This variation is called the diesel cycle. In the diesel cycle, diesel fuel is injected directly into the cylinder so that combustion occurs at constant pressure, as the piston moves.

Otto cycle: The Otto cycle is the typical cycle for most automobile internal combustion engines that use gasoline as fuel. It is the same cycle described above for the four-stroke engine and consists of the same four major steps: intake, compression, ignition, and exhaust.

On the P-V diagram for the Otto cycle:

    1-2: Intake (suction) stroke
    2-3: Isentropic compression stroke
    3-4: Heat addition at constant volume
    4-5: Power stroke (isentropic expansion)
    5-2: Heat rejection at constant volume

The distance between points 1 and 2 is the stroke of the engine. Dividing V2 by V1 gives

    r = \frac{V_2}{V_1},

where r is called the compression ratio of the engine.
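As a worked illustration, the ideal (air-standard) Otto cycle's thermal efficiency depends only on the compression ratio r and the heat-capacity ratio of the working gas. The sketch below is illustrative only; the value gamma = 1.4 for air is an assumption:

```python
def otto_efficiency(r, gamma=1.4):
    """Ideal (air-standard) Otto cycle thermal efficiency.

    r     : compression ratio V1/V2 (dimensionless)
    gamma : heat-capacity ratio of the working gas (1.4 for air, assumed)
    """
    return 1.0 - r ** (1.0 - gamma)

# A typical gasoline-engine compression ratio of 10 gives about 60% ideal
# efficiency; real engines fall well short of this because of heat loss,
# friction, and finite combustion speed.
print(f"{otto_efficiency(10):.3f}")  # 0.602
```

The formula makes clear why designers push compression ratios as high as knock (premature self-ignition) allows: efficiency rises monotonically with r.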
Six-stroke engine

The six-stroke engine was invented in 1883. Four kinds of six-stroke use a regular piston in a regular cylinder (Griffin six-stroke, Bajulaz six-stroke, Velozeta six-stroke and Crower six stroke), firing every three crankshaft revolutions. The systems capture the wasted heat of the four-stroke Otto cycle with an injection of air or water.

The Beare Head and "piston charger" engines operate as opposed-piston engines, with two pistons in a single cylinder, firing every two revolutions, more like a regular four-stroke.
Brayton cycle

Main article: Brayton cycle

Brayton cycle

A gas turbine is a rotary machine somewhat similar in principle to a steam turbine, consisting of three main components: a compressor, a combustion chamber, and a turbine. The air, after being compressed in the compressor, is heated by burning fuel in it; this heats and expands the air, and the extra energy is tapped by the turbine, which in turn powers the compressor, closing the cycle and powering the shaft.

Gas turbine cycle engines employ a continuous combustion system in which compression, combustion, and expansion occur simultaneously at different places in the engine, giving continuous power. Notably, the combustion takes place at constant pressure, unlike the Otto cycle, where it takes place at constant volume.
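The constant-pressure heat addition gives the ideal Brayton cycle an efficiency that depends only on the compressor pressure ratio. A minimal sketch, assuming air with gamma = 1.4 and an assumed pressure ratio of 15:

```python
def brayton_efficiency(pressure_ratio, gamma=1.4):
    """Ideal (air-standard) Brayton cycle thermal efficiency.

    eta = 1 - pressure_ratio**(-(gamma - 1)/gamma)
    """
    return 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)

# A pressure ratio of about 15 (assumed, typical of modern gas turbines):
print(f"{brayton_efficiency(15):.3f}")  # 0.539
```

As with the Otto cycle, efficiency rises with the compression (here pressure) ratio; the practical limit comes from turbine-inlet temperature rather than knock.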
Obsolete

The very first internal combustion engines did not compress the mixture. The first part of the piston downstroke drew in a fuel-air mixture, then the inlet valve closed and, in the remainder of the down-stroke, the fuel-air mixture fired. The exhaust valve opened for the piston upstroke. These attempts at imitating the principle of a steam engine were very inefficient.



STRENGTH OF MATERIALS

Mechanics of materials, also called strength of materials, is a subject which deals with the behavior of solid objects withstanding stresses and strains. The theory was established on the basis of mathematical models of one- and two-dimensional stress states, because the stress states in structural parts such as beams and shells can be approximated as one- or two-dimensional. An important founding pioneer in mechanics of materials was Stephen Timoshenko.

The study of strength of materials often refers to various methods of calculating stresses in structural members, such as beams, columns and shafts. The methods employed to predict the response of a structure under loading and its susceptibility to various failure modes may take into account various properties of the materials other than material yield strength and ultimate strength; for example, failure by buckling is dependent on material stiffness and thus Young's Modulus.

In materials science, the strength of a material is its ability to withstand an applied stress without failure. The field of strength of materials deals with loads, deformations, and the forces acting on a material. A load applied to a mechanical member induces internal forces within the member called stresses, which in turn cause the material to deform. The deformation of the material is called strain, while the intensity of the internal forces is called stress. The applied stress may be tensile, compressive, or shear. Strength analysis considers three properties: strength, stiffness, and stability, where strength refers to load-carrying capacity, stiffness to resistance to deformation or elongation, and stability to the ability of a member to maintain its initial configuration. Material yield strength refers to the point on the engineering stress-strain curve (as opposed to the true stress-strain curve) beyond which the material experiences deformations that are not completely reversed upon removal of the loading. The ultimate strength refers to the point on the engineering stress-strain curve corresponding to the stress that produces fracture.

Types of loadings

    Transverse loading - Forces applied perpendicular to the longitudinal axis of a member. Transverse loading causes the member to bend and deflect from its original position, with internal tensile and compressive strains accompanying the change in curvature of the member. Transverse loading also induces shear forces that cause shear deformation of the material and increase the transverse deflection of the member.
    Torsional loading - Twisting action caused by a pair of externally applied equal and oppositely directed force couples acting on parallel planes or by a single external couple applied to a member that has one end fixed against rotation.

Stress terms

Uniaxial stress is expressed by

    \sigma=\frac{F}{A},

where F is the force [N] acting on an area A [m2]. The area can be the undeformed area or the deformed area, depending on whether engineering stress or true stress is of interest.
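The distinction between engineering stress (force over the undeformed area) and true stress (force over the current, deformed area) can be made concrete with a short sketch; the force and bar dimensions below are hypothetical:

```python
def engineering_stress(force, original_area):
    """sigma = F / A0, using the undeformed cross-sectional area [Pa]."""
    return force / original_area

def true_stress(force, current_area):
    """sigma = F / A, using the current (deformed) cross-sectional area [Pa]."""
    return force / current_area

F = 50e3      # N, applied tensile force (hypothetical)
A0 = 1.0e-4   # m^2, undeformed area of a 10 mm x 10 mm bar
A = 0.95e-4   # m^2, reduced area after some necking (hypothetical)

print(engineering_stress(F, A0) / 1e6)  # 500.0 (MPa)
print(true_stress(F, A) / 1e6)          # about 526.3 (MPa); true stress exceeds engineering stress in tension
```

The two measures coincide at small strains and diverge as the cross section changes, which is why the text distinguishes engineering from true stress-strain curves.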

    Compressive stress (or compression) is the stress state caused by an applied load that acts to reduce the length of the material (compression member) in the axis of the applied load; in other words, it is the stress state caused by squeezing the material. A simple case of compression is the uniaxial compression induced by the action of opposite, pushing forces. Compressive strength for materials is generally higher than their tensile strength. However, structures loaded in compression are subject to additional failure modes dependent on geometry, such as buckling.

    Tensile stress is the stress state caused by an applied load that tends to elongate the material in the axis of the applied load; in other words, it is the stress caused by pulling the material. The strength of structures of equal cross-sectional area loaded in tension is independent of the shape of the cross section. Materials loaded in tension are susceptible to stress concentrations at material defects or abrupt changes in geometry. However, materials exhibiting ductile behavior (most metals, for example) can tolerate some defects, while brittle materials (such as ceramics) can fail well below their ultimate material strength.

    Shear stress is the stress state caused by the combined energy of a pair of opposing forces acting along parallel lines of action through the material, in other words the stress caused by faces of the material sliding relative to one another. An example is cutting paper with scissors or stresses due to torsional loading.

Strength terms

    Yield strength is the lowest stress that produces a permanent deformation in a material. In some materials, like aluminium alloys, the point of yielding is difficult to identify, thus it is usually defined as the stress required to cause 0.2% plastic strain. This is called a 0.2% proof stress.

    Compressive strength is a limit state of compressive stress that leads to failure in the manner of ductile failure (infinite theoretical yield) or brittle failure (rupture as the result of crack propagation, or sliding along a weak plane - see shear strength).

    Tensile strength or ultimate tensile strength is a limit state of tensile stress that leads to tensile failure in the manner of ductile failure (yield as the first stage of that failure, some hardening in the second stage and breakage after a possible "neck" formation) or brittle failure (sudden breaking in two or more pieces at a low stress state). Tensile strength can be quoted as either true stress or engineering stress.

    Fatigue strength is a measure of the strength of a material or a component under cyclic loading, and is usually more difficult to assess than the static strength measures. Fatigue strength is quoted as stress amplitude or stress range (\Delta\sigma= \sigma_\mathrm{max} - \sigma_\mathrm{min}), usually at zero mean stress, along with the number of cycles to failure under that condition of stress.

    Impact strength is the capability of the material to withstand a suddenly applied load, and is expressed in terms of energy. It is often measured with the Izod or Charpy impact test, both of which measure the impact energy required to fracture a sample. Volume, modulus of elasticity, distribution of forces, and yield strength all affect the impact strength of a material. For a material or object to have high impact strength, the stresses must be distributed evenly throughout the object. It must also have a large volume with a low modulus of elasticity and a high material yield strength.

Strain (deformation) terms

    Deformation of the material is the change in geometry created when stress is applied (in the form of force loading, gravitational field, acceleration, thermal expansion, etc.). Deformation is expressed by the displacement field of the material.
    Strain or reduced deformation is a mathematical term that expresses how the deformation varies across the material: it is the deformation per unit length. In the case of uniaxial loading of a specimen (for example a bar element), strain is the quotient of the displacement and the length of the specimen. For 3D displacement fields it is expressed as derivatives of the displacement functions, forming a second-order tensor (with 6 independent elements).
    Deflection is a term to describe the magnitude to which a structural element bends under a load.

Stress-strain relations

Basic static response of a specimen under tension

    Elasticity is the ability of a material to return to its previous shape after stress is released. In many materials, the applied stress is directly proportional to the resulting strain (up to a certain limit), and a graph of the two quantities is a straight line.

The slope of this line is known as Young's Modulus, or the "Modulus of Elasticity." The Modulus of Elasticity can be used to determine the stress-strain relationship in the linear-elastic portion of the stress-strain curve. The linear-elastic region is either below the yield point, or if a yield point is not easily identified on the stress-strain plot it is defined to be between 0 and 0.2% strain, and is defined as the region of strain in which no yielding (permanent deformation) occurs.
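In the linear-elastic region this proportionality is just Hooke's law, sigma = E * epsilon. A small sketch, using an assumed handbook value of E for steel:

```python
def linear_elastic_stress(strain, youngs_modulus):
    """Hooke's law in the linear-elastic region: sigma = E * epsilon [Pa]."""
    return youngs_modulus * strain

E_STEEL = 200e9   # Pa, a commonly quoted (assumed) Young's modulus for steel

# 0.1% strain, still inside the linear-elastic region:
print(linear_elastic_stress(0.001, E_STEEL) / 1e6)  # 200.0 (MPa)
```

The relation only holds below the yield point; beyond it, the permanent (plastic) part of the strain is not recovered when the load is removed.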

    Plasticity or plastic deformation is the opposite of elastic deformation and is defined as unrecoverable strain. Plastic deformation is retained after the release of the applied stress. Most materials in the linear-elastic category are usually capable of plastic deformation. Brittle materials, like ceramics, do not experience any plastic deformation and will fracture under relatively low stress. Materials such as metals usually experience a small amount of plastic deformation before failure, while ductile metals such as copper and lead, or polymers, will plastically deform much more.

Consider the difference between a carrot and chewed bubble gum. The carrot will stretch very little before breaking. The chewed bubble gum, on the other hand, will plastically deform enormously before finally breaking.
Design terms

Ultimate strength is an attribute related to a material, rather than just a specific specimen made of the material, and as such it is quoted as the force per unit of cross section area (N/m²). The ultimate strength is the maximum stress that a material can withstand before it breaks or weakens. For example, the ultimate tensile strength (UTS) of AISI 1018 Steel is 440 MN/m². In general, the SI unit of stress is the pascal, where 1 Pa = 1 N/m². In Imperial units, the unit of stress is given as lbf/in² or pounds-force per square inch. This unit is often abbreviated as psi. One thousand psi is abbreviated ksi.

A factor of safety is a design criterion that an engineered component or structure must achieve: FS = UTS/R, where FS is the factor of safety, R the applied stress, and UTS the ultimate stress (psi or N/m²).

Margin of safety is also sometimes used as a design criterion. It is defined as MS = Failure Load/(Factor of Safety × Predicted Load) − 1.

For example, to achieve a factor of safety of 4, the allowable stress in an AISI 1018 steel component can be calculated as R = UTS/FS = 440/4 = 110 MPa, or R = 110×10⁶ N/m². Such allowable stresses are also known as "design stresses" or "working stresses."
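The factor-of-safety arithmetic above can be sketched directly; the AISI 1018 numbers come from the text, while the margin-of-safety loads are hypothetical:

```python
def allowable_stress(uts, fs):
    """Allowable (design) stress R = UTS / FS, per the definition above."""
    return uts / fs

def margin_of_safety(failure_load, fs, predicted_load):
    """MS = failure load / (FS * predicted load) - 1."""
    return failure_load / (fs * predicted_load) - 1.0

# AISI 1018 steel, UTS = 440 MPa, factor of safety 4:
print(allowable_stress(440e6, 4) / 1e6)  # 110.0 (MPa)

# Hypothetical loads: failure at 400 kN, predicted service load 100 kN, FS = 2:
print(margin_of_safety(400e3, 2.0, 100e3))  # 1.0 (positive, so the design has margin)
```

A margin of safety of zero means the part fails exactly at the factored load; negative values indicate an inadequate design.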

Design stresses that have been determined from the ultimate or yield point values of the materials give safe and reliable results only for the case of static loading. Many machine parts fail when subjected to non-steady, continuously varying loads even though the developed stresses are below the yield point. Such failures are called fatigue failures. The failure is by a fracture that appears brittle, with little or no visible evidence of yielding. However, when the stress is kept below the "fatigue stress" or "endurance limit stress", the part will endure indefinitely. A purely reversing or cyclic stress is one that alternates between equal positive and negative peak stresses during each cycle of operation; its average stress is zero. When a part is subjected to a cyclic stress, also known as a stress range (Sr), it has been observed that the part fails after a number of stress reversals (N) even if the magnitude of the stress range is below the material's yield strength. Generally, the higher the stress range, the fewer the reversals needed for failure.
Failure theories

There are four important failure theories: maximum shear stress theory, maximum normal stress theory, maximum strain energy theory, and maximum distortion energy theory. Of these four, the maximum normal stress theory is applicable only to brittle materials; the remaining three apply to ductile materials. Of the latter three, the distortion energy theory provides the most accurate results in the majority of stress conditions. The strain energy theory needs the value of Poisson's ratio of the part material, which is often not readily available. The maximum shear stress theory is conservative. For simple unidirectional normal stresses all theories are equivalent, which means all theories will give the same result.

    Maximum shear stress theory - This theory postulates that failure will occur if the magnitude of the maximum shear stress in the part exceeds the shear strength of the material determined from uniaxial testing.

    Maximum normal stress theory - This theory postulates that failure will occur if the maximum normal stress in the part exceeds the ultimate tensile stress of the material as determined from uniaxial testing. This theory deals with brittle materials only. The maximum tensile stress should be less than or equal to the ultimate tensile stress divided by the factor of safety. The magnitude of the maximum compressive stress should be less than the ultimate compressive stress divided by the factor of safety.

    Maximum strain energy theory - This theory postulates that failure will occur when the strain energy per unit volume due to the applied stresses in a part equals the strain energy per unit volume at the yield point in uniaxial testing.

    Maximum distortion energy theory - This theory is also known as shear energy theory or von Mises-Hencky theory. This theory postulates that failure will occur when the distortion energy per unit volume due to the applied stresses in a part equals the distortion energy per unit volume at the yield point in uniaxial testing. The total elastic energy due to strain can be divided into two parts: one part causes change in volume, and the other part causes change in shape. Distortion energy is the amount of energy that is needed to change the shape.

    Fracture mechanics was established by Alan Arnold Griffith and George Rankine Irwin. This important theory quantifies the toughness of a material in the presence of a crack.

    Fractology was proposed by Takeo Yokobori; it holds that the various fracture laws, including the creep rupture criterion, must be combined nonlinearly.
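The distortion energy (von Mises) criterion described above is easy to state computationally: from the principal stresses one forms an equivalent stress and compares it with the uniaxial yield strength. A minimal sketch, with hypothetical stress values:

```python
import math

def von_mises_stress(s1, s2, s3):
    """Equivalent (von Mises) stress from the three principal stresses [Pa]."""
    return math.sqrt(((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2) / 2.0)

def yields_by_distortion_energy(s1, s2, s3, yield_strength):
    """Distortion energy theory: yielding when the equivalent stress reaches yield."""
    return von_mises_stress(s1, s2, s3) >= yield_strength

# Uniaxial check (s2 = s3 = 0): the criterion reduces to s1 >= yield, matching
# the statement that all four theories agree for simple unidirectional stress.
print(von_mises_stress(250e6, 0.0, 0.0) / 1e6)  # 250.0
```

For pure shear (principal stresses tau, -tau, 0) the same formula gives an equivalent stress of tau * sqrt(3), which is why the criterion predicts shear yielding at about 58% of the tensile yield strength.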

Microstructure

A material's strength is dependent on its microstructure, and the engineering processes to which a material is subjected can alter this microstructure. The strengthening mechanisms that alter the strength of a material include work hardening, solid solution strengthening, precipitation hardening, and grain boundary strengthening, and can be explained both quantitatively and qualitatively. Strengthening mechanisms carry the caveat that some other mechanical properties of the material may degrade in the attempt to make it stronger. For example, in grain boundary strengthening, although yield strength is maximized with decreasing grain size, very small grain sizes ultimately make the material brittle. In general, the yield strength of a material is an adequate indicator of its mechanical strength; since yield strength is the parameter that predicts plastic deformation, one can make informed decisions on how to increase the strength of a material depending on its microstructural properties and the desired end effect. Strength is expressed in terms of compressive strength, tensile strength, and shear strength, namely the limit states of compressive, tensile, and shear stress, respectively. The effects of dynamic loading are probably the most important practical consideration of the strength of materials, especially the problem of fatigue. Repeated loading often initiates brittle cracks, which grow until failure occurs. The cracks always start at stress concentrations, especially at changes in cross-section of the product, near holes and corners.
 

Computational Fluid Dynamics (CFD)

Computational fluid dynamics, usually abbreviated as CFD, is a branch of fluid mechanics that uses numerical methods and algorithms to solve and analyze problems that involve fluid flows. Computers are used to perform the calculations required to simulate the interaction of liquids and gases with surfaces defined by boundary conditions. With high-speed supercomputers, better solutions can be achieved. Ongoing research yields software that improves the accuracy and speed of complex simulation scenarios such as transonic or turbulent flows. Initial experimental validation of such software is performed using a wind tunnel with the final validation coming in full-scale testing, e.g. flight tests.

The fundamental basis of almost all CFD problems is the Navier–Stokes equations, which define any single-phase (gas or liquid, but not both) fluid flow. These equations can be simplified by removing terms describing viscous actions to yield the Euler equations. Further simplification, by removing terms describing vorticity, yields the full potential equations. Finally, for small perturbations in subsonic and supersonic flows (not transonic or hypersonic) these equations can be linearized to yield the linearized potential equations.

Historically, methods were first developed to solve the linearized potential equations. Two-dimensional (2D) methods, using conformal transformations of the flow about a cylinder to the flow about an airfoil, were developed in the 1930s. The computer power available paced the development of three-dimensional methods. The first work using computers to model fluid flow, as governed by the Navier-Stokes equations, was performed at Los Alamos National Labs in the T3 group. This group was led by Francis H. Harlow, who is widely considered one of the pioneers of CFD. From 1957 to the late 1960s, this group developed a variety of numerical methods to simulate transient two-dimensional fluid flows, such as the particle-in-cell method (Harlow, 1957), the fluid-in-cell method (Gentry, Martin and Daly, 1966), the vorticity-stream-function method (Jake Fromm, 1963), and the marker-and-cell method (Harlow and Welch, 1965). Fromm's vorticity-stream-function method for 2D, transient, incompressible flow was the first treatment in the world of strongly contorting incompressible flows.

The first paper with a three-dimensional model was published by John Hess and A.M.O. Smith of Douglas Aircraft in 1967. This method discretized the surface of the geometry with panels, giving rise to this class of programs being called panel methods. Their method itself was simplified, in that it did not include lifting flows and hence was mainly applied to ship hulls and aircraft fuselages. The first lifting panel code (A230) was described in a paper written by Paul Rubbert and Gary Saaris of Boeing Aircraft in 1968. In time, more advanced three-dimensional panel codes were developed at Boeing (PANAIR, A502), Lockheed (Quadpan), Douglas (HESS), McDonnell Aircraft (MACAERO), NASA (PMARC) and Analytical Methods (WBAERO, USAERO and VSAERO). Some (PANAIR, HESS and MACAERO) were higher-order codes, using higher-order distributions of surface singularities, while others (Quadpan, PMARC, USAERO and VSAERO) used single singularities on each surface panel. The advantage of the lower-order codes was that they ran much faster on the computers of the time. Today, VSAERO has grown into a multi-order code and is the most widely used program of this class. It has been used in the development of many submarines, surface ships, automobiles, helicopters, aircraft, and more recently wind turbines. Its sister code, USAERO, is an unsteady panel method that has also been used for modeling such things as high-speed trains and racing yachts. The NASA PMARC code derives from an early version of VSAERO, and a derivative of PMARC, named CMARC, is also commercially available.

Methodology

In all of these approaches the same basic procedure is followed.

    During preprocessing
        The geometry (physical bounds) of the problem is defined.
        The volume occupied by the fluid is divided into discrete cells (the mesh). The mesh may be uniform or non-uniform.
        The physical modeling is defined – for example, the equations of motion + enthalpy + radiation + species conservation.
        Boundary conditions are defined. This involves specifying the fluid behaviour and properties at the boundaries of the problem. For transient problems, the initial conditions are also defined.
    The simulation is started and the equations are solved iteratively as a steady-state or transient.
    Finally a postprocessor is used for the analysis and visualization of the resulting solution.
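The preprocess–solve–postprocess cycle above can be sketched in miniature. The example below is purely illustrative (not from any real CFD package): it meshes a unit interval, iterates a discrete 1-D heat conduction equation to steady state, and extracts a quantity of interest.

```python
# Minimal sketch of the preprocess -> solve -> postprocess cycle, using
# 1-D steady heat conduction as a stand-in for a full CFD problem.
# All names and parameter values here are illustrative assumptions.

def solve_1d_conduction(n_cells=50, tol=1e-10, max_iter=100_000):
    # Preprocessing: the geometry is the unit interval, meshed into
    # n_cells uniform cells; boundary conditions fix the temperature
    # at each end, and the interior starts from an initial guess.
    t_left, t_right = 0.0, 1.0
    t = [0.0] * (n_cells + 1)          # nodal temperatures
    t[0], t[-1] = t_left, t_right

    # Solution: iterate the discrete heat equation (Jacobi sweeps)
    # until the update falls below the steady-state tolerance.
    for _ in range(max_iter):
        t_new = t[:]
        for i in range(1, n_cells):
            t_new[i] = 0.5 * (t[i - 1] + t[i + 1])
        change = max(abs(a - b) for a, b in zip(t, t_new))
        t = t_new
        if change < tol:
            break

    # Postprocessing: return the field for analysis/visualization.
    return t

temps = solve_1d_conduction()
print(temps[len(temps) // 2])  # steady solution is linear, so ~0.5
```

With fixed end temperatures the converged profile is linear between the boundary values, which makes the sketch easy to check.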

Discretization methods

The stability of the chosen discretisation is generally established numerically rather than analytically, as with simple linear problems. Special care must also be taken to ensure that the discretisation handles discontinuous solutions gracefully. The Euler equations and Navier–Stokes equations both admit shocks and contact surfaces.

Some of the discretisation methods being used are:
Finite volume method

Main article: Finite volume method

The finite volume method (FVM) is a common approach used in CFD codes, as it has an advantage in memory usage and solution speed, especially for large problems, high Reynolds number turbulent flows, and source term dominated flows (like combustion).

In the finite volume method, the governing partial differential equations (typically the Navier-Stokes equations, the mass and energy conservation equations, and the turbulence equations) are recast in a conservative form, and then solved over discrete control volumes. This discretisation guarantees the conservation of fluxes through a particular control volume. The finite volume equation yields governing equations in the form,

    \frac{\partial}{\partial t}\iiint Q\, dV + \iint F\, d\mathbf{A} = 0,

where Q is the vector of conserved variables, F is the vector of fluxes (see Euler equations or Navier–Stokes equations), V is the volume of the control volume element, and \mathbf{A} is the surface area of the control volume element.
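The conservation guarantee can be seen directly in a toy example. The sketch below (an assumption-laden illustration, not production code) applies first-order upwind finite volume fluxes to 1-D linear advection with periodic boundaries: every flux leaves one cell and enters its neighbour, so the cell-integrated quantity is conserved to round-off.

```python
# Minimal 1-D finite volume sketch for linear advection dq/dt + a dq/dx = 0
# with periodic boundaries and first-order upwind fluxes. Parameters are
# illustrative only; the CFL number a*dt/dx is 0.5 here.

def fvm_advect(q, a=1.0, dx=0.1, dt=0.05, steps=100):
    n = len(q)
    for _ in range(steps):
        # Upwind flux through the left face of each cell (a > 0 assumed);
        # face i sits to the left of cell i.
        flux = [a * q[i - 1] for i in range(n)]
        # Each cell loses what leaves through its right face and gains
        # what enters through its left face: telescoping sum => conservation.
        q = [q[i] - dt / dx * (flux[(i + 1) % n] - flux[i]) for i in range(n)]
    return q

q0 = [1.0 if 3 <= i < 7 else 0.0 for i in range(20)]
q1 = fvm_advect(q0)
print(sum(q0), sum(q1))  # totals agree: the scheme is conservative
```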
Finite element method

Main article: Finite element method

The finite element method (FEM) is used in structural analysis of solids, but is also applicable to fluids. However, the FEM formulation requires special care to ensure a conservative solution. The FEM formulation has been adapted for use with fluid dynamics governing equations. Although FEM must be carefully formulated to be conservative, it is much more stable than the finite volume approach. However, FEM can require more memory and has slower solution times than the FVM.

In this method, a weighted residual equation is formed:

    R_i = \iiint W_i Q \, dV^e

where R_i is the equation residual at an element vertex i, Q is the conservation equation expressed on an element basis, W_i is the weight factor, and V^{e} is the volume of the element.
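A minimal worked instance of the weighted-residual idea, under simple assumptions of my choosing (1-D Poisson problem, piecewise-linear "hat" test and trial functions on a uniform mesh): the Galerkin integrals reduce to a tridiagonal stiffness system, solved here with the Thomas algorithm.

```python
# Sketch of a Galerkin finite element solve for -u'' = 1 on (0,1) with
# u(0) = u(1) = 0, using piecewise-linear hat functions. Illustrative only.

def fem_poisson_1d(n_elems=8):
    h = 1.0 / n_elems
    n = n_elems - 1                      # interior nodes (unknowns)
    # Assembled tridiagonal stiffness matrix and load vector:
    # K = tridiag(-1/h, 2/h, -1/h), f_i = integral of 1 * hat_i = h.
    diag = [2.0 / h] * n
    off = [-1.0 / h] * (n - 1)
    f = [h * 1.0] * n

    # Thomas algorithm (tridiagonal forward elimination + back substitution).
    for i in range(1, n):
        m = off[i - 1] / diag[i - 1]
        diag[i] -= m * off[i - 1]
        f[i] -= m * f[i - 1]
    u = [0.0] * n
    u[-1] = f[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (f[i] - off[i] * u[i + 1]) / diag[i]
    return u                             # interior nodal values

u = fem_poisson_1d()
# Exact solution is u(x) = x(1-x)/2; linear FEM is nodally exact here,
# so the node at x = 0.5 recovers 0.125.
print(u[3])
```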
Finite difference method

Main article: Finite difference method

The finite difference method (FDM) has historical importance and is simple to program. It is currently used in only a few specialized codes, which handle complex geometry with high accuracy and efficiency by using embedded boundaries or overlapping grids (with the solution interpolated across each grid).

    \frac{\partial Q}{\partial t}+ \frac{\partial F}{\partial x}+ \frac{\partial G}{\partial y}+ \frac{\partial H}{\partial z}=0

where Q is the vector of conserved variables, and F, G, and H are the fluxes in the x, y, and z directions respectively.
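The building block of any finite difference code is the replacement of derivatives by divided differences on a grid. A quick sketch (illustrative only, using a function with a known derivative) shows the second-order accuracy of the central difference:

```python
# The central difference (f(x+h) - f(x-h)) / 2h approximates df/dx with
# truncation error O(h^2). Halving h should therefore cut the error ~4x.

import math

def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2.0 * h)

# d/dx sin(x) = cos(x): compare errors at two step sizes.
err_h = abs(central_diff(math.sin, 1.0, 1e-2) - math.cos(1.0))
err_h2 = abs(central_diff(math.sin, 1.0, 5e-3) - math.cos(1.0))
print(err_h / err_h2)  # ~4, confirming second-order accuracy
```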
Spectral element method

Main article: Spectral element method

The spectral element method is a finite-element-type method. It requires the mathematical problem (the partial differential equation) to be cast in a weak formulation. This is typically done by multiplying the differential equation by an arbitrary test function and integrating over the whole domain. Purely mathematically, the test functions are completely arbitrary: they belong to an infinite-dimensional function space. Clearly an infinite-dimensional function space cannot be represented on a discrete spectral element mesh, and this is where the spectral element discretization begins. The most crucial thing is the choice of interpolating and testing functions. In a standard, low-order FEM in 2D, for quadrilateral elements the most typical choice is the bilinear test or interpolating function of the form v(x,y) = ax + by + cxy + d. In a spectral element method, however, the interpolating and test functions are chosen to be polynomials of very high order (typically e.g. of the 10th order in CFD applications). This guarantees the rapid convergence of the method. Furthermore, very efficient integration procedures must be used, since the number of integrations to be performed in numerical codes is large. Thus, high-order Gauss integration quadratures are employed, since they achieve the highest accuracy with the smallest number of computations. At present there are some academic CFD codes based on the spectral element method, and more are under development as new time-stepping schemes arise in the scientific world.
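The reason Gauss quadrature is the workhorse here is its optimality: an n-point Gauss–Legendre rule integrates polynomials up to degree 2n−1 exactly. A hand-coded sketch of the 2-point rule on [−1, 1] (nodes ±1/√3, weights 1) makes this concrete:

```python
# 2-point Gauss-Legendre quadrature on [-1, 1]: exact for all polynomials
# of degree <= 3, using only two function evaluations.

import math

def gauss2(f):
    x = 1.0 / math.sqrt(3.0)
    return f(-x) + f(x)              # both weights equal 1

# Degree-3 test polynomial; its exact integral over [-1, 1] is 4
# (odd terms vanish, 3x^2 contributes 2, the constant contributes 2).
def poly(x):
    return 4 * x**3 + 3 * x**2 + 2 * x + 1

print(gauss2(poly))  # 4.0 (exact up to round-off)
```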
Boundary element method

Main article: Boundary element method

In the boundary element method, the boundary occupied by the fluid is divided into a surface mesh.
High-resolution discretization schemes

Main article: High-resolution scheme

High-resolution schemes are used where shocks or discontinuities are present. Capturing sharp changes in the solution requires the use of second- or higher-order numerical schemes that do not introduce spurious oscillations. This usually necessitates the application of flux limiters to ensure that the solution is total variation diminishing.
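A minimal sketch of a limited scheme, assuming the simplest setting (1-D linear advection, periodic boundaries, the minmod limiter): the slope is zeroed at extrema, so the total variation of the solution cannot grow.

```python
# MUSCL-type upwind advection with a minmod flux limiter. For linear
# advection with CFL number c <= 1 this scheme is total variation
# diminishing (TVD). All parameter choices are illustrative.

def minmod(a, b):
    # Limited slope: zero at extrema, else the smaller-magnitude difference.
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def tvd_step(q, c=0.5):
    # One step of limited upwind advection, periodic boundaries.
    n = len(q)
    s = [minmod(q[i] - q[i - 1], q[(i + 1) % n] - q[i]) for i in range(n)]
    # Reconstructed left state at the right face of each cell.
    qface = [q[i] + 0.5 * (1.0 - c) * s[i] for i in range(n)]
    return [q[i] - c * (qface[i] - qface[i - 1]) for i in range(n)]

def total_variation(q):
    n = len(q)
    return sum(abs(q[i] - q[i - 1]) for i in range(n))

q = [1.0 if 5 <= i < 10 else 0.0 for i in range(30)]
tv0 = total_variation(q)
for _ in range(20):
    q = tvd_step(q)
print(total_variation(q) <= tv0 + 1e-12)  # True: no spurious oscillations grow
```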
Turbulence models

In studying turbulent flows, the objective is to obtain a theory or a model that can yield quantities of interest, such as velocities. For turbulent flow, the range of length scales and complexity of phenomena make most approaches impractical. The primary approach in this case is to create numerical models to approximate the properties of interest. A selection of commonly used computational models for turbulent flows is presented in this section.

The chief difficulty in modeling turbulent flows comes from the wide range of length and time scales associated with turbulent flow. As a result, turbulence models can be classified based on the range of these length and time scales that are modeled and the range of length and time scales that are resolved. The more turbulent scales that are resolved, the finer the resolution of the simulation, and therefore the higher the computational cost. If a majority or all of the turbulent scales are modeled, the computational cost is very low, but the tradeoff comes in the form of decreased accuracy.

In addition to the wide range of length and time scales and the associated computational cost, the governing equations of fluid dynamics contain a non-linear convection term and a non-linear and non-local pressure gradient term. These nonlinear equations must be solved numerically with the appropriate boundary and initial conditions.
Reynolds-averaged Navier–Stokes

Main article: Reynolds-averaged Navier–Stokes equations

Reynolds-averaged Navier-Stokes (RANS) equations are the oldest approach to turbulence modeling. An ensemble version of the governing equations is solved, which introduces new apparent stresses known as Reynolds stresses. This adds a second order tensor of unknowns for which various models can provide different levels of closure. It is a common misconception that the RANS equations do not apply to flows with a time-varying mean flow because these equations are 'time-averaged'. In fact, statistically unsteady (or non-stationary) flows can equally be treated. This is sometimes referred to as URANS. There is nothing inherent in Reynolds averaging to preclude this, but the turbulence models used to close the equations are valid only as long as the time over which these changes in the mean occur is large compared to the time scales of the turbulent motion containing most of the energy.

RANS models can be divided into two broad approaches:

Boussinesq hypothesis

    This method involves using an algebraic equation for the Reynolds stresses, which includes determining the turbulent viscosity and, depending on the level of sophistication of the model, solving transport equations for the turbulent kinetic energy and dissipation. Models include k-ε (Launder and Spalding), Mixing Length Model (Prandtl), and Zero Equation Model (Cebeci and Smith). The models available in this approach are often referred to by the number of transport equations associated with the method. For example, the Mixing Length model is a "Zero Equation" model because no transport equations are solved; the k-\epsilon model is a "Two Equation" model because two transport equations (one for k and one for \epsilon) are solved.
Reynolds stress model (RSM)

    This approach attempts to actually solve transport equations for the Reynolds stresses. This means introducing several transport equations for all the Reynolds stresses, and hence this approach is much more costly in CPU effort.
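The zero-equation end of the Boussinesq family can be shown in a few lines. The sketch below implements Prandtl's mixing-length idea, nu_t = l_m² |du/dy| with l_m = κy near a wall; the velocity profile and all numbers are illustrative assumptions, not from any particular code.

```python
# Mixing-length (zero-equation) eddy viscosity: nu_t = l_m^2 * |du/dy|,
# with Prandtl's near-wall mixing length l_m = kappa * y.
# All values below are illustrative assumptions.

KAPPA = 0.41  # von Karman constant

def turbulent_viscosity(y, dudy, kappa=KAPPA):
    l_m = kappa * y                 # mixing length grows linearly off the wall
    return l_m**2 * abs(dudy)

# For a log-law profile u = (u_tau/kappa) ln(y) + C, du/dy = u_tau/(kappa*y),
# so nu_t reduces to kappa * u_tau * y -- the classic linear eddy viscosity.
u_tau = 0.05
y = 0.01
dudy = u_tau / (KAPPA * y)
print(turbulent_viscosity(y, dudy))  # equals kappa * u_tau * y
```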

Large eddy simulation

Main article: Large eddy simulation

Volume rendering of a non-premixed swirl flame as simulated by LES.

Large eddy simulation (LES) is a technique in which the smallest scales of the flow are removed through a filtering operation, and their effect modeled using subgrid scale models. This allows the largest and most important scales of the turbulence to be resolved, while greatly reducing the computational cost incurred by the smallest scales. This method requires greater computational resources than RANS methods, but is far cheaper than DNS.
Detached eddy simulation

Main article: Detached eddy simulation

Detached eddy simulation (DES) is a modification of a RANS model in which the model switches to a subgrid-scale formulation in regions fine enough for LES calculations. Regions near solid boundaries and where the turbulent length scale is less than the maximum grid dimension are assigned the RANS mode of solution. As the turbulent length scale exceeds the grid dimension, the regions are solved using the LES mode. Therefore the grid resolution for DES is not as demanding as for pure LES, thereby considerably cutting down the cost of the computation. Though DES was initially formulated for the Spalart-Allmaras model (Spalart et al., 1997), it can be implemented with other RANS models (Strelets, 2001) by appropriately modifying the length scale which is explicitly or implicitly involved in the RANS model. So while Spalart-Allmaras model based DES acts as LES with a wall model, DES based on other models (like two-equation models) behaves as a hybrid RANS-LES model. Grid generation is more complicated than for a simple RANS or LES case due to the RANS-LES switch. DES is a non-zonal approach and provides a single smooth velocity field across the RANS and LES regions of the solution.
Direct numerical simulation

Main article: Direct numerical simulation

Direct numerical simulation (DNS) resolves the entire range of turbulent length scales. This eliminates the need for turbulence models, but is extremely expensive. The computational cost is proportional to Re^{3}. DNS is intractable for flows with complex geometries or flow configurations.
Coherent vortex simulation

The coherent vortex simulation (CVS) approach decomposes the turbulent flow field into a coherent part, consisting of organized vortical motion, and an incoherent part, the random background flow. This decomposition is done using wavelet filtering. The approach has much in common with LES, since it uses decomposition and resolves only the filtered portion, but differs in that it does not use a linear, low-pass filter. Instead, the filtering operation is based on wavelets, and the filter can be adapted as the flow field evolves. Farge and Schneider tested the CVS method with two flow configurations and showed that the coherent portion of the flow exhibited the -\frac{40}{39} energy spectrum exhibited by the total flow and corresponded to coherent structures (vortex tubes), while the incoherent parts of the flow were composed of homogeneous background noise, which exhibited no organized structures. Goldstein and Vasilyev applied the FDV model to large eddy simulation, but did not assume that the wavelet filter completely eliminated all coherent motions from the subfilter scales. By employing both LES and CVS filtering, they showed that the SFS dissipation was dominated by the coherent portion of the SFS flow field.
PDF methods

Probability density function (PDF) methods for turbulence, first introduced by Lundgren, are based on tracking the one-point PDF of the velocity, f_{V}(\boldsymbol{v};\boldsymbol{x},t) d\boldsymbol{v}, which gives the probability of the velocity at point \boldsymbol{x} being between \boldsymbol{v} and \boldsymbol{v}+d\boldsymbol{v}. This approach is analogous to the kinetic theory of gases, in which the macroscopic properties of a gas are described by a large number of particles. PDF methods are unique in that they can be applied in the framework of a number of different turbulence models; the main differences occur in the form of the PDF transport equation. For example, in the context of large eddy simulation, the PDF becomes the filtered PDF. PDF methods can also be used to describe chemical reactions, and are particularly useful for simulating chemically reacting flows because the chemical source term is closed and does not require a model. The PDF is commonly tracked by using Lagrangian particle methods; when combined with large eddy simulation, this leads to a Langevin equation for subfilter particle evolution.
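The Lagrangian particle view can be illustrated with a drastically simplified, hypothetical model: a single velocity component following an Ornstein–Uhlenbeck-type Langevin equation, whose stationary velocity PDF is Gaussian. The timescale T, intensity sig and all numbers below are assumptions for illustration only, not a real turbulence closure.

```python
# Toy Langevin model for one notional-particle velocity component:
#   du = -(u/T) dt + sqrt(2*sig^2/T) dW
# Its stationary PDF is Gaussian with variance sig^2, which the long-run
# sample variance of the path should approach. Illustrative values only.

import math
import random

def langevin_path(steps=20_000, dt=0.01, T=1.0, sig=1.0, seed=1):
    rng = random.Random(seed)       # fixed seed for reproducibility
    u, samples = 0.0, []
    for _ in range(steps):
        du = -u / T * dt + math.sqrt(2.0 * sig**2 / T * dt) * rng.gauss(0.0, 1.0)
        u += du
        samples.append(u)
    return samples

us = langevin_path()
var = sum(u * u for u in us) / len(us)
print(var)  # hovers near the target variance sig^2 = 1
```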

Vortex method

The vortex method is a grid-free technique for the simulation of turbulent flows. It uses vortices as the computational elements, mimicking the physical structures in turbulence. Vortex methods were developed as a grid-free methodology that would not be limited by the fundamental smoothing effects associated with grid-based methods. To be practical, however, vortex methods require means for rapidly computing velocities from the vortex elements – in other words, they require the solution to a particular form of the N-body problem (in which the motion of N objects is tied to their mutual influences). A breakthrough came in the late 1980s with the development of the fast multipole method (FMM), an algorithm by V. Rokhlin (Yale) and L. Greengard (Courant Institute). This breakthrough paved the way to practical computation of the velocities from the vortex elements and is the basis of successful algorithms. They are especially well-suited to simulating filamentary motion, such as wisps of smoke, in real-time simulations such as video games, because of the fine detail achieved using minimal computation.
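The N-body evaluation at the heart of the method can be sketched directly. The code below is the naive O(N²) direct sum for 2-D point vortices — exactly the computation the fast multipole method accelerates; the setup is purely illustrative.

```python
# Velocity induced at a point by a set of 2-D point vortices (direct
# Biot-Savart sum). Each vortex is (xv, yv, gamma); a vortex with positive
# circulation gamma induces counterclockwise flow around itself.

import math

def induced_velocity(x, y, vortices):
    u = v = 0.0
    for xv, yv, gamma in vortices:
        dx, dy = x - xv, y - yv
        r2 = dx * dx + dy * dy
        if r2 == 0.0:
            continue                   # a vortex induces no velocity on itself
        u += -gamma * dy / (2.0 * math.pi * r2)
        v += gamma * dx / (2.0 * math.pi * r2)
    return u, v

# A single unit vortex at the origin: at (1, 0) the flow is purely tangential.
u, v = induced_velocity(1.0, 0.0, [(0.0, 0.0, 1.0)])
print(u, v)  # (0.0, 1/(2*pi))
```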

Software based on the vortex method offers a new means for solving tough fluid dynamics problems with minimal user intervention. All that is required is specification of the problem geometry and the setting of boundary and initial conditions. Among the significant advantages of this technology:

    It is practically grid-free, thus eliminating numerous iterations associated with RANS and LES.
    All problems are treated identically. No modeling or calibration inputs are required.
    Time-series simulations, which are crucial for correct analysis of acoustics, are possible.
    The small scale and large scale are accurately simulated at the same time.

Vorticity confinement method

Main article: Vorticity confinement

The vorticity confinement (VC) method is an Eulerian technique used in the simulation of turbulent wakes. It uses a solitary-wave-like approach to produce a stable solution with no numerical spreading. VC can capture the small-scale features to within as few as two grid cells. Within these features, a nonlinear difference equation is solved, as opposed to the finite difference equation. VC is similar to shock capturing methods, where conservation laws are satisfied, so that the essential integral quantities are accurately computed.
Linear eddy model

The Linear eddy model is a technique used to simulate the convective mixing that takes place in turbulent flow. Specifically, it provides a mathematical way to describe the interactions of a scalar variable within the vector flow field. It is primarily used in one-dimensional representations of turbulent flow, since it can be applied across a wide range of length scales and Reynolds numbers. This model is generally used as a building block for more complicated flow representations, as it provides high resolution predictions that hold across a large range of flow conditions.
Two-phase flow

The modeling of two-phase flow is still under development. Different methods have recently been proposed. The volume of fluid method has received much attention for problems that do not have dispersed particles, but the level set method and front tracking are also valuable approaches. Most of these methods are good either at maintaining a sharp interface or at conserving mass. This is crucial since the evaluation of the density, viscosity and surface tension is based on the values averaged over the interface. Lagrangian multiphase models, which are used for dispersed media, are based on solving the Lagrangian equation of motion for the dispersed phase.
Solution algorithms

Discretization in space produces a system of ordinary differential equations for unsteady problems and algebraic equations for steady problems. Implicit or semi-implicit methods are generally used to integrate the ordinary differential equations, producing a system of (usually) nonlinear algebraic equations. Applying a Newton or Picard iteration produces a system of linear equations which is nonsymmetric in the presence of advection and indefinite in the presence of incompressibility. Such systems, particularly in 3D, are frequently too large for direct solvers, so iterative methods are used, either stationary methods such as successive overrelaxation or Krylov subspace methods. Krylov methods such as GMRES, typically used with preconditioning, operate by minimizing the residual over successive subspaces generated by the preconditioned operator.
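As a concrete instance of the stationary methods mentioned above, here is a minimal successive overrelaxation (SOR) sketch on a small symmetric model system; the matrix, right-hand side and relaxation factor are illustrative assumptions.

```python
# Successive overrelaxation (SOR) on the 1-D Poisson-type system A x = b
# with A = tridiag(-1, 2, -1). Each sweep blends the Gauss-Seidel update
# with the current value via the relaxation factor omega (0 < omega < 2).

def sor(b, omega=1.5, sweeps=300):
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            left = x[i - 1] if i > 0 else 0.0
            right = x[i + 1] if i < n - 1 else 0.0
            x_gs = (b[i] + left + right) / 2.0     # Gauss-Seidel update
            x[i] = (1.0 - omega) * x[i] + omega * x_gs
    return x

n = 20
b = [1.0] * n
x = sor(b)
# Check convergence via the residual of A x = b.
res = max(abs(2 * x[i] - (x[i - 1] if i else 0.0)
              - (x[i + 1] if i < n - 1 else 0.0) - b[i])
          for i in range(n))
print(res)  # small: SOR has converged
```

For this symmetric positive definite system any 0 < omega < 2 converges; as the surrounding text notes, indefinite or advection-dominated systems need Krylov methods and problem-aware preconditioning instead.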

Multigrid has the advantage of asymptotically optimal performance on many problems. Traditional solvers and preconditioners are effective at reducing high-frequency components of the residual, but low-frequency components typically require many iterations to reduce. By operating on multiple scales, multigrid reduces all components of the residual by similar factors, leading to a mesh-independent number of iterations.

For indefinite systems, preconditioners such as incomplete LU factorization, additive Schwarz, and multigrid perform poorly or fail entirely, so the problem structure must be used for effective preconditioning. Methods commonly used in CFD are the SIMPLE and Uzawa algorithms which exhibit mesh-dependent convergence rates, but recent advances based on block LU factorization combined with multigrid for the resulting definite systems have led to preconditioners that deliver mesh-independent convergence rates.
Unsteady aerodynamics

CFD made a major breakthrough in the late 1970s with the introduction of LTRAN2, a 2-D code by Ballhaus and associates to model oscillating airfoils based on transonic small perturbation theory. It uses a Murman-Cole switch algorithm for modeling moving shock waves. It was later extended to 3-D by AFWAL/Boeing using a rotated difference scheme, resulting in LTRAN3.