Tuesday, 10 September 2013

STRENGTH OF MATERIALS

Mechanics of materials, also called strength of materials, is a subject which deals with the behavior of solid objects subjected to stresses and strains. The theory was built on mathematical models of one- and two-dimensional stress states, since the stress states in structural parts such as beams and shells can often be approximated as one- or two-dimensional. An important founding pioneer in mechanics of materials was Stephen Timoshenko.

The study of strength of materials often refers to various methods of calculating stresses in structural members, such as beams, columns and shafts. The methods employed to predict the response of a structure under loading and its susceptibility to various failure modes may take into account various properties of the materials other than material yield strength and ultimate strength; for example, failure by buckling is dependent on material stiffness and thus Young's Modulus.

In materials science, the strength of a material is its ability to withstand an applied stress without failure. The field of strength of materials deals with loads, deformations and the forces acting on a material. A load applied to a mechanical member will induce internal forces within the member called stresses. The stresses acting on the material cause deformation of the material, called strain, while the intensity of the internal forces is called stress. The applied stress may be tensile, compressive, or shear. The assessment of any member involves three different types of analysis: strength, stiffness and stability, where strength refers to the load-carrying capacity, stiffness refers to resistance to deformation or elongation, and stability refers to the ability to maintain the initial configuration. Material yield strength refers to the point on the engineering stress-strain curve (as opposed to the true stress-strain curve) beyond which the material experiences deformations that will not be completely reversed upon removal of the loading. The ultimate strength refers to the point on the engineering stress-strain curve corresponding to the stress that produces fracture.

Types of loadings

    Transverse loading - Forces applied perpendicular to the longitudinal axis of a member. Transverse loading causes the member to bend and deflect from its original position, with internal tensile and compressive strains accompanying the change in curvature of the member. Transverse loading also induces shear forces that cause shear deformation of the material and increase the transverse deflection of the member.
    Torsional loading - Twisting action caused by a pair of externally applied equal and oppositely directed force couples acting on parallel planes or by a single external couple applied to a member that has one end fixed against rotation.

Stress terms

Uniaxial stress is expressed by

    \sigma=\frac{F}{A},

where F is the force [N] acting on an area A [m²]. The area can be the undeformed area or the deformed area, depending on whether engineering stress or true stress is of interest.
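As a minimal numerical illustration of σ = F/A (the load and cross-section values below are hypothetical):

```python
def uniaxial_stress(force_n, area_m2):
    """Engineering stress sigma = F / A, in pascals [N/m^2]."""
    return force_n / area_m2

# Hypothetical example: a 10 kN axial load on a 20 mm x 20 mm square bar.
area = 0.020 * 0.020               # cross-sectional area: 4.0e-4 m^2
sigma = uniaxial_stress(10_000.0, area)
print(sigma / 1e6)                 # 25.0 -> the bar carries 25 MPa
```

Using the deformed area instead of the undeformed area in the same formula would give true stress rather than engineering stress.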

    Compressive stress (or compression) is the stress state caused by an applied load that acts to reduce the length of the material (compression member) in the axis of the applied load, in other words stress state caused by squeezing the material. A simple case of compression is the uniaxial compression induced by the action of opposite, pushing forces. Compressive strength for materials is generally higher than their tensile strength. However, structures loaded in compression are subject to additional failure modes dependent on geometry, such as buckling.

    Tensile stress is the stress state caused by an applied load that tends to elongate the material in the axis of the applied load, in other words the stress caused by pulling the material. The strength of structures of equal cross sectional area loaded in tension is independent of shape of the cross section. Materials loaded in tension are susceptible to stress concentrations such as material defects or abrupt changes in geometry. However, materials exhibiting ductile behavior (most metals for example) can tolerate some defects while brittle materials (such as ceramics) can fail well below their ultimate material strength.

    Shear stress is the stress state caused by a pair of opposing forces acting along parallel lines of action through the material, in other words the stress caused by faces of the material sliding relative to one another. An example is cutting paper with scissors, or the stresses due to torsional loading.

Strength terms

    Yield strength is the lowest stress that produces a permanent deformation in a material. In some materials, like aluminium alloys, the point of yielding is difficult to identify, thus it is usually defined as the stress required to cause 0.2% plastic strain. This is called a 0.2% proof stress.

    Compressive strength is a limit state of compressive stress that leads to failure in the manner of ductile failure (infinite theoretical yield) or brittle failure (rupture as the result of crack propagation, or sliding along a weak plane - see shear strength).

    Tensile strength or ultimate tensile strength is a limit state of tensile stress that leads to tensile failure in the manner of ductile failure (yield as the first stage of that failure, some hardening in the second stage and breakage after a possible "neck" formation) or brittle failure (sudden breaking in two or more pieces at a low stress state). Tensile strength can be quoted as either true stress or engineering stress.

    Fatigue strength is a measure of the strength of a material or a component under cyclic loading, and is usually more difficult to assess than the static strength measures. Fatigue strength is quoted as stress amplitude or stress range (\Delta\sigma= \sigma_\mathrm{max} - \sigma_\mathrm{min}), usually at zero mean stress, along with the number of cycles to failure under that condition of stress.

    Impact strength is the capability of the material to withstand a suddenly applied load and is expressed in terms of energy. It is often measured with the Izod impact strength test or Charpy impact test, both of which measure the impact energy required to fracture a sample. Volume, modulus of elasticity, distribution of forces, and yield strength all affect the impact strength of a material. For a material or object to have high impact strength, the stresses must be distributed evenly throughout the object. It must also have a large volume with a low modulus of elasticity and a high material yield strength.

Strain (deformation) terms

    Deformation of the material is the change in geometry created when stress is applied (in the form of force loading, gravitational field, acceleration, thermal expansion, etc.). Deformation is expressed by the displacement field of the material.
    Strain or reduced deformation is a mathematical term that expresses how the deformation varies across the material field. Strain is the deformation per unit length. In the case of uniaxial loading of a specimen (for example a bar element), strain is expressed as the quotient of the displacement and the original length of the specimen. For 3D displacement fields it is expressed as derivatives of the displacement functions in terms of a second-order tensor (with 6 independent elements).
    Deflection is a term to describe the magnitude to which a structural element bends under a load.
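For the uniaxial case described above, strain is simply the elongation divided by the original length; a small sketch with made-up numbers:

```python
def strain(delta_length, original_length):
    """Dimensionless strain: displacement divided by original length."""
    return delta_length / original_length

# Hypothetical example: a 2 m bar that elongates by 1 mm under load.
eps = strain(0.001, 2.0)
print(eps)   # 0.0005, i.e. 0.05% strain
```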

Stress-strain relations

Basic static response of a specimen under tension

    Elasticity is the ability of a material to return to its previous shape after stress is released. In many materials, the applied stress is directly proportional to the resulting strain (up to a certain limit), and a graph representing those two quantities is a straight line.

The slope of this line is known as Young's Modulus, or the "Modulus of Elasticity." The Modulus of Elasticity can be used to determine the stress-strain relationship in the linear-elastic portion of the stress-strain curve. The linear-elastic region is either below the yield point, or if a yield point is not easily identified on the stress-strain plot it is defined to be between 0 and 0.2% strain, and is defined as the region of strain in which no yielding (permanent deformation) occurs.
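Combining σ = F/A with Hooke's law σ = Eε gives the familiar elongation formula δ = FL/(AE) for a bar in the linear-elastic region. A sketch with illustrative numbers (the 200 GPa modulus is a typical textbook value for steel, assumed here rather than taken from the text):

```python
def elongation(force_n, length_m, area_m2, youngs_modulus_pa):
    """Axial elongation of a linear-elastic bar: delta = F*L / (A*E)."""
    return force_n * length_m / (area_m2 * youngs_modulus_pa)

# 50 kN on a 1 m rod with a 5 cm^2 cross-section, E = 200 GPa (assumed):
delta = elongation(50e3, 1.0, 5e-4, 200e9)
print(delta * 1000)   # 0.5 -> the rod stretches 0.5 mm
```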

    Plasticity or plastic deformation is the opposite of elastic deformation and is defined as unrecoverable strain. Plastic deformation is retained after the release of the applied stress. Most materials in the linear-elastic category are usually capable of plastic deformation. Brittle materials, like ceramics, do not experience any plastic deformation and will fracture under relatively low stress. Materials such as metals usually experience a small amount of plastic deformation before failure, while ductile metals such as copper and lead, or polymers, will plastically deform much more.

Consider the difference between a carrot and chewed bubble gum. The carrot will stretch very little before breaking. The chewed bubble gum, on the other hand, will plastically deform enormously before finally breaking.
Design terms

Ultimate strength is an attribute related to a material, rather than just a specific specimen made of the material, and as such it is quoted as the force per unit of cross section area (N/m²). The ultimate strength is the maximum stress that a material can withstand before it breaks or weakens. For example, the ultimate tensile strength (UTS) of AISI 1018 Steel is 440 MN/m². In general, the SI unit of stress is the pascal, where 1 Pa = 1 N/m². In Imperial units, the unit of stress is given as lbf/in² or pounds-force per square inch. This unit is often abbreviated as psi. One thousand psi is abbreviated ksi.

A factor of safety is a design criterion that an engineered component or structure must achieve: FS = UTS/R, where FS is the factor of safety, R the applied (working) stress, and UTS the ultimate stress (psi or N/m²).

Margin of safety is also sometimes used as a design criterion. It is defined as MS = Failure Load/(Factor of Safety × Predicted Load) - 1.

For example, to achieve a factor of safety of 4, the allowable stress in an AISI 1018 steel component can be calculated as R = UTS/FS = 440/4 = 110 MPa, or R = 110×10⁶ N/m². Such allowable stresses are also known as "design stresses" or "working stresses."
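The factor-of-safety and margin-of-safety calculations can be sketched directly; the failure and predicted loads in the margin-of-safety example are made up for illustration:

```python
def allowable_stress(uts_pa, fs):
    """Working (design) stress: R = UTS / FS."""
    return uts_pa / fs

def margin_of_safety(failure_load, fs, predicted_load):
    """MS = Failure Load / (FS * Predicted Load) - 1."""
    return failure_load / (fs * predicted_load) - 1.0

R = allowable_stress(440e6, 4.0)     # AISI 1018, UTS = 440 MPa (from the text)
print(R / 1e6)                       # 110.0 MPa

ms = margin_of_safety(400e3, 2.0, 150e3)   # hypothetical loads
print(round(ms, 3))                  # 0.333 -> positive, so the design passes
```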

Design stresses that have been determined from the ultimate or yield point values of the materials give safe and reliable results only for the case of static loading. Many machine parts fail when subjected to non-steady, continuously varying loads even though the developed stresses are below the yield point. Such failures are called fatigue failures. The failure is by a fracture that appears to be brittle with little or no visible evidence of yielding. However, when the stress is kept below the "fatigue stress" or "endurance limit stress", the part will endure indefinitely. A purely reversing or cyclic stress is one that alternates between equal positive and negative peak stresses during each cycle of operation. In a purely cyclic stress, the average stress is zero. When a part is subjected to a cyclic stress, also known as stress range (Sr), it has been observed that the failure of the part occurs after a number of stress reversals (N) even if the magnitude of the stress range is below the material's yield strength. Generally, the higher the stress range, the fewer the number of reversals needed for failure.
Failure theories

There are four important failure theories: maximum shear stress theory, maximum normal stress theory, maximum strain energy theory, and maximum distortion energy theory. Of these four, the maximum normal stress theory is applicable only to brittle materials, while the remaining three are applicable to ductile materials. Of the latter three, the distortion energy theory provides the most accurate results in the majority of stress conditions. The strain energy theory needs the value of Poisson's ratio of the part material, which is often not readily available. The maximum shear stress theory is conservative. For simple unidirectional normal stresses all theories are equivalent, which means all theories will give the same result.

    Maximum shear stress theory - This theory postulates that failure will occur if the magnitude of the maximum shear stress in the part exceeds the shear strength of the material determined from uniaxial testing.

    Maximum normal stress theory - This theory postulates that failure will occur if the maximum normal stress in the part exceeds the ultimate tensile stress of the material as determined from uniaxial testing. This theory deals with brittle materials only. The maximum tensile stress should be less than or equal to the ultimate tensile stress divided by the factor of safety. The magnitude of the maximum compressive stress should be less than the ultimate compressive stress divided by the factor of safety.

    Maximum strain energy theory - This theory postulates that failure will occur when the strain energy per unit volume due to the applied stresses in a part equals the strain energy per unit volume at the yield point in uniaxial testing.

    Maximum distortion energy theory - This theory is also known as shear energy theory or von Mises-Hencky theory. This theory postulates that failure will occur when the distortion energy per unit volume due to the applied stresses in a part equals the distortion energy per unit volume at the yield point in uniaxial testing. The total elastic energy due to strain can be divided into two parts: one part causes change in volume, and the other part causes change in shape. Distortion energy is the amount of energy that is needed to change the shape.

    Fracture mechanics was established by Alan Arnold Griffith and George Rankine Irwin. This theory quantifies the toughness of a material in the presence of a crack.

    Fractology was proposed by Takeo Yokobori, based on the idea that the various fracture laws, including the creep rupture criterion, must be combined nonlinearly.
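The distortion energy (von Mises) criterion described above has a standard closed form in terms of the principal stresses; a minimal sketch (the 250 MPa value is illustrative):

```python
import math

def von_mises(s1, s2, s3):
    """Von Mises equivalent stress from the three principal stresses."""
    return math.sqrt(((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2) / 2.0)

# Uniaxial tension (s2 = s3 = 0) recovers the applied stress itself:
print(von_mises(250e6, 0.0, 0.0) / 1e6)   # 250.0 MPa

# Failure is predicted when this equivalent stress reaches the uniaxial
# yield strength determined from testing.
```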

Microstructure

A material's strength is dependent on its microstructure. The engineering processes to which a material is subjected can alter this microstructure. The variety of strengthening mechanisms that alter the strength of a material includes work hardening, solid solution strengthening, precipitation hardening and grain boundary strengthening, and can be quantitatively and qualitatively explained. Strengthening mechanisms are accompanied by the caveat that some other mechanical properties of the material may degenerate in an attempt to make the material stronger. For example, in grain boundary strengthening, although yield strength is maximized with decreasing grain size, ultimately, very small grain sizes make the material brittle. In general, the yield strength of a material is an adequate indicator of the material's mechanical strength. Considered in tandem with the fact that the yield strength is the parameter that predicts plastic deformation in the material, one can make informed decisions on how to increase the strength of a material depending on its microstructural properties and the desired end effect. Strength is expressed in terms of compressive strength, tensile strength, and shear strength, namely the limit states of compressive stress, tensile stress and shear stress, respectively. The effects of dynamic loading are probably the most important practical consideration of the strength of materials, especially the problem of fatigue. Repeated loading often initiates brittle cracks, which grow until failure occurs. The cracks always start at stress concentrations, especially changes in cross-section of the product, near holes and corners.
 

Computational Fluid Dynamics (CFD)

Computational fluid dynamics, usually abbreviated as CFD, is a branch of fluid mechanics that uses numerical methods and algorithms to solve and analyze problems that involve fluid flows. Computers are used to perform the calculations required to simulate the interaction of liquids and gases with surfaces defined by boundary conditions. With high-speed supercomputers, better solutions can be achieved. Ongoing research yields software that improves the accuracy and speed of complex simulation scenarios such as transonic or turbulent flows. Initial experimental validation of such software is performed using a wind tunnel with the final validation coming in full-scale testing, e.g. flight tests.

The fundamental basis of almost all CFD problems are the Navier–Stokes equations, which define any single-phase (gas or liquid, but not both) fluid flow. These equations can be simplified by removing terms describing viscous actions to yield the Euler equations. Further simplification, by removing terms describing vorticity yields the full potential equations. Finally, for small perturbations in subsonic and supersonic flows (not transonic or hypersonic) these equations can be linearized to yield the linearized potential equations.

Historically, methods were first developed to solve the linearized potential equations. Two-dimensional (2D) methods, using conformal transformations of the flow about a cylinder to the flow about an airfoil, were developed in the 1930s. The computer power available paced the development of three-dimensional methods. The first work using computers to model fluid flow, as governed by the Navier-Stokes equations, was performed at Los Alamos National Labs in the T3 group. This group was led by Francis H. Harlow, who is widely considered one of the pioneers of CFD. From 1957 to the late 1960s, this group developed a variety of numerical methods to simulate transient two-dimensional fluid flows, such as the particle-in-cell method (Harlow, 1957), the fluid-in-cell method (Gentry, Martin and Daly, 1966), the vorticity stream function method (Jake Fromm, 1963), and the marker-and-cell method (Harlow and Welch, 1965). Fromm's vorticity-stream-function method for 2D, transient, incompressible flow was the first treatment of strongly contorting incompressible flows in the world.

The first paper with a three-dimensional model was published by John Hess and A.M.O. Smith of Douglas Aircraft in 1967. This method discretized the surface of the geometry with panels, giving rise to this class of programs being called Panel Methods. Their method itself was simplified, in that it did not include lifting flows and hence was mainly applied to ship hulls and aircraft fuselages. The first lifting Panel Code (A230) was described in a paper written by Paul Rubbert and Gary Saaris of Boeing Aircraft in 1968. In time, more advanced three-dimensional Panel Codes were developed at Boeing (PANAIR, A502), Lockheed (Quadpan), Douglas (HESS), McDonnell Aircraft (MACAERO), NASA (PMARC) and Analytical Methods (WBAERO, USAERO and VSAERO). Some (PANAIR, HESS and MACAERO) were higher order codes, using higher order distributions of surface singularities, while others (Quadpan, PMARC, USAERO and VSAERO) used single singularities on each surface panel. The advantage of the lower order codes was that they ran much faster on the computers of the time. Today, VSAERO has grown to be a multi-order code and is the most widely used program of this class. It has been used in the development of many submarines, surface ships, automobiles, helicopters, aircraft, and more recently wind turbines. Its sister code, USAERO, is an unsteady panel method that has also been used for modeling such things as high speed trains and racing yachts. The NASA PMARC code derives from an early version of VSAERO, and a derivative of PMARC, named CMARC, is also commercially available.

Methodology

In all of these approaches the same basic procedure is followed.

    During preprocessing
        The geometry (physical bounds) of the problem is defined.
        The volume occupied by the fluid is divided into discrete cells (the mesh). The mesh may be uniform or non-uniform.
        The physical modeling is defined – for example, the equations of motion + enthalpy + radiation + species conservation.
        Boundary conditions are defined. This involves specifying the fluid behaviour and properties at the boundaries of the problem. For transient problems, the initial conditions are also defined.
    The simulation is started and the equations are solved iteratively as a steady-state or transient problem.
    Finally a postprocessor is used for the analysis and visualization of the resulting solution.
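As a toy illustration of this preprocess/solve/postprocess cycle, here is 1D steady heat conduction standing in for a real CFD problem (the mesh size, boundary temperatures and iteration count are all arbitrary choices):

```python
# Preprocessing: geometry (a unit rod), a uniform mesh of 11 nodes,
# physics (steady heat conduction), and boundary conditions.
n = 11
T = [0.0] * n
T[0], T[-1] = 100.0, 0.0          # fixed temperatures at the two ends

# Solving: iterate to steady state with Jacobi sweeps.
for _ in range(2000):
    T = [T[0]] + [0.5 * (T[i - 1] + T[i + 1]) for i in range(1, n - 1)] + [T[-1]]

# Postprocessing: inspect the converged solution (a linear profile).
print([round(t, 1) for t in T])   # [100.0, 90.0, 80.0, ..., 0.0]
```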

Discretization methods

The stability of the chosen discretization is generally established numerically rather than analytically, as with simple linear problems. Special care must also be taken to ensure that the discretization handles discontinuous solutions gracefully. The Euler equations and Navier–Stokes equations both admit shocks and contact surfaces.

Some of the discretization methods being used are:
Finite volume method

Main article: Finite volume method

The finite volume method (FVM) is a common approach used in CFD codes, as it has an advantage in memory usage and solution speed, especially for large problems, high Reynolds number turbulent flows, and source term dominated flows (like combustion).

In the finite volume method, the governing partial differential equations (typically the Navier-Stokes equations, the mass and energy conservation equations, and the turbulence equations) are recast in a conservative form, and then solved over discrete control volumes. This discretization guarantees the conservation of fluxes through a particular control volume. The finite volume method yields governing equations in the form,

    \frac{\partial}{\partial t}\iiint Q\, dV + \iint F\, d\mathbf{A} = 0,

where Q is the vector of conserved variables, F is the vector of fluxes (see Euler equations or Navier–Stokes equations), V is the volume of the control volume element, and \mathbf{A} is the surface area of the control volume element.
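A minimal one-dimensional instance of this idea is first-order upwind advection: each cell is updated only through the fluxes at its faces, so whatever leaves one cell enters its neighbour and the scheme is conservative by construction. The grid size, velocity and time step below are arbitrary illustrative choices:

```python
def fvm_advect(q, v, dx, dt, steps):
    """Upwind finite-volume update for dq/dt + d(v*q)/dx = 0, v > 0, periodic."""
    for _ in range(steps):
        flux = [v * qi for qi in q]                       # face flux F = v*q (upwind)
        q = [q[i] - dt / dx * (flux[i] - flux[i - 1])     # flux[-1] wraps (periodic)
             for i in range(len(q))]
    return q

q0 = [1.0 if 2 <= i < 5 else 0.0 for i in range(10)]      # a square pulse
q1 = fvm_advect(q0, v=1.0, dx=1.0, dt=1.0, steps=3)       # CFL = 1: exact shift
print(q1)        # the pulse has moved 3 cells right; the total is still 3.0
```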
Finite element method

Main article: Finite element method

The finite element method (FEM) is used in structural analysis of solids, but is also applicable to fluids. However, the FEM formulation requires special care to ensure a conservative solution. The FEM formulation has been adapted for use with the governing equations of fluid dynamics. Although FEM must be carefully formulated to be conservative, it is much more stable than the finite volume approach. However, FEM can require more memory and has slower solution times than the FVM.

In this method, a weighted residual equation is formed:

    R_i = \iiint W_i Q \, dV^e

where R_i is the equation residual at an element vertex i, Q is the conservation equation expressed on an element basis, W_i is the weight factor, and V^{e} is the volume of the element.
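A concrete instance of this weighted-residual recipe is the Galerkin method for -u'' = 1 on [0,1] with u(0) = u(1) = 0, where the weight functions W_i are the piecewise-linear hat functions themselves. Setting each residual R_i to zero gives a tridiagonal system; the element count and source term below are illustrative choices:

```python
def solve_tridiagonal(a, b, c, d):
    """Thomas algorithm for a tridiagonal system (a: sub, b: diag, c: super)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

n_el = 8                       # linear elements on [0, 1]
h = 1.0 / n_el
m = n_el - 1                   # interior (unknown) nodes
# Zeroing each weighted residual R_i yields stiffness 2/h on the diagonal,
# -1/h off-diagonal, and a consistent load of h per interior node:
u = solve_tridiagonal([-1 / h] * m, [2 / h] * m, [-1 / h] * m, [h] * m)
exact = [x * (1 - x) / 2 for x in (h * (i + 1) for i in range(m))]
err = max(abs(ui - ei) for ui, ei in zip(u, exact))
print(err)                     # ~1e-16: linear FEM is nodally exact in 1D
```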
Finite difference method

Main article: Finite difference method

The finite difference method (FDM) has historical importance and is simple to program. It is currently used in only a few specialized codes, which handle complex geometry with high accuracy and efficiency by using embedded boundaries or overlapping grids (with the solution interpolated across each grid).

    \frac{\partial Q}{\partial t}+ \frac{\partial F}{\partial x}+ \frac{\partial G}{\partial y}+ \frac{\partial H}{\partial z}=0

where Q is the vector of conserved variables, and F, G, and H are the fluxes in the x, y, and z directions respectively.
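As a sketch of the finite-difference approach applied to a simpler model equation, here is an explicit (forward-time, centered-space) update for the 1D heat equation dT/dt = α d²T/dx²; the grid spacing, time step and initial hot spot are arbitrary illustrative values:

```python
def ftcs_step(T, alpha, dx, dt):
    """One forward-time, centered-space step; stable for alpha*dt/dx^2 <= 0.5."""
    r = alpha * dt / dx**2
    return [T[0]] + [T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1])
                     for i in range(1, len(T) - 1)] + [T[-1]]

T = [0.0] * 5 + [100.0] + [0.0] * 5      # a hot spot with fixed cold ends
for _ in range(200):
    T = ftcs_step(T, alpha=1.0, dx=1.0, dt=0.25)   # r = 0.25, stable
print(round(max(T), 3))                  # the peak has diffused and decayed
```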
Spectral element method

Main article: Spectral element method

The spectral element method is a finite element type method. It requires the mathematical problem (the partial differential equation) to be cast in a weak formulation. This is typically done by multiplying the differential equation by an arbitrary test function and integrating over the whole domain. Purely mathematically, the test functions are completely arbitrary: they belong to an infinite-dimensional function space. Clearly an infinite-dimensional function space cannot be represented on a discrete spectral element mesh, and this is where the spectral element discretization begins. The most crucial thing is the choice of interpolating and testing functions. In a standard, low-order FEM in 2D, for quadrilateral elements the most typical choice is the bilinear test or interpolating function of the form v(x,y) = ax + by + cxy + d. In a spectral element method, however, the interpolating and test functions are chosen to be polynomials of a very high order (typically, e.g., of the 10th order in CFD applications). This guarantees the rapid convergence of the method. Furthermore, very efficient integration procedures must be used, since the number of integrations to be performed in numerical codes is large. Thus, high-order Gauss integration quadratures are employed, since they achieve the highest accuracy with the smallest number of computations. At present there are some academic CFD codes based on the spectral element method, and more are under development as new time-stepping schemes arise in the scientific world.
Boundary element method

Main article: Boundary element method

In the boundary element method, the boundary occupied by the fluid is divided into a surface mesh.
High-resolution discretization schemes

Main article: High-resolution scheme

High-resolution schemes are used where shocks or discontinuities are present. Capturing sharp changes in the solution requires the use of second or higher-order numerical schemes that do not introduce spurious oscillations. This usually necessitates the application of flux limiters to ensure that the solution is total variation diminishing (TVD).
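A classic ingredient of such schemes is the minmod limiter: it keeps the less steep of two candidate slopes when they agree in sign, and drops to zero (first order) at extrema and discontinuities, which is what keeps the reconstruction total variation diminishing. A minimal sketch:

```python
def minmod(a, b):
    """Minmod slope limiter: the less steep slope, or zero if signs differ."""
    if a * b <= 0.0:
        return 0.0            # extremum or discontinuity: revert to first order
    return a if abs(a) < abs(b) else b

print(minmod(1.0, 2.0))       # 1.0  (smooth region: keep a second-order slope)
print(minmod(1.0, -2.0))      # 0.0  (slope sign change: limit fully)
```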
Turbulence models

In studying turbulent flows, the objective is to obtain a theory or a model that can yield quantities of interest, such as velocities. For turbulent flow, the range of length scales and complexity of phenomena make most approaches impossible. The primary approach in this case is to create numerical models to calculate the properties of interest. A selection of some commonly-used computational models for turbulent flows are presented in this section.

The chief difficulty in modeling turbulent flows comes from the wide range of length and time scales associated with turbulent flow. As a result, turbulence models can be classified based on the range of these length and time scales that are modeled and the range of length and time scales that are resolved. The more turbulent scales that are resolved, the finer the resolution of the simulation, and therefore the higher the computational cost. If a majority or all of the turbulent scales are modeled, the computational cost is very low, but the tradeoff comes in the form of decreased accuracy.

In addition to the wide range of length and time scales and the associated computational cost, the governing equations of fluid dynamics contain a non-linear convection term and a non-linear and non-local pressure gradient term. These nonlinear equations must be solved numerically with the appropriate boundary and initial conditions.
Reynolds-averaged Navier–Stokes

Main article: Reynolds-averaged Navier–Stokes equations

Reynolds-averaged Navier-Stokes (RANS) equations are the oldest approach to turbulence modeling. An ensemble version of the governing equations is solved, which introduces new apparent stresses known as Reynolds stresses. This adds a second order tensor of unknowns for which various models can provide different levels of closure. It is a common misconception that the RANS equations do not apply to flows with a time-varying mean flow because these equations are 'time-averaged'. In fact, statistically unsteady (or non-stationary) flows can equally be treated. This is sometimes referred to as URANS. There is nothing inherent in Reynolds averaging to preclude this, but the turbulence models used to close the equations are valid only as long as the time over which these changes in the mean occur is large compared to the time scales of the turbulent motion containing most of the energy.

RANS models can be divided into two broad approaches:

Boussinesq hypothesis

    This method involves using an algebraic equation for the Reynolds stresses which includes determining the turbulent viscosity and, depending on the level of sophistication of the model, solving transport equations for the turbulent kinetic energy and dissipation. Models include k-ε (Launder and Spalding), the Mixing Length Model (Prandtl), and the Zero Equation Model (Cebeci and Smith). The models available in this approach are often referred to by the number of transport equations associated with the method. For example, the Mixing Length model is a "Zero Equation" model because no transport equations are solved; the k-ε model is a "Two Equation" model because two transport equations (one for k and one for ε) are solved.
Reynolds stress model (RSM)

    This approach attempts to actually solve transport equations for the Reynolds stresses. This means introduction of several transport equations for all the Reynolds stresses and hence this approach is much more costly in CPU effort.

Large eddy simulation

Main article: Large eddy simulation

Volume rendering of a non-premixed swirl flame as simulated by LES.

Large eddy simulation (LES) is a technique in which the smallest scales of the flow are removed through a filtering operation, and their effect modeled using subgrid scale models. This allows the largest and most important scales of the turbulence to be resolved, while greatly reducing the computational cost incurred by the smallest scales. This method requires greater computational resources than RANS methods, but is far cheaper than DNS.
Detached eddy simulation

Main article: Detached eddy simulation

Detached eddy simulation (DES) is a modification of a RANS model in which the model switches to a subgrid scale formulation in regions fine enough for LES calculations. Regions near solid boundaries and where the turbulent length scale is less than the maximum grid dimension are assigned the RANS mode of solution. As the turbulent length scale exceeds the grid dimension, the regions are solved using the LES mode. Therefore, the grid resolution for DES is not as demanding as for pure LES, thereby considerably cutting down the cost of the computation. Though DES was initially formulated for the Spalart-Allmaras model (Spalart et al., 1997), it can be implemented with other RANS models (Strelets, 2001) by appropriately modifying the length scale which is explicitly or implicitly involved in the RANS model. So while Spalart-Allmaras model based DES acts as LES with a wall model, DES based on other models (like two equation models) behaves as a hybrid RANS-LES model. Grid generation is more complicated than for a simple RANS or LES case due to the RANS-LES switch. DES is a non-zonal approach and provides a single smooth velocity field across the RANS and LES regions of the solution.
Direct numerical simulation

Main article: Direct numerical simulation

Direct numerical simulation (DNS) resolves the entire range of turbulent length scales. This marginalizes the effect of models, but is extremely expensive. The computational cost is proportional to Re^{3}. DNS is intractable for flows with complex geometries or flow configurations.
Coherent vortex simulation

The coherent vortex simulation approach decomposes the turbulent flow field into a coherent part, consisting of organized vortical motion, and an incoherent part, which is the random background flow. This decomposition is done using wavelet filtering. The approach has much in common with LES, since it uses decomposition and resolves only the filtered portion, but differs in that it does not use a linear, low-pass filter. Instead, the filtering operation is based on wavelets, and the filter can be adapted as the flow field evolves. Farge and Schneider tested the CVS method with two flow configurations and showed that the coherent portion of the flow exhibited the -\frac{40}{39} energy spectrum exhibited by the total flow, and corresponded to coherent structures (vortex tubes), while the incoherent parts of the flow were composed of homogeneous background noise, which exhibited no organized structures. Goldstein and Vasilyev applied the FDV model to large eddy simulation, but did not assume that the wavelet filter completely eliminated all coherent motions from the subfilter scales. By employing both LES and CVS filtering, they showed that the SFS dissipation was dominated by the SFS flow field's coherent portion.
PDF methods

Probability density function (PDF) methods for turbulence, first introduced by Lundgren, are based on tracking the one-point PDF of the velocity, f_{V}(\boldsymbol{v};\boldsymbol{x},t) d\boldsymbol{v}, which gives the probability of the velocity at point \boldsymbol{x} being between \boldsymbol{v} and \boldsymbol{v}+d\boldsymbol{v}. This approach is analogous to the kinetic theory of gases, in which the macroscopic properties of a gas are described by a large number of particles. PDF methods are unique in that they can be applied in the framework of a number of different turbulence models; the main differences occur in the form of the PDF transport equation. For example, in the context of large eddy simulation, the PDF becomes the filtered PDF. PDF methods can also be used to describe chemical reactions, and are particularly useful for simulating chemically reacting flows because the chemical source term is closed and does not require a model. The PDF is commonly tracked by using Lagrangian particle methods; when combined with large eddy simulation, this leads to a Langevin equation for subfilter particle evolution.
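The Lagrangian particle approach mentioned above can be sketched with a toy Langevin model. This is a generic Ornstein-Uhlenbeck relaxation of a particle velocity toward a mean, not the specific subfilter model referenced in the text, and all parameter values are illustrative assumptions:

```python
import math
import random

# Toy Lagrangian-particle Langevin model (Ornstein-Uhlenbeck form):
# the particle velocity relaxes toward a mean u_mean on timescale T_L,
# with random forcing chosen so the stationary std is sigma.
# All names and values here are illustrative, not from the text.
def langevin_step(u, u_mean, sigma, T_L, dt, rng):
    """One Euler-Maruyama step for a particle velocity sample."""
    drift = -(u - u_mean) * dt / T_L                        # relax to mean
    diffusion = math.sqrt(2.0 * sigma**2 * dt / T_L) * rng.gauss(0.0, 1.0)
    return u + drift + diffusion

rng = random.Random(0)
u = 0.0
for _ in range(20000):
    u = langevin_step(u, u_mean=1.0, sigma=0.5, T_L=1.0, dt=0.01, rng=rng)
# The stationary distribution of many such particles has mean u_mean
# and standard deviation sigma; the one-point PDF is built from them.
```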

Vortex method

The vortex method is a grid-free technique for the simulation of turbulent flows. It uses vortices as the computational elements, mimicking the physical structures in turbulence. Vortex methods were developed as a grid-free methodology that would not be limited by the fundamental smoothing effects associated with grid-based methods. To be practical, however, vortex methods require means for rapidly computing velocities from the vortex elements – in other words they require the solution to a particular form of the N-body problem (in which the motion of N objects is tied to their mutual influences). A breakthrough came in the late 1980s with the development of the fast multipole method (FMM), an algorithm by V. Rokhlin (Yale) and L. Greengard (Courant Institute). This breakthrough paved the way to practical computation of the velocities from the vortex elements and is the basis of successful algorithms. They are especially well-suited to simulating filamentary motion, such as wisps of smoke, in real-time simulations such as video games, because of the fine detail achieved using minimal computation.
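The N-body problem at the heart of the vortex method is easy to see in the direct O(N^2) Biot-Savart sum for 2-D point vortices; this brute-force kernel is exactly the computation that the fast multipole method accelerates. A minimal sketch (the vortex positions and strengths are illustrative):

```python
import math

# Direct O(N^2) evaluation of the velocity induced by 2D point vortices.
# This is the brute-force N-body kernel that the FMM accelerates.
def induced_velocity(x, y, vortices):
    """vortices: list of (x_j, y_j, gamma_j) point-vortex elements."""
    u = v = 0.0
    for xj, yj, gj in vortices:
        dx, dy = x - xj, y - yj
        r2 = dx * dx + dy * dy
        if r2 == 0.0:            # skip self-induction at a vortex centre
            continue
        coef = gj / (2.0 * math.pi * r2)
        u += -coef * dy          # induced velocity is azimuthal
        v += coef * dx
    return u, v

# A single unit-strength vortex at the origin induces a purely
# azimuthal velocity of magnitude 1/(2*pi*r) at radius r.
u, v = induced_velocity(1.0, 0.0, [(0.0, 0.0, 1.0)])
```

Evaluating this sum at all N vortex positions costs O(N^2) operations, which is why the fast multipole method's roughly O(N) evaluation was the breakthrough that made the approach practical.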

Software based on the vortex method offers a new means for solving tough fluid dynamics problems with minimal user intervention. All that is required is specification of the problem geometry and setting of the boundary and initial conditions. Among the significant advantages of this modern technology:

    It is practically grid-free, thus eliminating numerous iterations associated with RANS and LES.
    All problems are treated identically. No modeling or calibration inputs are required.
    Time-series simulations, which are crucial for correct analysis of acoustics, are possible.
    The small scale and large scale are accurately simulated at the same time.

Vorticity confinement method

Main article: Vorticity confinement

The vorticity confinement (VC) method is an Eulerian technique used in the simulation of turbulent wakes. It uses a solitary-wave like approach to produce a stable solution with no numerical spreading. VC can capture the small scale features to within as few as 2 grid cells. Within these features, a nonlinear difference equation is solved as opposed to the finite difference equation. VC is similar to shock capturing methods, where conservation laws are satisfied, so that the essential integral quantities are accurately computed.
Linear eddy model

The Linear eddy model is a technique used to simulate the convective mixing that takes place in turbulent flow. Specifically, it provides a mathematical way to describe the interactions of a scalar variable within the vector flow field. It is primarily used in one-dimensional representations of turbulent flow, since it can be applied across a wide range of length scales and Reynolds numbers. This model is generally used as a building block for more complicated flow representations, as it provides high resolution predictions that hold across a large range of flow conditions.
Two-phase flow

The modeling of two-phase flow is still under development, and different methods have been proposed recently. The Volume of fluid method has received a lot of attention for problems that do not have dispersed particles, but the Level set method and front tracking are also valuable approaches. Most of these methods are good either at maintaining a sharp interface or at conserving mass. This is crucial, since the evaluation of the density, viscosity and surface tension is based on the values averaged over the interface. Lagrangian multiphase models, which are used for dispersed media, are based on solving the Lagrangian equation of motion for the dispersed phase.
Solution algorithms

Discretization in space produces a system of ordinary differential equations for unsteady problems and algebraic equations for steady problems. Implicit or semi-implicit methods are generally used to integrate the ordinary differential equations, producing a system of (usually) nonlinear algebraic equations. Applying a Newton or Picard iteration produces a system of linear equations which is nonsymmetric in the presence of advection and indefinite in the presence of incompressibility. Such systems, particularly in 3D, are frequently too large for direct solvers, so iterative methods are used, either stationary methods such as successive overrelaxation or Krylov subspace methods. Krylov methods such as GMRES, typically used with preconditioning, operate by minimizing the residual over successive subspaces generated by the preconditioned operator.
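A stationary method such as successive overrelaxation is short enough to sketch in full. Here it is applied to a small symmetric positive-definite system (the 1-D Poisson matrix); the matrix, right-hand side and relaxation factor are illustrative choices, not from the text:

```python
# Minimal successive overrelaxation (SOR) sketch on a small symmetric
# positive-definite system. A and b below form a tiny 1D Poisson problem;
# omega = 1.5 is an illustrative relaxation factor (convergence needs
# 0 < omega < 2 for SPD systems).
def sor_solve(A, b, omega=1.5, iters=200):
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # Gauss-Seidel sweep with overrelaxation: use already-updated x.
            sigma = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] += omega * ((b[i] - sigma) / A[i][i] - x[i])
    return x

# Tridiagonal [-1, 2, -1] system whose exact solution is all ones.
A = [[2, -1, 0, 0], [-1, 2, -1, 0], [0, -1, 2, -1], [0, 0, -1, 2]]
b = [1, 0, 0, 1]
x = sor_solve(A, b)
```

For advection-dominated, nonsymmetric systems this simple iteration degrades, which is why preconditioned Krylov methods such as GMRES are the usual choice in practice.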

Multigrid has the advantage of asymptotically optimal performance on many problems. Traditional solvers and preconditioners are effective at reducing high-frequency components of the residual, but low-frequency components typically require many iterations to reduce. By operating on multiple scales, multigrid reduces all components of the residual by similar factors, leading to a mesh-independent number of iterations.
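The frequency-selective behaviour described above can be made concrete with the smoothing factor of weighted Jacobi on the 1-D Poisson problem. The grid size, damping parameter and mode numbers below are illustrative assumptions:

```python
import math

# Why multigrid helps: weighted Jacobi smoothing on the 1D Poisson
# problem damps a high-frequency error mode much faster than a
# low-frequency one. n, omega and the mode numbers are illustrative.
def jacobi_damping(n, k, omega=2.0 / 3.0, sweeps=10):
    """Amplitude of Fourier error mode k after weighted-Jacobi sweeps."""
    # For A = tridiag(-1, 2, -1), the iteration-matrix eigenvalue on
    # mode k is approximately 1 - omega * (1 - cos(k*pi/n)).
    mu = 1.0 - omega * (1.0 - math.cos(k * math.pi / n))
    return abs(mu) ** sweeps

n = 64
low = jacobi_damping(n, k=1)    # smooth, low-frequency error mode
high = jacobi_damping(n, k=32)  # oscillatory, high-frequency error mode
# The high-frequency mode is crushed after a few sweeps, while the
# low-frequency one barely moves; that slow component is exactly what
# a coarser grid can remove cheaply, giving mesh-independent iterations.
```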

For indefinite systems, preconditioners such as incomplete LU factorization, additive Schwarz, and multigrid perform poorly or fail entirely, so the problem structure must be used for effective preconditioning. Methods commonly used in CFD are the SIMPLE and Uzawa algorithms which exhibit mesh-dependent convergence rates, but recent advances based on block LU factorization combined with multigrid for the resulting definite systems have led to preconditioners that deliver mesh-independent convergence rates.
Unsteady Aerodynamics

CFD made a major breakthrough in the late 1970s with the introduction of LTRAN2, a 2-D code to model oscillating airfoils based on transonic small perturbation theory, by Ballhaus and associates. It uses a Murman-Cole switch algorithm for modeling the moving shock waves. It was later extended to 3-D with the use of a rotated difference scheme by AFWAL/Boeing, resulting in LTRAN3.
 

Monday, 9 September 2013

What is an airfoil?


An airplane wing has a special shape called an airfoil.
As a wing moves through air, the air is split and passes above and below the wing. The wing’s upper surface is shaped so the air rushing over the top speeds up and stretches out. This decreases the air pressure above the wing. The air flowing below the wing moves in a straighter line, so its speed and air pressure remain the same.
Since high air pressure always moves toward low air pressure, the air below the wing pushes upward toward the air above the wing. The wing is in the middle, and the whole wing is “lifted.” The faster an airplane moves, the more lift there is. And when the force of lift is greater than the force of gravity, the airplane is able to fly.
 
Examples of airfoils in nature and within various vehicles. Though not strictly an airfoil, the dolphin fin obeys the same principles in a different fluid medium.





Sunday, 8 September 2013

What Is Artificial Intelligence?


Definition...

Artificial Intelligence is a branch of science that deals with helping machines find solutions to complex problems in a more human-like fashion. This generally involves borrowing characteristics from human intelligence and applying them as algorithms in a computer-friendly way. A more or less flexible or efficient approach can be taken depending on the requirements established, which influences how artificial the intelligent behaviour appears.
AI is generally associated with Computer Science, but it has many important links with other fields such as Maths, Psychology, Cognition, Biology and Philosophy, among many others. Our ability to combine knowledge from all these fields will ultimately benefit our progress in the quest of creating an intelligent artificial being.

Why Artificial Intelligence?

Motivation...

Computers are fundamentally well suited to performing mechanical computations, using fixed programmed rules. This allows artificial machines to perform simple monotonous tasks efficiently and reliably, which humans are ill-suited to. For more complex problems, things get more difficult... Unlike humans, computers have trouble understanding specific situations, and adapting to new situations. Artificial Intelligence aims to improve machine behaviour in tackling such complex tasks.
Together with this, much of AI research is allowing us to understand our intelligent behaviour. Humans have an interesting approach to problem-solving, based on abstract thought, high-level deliberative reasoning and pattern recognition. Artificial Intelligence can help us understand this process by recreating it, then potentially enabling us to enhance it beyond our current capabilities.

When will Computers become truly Intelligent?

Limitations...

To date, not all the traits of human intelligence have been captured and applied together to spawn an intelligent artificial creature. Currently, Artificial Intelligence instead seems to focus on lucrative domain-specific applications, which do not necessarily require the full extent of AI capabilities. This limit of machine intelligence is known to researchers as narrow intelligence.
There is little doubt among the community that artificial machines will be capable of intelligent thought in the near future. It's just a question of what and when... The machines may be pure silicon, quantum computers or hybrid combinations of manufactured components and neural tissue. As for the date, expect great things to happen within this century!

How does Artificial Intelligence work?

Technology...

There are many different approaches to Artificial Intelligence, none of which are either completely right or wrong. Some are obviously more suited than others in some cases, but any working alternative can be defended. Over the years, trends have emerged based on the state of mind of influential researchers, funding opportunities as well as available computer hardware.
Over the past five decades, AI research has mostly focused on solving specific problems. Numerous solutions have been devised and improved to do so efficiently and reliably. This explains why the field of Artificial Intelligence is split into many branches, ranging from Pattern Recognition to Artificial Life, including Evolutionary Computation and Planning.

Who uses Artificial Intelligence?

Applications...

The potential applications of Artificial Intelligence are abundant. They stretch from the military, for autonomous control and target identification, to the entertainment industry, for computer games and robotic pets. Let's also not forget big establishments dealing with huge amounts of information, such as hospitals, banks and insurance companies, which can use AI to predict customer behaviour and detect trends.
As you may expect, the business of Artificial Intelligence is becoming one of the major driving forces for research. With an ever-growing market to satisfy, there's plenty of room for more personnel. So if you know what you're doing, there's plenty of money to be made from interested big companies!

Two Different Types Of Programming Involved In Robotics


There are two main methods of controlling a robot. The first uses feedback and is called Closed-Loop Control; the other is called Open-Loop Control, in which robots incorporate no feedback and instead depend on mechanical stops to control movement. The open-loop method is used to carry out simple instructions accurately: since most robots only do a specific job, such as screwing in a few screws, they only need to know the boundary of their specific area. More sophisticated robots rely on feedback in the form of continuous data. These robots are being made to resemble human decision-making techniques, some of which are still in the developmental stages. The data (feedback) can come from a variety of different devices, such as vision systems, tactile sensors or, in industrial use, devices that detect the positions and the rate of movement of the robot's joints.

However, even simple devices use the Closed-Loop Method. An example is a household thermostat: a device that senses the environment and has a mechanism that reacts to the environmental change. In this case, a bimetallic strip changes shape in reaction to temperature change, thereby turning on the heating or cooling unit. The diagram in Figure 1 illustrates the closed-loop method and can be used to describe the workings of a thermostat. First, the controller is programmed to the desired temperature. This information is then sent to the mechanism, which responds with a physical action that modifies its external or internal environment. Next, sensors detect and measure the modification and return the results to the controller, which calculates the difference between the actual and desired results and closes the loop by issuing a corrective command to the mechanism.
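The sense-compare-correct loop described above can be sketched as a toy simulation. The hysteresis band around the setpoint and the one-line room model are illustrative assumptions, not part of the original thermostat example:

```python
# Toy closed-loop (feedback) controller in the spirit of the thermostat
# example: sense the temperature, compare it to the setpoint, and issue
# a corrective command. The 0.5-degree hysteresis band and the room
# model constants are illustrative assumptions.
def thermostat_step(temp, setpoint, heater_on):
    """Decide the heater command from the sensed temperature."""
    if temp < setpoint - 0.5:
        return True      # too cold: command heating
    if temp > setpoint + 0.5:
        return False     # too warm: command heater off
    return heater_on     # inside the band: keep the current state

def simulate(hours=48, setpoint=20.0):
    temp, heater_on = 10.0, False
    for _ in range(hours):
        heater_on = thermostat_step(temp, setpoint, heater_on)  # close the loop
        temp += 1.5 if heater_on else -0.5   # toy room: heats or cools
    return temp

# After the initial transient, the feedback loop holds the room
# near the setpoint, oscillating within a few degrees of it.
```

An open-loop controller, by contrast, would run the heater on a fixed schedule regardless of the measured temperature, with no way to correct for a cold day.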

The Future Of Robotics



What does the future hold for robotics? What is the next step, or the next technological boundary to overcome? The general trend for computers seems to be faster processing speed, greater memory capacity and so on. One would assume that the robots of the future would become closer and closer to the decision-making ability of humans and also more independent. Presently the most powerful computers can't match the mental ability of a low-grade animal. It will be a long time until we're having conversations with androids and have them do all our housework. Another difficult design aspect of androids is their ability to walk around on two legs like humans. A robot with biped movement is much more difficult to build than a robot with, say, wheels to move around with. The reason for this is that walking takes so much balance. When you lift your leg to take a step you instinctively shift your weight to the other side by just the right amount and are constantly alternating your center of gravity to compensate for the varying degrees of leg support. If you were to simply lift your leg with the rest of your body remaining perfectly still you would likely fall down. Try a simple test by standing with one shoulder and one leg against a wall. Now lift your outer leg and observe as you start to fall over.

Indeed, the human skeletal and muscular systems are complicated for many reasons. For now, robots will most likely be manufactured for a limited number of distinct tasks such as painting, welding or lifting. Presumably, once robots have the ability to perform a much wider array of tasks, and voice recognition software improves such that computers can interpret complicated sentences in varying accents, we may in fact see robots doing our housework and carrying out other tasks in the physical world.


Wednesday, 4 September 2013

2013 Mercedes Benz SLS AMG e-cell Prototype Drive

Last December, Audi let us drive the R8-based E-Tron in California, and we came away impressed by its 313 hp and a level of refinement we’ve never before experienced in an electric car. Recently, Mercedes invited us to Kristiansund, on the west coast of Norway, to drive the fully electric version of its gullwing SLS. As in the E-Tron, power in the SLS AMG E-Cell is routed through four electric motors, one at each wheel. With 526 hp and 649 lb-ft of torque—the latter available from 0 rpm—the SLS E-Cell is in league with some venerable supercars, even though it tips the scales at a considerable 4,400 pounds. Before you mat the throttle, consider the appropriate driving program. In the comfort setting, the SLS shows its soft side, utilizing just 40 percent of the motors’ capability and exhibiting cautious responses to inputs. Switch to sport, and throttle response gets a bit sharper, and 60 percent of the power and torque become available. In sport plus, you get a super-aggressive throttle and the entire 526 hp. In comfort and sport, applying full throttle still gets you full power in an instant. An additional mode, manual, acts like sport plus but switches off regenerative braking entirely.
If you have so far associated electric cars with ridiculous humming boxes on wheels, hang on. This car catapults you into another dimension. In the SLS E-Cell, getting from rest to 62 mph takes a claimed four seconds flat; 130 mph, fewer than 12 seconds. At 50 or 60 mph, triple-digit speeds are mere seconds away, and the charge forward happens in utter silence. “Surreal” is a proper description for the acoustic character of this silent predator.
Autobahn credentials are standard, with a top speed governed at 155 mph. Ungoverned, 165 would be possible. That’s shy of the 197 mph reached by the regular SLS but enough to get you a room in a U.S. county jail. We were impressed by the absence of rattles, noises, and whines up to velocities well over 100 mph. At these speeds, a Tesla Roadster feels like a prototype, but this actual prototype seems ready for customer delivery. Incidentally, Mercedes insists that Tesla—in which it now has a stake of ownership—was in no way involved in the development of the E-Cell.
The SLS E-Cell offers four modes of regenerative braking in addition to being completely off in the manual powertrain setting, which leaves you “sailing” with minimal drivetrain drag. Paddles on the steering column allow you to gradually increase the resistance; steps one and two feel like a regular car coasting; step three is a bit more aggressive, and step four decelerates the SLS so strongly that AMG considered switching on the brake lights as soon as you take your foot off the accelerator. Unlike Tesla, AMG decided not to. This mode is perfect for extreme driving, when you are standing on one of the two pedals at all times anyway.
The Same But Different
AMG has developed an entirely new front axle—a pushrod-actuated setup—that replaces the regular SLS’s unequal-length control-arm design, and the steering is now electrohydraulic. Like many similar systems, the steering could offer more feel and feedback. Although we enjoyed the silent, artificial character of the electric motors, we wouldn’t mind a bit more feedback from the chassis. Granted, this is a prototype, and as development progresses, it will benefit from torque vectoring, achieved by running the electric motors at different speeds.
Just like the regular SLS, the E-Cell is a big car, with a hood that seems to extend beyond the curvature of the planet. The instrumentation and the center console are exclusive to the E-Cell, the center stack being executed as a huge touchscreen, which hopefully hints at a change in philosophy for Mercedes in general. It works almost flawlessly and looks ready for series production. 
The E-Cell’s Achilles’ heel, unsurprisingly, is its range. This prototype carries a 48-kWh lithium-ion battery, but AMG hopes to fit the car with a 60-plus kWh battery pack when it becomes available to customers. The current range is about 90 miles, which is likely to grow to more than 130 miles. The current claims are perhaps even conservative: After a sharply driven 60 miles, battery capacity was still about 30 percent. With a fast-charging station, it took an hour to recharge the batteries to almost 100 percent. Extended trips still require planning, but the progress in battery technology is tangible.
If all goes according to plan, you will be able to buy the SLS E-Cell by late 2012 or early 2013—six or so months after Audi launches the E-Tron. There is no word yet on pricing, but figure on a premium of $50,000 to $100,000 over the regular SLS. Just having the money won’t be enough to get you an E-Cell, though, as customers will be handpicked.
The jury is still out on precisely when electric cars will become mainstream mobility—or if they are even the future at all. But if it happens, we can assure you there is still joy on the road ahead. Having flogged the SLS E-Cell unchaperoned for some 60 miles over lightly trafficked country roads, we began to appreciate the car. If this is the electric future, we’re starting to warm up to it.