Wednesday, 23 March 2011

Grinding Wheels



Grinding wheels are widely used in grinding machines. They are expendable wheels composed of an abrasive compound. Some grinding wheels are formed from an aluminum or solid steel disc with abrasive particles bonded to the exterior surface, but most are made by pressing a matrix of coarse abrasive particles in a mold and bonding them into a solid circular shape. Various profiles and cross sections are available, depending on the intended use of the wheel.

Other abrasive materials, such as diamond and silicon carbide, are commonly used with a vitrified bonding agent. A variety of materials go into grinding wheel production, and the wheels on today's market differ in grade, structure, abrasive type, grain size, and bond. The abrasive is the actual cutting material, such as manufactured diamond, cubic boron nitride, ceramic aluminum oxide, or zirconia aluminum oxide. An abrasive should be chosen according to the hardness of the material to be cut.

The wheel structure defines the wheel's density, that is, the proportion of abrasive and bond versus air space. A less dense wheel cuts more freely, which has a major effect on surface finish, because the open structure gives better chip clearance. With a less dense wheel, a wider or deeper cut can be taken using less coolant.

The grade of a wheel is determined by how strongly the bond holds the abrasive. The grade affects almost every aspect of grinding, including coolant flow, wheel speed, depth of cut, and feed-rate range. The grain size describes the physical size of the abrasive particles in the wheel. A larger grain cuts quickly and freely but leaves a poorer surface finish; for a fine, precise finish, an ultra-fine grain should be used. The bonding agent determines how the wheel holds the abrasive and also influences wheel speed, surface finish, coolant flow, and so on.
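Wheel speed is usually quoted as a peripheral (surface) speed, which follows directly from the wheel diameter and spindle RPM. The sketch below is a generic calculation, not tied to any particular wheel; the 7-inch diameter and 3600 RPM figures are hypothetical examples.

```python
import math

def surface_speed_sfpm(wheel_diameter_in: float, rpm: float) -> float:
    """Peripheral speed of a grinding wheel in surface feet per minute.

    One revolution carries a point on the rim through one circumference,
    pi * diameter; dividing by 12 converts inches to feet.
    """
    return math.pi * wheel_diameter_in / 12.0 * rpm

# Hypothetical example: a 7-inch wheel spinning at 3600 RPM
print(round(surface_speed_sfpm(7, 3600)))  # about 6597 SFPM
```

The same formula, rearranged, shows why a given wheel has a maximum safe RPM: the rated surface speed fixes the allowable spindle speed for its diameter.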

Wheel manufacturing is a precise and strictly controlled process, not only because of the inherent safety risk of a spinning disc but also because of the consistency of composition required to keep the disc from exploding under the high stresses generated during rotation.

Grinding wheels are only slightly self-sharpening. To get the best performance, they must be dressed using grinding dressers. Dressing removes the existing abrasive layer, exposing a fresh, sharp surface to the work. Truing is performed to make the grinding surface flat, so that it is parallel to the table or to a reference plane and can produce an accurate surface.

Cup and plain wheels mount on arbors: accurately sized metal discs clamped against the sides of the wheel apply the clamping force needed to transmit the rotary motion. Paper blotters distribute this force evenly across the surface of the wheel.
Types of grinding wheels

Straight Grinding wheels

Straight wheels are the most common type, found on pedestal and bench grinders, and are widely used for centreless and cylindrical surface grinding operations. Because only the periphery is used, a straight wheel leaves a slightly concave surface on the workpiece. It is commonly used to sharpen tools such as chisels. These wheels vary greatly in size; the width and diameter of the face depend on the type of work and the grinding power of the machine.

Cylinder or wheel ring

A cylinder wheel has no center mounting support but has a long, wide grinding surface. Widths range up to 12", and these wheels are used only in horizontal or vertical spindle grinders. They produce flat surfaces, grinding with the end face of the wheel.

Tapered Grinding wheels

A tapered grinding wheel is a straight wheel that tapers outward towards the midpoint of the wheel. Because this shape is stronger than a plain straight wheel, it accepts higher lateral loads. Straight wheels with tapered faces are chiefly used for grinding gear teeth, threads, and similar work.

Straight cup

Straight cup wheels are an alternative to cup wheels in tool and cutter grinders, where the extra radial grinding surface is advantageous.

Dish cup

The dish cup wheel is used primarily in jig grinding and cutter grinding. It is a very thin cup-style grinding wheel that permits grinding in crevices and slots.

Saucer Grinding Wheels

The saucer grinding wheel is a special profile used for grinding twist drills and milling cutters. It also finds wide use in non-machining areas: saw filers use saucer wheels to maintain saw blades.

Diamond Grinding Wheels

In diamond wheels, industrial diamonds are bonded to the edge. They are used to grind hard materials such as concrete, gemstones, and carbide tips. A slitting saw is a related wheel designed for slicing hard materials such as gemstones.

Oxyacetylene Welding


Oxy-Acetylene (OA) welding is one of the types of welding supported by the PRL. It is extremely versatile, and with enough skill and practice you can use this type of welding for virtually any metal. In fact, the oxy-acetylene flame burns at 6000 °F, and is the only gas flame that is hot enough to melt all commercial metals. Oxy-acetylene welding is simple in concept - two pieces of metal are brought together, and the touching edges are melted by the flame with or without the addition of filler rod. This document will help you get started welding using the oxy-acetylene set-up. Read the steps below to get a feel for what is going on, and then get a shop TA to walk you through the process the first time.
Advantages of Oxy-Acetylene Welding
  • It's easy to learn.
  • The equipment is cheaper than most other types of welding rigs (e.g. TIG welding)
  • The equipment is more portable than most other types of welding rigs (e.g. TIG welding)
  • OA equipment can also be used to "flame-cut" large pieces of material.
Disadvantages of Oxy-Acetylene Welding
  • OA weld lines are much rougher in appearance than other kinds of welds, and require more finishing if neatness is required.
  • OA welds have large heat affected zones (areas around the weld line that have had their mechanical properties adversely affected by the welding process)
Materials Suitable for OA Welding in the PRL
Welding
Preparation
  1. Assemble all of the materials needed to make the weld. This includes parts, OA equipment, fixturing, tools, safety mask, gloves, and filler rod.
  2. Clean the parts to be welded to remove any oil, rust, or other contaminants. Use a wire brush if needed to remove any rust.
  3. Assemble and fixture the parts in place - the parts need to be stable for a good weld line. Ceramic bricks, vise grips, pliers, and clamps are available in a file cabinet in the weld room for fixturing.
  4. Select the nozzle you plan to use for welding. Nozzles come in a variety of sizes, from 000 (for a very small flame - typically used for thin materials) to upwards of 3 (for a large flame - needed for thick materials). Larger nozzles produce larger flames and, in general, are more appropriate for thicker material. Choosing the right size nozzle becomes easier with more experience. Ask a TA or make some test welds to determine if you are using the right size nozzle.
  5. Clean the nozzle. Carbon deposits can build up on the nozzles, which interfere with flame quality and cause backfiring. The cleaning tool has a wide flat blade (with a file-like surface) which is used to clean carbon deposits on the exterior of the nozzle. Use it to scrape any deposits from the flat face of the tip. Use the wire-like files to clean the interior of the nozzle. Pick the largest wire which will fit inside the nozzle, and then scrape the edges of the hole to remove any carbon buildup.
  6. Attach the nozzle to the gas feed line by hand. Don't over-torque - the nozzle and hose fitting are both made of brass which does not stand up well to abuse. A snug, finger tight fit is sufficient.
  7. Check the pressure levels in the oxygen and acetylene tanks. There should be at least 50 psi in the acetylene tank. The oxygen tank can be used until it is completely empty. If needed, ask a TA to change bottles. Note: The oxygen used in OA welding is NOT for human consumption. It contains contaminants that could be unhealthy if taken in large quantities.
Lighting the flame
  1. Open the main valve on the acetylene tank ~1/2 turn. This charges the pressure regulator at the top of the tank.
  2. Open the pressure regulator valve on the acetylene tank (turn clockwise to open) and adjust the pressure in the acetylene line to 5 psi. DO NOT pressurize the acetylene over 15 psi - it will explode!
  3. Open the acetylene pin valve on the handle of the welding tool, letting acetylene escape. Tweak the pressure regulator valve until the regulator pressure is constant at 5 psi. Close the acetylene pin valve.
  4. Open the main valve on the oxygen tank. Turn the valve until it is fully open (until it stops turning).
  5. Open the pressure regulator valve on the oxygen tank (turn clockwise to open) and adjust the pressure in the oxygen line to 10 psi.
  6. Open the oxygen pin valve on the handle of the welding tool, letting oxygen escape. Tweak the pressure regulator valve until the regulator pressure is constant at 10 psi. Close the oxygen pin valve.
  7. Slightly open the acetylene valve (~1/8 turn), until you can just barely hear acetylene escaping.
  8. Make sure there is no person or anything flammable in the path of the nozzle. Use the striker to ignite the acetylene. The flame should be yellow/orange and will give off a lot of soot.
Adjusting the flame
  1. Open the acetylene valve further and watch the flame near the nozzle tip. Add more acetylene until the flame is just about to separate from the tip. (The flame will separate from the tip of the nozzle if you add too much acetylene.) If it separates, reduce the flow until the flame reattaches to the tip, and then open the valve again to the near-separation point. (Another method is to adjust the flow until it just turns turbulent.)
  2. Slightly open the oxygen pin valve. If the flame goes out, turn off the gases and try again. DO NOT try and ignite the flame with both oxygen and acetylene pin valves open. As the oxygen is added the flame will turn bluish in color.
  3. The blue flame will be divided into 3 different color regions - a long yellowish tip, a blue middle section, and a whitish-blue intense inner section. There are three types of flames as described below :
    • Neutral - This type of flame is the one you will use most often in the shop. It is called "neutral" because it has no chemical effect upon the metal during welding. It is achieved by mixing equal parts oxygen and acetylene, which you do by adjusting the oxygen flow until the middle blue section and the inner whitish-blue section merge into a single region.
    • Reducing flame - If there is excess acetylene, the whitish-blue flame will be larger than the blue flame. This flame contains white hot-carbon particles, which may be dissolved during welding. This "reducing" flame will remove oxygen from iron oxides in steel.
    • Oxidizing flame - If there is excess oxygen, the whitish-blue flame will be smaller than the blue flame. This flame burns hotter. A slightly oxidizing flame is used in brazing, and a more strongly oxidizing flame is used in welding certain brasses and bronzes.
Welding
  1. Put on a dark faceshield to protect your eyes from the light of the flame. Make sure you have on long sleeves and all natural fibers. You can wear a leather welding jacket and/or gloves if it makes you feel more comfortable.
  2. Apply the flame to the parts to begin heating. Use the region of the flame near the tip of the bluish inner region.
  3. The metal will begin to glow. Continue heating both parts being welded until a small pool of molten metal appears near the edge of each of the parts. You must get molten pools on BOTH parts simultaneously to create the weld. This may require adding more heat to one side than the other, and takes some practice.
  4. After the molten pools have formed on both sides of the weld, use the flame to gently stir the two pools together to form the weld. This also takes a little practice.
  5. After the two pools have joined, slowly move the flame along the weld line, lengthening the pool using metal from both parts. A gentle, circular, swirling motion will help mix the molten metal from both sides as the puddle is lengthened. This process is highly dependent on the materials and part geometries being welded. Practice, practice, practice to get better control. Welding sample parts is a good idea...
  6. Continue this process until the entire weld line is complete.
  7. Once you're done, turn off the flame. Close the oxygen pin valve first, and then the acetylene valve. Note: Welded parts can remain hot for a LONG time.
Backfiring
Improper operation of the torch may cause the flame to go out with a loud snap or pop. This is called a backfire. The first thing to do is turn off the gas at the torch, check all the connections, and try relighting the torch. Backfiring can be caused by touching the tip against the workpiece, overheating the tip, operating the torch at other than recommended gas pressures, by a loose tip or head, or by dirt on the seat.
Shutting Down and Cleaning Up
When you're completely finished welding and are ready to quit for the day, you need to clean up.
  1. With the flame extinguished and the pin valves closed, close the main valve on the oxygen tank. It should be firmly seated at the bottom.
  2. Open the oxygen pin valve to bleed off all of the oxygen in the regulator and feed line. Close the pin valve once the feed line pressure has gone to zero.
  3. Fully back out the oxygen regulator valve so there is no pressure in the line. DO NOT close the valve, as this will pressurize the line once the tank is open again. In the case of the acetylene, if it is pressurized over 15 psi, it may explode! If you are not sure about doing this properly, find a TA to help you.
  4. Repeat steps 1 through 3 for the acetylene line.
  5. Return all of the tools to their proper storage places and coil the feed lines around the handle on the gas cylinder cart. Note: Do not remove the nozzle from the feed line. The feed lines should always have a nozzle attached to prevent accidental damage to the threads used to attach the nozzle.
  6. Don't forget to ask for a shop job!
revision history : 
Ver 1.0 5/97 Steve Johnson original text 
Ver 1.1 6/97 Bryan Cooperrider formatting, revisions, and additions
Ver 1.2 10/01 Katherine Kuchenbecker minor revisions 
Ver 1.3 4/07 Carly Geehr minor revisions

Screw threads


A screw thread, often shortened to thread, is a helical structure used to convert between rotational and linear movement or force. A screw thread is a ridge wrapped around a cylinder or cone in the form of a helix, with the former being called a straight thread and the latter called a tapered thread. More screw threads are produced each year than any other machine element.[1]
The mechanical advantage of a screw thread depends on its lead, which is the linear distance the screw travels in one revolution.[2] In most applications, the lead of a screw thread is chosen so that friction is sufficient to prevent linear motion being converted to rotary motion; that is, the screw does not slip even when linear force is applied, so long as no external rotational force is present. This characteristic is essential to the vast majority of its uses. The tightening of a fastener's screw thread is comparable to driving a wedge into a gap until it sticks fast through friction and slight plastic deformation.
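The lead also sets the screw's ideal mechanical advantage: in one revolution the effort moves through a full circle while the load advances one lead. Here is a minimal sketch of that relationship, ignoring friction; the 10-inch effort radius and 0.125-inch lead are hypothetical values chosen for illustration.

```python
import math

def screw_mechanical_advantage(effort_radius: float, lead: float) -> float:
    """Ideal mechanical advantage of a screw thread (friction ignored).

    The effort travels 2 * pi * r per revolution while the load
    advances one lead, so MA = (2 * pi * r) / lead.
    """
    return (2 * math.pi * effort_radius) / lead

# Hypothetical example: a 10-inch handle on a screw with a 0.125-inch lead
print(round(screw_mechanical_advantage(10, 0.125)))  # about 503
```

In practice friction consumes much of this advantage, which is exactly the self-locking behavior the paragraph above describes.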

The most widely used screw thread forms are those having symmetrical sides inclined at equal angles. The Unified, Whitworth, and Acme forms fall into this category. Symmetrical threads are easier to manufacture and inspect than non-symmetrical threads, and they are widely used on all types of mass-produced general-purpose threaded fasteners. In addition to fastening, certain threads are used to move or drive machine parts against heavy loads, and these require a stronger thread form. The most widely used translation thread forms are the Square and the Acme. The Square thread is the most efficient, but it is also the hardest to manufacture because of its parallel sides, and it cannot be adjusted to compensate for wear. The Acme thread, although less efficient, is easier to manufacture and can be adjusted.

Definitions

Allowance is the prescribed difference between the design (maximum material) size and the basic size.

Basic Size is the nominal size of the screw thread being produced. The tolerance is applied to the basic size to determine the maximum and minimum acceptable dimension.

Thread Classes are used to specify the amounts of tolerance and allowance. Classes 1A, 2A and 3A apply to external threads; classes 1B, 2B and 3B apply to internal threads. 


Unified Screw Threads
  • Unified Thread Series - These are groups of diameter-pitch combinations differentiated from each other by the number of threads per inch applied to a specific diameter. There are eleven standard series.
  • Coarse Thread Series - The UNC/UNRC series is the most commonly used in the bulk production of bolts, screws, and nuts for general engineering applications. It is also used for threading into lower tensile strength materials such as cast iron and mild steel, and into softer materials such as brass and aluminum, to prevent stripping of the internal threads.
  • Fine Thread Series - The UNF/UNRF series is also suitable for the production of bolts, screws, and nuts. The external threads of this series have a greater contact area than comparable sizes in the Coarse series. They are suitable when the resistance to stripping of both the external and internal threads equals or exceeds the tensile load-carrying capacity of the screw.
  • Extra Fine Thread Series - The UNEF/UNREF series is applicable when even finer thread pitches are required. It is very useful for short lengths of engagement, where the fine thread increases the contact area of a short screw.
Constant Pitch Series
The UN series provides a comprehensive range of diameter pitch combinations where the Coarse, Fine, and Extra-Fine series do not satisfy the requirement of the design. They are available with 4, 6, 8, 12, 16, 20, 28 and 32 threads per inch. More details regarding the 8, 12, and 16 thread series are provided below:
  • The 8-UN series coarse thread is used with large diameter (greater than 1 inch) and high-pressure applications, such as high-pressure joint bolts.
  • The 12-UN series is also used for large diameters with a medium pitch thread. Originally intended for use in pressure vessels, it is currently used as a fine pitch series for diameters larger than 1½ inches.
  • The 16-UN series is used for large diameters with a fine pitch thread. It is often used as a nut retainer and can be considered an extra fine pitch series for diameters larger than 1 11/16 inches.
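For any of these inch-based series, the pitch is simply the reciprocal of the threads-per-inch count, and for a single-start thread the lead equals the pitch. A small sketch over the 8-, 12-, and 16-UN counts mentioned above:

```python
def pitch_from_tpi(tpi: int) -> float:
    """Thread pitch in inches from a threads-per-inch count.

    For a single-start thread the lead (advance per revolution)
    equals the pitch; a multi-start thread's lead is pitch * starts.
    """
    return 1.0 / tpi

for tpi in (8, 12, 16):
    print(f"{tpi}-UN: pitch = {pitch_from_tpi(tpi):.4f} in")
# 8-UN: pitch = 0.1250 in
# 12-UN: pitch = 0.0833 in
# 16-UN: pitch = 0.0625 in
```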

Friday, 18 March 2011


Lathes

The lathe is an ancient tool, dating at least to ancient Egypt and known and used in Assyria, ancient Greece, and the Roman and Byzantine Empires.
The origin of turning dates to around 1300 BC when the Egyptians first developed a two-person lathe. One person would turn the wooden workpiece with a rope while the other used a sharp tool to cut shapes in the wood. The Romans improved the Egyptian design with the addition of a turning bow. Early bow lathes were also developed and used in Germany, France and Britain.[1] In the Middle Ages a pedal replaced hand-operated turning, freeing both the craftsman's hands to hold the woodturning tools. The pedal was usually connected to a pole, often a straight-grained sapling. The system today is called the "spring pole" lathe (see Polelathe). Spring pole lathes were in common use into the early 20th century. A two-person lathe, called a "great lathe", allowed a piece to turn continuously (like today's power lathes). A master would cut the wood while an apprentice turned the crank.[2]
During the Industrial Revolution, mechanized power generated by water wheels or steam engines was transmitted to the lathe via line shafting, allowing faster and easier work. The design of lathes diverged between woodworking and metalworking to a greater extent than in previous centuries. Metalworking lathes evolved into heavier machines with thicker, more rigid parts. The application of leadscrews, slide rests, and gearing produced commercially practical screw-cutting lathes. Between the late 19th and mid-20th centuries, individual electric motors at each lathe replaced line shafting as the power source. Beginning in the 1950s, servomechanisms were applied to the control of lathes and other machine tools via numerical control (NC), which often was coupled with computers to yield computerized numerical control (CNC). Today manually controlled and CNC lathes coexist in the manufacturing industries.

Description

Parts

Parts of a wood lathe
A lathe may or may not have a stand (or legs), which sits on the floor and elevates the lathe bed to a working height. Some lathes are small and sit on a workbench or table, and do not have a stand.
Almost all lathes have a bed, which is (almost always) a horizontal beam (although some CNC lathes have a vertical beam for a bed to ensure that swarf, or chips, falls free of the bed). A notable exception is the Hegner VB36 Master Bowlturner, a woodturning lathe designed for turning large bowls, which in its basic configuration is little more than a very large floor-standing headstock.
At one end of the bed (almost always the left, as the operator faces the lathe) is a headstock. The headstock contains high-precision spinning bearings. Rotating within the bearings is a horizontal axle, with an axis parallel to the bed, called the spindle. Spindles are often hollow, and have exterior threads and/or an interior Morse taper on the "inboard" end (i.e., facing to the right / towards the bed) by which workholding accessories may be mounted to the spindle. Spindles may also have exterior threads and/or an interior taper at their "outboard" (i.e., facing away from the bed) end, and/or may have a handwheel or other accessory mechanism on their outboard end. Spindles are powered, and impart motion to the workpiece.
The spindle is driven, either by foot power from a treadle and flywheel or by a belt or gear drive to a power source. In most modern lathes this power source is an integral electric motor, often either in the headstock, to the left of the headstock, or beneath the headstock, concealed in the stand.
In addition to the spindle and its bearings, the headstock often contains parts to convert the motor speed into various spindle speeds. Various types of speed-changing mechanisms achieve this, from a cone pulley or step pulley, to a cone pulley with back gear (which is essentially a low range, similar in net effect to the two-speed rear of a truck), to an entire gear train similar to that of a manual-shift auto transmission. Some motors have electronic rheostat-type speed controls, which obviate the need for cone pulleys or gears.
The counterpoint to the headstock is the tailstock, sometimes referred to as the loose head, as it can be positioned at any convenient point on the bed, by undoing a locking nut, sliding it to the required area, and then relocking it. The tailstock contains a barrel which does not rotate, but can slide in and out parallel to the axis of the bed, and directly in line with the headstock spindle. The barrel is hollow, and usually contains a taper to facilitate the gripping of various type of tooling. Its most common uses are to hold a hardened steel centre, which is used to support long thin shafts while turning, or to hold drill bits for drilling axial holes in the work piece. Many other uses are possible.[3]
Metalworking lathes have a carriage (comprising a saddle and apron) topped with a cross-slide, which is a flat piece that sits crosswise on the bed, and can be cranked at right angles to the bed. Sitting atop the cross slide is usually another slide called a compound rest, which provides 2 additional axes of motion, rotary and linear. Atop that sits a toolpost, which holds a cutting tool which removes material from the workpiece. There may or may not be a leadscrew, which moves the cross-slide along the bed.
Woodturning and metal spinning lathes do not have cross-slides, but rather have banjos, which are flat pieces that sit crosswise on the bed. The position of a banjo can be adjusted by hand; no gearing is involved. Ascending vertically from the banjo is a toolpost, at the top of which is a horizontal toolrest. In woodturning, hand tools are braced against the tool rest and levered into the workpiece. In metal spinning, the further pin ascends vertically from the tool rest, and serves as a fulcrum against which tools may be levered into the workpiece.

Center lathe

A lathe center, often shortened to center, is a tool that has been ground to a point so as to accurately position a workpiece about an axis. Centers usually have an included angle of 60°, but in heavy machining situations an angle of 75° is used.[1]
The primary use of a center is to ensure that concentric work is produced; this allows the workpiece to be transferred between operations without any loss of accuracy. A part may be turned in a lathe, sent off for hardening and tempering, and then ground between centers in a cylindrical grinder. The preservation of concentricity between the turning and grinding operations is crucial for quality work.
A center is also used to support longer workpieces where the cutting forces would deflect the work excessively, reducing the finish and accuracy of the workpiece, or creating a hazardous situation.
A center has applications anywhere that a centered workpiece may be used; this is not limited to lathes but may include setups in dividing heads, cylindrical grinders, tool and cutter grinders, or other related equipment. The term between centers refers to any machining operation where the job needs to be performed using centers.
A center is inserted into a matching hole drilled by a center drill.

The centre lathe is used to manufacture cylindrical shapes from a range of materials, including steels and plastics. Many of the components that go together to make an engine work have been manufactured using lathes. These may be lathes operated directly by people (manual lathes) or computer controlled lathes (CNC machines) that have been programmed to carry out a particular task. A basic manual centre lathe is shown below. This type of lathe is controlled by a person turning the various handles on the top slide and cross slide in order to make a product or part.
The headstock of a centre lathe can be opened, revealing an arrangement of gears. These gears are sometimes replaced to alter the speed of rotation of the chuck. The lathe must be switched off before opening, although the motor should automatically cut off if the door is opened while the machine is running (a safety feature).
The speed of rotation of the chuck is usually set by using the gear levers. These are usually on top of the headstock or along the front and allow for a wide range of speeds.
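The gear levers are normally set to give a target cutting (surface) speed at the diameter being turned. A generic sketch of that calculation follows; the 100 surface-feet-per-minute target and 2-inch diameter are hypothetical values, not recommendations for any particular material.

```python
import math

def spindle_rpm(cutting_speed_sfpm: float, diameter_in: float) -> float:
    """Spindle speed needed to reach a target surface speed.

    Surface speed (ft/min) = pi * diameter (in) / 12 * RPM,
    solved here for RPM.
    """
    return 12.0 * cutting_speed_sfpm / (math.pi * diameter_in)

# Hypothetical example: 100 SFPM on a 2-inch diameter bar
print(round(spindle_rpm(100, 2)))  # about 191 RPM
```

The nearest available gear setting at or below the computed figure is then chosen on the headstock levers.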

Wednesday, 9 March 2011

Broaching

Broaching is a highly effective process that rivals milling and boring. For mass produced products, broaching is often an ideal solution. It is not nearly as effective for short-run production operations, since individual broaching tools can cost tens of thousands of dollars. To understand the broaching process, it is necessary to know what a broach is.

A typical broach looks a lot like a tube notched with thick, unsharpened teeth. Each tooth stands out slightly further than the previous one, so that the finishing teeth at one end are considerably wider than the front pilot section. Typically, this type of broach is sent through a circular hole in a work-piece, and completes its finishing job in one pass. Broaches are also used to cut external shapes, such as splines and keyways. Keyway broaches are long, rectangular pieces of steel with similar teeth notched into one end. Certain broaches, called rotor-cut broaches, are designed so that a succession of teeth have the same diameter, but are notched in such a way that each one only cuts a portion of the desired hole.

A broaching machine is simply a powered apparatus that sends the broach through the work-piece. Broaching machines rely on hydraulic drives. They are typically either vertical or horizontal machines. Vertical broaching machines operate in pull-down or pull-up varieties. This refers to the direction that the broach is pulled. A pull-down broach lowers the front pilot into the pre-made hole in the work-piece, and then the widening row of teeth is pulled down through the hole. Horizontal broaching machines are normally used for surface broaching. They work in a similar fashion to vertical machines, except that they operate from side-to-side on a horizontal plane. They still complete work in a single pass, however, from one side to the other. They are useful in operations that demand a rotating broach.

A broach performs functions similar to those of a saw, except that it completes its task in one movement through the material. Because of this, broaches do not have very fast speeds, though they still have exceptional output. Most of the time consumed in a broaching operation is spent loading and unloading parts, and in the time it takes for the broach to return to first position.




Spirit Level


The spirit level is a very old tool, used by carpenters, builders, and even people at home hanging a painting, to establish level or plumb lines. It is also called a bubble level because, when you align it horizontally, the goal is to bring the bubble in the liquid-filled vial to rest centered between two lines. This indicates with a good deal of accuracy whether the line is level. There are also spirit levels for checking vertical lines, which may operate by somewhat different principles.
The spirit level's origin is well documented: the first was invented by Melchisedech Thevenot in the 17th century CE. It may not have been widely used until the early 18th century, though Thevenot shared the design with other scientists and philosophers such as Robert Hooke of England. The original spirit level had several vials of liquid to help a person establish a horizontal line. The now-common design with a single bubble vial, certainly in use in plenty of homes, was not created until the 1920s, and Harry Zeiman is credited with its invention.


The upper image is a plain precision level used in the engineering field to level machines or workpieces, the lower image shows an adjustable precision level that has an accuracy of 1:10000. The adjustable nature of this level can also be used to measure the inclination of an object.
The accuracy of a spirit level can be checked by placing it on any flat surface, marking the bubble's position and rotating the level 180°. The position of the bubble should then be symmetrical to the first reading.
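This reversal check can be pushed a step further: combining the two bubble readings separates the level's own error from the actual tilt of the surface. The sketch below assumes readings taken in vial graduations with a consistent sign convention; the example values are hypothetical.

```python
def reversal_check(reading_1: float, reading_2: float) -> tuple[float, float]:
    """Separate instrument error from surface tilt via 180-degree reversal.

    Rotating the level reverses the sign of the surface tilt in the
    reading but leaves the instrument's own error unchanged:
        reading_1 =  tilt + error
        reading_2 = -tilt + error
    Solving gives the two components below.
    """
    error = (reading_1 + reading_2) / 2.0
    tilt = (reading_1 - reading_2) / 2.0
    return error, tilt

# Hypothetical readings (in graduations): +3 before, -1 after rotating
error, tilt = reversal_check(3, -1)
print(error, tilt)  # instrument error 1.0, surface tilt 2.0
```

A perfectly accurate level gives symmetrical readings, so the computed error term comes out to zero.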
Both levels have a "vee" groove machined along the base which enables the level to sit on a round bar while remaining parallel with the bar's axis. They also have a smaller cross level to enable the second axis to be roughly checked or corrected.
While a precision level may be used to check and correct twist in a machine (or workpiece), the presence of twist does not necessarily mean it must be corrected.
  • A machine such as a mill or lathe does not have to be perfectly level to operate correctly, and may in fact have a known twist deliberately introduced into the machine's bed. This twist is often introduced to ensure that a worn lathe turns parallel work, by realigning the worn bed to the unworn spindle axis.
  • Leveling a ship's lathe would be pointless due to the nature of the ship's base - floating on water. Correcting any twist in the bed, however, would be essential for accurate work to be produced on the lathe.


Sunday, 6 March 2011

Calibration


Calibration is a comparison between measurements – one of known magnitude or correctness made or set with one device and another measurement made in as similar a way as possible with a second device.
The device with the known or assigned correctness is called the standard. The second device is the unit under test (UUT), test instrument (TI), or any of several other names for the device being calibrated.

History

The words "calibrate" and "calibration" entered the English language during the American Civil War,[1] in descriptions of artillery. Many of the earliest measuring devices were intuitive and easy to validate conceptually. The term "calibration" was probably first associated with the precise division of linear distance and angles using a dividing engine, and with the measurement of gravitational mass using a weighing scale. These two forms of measurement alone, and their direct derivatives, supported nearly all commerce and technology development from the earliest civilizations until about AD 1800.
The Industrial Revolution introduced wide scale use of indirect measurement. The measurement of pressure was an early example of how indirect measurement was added to the existing direct measurement of the same phenomena.



Direct reading design



Indirect reading design from front



Indirect reading design from rear, showing Bourdon tube
Before the Industrial Revolution, the most common pressure measurement device was a hydrostatic manometer, which is not practical for measuring high pressures. Eugene Bourdon fulfilled the need for high pressure measurement with his Bourdon tube pressure gage.
In the direct reading hydrostatic manometer design on the left, unknown pressure pushes the liquid down the left side of the manometer U-tube (or unknown vacuum pulls the liquid up the tube, as shown) where a length scale next to the tube measures the pressure, referenced to the other, open end of the manometer on the right side of the U-tube. The resulting height difference "H" is a direct measurement of the pressure or vacuum with respect to atmospheric pressure. The absence of pressure or vacuum would make H=0. The self-applied calibration would only require the length scale to be set to zero at that same point.
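The direct reading described above reduces to a one-line formula: the column height H, times the fluid density and gravitational acceleration, gives the pressure relative to the open end. A minimal sketch with assumed values (water-filled manometer, standard gravity):

```python
# Hydrostatic manometer: the height difference H directly gives the
# pressure relative to the open (atmospheric) end of the U-tube.
RHO_WATER = 1000.0   # kg/m^3, density of the manometer fluid (assumed)
G = 9.80665          # m/s^2, standard gravity

def gauge_pressure(height_m, rho=RHO_WATER):
    """Pressure difference in pascals indicated by a column height H in metres."""
    return rho * G * height_m

# A 0.25 m water column corresponds to roughly 2.45 kPa gauge pressure.
print(gauge_pressure(0.25))
```

Note that when H = 0 the formula gives exactly zero pressure, which is why the manometer's only "calibration" is zeroing the length scale, as the text explains.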
In a Bourdon tube shown in the two views on the right, applied pressure entering from the bottom on the silver barbed pipe tries to straighten a curved tube (or vacuum tries to curl the tube to a greater extent), moving the free end of the tube that is mechanically connected to the pointer. This is indirect measurement that depends on calibration to read pressure or vacuum correctly. No self-calibration is possible, but generally the zero pressure state is correctable by the user.
Even in recent times, direct measurement is used to increase confidence in the validity of the measurements.
In the early days of US automobile use, people wanted to see the gasoline they were about to buy in a big glass pitcher, a direct measure of volume and quality via appearance. By 1930, rotary flowmeters were accepted as indirect substitutes. A hemispheric viewing window allowed consumers to see the blade of the flowmeter turn as the gasoline was pumped. By 1970, the windows were gone and the measurement was totally indirect.
Indirect measurement always involves linkages or conversions of some kind, and it is seldom possible to monitor the measurement intuitively. These facts intensify the need for calibration.
Most measurement techniques used today are indirect.

Calibration versus Metrology

There is no consistent demarcation between calibration and metrology. Generally, the basic process below would be metrology-centered if it involved new or unfamiliar equipment and processes. For example, a calibration laboratory owned by a successful maker of microphones would have to be proficient in electronic distortion and sound pressure measurement. For them, the calibration of a new frequency spectrum analyzer is a routine matter with extensive precedent. On the other hand, a similar laboratory supporting a coaxial cable manufacturer may not be as familiar with this specific calibration subject. A transplanted calibration process that worked well to support the microphone application may or may not be the best answer, or even adequate, for the coaxial cable application. A prior understanding of the measurement requirements of coaxial cable manufacturing would make the calibration process below more successful.

Basic calibration process

The calibration process begins with the design of the measuring instrument that needs to be calibrated. The design has to be able to "hold a calibration" through its calibration interval. In other words, the design has to be capable of measurements that are "within engineering tolerance" when used within the stated environmental conditions over some reasonable period of time. Having a design with these characteristics increases the likelihood of the actual measuring instruments performing as expected.
The exact mechanism for assigning tolerance values varies by country and industry type. The measuring equipment manufacturer generally assigns the measurement tolerance, suggests a calibration interval and specifies the environmental range of use and storage. The using organization generally assigns the actual calibration interval, which is dependent on this specific measuring equipment's likely usage level. A very common interval in the United States for 8–12 hours of use 5 days per week is six months. That same instrument in 24/7 usage would generally get a shorter interval. The assignment of calibration intervals can be a formal process based on the results of previous calibrations.
The next step is defining the calibration process. The selection of a standard or standards is the most visible part of the calibration process. Ideally, the standard has less than 1/4 of the measurement uncertainty of the device being calibrated. When this goal is met, the accumulated measurement uncertainty of all of the standards involved is considered to be insignificant when the final measurement is also made with the 4:1 ratio. This ratio was probably first formalized in Handbook 52 that accompanied MIL-STD-45662A, an early US Department of Defense metrology program specification. It was 10:1 from its inception in the 1950s until the 1970s, when advancing technology made 10:1 impossible for most electronic measurements.
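The 4:1 rule above can be expressed as a simple ratio test: the tolerance of the device under test divided by the uncertainty of the standard. A sketch, with both quantities in the same units (the function names are illustrative):

```python
# Test uncertainty ratio (TUR): the standard should contribute less
# than 1/4 of the tolerance of the unit under test (UUT).
def tur(uut_tolerance, standard_uncertainty):
    """Ratio of the UUT's tolerance to the standard's uncertainty."""
    return uut_tolerance / standard_uncertainty

def meets_4_to_1(uut_tolerance, standard_uncertainty):
    """True when the standard's uncertainty can be treated as insignificant."""
    return tur(uut_tolerance, standard_uncertainty) >= 4.0

# A 1% standard calibrating a 4% device just meets the 4:1 goal;
# the same standard against a 3% device does not.
print(meets_4_to_1(4.0, 1.0))
print(meets_4_to_1(3.0, 1.0))
```

Under the older 10:1 rule mentioned in the text, the same check would simply compare the ratio against 10 instead of 4.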
Maintaining a 4:1 accuracy ratio with modern equipment is difficult, since the test equipment being calibrated can be just as accurate as the working standard. If the accuracy ratio is less than 4:1, the calibration tolerance can be reduced to compensate. When 1:1 is reached, only an exact match between the standard and the device being calibrated yields a completely correct calibration. Another common method for dealing with this capability mismatch is to reduce the accuracy of the device being calibrated.
For example, a gage with 3% manufacturer-stated accuracy can be changed to 4% so that a 1% accuracy standard can be used at 4:1. If the gage is used in an application requiring 16% accuracy, having the gage accuracy reduced to 4% will not affect the accuracy of the final measurements. This is called a limited calibration. But if the final measurement requires 10% accuracy, then the 3% gage never can be better than 3.3:1. Then perhaps adjusting the calibration tolerance for the gage would be a better solution. If the calibration is performed at 100 units, the 1% standard would actually be anywhere between 99 and 101 units. The acceptable values of calibrations where the test equipment is at the 4:1 ratio would be 96 to 104 units, inclusive. Changing the acceptable range to 97 to 103 units would remove the potential contribution of all of the standards and preserve a 3.3:1 ratio. Continuing, a further change to the acceptable range to 98 to 102 restores more than a 4:1 final ratio.
This is a simplified example, and its mathematics can be challenged. It is important that whatever thinking guided this process in an actual calibration be recorded and accessible. Informality contributes to tolerance stacks and other difficult-to-diagnose post-calibration problems.
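The arithmetic of the worked gage example can be sketched as a guard-banding calculation, where the acceptance limits are pulled in by the standard's possible contribution (the function name and structure are illustrative, not a standard API):

```python
# Guard-banding sketch: the acceptance limits around the calibration
# point are tightened by a guard band to remove the standard's
# potential contribution to the result.
def acceptance_limits(nominal, tolerance_pct, guard_band_pct=0.0):
    """Acceptance range for one calibration point.

    tolerance_pct is the device tolerance and guard_band_pct the amount
    by which each limit is pulled in, both in percent of nominal."""
    half_width = nominal * (tolerance_pct - guard_band_pct) / 100.0
    return (nominal - half_width, nominal + half_width)

# 4% gage calibrated at 100 units, no guard band: 96 to 104
print(acceptance_limits(100, 4))
# Limits tightened by the 1% standard uncertainty: 97 to 103
print(acceptance_limits(100, 4, 1))
```

This mirrors the text's progression from 96-104 to 97-103 (and further to 98-102); each extra unit of guard band trades acceptance range for a better effective ratio.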
Also in the example above, ideally the calibration value of 100 units would be the best point in the gage's range to perform a single-point calibration. It may be the manufacturer's recommendation, or it may be the way similar devices are already being calibrated. Multiple-point calibrations are also used. Depending on the device, a zero-unit state, the absence of the phenomenon being measured, may also be a calibration point. Or zero may be resettable by the user; there are several possible variations. Again, the points to use during calibration should be recorded.
There may be specific connection techniques between the standard and the device being calibrated that may influence the calibration. For example, in electronic calibrations involving analog phenomena, the impedance of the cable connections can directly influence the result.
All of the information above is collected in a calibration procedure, which is a specific test method. These procedures capture all of the steps needed to perform a successful calibration. The manufacturer may provide one or the organization may prepare one that also captures all of the organization's other requirements. There are clearinghouses for calibration procedures such as the Government-Industry Data Exchange Program (GIDEP) in the United States.
This exact process is repeated for each of the standards used until transfer standards, certified reference materials and/or natural physical constants, the measurement standards with the least uncertainty in the laboratory, are reached. This establishes the traceability of the calibration.
See metrology for other factors that are considered during calibration process development.
After all of this, individual instruments of the specific type discussed above can finally be calibrated. The process generally begins with a basic damage check. Some organizations such as nuclear power plants collect "as-found" calibration data before any routine maintenance is performed. After routine maintenance and deficiencies detected during calibration are addressed, an "as-left" calibration is performed.
More commonly, a calibration technician is entrusted with the entire process and signs the calibration certificate, which documents the completion of a successful calibration.

Calibration process success factors

The basic process outlined above is a difficult and expensive challenge. As a commonly accepted rule of thumb, the cost of supporting ordinary equipment is generally about 10% of the original purchase price per year. Exotic devices such as scanning electron microscopes, gas chromatograph systems and laser interferometer devices can be even more costly to maintain.
The extent of the calibration program exposes the core beliefs of the organization involved. The integrity of organization-wide calibration is easily compromised. Once this happens, the links between scientific theory, engineering practice and mass production that measurement provides can be missing from the start on new work or eventually lost on old work.
The 'single measurement' device used in the basic calibration process description above does exist. But, depending on the organization, the majority of the devices that need calibration can have several ranges and many functionalities in a single instrument. A good example is a common modern oscilloscope. There could easily be 200,000 combinations of settings to calibrate completely, and there are limits on how much of an all-inclusive calibration can be automated.
Every organization using oscilloscopes has a wide variety of calibration approaches open to it. If a quality assurance program is in force, customers and program compliance efforts can also directly influence the calibration approach. Most oscilloscopes are capital assets that increase the value of the organization, in addition to the value of the measurements they make. The individual oscilloscopes are subject to depreciation for tax purposes over 3, 5, 10 years or some other period in countries with complex tax codes. The tax treatment of maintenance activity on those assets can bias calibration decisions.
New oscilloscopes are supported by their manufacturers for at least five years, in general. The manufacturers can provide calibration services directly or through agents entrusted with the details of the calibration and adjustment processes.
Very few organizations have only one oscilloscope. Generally, they are either absent or present in large groups. Older devices can be reserved for less demanding uses and get a limited calibration or no calibration at all. In production applications, oscilloscopes can be put in racks used only for one specific purpose. The calibration of that specific scope only has to address that purpose.
This whole process is repeated for each of the basic instrument types present in the organization, such as the digital multimeter (DMM) pictured below.

A DMM (top), a rack-mounted oscilloscope (center) and control panel
The picture above also shows the extent of the integration between Quality Assurance and calibration. The small horizontal unbroken paper seals connecting each instrument to the rack prove that the instrument has not been removed since it was last calibrated. These seals are also used to prevent undetected access to the adjustments of the instrument. There are also labels showing the date of the last calibration and, as dictated by the calibration interval, when the next one is due. Some organizations also assign a unique identification to each instrument, to standardize the recordkeeping and keep track of accessories that are integral to a specific calibration condition.
When the instruments being calibrated are integrated with computers, the integrated computer programs and any calibration corrections are also under control.
In the United States, there is no universally accepted nomenclature to identify individual instruments. Besides having multiple names for the same device type there also are multiple, different devices with the same name. This is before slang and shorthand further confuse the situation, which reflects the ongoing open and intense competition that has prevailed since the Industrial Revolution.

The calibration paradox

Successful calibration has to be consistent and systematic. At the same time, the complexity of some instruments requires that only key functions be identified and calibrated. Under those conditions, a degree of randomness is needed to find unexpected deficiencies. Even the most routine calibration requires a willingness to investigate any unexpected observation.
Theoretically, anyone who can read and follow the directions of a calibration procedure can perform the work. It is recognizing and dealing with the exceptions that is the most challenging aspect of the work. This is where experience and judgement are called for and where most of the resources are consumed.

Quality

To improve the quality of the calibration and have the results accepted by outside organizations, it is desirable for the calibration and subsequent measurements to be "traceable" to the internationally defined measurement units. Establishing traceability is accomplished by a formal comparison to a standard which is directly or indirectly related to national standards (NIST in the USA), international standards, or certified reference materials.
Quality management systems call for an effective metrology system, which includes formal, periodic, and documented calibration of all measuring instruments. The ISO 9000 and ISO 17025 families of standards require that these traceable actions be carried out to a high level, and set out how they can be quantified.

Instrument calibration

Calibration can be called for:
  • with a new instrument
  • when a specified time period has elapsed
  • when a specified usage (operating hours) has elapsed
  • when an instrument has had a shock or vibration that may have put it out of calibration
  • after sudden changes in weather
  • whenever observations appear questionable
In general use, calibration is often regarded as including the process of adjusting the output or indication of a measurement instrument to agree with the value of the applied standard, within a specified accuracy. For example, a thermometer could be calibrated so the error of indication or the correction is determined, and adjusted (e.g. via calibration constants) so that it shows the true temperature in Celsius at specific points on the scale. This is the perception of the instrument's end-user. However, very few instruments can be adjusted to exactly match the standards they are compared to. For the vast majority of calibrations, the calibration process is actually the comparison of an unknown to a known and the recording of the results.
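The "calibration constants" mentioned above often take the form of a simple gain-and-offset correction applied to the indicated value. A sketch with made-up constants for a hypothetical thermometer (the values are illustrative, not from any real instrument):

```python
# Applying calibration constants: a linear correction determined during
# calibration of a hypothetical thermometer.  The gain and offset below
# are assumed example values.
def corrected_reading(indicated_c, offset=-0.3, gain=1.002):
    """Estimate the true temperature (deg C) from the indicated value."""
    return gain * indicated_c + offset

# An indicated 100.0 deg C corrects to roughly 99.9 deg C with these constants.
print(corrected_reading(100.0))
```

When the instrument cannot be physically adjusted, a correction like this is applied on paper (or in software) instead, which is exactly the "comparison of an unknown to a known" the text describes.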

International

In many countries a National Metrology Institute (NMI) will exist, which maintains primary standards of measurement (the main SI units plus a number of derived units) that are used to provide traceability to customers' instruments by calibration. The NMI supports the metrological infrastructure in that country (and often others) by establishing an unbroken chain from the top level of standards to an instrument used for measurement. Examples of National Metrology Institutes are NPL in the UK, NIST in the United States, PTB in Germany, and many others. Since the Mutual Recognition Agreement was signed, it is now straightforward to take traceability from any participating NMI, and it is no longer necessary for a company to obtain traceability for measurements from the NMI of the country in which it is situated.
To communicate the quality of a calibration the calibration value is often accompanied by a traceable uncertainty statement to a stated confidence level. This is evaluated through careful uncertainty analysis.
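The stated uncertainty is typically an expanded uncertainty: independent standard uncertainty components combined in quadrature, then multiplied by a coverage factor k (k = 2 corresponds to roughly 95% confidence for a normal distribution). A minimal sketch with illustrative component values:

```python
# Expanded uncertainty sketch: combine independent standard uncertainty
# components in quadrature, then apply a coverage factor k.
import math

def expanded_uncertainty(components, k=2.0):
    """Expanded uncertainty U = k * sqrt(sum of squared components)."""
    combined = math.sqrt(sum(u * u for u in components))
    return k * combined

# Hypothetical components: standard's uncertainty, readout resolution.
print(expanded_uncertainty([0.03, 0.04]))   # ~0.1 (2 x 0.05)
```

A calibration certificate would then report something like "100.0 units +/- 0.1 units (k = 2)", which is the traceable uncertainty statement the text refers to.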