Tuesday, March 31, 2009

Compression Ratio

The compression ratio of an internal combustion engine or external combustion engine is the ratio of the volume of its combustion chamber from its largest capacity to its smallest capacity. It is a fundamental specification for many common combustion engines.
In a piston engine it is "the ratio between the volume of the cylinder and combustion chamber when the piston is at the bottom of its stroke, and the volume of the combustion chamber when the piston is at the top of its stroke".
Consider a cylinder that contains 1000 cc of air with the piston at the bottom of its stroke. When the piston has moved up to the top of its stroke and the remaining volume inside the head or combustion chamber has been reduced to 100 cc, the compression ratio is proportionally described as 1000:100 or, with fractional reduction, 10:1.
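To make the arithmetic concrete, here is a minimal Python sketch of the ratio above; the function and variable names are mine, chosen for illustration only:

# Compression ratio: largest cylinder volume (piston at BDC) divided by
# the smallest (piston at TDC).
def compression_ratio(total_volume_cc, clearance_volume_cc):
    return total_volume_cc / clearance_volume_cc

print(compression_ratio(1000, 100))  # -> 10.0, i.e. a 10:1 ratio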
A high compression ratio is desirable because it allows an engine to extract more mechanical energy from a given mass of air-fuel mixture due to its higher thermal efficiency. High ratios place the available oxygen and fuel molecules into a reduced space along with the adiabatic heat of compression - causing better mixing and evaporation of the fuel droplets. Thus they allow increased power at the moment of ignition and the extraction of more useful work from that power by expanding the hot gas to a greater degree.
Higher compression ratios will, however, make gasoline engines subject to engine knocking, also known as detonation, which can reduce an engine's efficiency or even physically damage it.
Diesel engines, on the other hand, operate on the principle of compression ignition, so a fuel that resists auto-ignition will ignite late, which also leads to engine knock.

Typical compression ratios
Petrol/gasoline engine

Due to detonation, the CR in a gasoline/petrol-powered engine will usually not be much higher than 10:1, although some production automotive engines built for high performance from 1955 to 1972 had compression ratios as high as 12.5:1, which could run safely on the high-octane leaded gasoline then available.
A technique used to prevent the onset of knock is the high "swirl" engine that forces the intake charge to adopt a very fast circular rotation in the cylinder during compression that provides quicker and more complete combustion. Recently, with the addition of variable valve timing and knock sensors to delay ignition timing, it is possible to manufacture gasoline engines with compression ratios of over 11:1 that can use 87 MON (octane rating) fuel.

Petrol/gasoline engine for racing
Motorcycle racing engines can use compression ratios as high as 14:1, and it is not uncommon to find motorcycles with compression ratios above 12.0:1 designed for 86 or 87 octane fuel.
Racing engines burning methanol and ethanol often exceed a CR of 15:1. Consumers may note that "gasohol", or 90% gasoline with 10% ethanol gives a higher octane rating (knock suppression).
Gas-fuelled engine
In engines running exclusively on LPG or CNG, the CR may be higher, due to the higher octane rating of these fuels.
Diesel engine
In an auto-ignition diesel engine (there is no spark plug; the hot air of compression ignites the injected fuel), the CR will customarily exceed 14:1. Ratios over 22:1 are common. The appropriate compression ratio depends on the design of the cylinder head. The figure is usually between 14:1 and 16:1 for indirect injection engines and between 18:1 and 20:1 for direct injection engines.
Measuring the compression pressure of an engine, with a pressure gauge connected to the spark plug opening, gives an indication of the engine's state and quality. There is, however, no formula to calculate compression ratio based on cylinder pressure.
If the nominal compression ratio of an engine is given, the pre-ignition cylinder pressure can be estimated using the following relationship:
p = p0 × CR^γ
where p0 is the cylinder pressure at bottom dead center (BDC), usually about 1 atm, CR is the compression ratio, and γ is the specific heat ratio of the working fluid, about 1.4 for air and 1.3 for a methane-air mixture.
For example, if an engine running on gasoline has a compression ratio of 10:1, the cylinder pressure at top dead center (TDC) is
pTDC = (1 bar) × 10^1.4 ≈ 25.1 bar
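As a quick check of that estimate, here is a minimal Python sketch of the relationship; it assumes the ideal adiabatic model given above, and the names are mine:

# Pre-ignition pressure estimate: p = p0 * CR^gamma (adiabatic compression).
def peak_pressure_bar(p0_bar, compression_ratio, gamma=1.4):
    return p0_bar * compression_ratio ** gamma

print(round(peak_pressure_bar(1.0, 10), 1))  # -> 25.1 bar for a 10:1 engine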

Variable Compression Ratio (VCR) Engines
The first VCR engine was built and tested by Harry Ricardo in the 1920s. This work led to him devising the octane rating system that is still in use today. SAAB has recently been involved in working with the Office of Advanced Automotive Technologies to produce a modern petrol VCR engine that showed an efficiency comparable with that of a diesel. Many companies have been carrying out their own research into VCR engines, including Nissan, Volvo, PSA/Peugeot-Citroën and Renault, but so far with no publicly demonstrated results.
The Atkinson cycle engine was one of the first attempts at variable compression. Since the compression ratio is the ratio between dynamic and static volumes of the combustion chamber the Atkinson cycle's method of increasing the length of the power stroke compared to the intake stroke ultimately altered the compression ratio at different stages of the cycle.

Thursday, March 26, 2009

Density

The density of a material is defined as its mass per unit volume. The symbol for density is ρ (the Greek letter rho).
Mathematically: ρ = m / V
Where: ρ is the density, m is the mass and V is the volume.

History : In a well-known common story, Archimedes was given the task of determining whether King Hiero's goldsmith was embezzling gold during the manufacture of a wreath dedicated to the gods and replacing it with another, cheaper alloy.
Archimedes knew that the irregularly shaped wreath could be crushed into a cube whose volume could be calculated easily and compared with the weight; but the king did not approve of this.
Baffled, Archimedes took a relaxing bath and observed from the rise of the warm water upon entering that he could calculate the volume of the gold crown through the displacement of the water. Allegedly, upon this discovery, he went running naked through the streets shouting, "Eureka! Eureka!" (Greek for "I found it"). As a result, the term "eureka" entered common parlance and is used today to indicate a moment of enlightenment.

In general, density can be changed by changing either the pressure or the temperature. Increasing the pressure will always increase the density of a material.
Increasing the temperature generally decreases the density, but there are notable exceptions to this generalisation.
For example, the density of water increases between its melting point at 0 °C and 4 °C and similar behaviour is observed in silicon at low temperatures.
The effect of pressure and temperature on the densities of liquids and solids is small: a typical compressibility for a liquid or solid is 10^-6 bar^-1 (1 bar = 0.1 MPa) and a typical thermal expansivity is 10^-5 K^-1.
In contrast, the density of gases is strongly affected by pressure. The ideal gas law says that the density of an ideal gas is given by
ρ = MP / RT
Where R is the universal gas constant, P is the pressure, M the molar mass, and T the absolute temperature.
This means that a gas at 300 K and 1 bar will have its density doubled by increasing the pressure to 2 bar or by reducing the temperature to 150 K.
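A minimal Python sketch of that relationship follows; the molar mass of dry air is my assumed value, not a figure from the text:

# Ideal gas density: rho = M * P / (R * T).
R = 8.314        # J/(mol*K), universal gas constant
M_AIR = 0.02897  # kg/mol, approximate molar mass of dry air (assumed)

def gas_density(pressure_pa, temperature_k, molar_mass=M_AIR):
    return molar_mass * pressure_pa / (R * temperature_k)

print(round(gas_density(100000, 300), 3))  # ~1.162 kg/m^3 at 1 bar, 300 K
print(round(gas_density(200000, 300), 3))  # doubles at 2 bar
print(round(gas_density(100000, 150), 3))  # doubles again at 150 K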
Osmium is the densest known substance at standard conditions for temperature and pressure.

Density of Composite material:
ASTM specification D792 describes the steps to measure the density of a composite material.
ρ = [Wa / (Wa + Ww - Wb)] × 0.9975

Where:
ρ is the density of the composite material, in g/cm3 and
Wa is the weight of the specimen when hung in the air
Ww is the weight of the partly immersed wire holding the specimen
Wb is the weight of the specimen when immersed fully in distilled water, along with the partly immersed wire holding the specimen
0.9975 is the density in g/cm3 of the distilled water at 23°C.
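Expressed as a minimal Python sketch (the weights below are illustrative numbers, not measured data):

# ASTM D792 density by water displacement; all weights in grams.
def composite_density(w_air, w_wire, w_immersed):
    # w_air: specimen in air; w_wire: partly immersed wire alone;
    # w_immersed: specimen plus wire fully immersed in distilled water.
    return (w_air / (w_air + w_wire - w_immersed)) * 0.9975  # g/cm^3 at 23 C

print(round(composite_density(10.0, 0.5, 4.2), 3))  # -> 1.583 g/cm^3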

Friday, March 20, 2009

NDXRF

Non-dispersive x-ray fluorescence (NDXRF) got its start in the 1920s, when Ross and other experimenters discovered that they could isolate an x-ray line for an element by using two filters made of different elements over two detectors.
One filter absorbs the element's x-rays, while the other transmits them. The difference in counts between the two matched detectors with balanced filters is the net intensity, and it is related to that element's concentration.
When combined with earlier work demonstrating that elements could be measured by measuring total x-ray intensities from some simple samples, a new and powerful method was born. Unfortunately, it was almost 50 years later, when small microprocessor-based analyzers were built in the 1970s, that NDXRF started to make a commercial impact.
NDXRF has the least expensive hardware of any of the XRF methods, because it requires only a few low-cost components. It needs an x-ray source, usually either a radioisotope such as Fe-55, Cd-109, Cm-244, Am-241 or Co-57, or a small x-ray tube.
It also requires a detector, such as an ionization chamber or Geiger-Mueller counter, which does not need to be energy dispersive. While Ross used two detectors, the more common approach is to use a single detector and a filter wheel or tray to position the filters over the detector in sequence. In addition to the Ross method, a single filter (Hull method) or no filter at all may be used to measure some elements.
In commercial devices it is most common to see a proportional counter used as the detector since it is a low resolution EDXRF detector.
The advantage of a proportional counter is that it can be configured to not count the backscattered source x-rays making the overall background counts substantially lower.
At the same time a proportional counter instrument may be used for EDXRF analysis, making it a hybrid EDX/NDX instrument.
With x-ray tube source devices, x-ray tube filters may be used in combination with specially selected target anodes to produce optimal sources for exciting the elements in a sample.
The non-dispersive XRF method is very powerful: in cases where an appropriate filter pair exists and can successfully isolate an element's wavelength, it is often possible to match the performance of a WDX analyzer at a tenth the cost, using 100 times less source intensity.
Application : One of the most common applications is measuring phosphorus, sulfur and chlorine in oil. Generally either a Fe-55 radioisotope, or an x-ray tube with either a Pd, Ag, or Ti target is used to excite those elements.
By looking at the absorption edges for various materials, it is easily seen that chlorine's absorption edge lies above chlorine's emission line in energy, sulfur's absorption edge lies between the chlorine and sulfur lines, and phosphorus's absorption edge lies between the sulfur and phosphorus lines.
X-rays do not readily excite hydrogen and carbon in the base matrix and the detector windows readily absorb their x-rays, and so they aren't measured.
In this case chlorine can be measured by using chlorine as a transmitting filter and sulfur as an absorbing filter. The difference between the counts of x-rays off the oil sample and through the filter correlates to the chlorine concentration.
The filters are electronically balanced by measuring the intensity on a blank and introducing a coefficient that, when multiplied by one intensity, yields a zero net count. Similarly, a sulfur and phosphorus pair of filters can be used to measure sulfur.
A single filter can be used to measure phosphorus since there are usually no measurable elements below phosphorus that would produce counts.
The matter is confused somewhat because it is difficult to produce good sulfur and phosphorus filters, so usually an element with an L absorption edge at the appropriate energy is used instead, Mo or Nb for S, and Zr for P. Since the heavy metals are denser the filters are usually much thinner than their K absorption edge counterparts.
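The balanced-filter arithmetic described above reduces to a simple subtraction. Here is a minimal Python sketch of it; the count values are illustrative, not real measurements:

# Ross balanced-filter method: net intensity from two filtered readings.
def balance_coefficient(blank_transmit_counts, blank_absorb_counts):
    # Chosen so that the net count measured on a blank is zero.
    return blank_transmit_counts / blank_absorb_counts

def net_intensity(sample_transmit_counts, sample_absorb_counts, k):
    # Counts attributable to the element the filter pair isolates.
    return sample_transmit_counts - k * sample_absorb_counts

k = balance_coefficient(12000, 10000)
print(round(net_intensity(15500, 11000, k)))  # -> 2300 counts, relates to concentration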

ED - XRF & WD - XRF

Basic Theory of X-ray Fluorescence
An electron can be ejected from its atomic orbital by the absorption of a light wave (photon) of sufficient energy. The energy of the photon (hv) must be greater than the energy with which the electron is bound to the nucleus of the atom. When an inner orbital electron is ejected from an atom, an electron from a higher energy level orbital will be transferred to the lower energy level orbital. During this transition a photon may be emitted from the atom.
This fluorescent light is called the characteristic X-ray of the element. The energy of the emitted photon will be equal to the difference in energies between the two orbitals occupied by the electron making the transition.
Because the energy difference between two specific orbital shells, in a given element, is always the same (i.e. characteristic of a particular element), the photon emitted when an electron moves between these two levels, will always have the same energy. Therefore, by determining the energy (wavelength) of the X-ray light (photon) emitted by a particular element, it is possible to determine the identity of that element.
For a particular energy (wavelength) of fluorescent light emitted by an element, the number of photons per unit time (generally referred to as peak intensity or count rate) is related to the amount of that analyte in the sample. The counting rates for all detectable elements within a sample are usually calculated by counting, for a set amount of time, the number of photons that are detected for the various analytes’ characteristic X-ray energy lines.
It is important to note that these fluorescent lines are actually observed as peaks with a semi-Gaussian distribution because of the imperfect resolution of modern detector technology. Therefore, by determining the energy of the X-ray peaks in a sample’s spectrum, and by calculating the count rate of the various elemental peaks, it is possible to qualitatively establish the elemental composition of the samples and to quantitatively measure the concentration of these elements.
Or, put another way:
When materials are excited with high-energy, short wavelength radiation (e.g., X-rays), they can become ionized. If the energy of the radiation is sufficient to dislodge a tightly-held inner electron, the atom becomes unstable and an outer electron replaces the missing inner electron. When this happens, energy is released due to the decreased binding energy of the inner electron orbital compared with an outer one. The emitted radiation is of lower energy than the primary incident X-rays and is termed fluorescent radiation. Because the energy of the emitted photon is characteristic of a transition between specific electron orbitals in a particular element, the resulting fluorescent X-rays can be used to detect the abundances of elements that are present in the sample.
X-Ray Fluorescence (XRF) Instrumentation - How Does It Work?
The analysis of major and trace elements in geological materials by XRF is made possible by the behavior of atoms when they interact with X-radiation. An XRF spectrometer works because if a sample is illuminated by an intense X-ray beam, known as the incident beam, some of the energy is scattered, but some is also absorbed within the sample in a manner that depends on its chemistry. The incident X-ray beam is typically produced from a Rh target, although W, Mo, Cr and others can also be used, depending on the application.

(Figure: an XRF spectrometer, with the sample port on top and a set of samples in silver metallic holders in the sample changer in front.)
When this primary X-ray beam illuminates the sample, it is said to be excited. The excited sample in turn emits X-rays along a spectrum of wavelengths characteristic of the types of atoms present in the sample.
How does this happen?
The atoms in the sample absorb X-ray energy by ionizing, ejecting electrons from the lower (usually K and L) energy levels. The ejected electrons are replaced by electrons from an outer, higher energy orbital.
When this happens, energy is released due to the decreased binding energy of the inner electron orbital compared with an outer one. This energy release is in the form of emission of characteristic X-rays indicating the type of atom present.
If a sample has many elements present, as is typical for most minerals and rocks, the use of a Wavelength Dispersive Spectrometer much like that in an EPMA allows the separation of a complex emitted X-ray spectrum into characteristic wavelengths for each element present.
Various types of detectors (gas flow proportional and scintillation) are used to measure the intensity of the emitted beam. The flow counter is commonly utilized for measuring long wavelength (>0.15 nm) X-rays that are typical of K spectra from elements lighter than Zn.
The scintillation detector is commonly used to analyze shorter wavelengths in the X-ray spectrum (K spectra of elements from Nb to I; L spectra of Th and U). X-rays of intermediate wavelength (K spectra produced from Zn to Zr and L spectra from Ba and the rare earth elements) are generally measured by using both detectors in tandem.
The intensity of the energy measured by these detectors is proportional to the abundance of the element in the sample. The exact value of this proportionality for each element is derived by comparison to mineral or rock standards whose composition is known from prior analyses by other techniques.
Energy Dispersive X-Ray Fluorescence (EDXRF) relies on the detector and detector electronics to resolve spectral peaks due to different energy x-rays. It wasn't until the 1960s and early 1970s that electronics had developed to the point that high-resolution detectors, like lithium-drifted silicon, Si(Li), could be made and installed in commercial devices. Computers were also a necessity for the success of EDXRF, even if they were often as large as the instrument itself.
EDXRF is relatively simple and inexpensive compared to other techniques. It requires an x-ray source, which in most laboratory instruments is a 50 to 60 kV, 50-300 W x-ray tube.
Lower-cost bench-top or handheld models may use radioisotopes such as Fe-55, Cd-109, Cm-244, Am-241 or Co-57, or a small x-ray tube.
The second major component is the detector, which must be designed to produce electrical pulses that vary with the energy of the incident x-rays.
Most laboratory EDXRF instruments still use liquid nitrogen or Peltier cooled Si(Li) detectors, while bench-top instruments usually have proportional counters, or newer Peltier cooled PIN diode detectors, but historically sodium iodide (NaI) detectors were common.
Some handheld devices use other detectors such as mercuric Iodide, CdTe, and CdZnTe in addition to PIN diode devices depending largely on the x-ray energy of the elements of interest.
The most recent and fastest-growing detector technology is the Peltier cooled silicon drift detector (SDD), which is available in some laboratory-grade EDXRF instruments.
After the source and detector, the next critical components are the x-ray tube filters, which are available in most EDXRF instruments.
Their function is to absorb or transmit some energies of source x-rays more than others, in order to reduce the counts in the region of interest while producing a peak that is well suited to exciting the elements of interest.
Secondary targets are an alternative to filters. A secondary target material is excited by the primary x-rays from the x-ray tube, and then emits secondary x-rays that are characteristic of the elemental composition of the target.
Where applicable, secondary targets yield lower background and better excitation than filters, but require approximately 100 times more primary x-ray intensity. One specialized form of secondary target is the polarizing target.
Polarizing XRF takes advantage of the principle that when x-rays are scattered off a surface they are partially polarized. The target and sample are placed on orthogonal axes to further minimize the scatter, and hence the background, at the detector.
Fixed or movable detector filters, which take advantage of non-dispersive XRF principles, are sometimes added to EDXRF devices to further improve the instrument's effective resolution or sensitivity, forming a hybrid EDX/NDX device.
Applications: EDXRF can be used for a tremendous variety of elemental analysis applications. It can be used to measure virtually every element from Na to Pu in the periodic table, in concentrations ranging from a few ppm to nearly 100 percent.
It can be used for monitoring major components in a product or process, or the addition of minor additives. Because of XRF's popularity in the geological field, EDXRF instruments are often used alongside WDXRF instruments for measuring major and minor components in geological samples.
Wavelength Dispersive X-Ray Fluorescence (WDXRF) can be relatively simple and inexpensive, or complex and very expensive, depending on the number of optical components. WDX instruments use an x-ray tube source to directly excite the sample. Because the overall efficiency of the WDXRF system is low, x-ray tubes in larger systems are normally rated at 1-4 kilowatts, though there are some specialized low-power systems that operate at 50 to 200 watts. A diffraction device, usually a crystal or multilayer, is positioned to diffract x-rays from the sample toward the detector. Diffracted wavelengths are those that satisfy the Bragg relationship nλ = 2d sin θ, where d is the atomic spacing within the crystal, n is an integer, and θ is the diffraction angle set by the geometry between the sample and detector. Other wavelengths are scattered very inefficiently. Collimators are normally used to limit the angular spread of x-rays, to further improve the effective resolution of the WDX system. Because the detector is not relied on for the system's resolution, it can be a proportional counter or other low-resolution counter capable of detecting a million or more counts per second.
All the components can be fixed to form a single WDX channel that is ideal for analyzing a single element. A simultaneous WDX analyzer will have a number of fixed single channels, usually formed in a circle around the sample with the x-ray tube facing upward in the middle. Other WDX analyzers use a goniometer to allow the angle (θ) to be changed, so that one element after another may be measured in sequence; this type of instrument is a sequential WDX analyzer. There are also combined sequential/simultaneous instruments.
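To illustrate the Bragg relationship that a goniometer solves for, here is a minimal Python sketch; the wavelength and crystal d-spacing below are illustrative values for a LiF(200) crystal, not taken from the text:

# Bragg's law: n * lambda = 2 * d * sin(theta). Solve for the goniometer angle.
import math

def bragg_angle_deg(wavelength_nm, d_spacing_nm, order=1):
    return math.degrees(math.asin(order * wavelength_nm / (2 * d_spacing_nm)))

# A 0.229 nm line diffracted by LiF(200), d ~ 0.2014 nm (assumed values)
print(round(bragg_angle_deg(0.229, 0.2014), 1))  # -> ~34.6 degrees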
Applications: WDXRF can be used for a tremendous variety of elemental analysis applications. It can be used to measure virtually every element from Na to Pu in the periodic table, and some instruments can be used for quantitative or semi-quantitative work on even lighter elements. It can measure elemental concentrations ranging from a few ppm to nearly 100 percent. It can be used for monitoring major components in a product or process, or the addition of minor additives. WDXRF is extremely popular in the geological field and is often used for measuring raw minerals and finished products composed of minerals.
EDXRF vs WDXRF
In an effort to save money, space, or sample preparation time, or simply to add an analytical instrument to their process, many companies will decide to evaluate energy dispersive x-ray fluorescence (EDXRF) analyzers as a substitute for their standard wavelength dispersive x-ray fluorescence (WDXRF) analysis.
This is very common with geological applications, where WDX is the benchmark, but it occurs with many other applications as well. What all these companies eventually discover is that EDXRF is not the low-cost drop-in replacement they thought it would be, but has significant differences, some positive and some negative, that must be considered in the evaluation process or else dealt with later, when it may be less convenient.
As most scientifically minded persons know, the energy of a light photon increases as the wavelength decreases, so in an EDX spectrum the low atomic number elements are on the left, while they are on the right in a WDX spectrum. But the difference goes far beyond that.
WDXRF:
The WDXRF analyzer uses an x-ray source to excite a sample. X-rays with wavelengths characteristic of the elements within the sample are emitted, and they, along with scattered source x-rays, go in all directions. A crystal or other diffraction device is placed in the path of the x-rays coming off the sample. An x-ray detector is positioned where it can detect the x-rays that are diffracted and scattered off the crystal. Depending on the spacing between the atoms of the crystal lattice (the diffractive device) and its angle in relation to the sample and detector, specific wavelengths directed at the detector can be controlled. The angle can be changed in order to measure elements sequentially, or multiple crystals and detectors may be arrayed around a sample for simultaneous analysis.
EDXRF:
The EDXRF analyzer also uses an x-ray source to excite the sample, but it may be configured in one of two ways. The first way is direct excitation, where the x-ray beam is pointed directly at the sample. Filters made of various elements may be placed between the source and sample to increase the excitation of the element of interest or reduce the background in the region of interest. The second way uses a secondary target: the source points at the target, the target element is excited and fluoresces, and then the target fluorescence is used to excite the sample. A detector is positioned to measure the fluorescent and scattered x-rays from the sample, and a multichannel analyzer and software assign each detector pulse an energy value, thus producing a spectrum. Note that there is no reason why the spectrum cannot be displayed in a wavelength-dependent graph format.
Points of Comparison
1. Resolution: This describes the width of the spectral peaks. The lower the resolution number, the more easily an elemental line is distinguished from other nearby x-ray line intensities.
a. The resolution of a WDX system is dependent on the crystal and optics design, particularly collimation, spacing and positional reproducibility. The effective resolution of a WDX system may vary from 20 eV in an inexpensive benchtop to 5 eV or less in a laboratory instrument. The resolution is not detector dependent.
b. The resolution of an EDX system is dependent on the resolution of the detector. This can vary from 150 eV or less for a liquid-nitrogen-cooled Si(Li) detector, 150-220 eV for various solid-state detectors, to 600 eV or more for a gas-filled proportional counter.
ADVANTAGE WDXRF – High resolution means fewer spectral overlaps and lower background intensities.
ADVANTAGE EDXRF – WDX crystals and optics are expensive, and are one more failure mode.
2. Spectral Overlaps: Spectral deconvolutions are necessary for determining net intensities when two spectral lines overlap because the resolution is too coarse for them to be measured independently.
a. With a WDX instrument of very high resolution (a low number of eV), spectral overlap corrections are not required for the vast majority of elements and applications. The gross intensities for each element can be determined in a single acquisition.
b. The EDXRF analyzer is designed to detect a group of elements all at once, so some type of deconvolution method must be used to correct for spectral overlaps. Overlaps are less of a problem with 150+ eV resolution systems, but are significant when compared to WDXRF, and they become more problematic at lower resolutions.
ADVANTAGE WDXRF – Spectral deconvolution routines introduce error due to counting statistics for every overlap correction onto every other element being corrected for. This can double or triple the error.
3. Background: The background radiation is one limiting factor for determining detection limits, repeatability, and reproducibility.
a. Since a WDX instrument usually uses direct radiation flux, the background in the region of interest is directly related to the amount of continuum radiation within the region of interest, the width of which is determined by the resolution.
b. The EDXRF instrument uses filters and/or targets to reduce the amount of continuum radiation in the region of interest (which is also resolution dependent), while producing a higher-intensity x-ray peak to excite the element of interest.
EVEN – WDX has an advantage due to resolution: if a peak is one tenth as wide, it has one tenth the background. EDX counters with filters and targets that can reduce the background intensities by a factor of ten or more.
4. Source Efficiency: How efficiently the source x-rays are utilized determines how much power is needed to make the system work optimally. Higher power costs much more money.
a. Every time an x-ray beam is scattered off a surface, the intensity is reduced by a factor of 100 or so. For any XRF system, intensity is lost in the process of exciting the sample, but a WDX analyzer also loses a factor of 100 in intensity at the diffraction device, although some modern multilayers are more efficient. The sample-to-detector path length is often 10 cm or more, introducing huge geometrical losses.
b. With direct excitation, the EDX system avoids wasting x-ray intensity. When filters are used, 3 to 10 times more energy is required, and when secondary targets are used, 100 times more energy is required, making the total energy budget similar between secondary-target EDX and WDX systems before the path length is considered. An EDX system typically has sample-to-detector path lengths of less than 1 cm.
ADVANTAGE EDXRF – In order to achieve similar counts at the detector, a WDX system needs 100-1000 times the flux of a direct-excitation EDX system and 10-100 times the flux of a secondary-target system. This is one principal reason WDX systems cost more.
5. Excitation Efficiency : Usually expressed in PPM per count-per-second (cps) or similar units, this is the other main factor for determining detection limits, repeatability, and reproducibility. The relative excitation efficiency is improved by having more source x-rays closer to but above the absorption edge energy for the element of interest.
a. WDXRF generally uses direct unaltered x-ray excitation, which contains a continuum of energies with most of them not optimal for exciting the element of interest.
b. EDXRF analyzers may use filters to reduce the continuum energies at the elemental lines, effectively increasing the percentage of x-rays above the element's absorption edge.
Filters may also be used to give a filter fluorescence line immediately above the absorption edge, to further improve excitation efficiency. Secondary targets provide an almost monochromatic line source that can be optimized for the element of interest to achieve optimal excitation efficiency.

Heat of Combustion

Heat of combustion (ΔH) is the energy released as heat when one mole of a compound undergoes complete combustion with oxygen. The chemical reaction is typically a hydrocarbon reacting with oxygen to form carbon dioxide, water and heat. It may be expressed with the quantities:
energy/mole of fuel (J/mol)
energy/mass of fuel
energy/volume of fuel
The heat of combustion is traditionally measured with a bomb calorimeter. It may also be calculated as the difference between the heats of formation (ΔfH°) of the products and reactants.
Heating value
The heating value or calorific value of a substance, usually a fuel or food, is the amount of heat released during the combustion of a specified amount of it. The calorific value is a characteristic for each substance. It is measured in units of energy per unit of the substance, usually mass, such as: kcal/kg, kJ/kg, J/mol, Btu/m³. Heating value is commonly determined by use of a bomb calorimeter.
The heat of combustion for fuels is expressed as the HHV, LHV, or GHV:
The quantity known as higher heating value (HHV) (or gross calorific value or gross energy or upper heating value) is determined by bringing all the products of combustion back to the original pre-combustion temperature, and in particular condensing any vapor produced. This is the same as the thermodynamic heat of combustion since the enthalpy change for the reaction assumes a common temperature of the compounds before and after combustion, in which case the water produced by combustion is liquid.

The quantity known as lower heating value (LHV) (or net calorific value) is determined by subtracting the heat of vaporization of the water vapor from the higher heating value. This treats any H2O formed as a vapor. The energy required to vaporize the water therefore is not realized as heat.

Gross heating value accounts for water in the exhaust leaving as vapor, and includes liquid water in the fuel prior to combustion. This value is important for fuels like wood or coal, which will usually contain some amount of water prior to burning.
A common method of relating HHV to LHV is:
HHV = LHV + hv × (nH2O,out / nfuel,in)

where hv is the heat of vaporization of water, nH2O,out is the moles of water vaporized and nfuel,in is the number of moles of fuel combusted.
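As a worked example, here is a minimal Python sketch of that relation applied to methane (CH4 + 2 O2 → CO2 + 2 H2O, so two moles of water per mole of fuel); the heat-of-vaporization and LHV figures are my assumed round numbers:

# HHV = LHV + hv * (n_H2O_out / n_fuel_in), working in kJ/mol.
HV_WATER_KJ_MOL = 44.0  # approximate molar heat of vaporization of water (assumed)

def hhv_from_lhv(lhv_kj_mol, moles_h2o_per_mole_fuel):
    return lhv_kj_mol + HV_WATER_KJ_MOL * moles_h2o_per_mole_fuel

# Methane: LHV ~802 kJ/mol, 2 mol of water per mol of fuel
print(round(hhv_from_lhv(802.0, 2)))  # -> 890 kJ/mol, near methane's HHV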
Most applications that burn fuel produce water vapor which is not used, thus wasting its heat content. In such applications, the lower heating value is the applicable measure. This is particularly relevant for natural gas, whose high hydrogen content produces much water. The gross calorific value is relevant for gas burnt in condensing boilers, which condense the water vapor produced by combustion, recovering heat that would otherwise be wasted.
Both HHV and LHV can be expressed in terms of AR (all moisture counted), MF and MAF (only water from combustion of hydrogen). AR, MF, and MAF are commonly used for indicating the heating values of coal:
AR (As Received) indicates that the fuel heating value has been measured with all moisture and ash forming minerals present.
MF (Moisture Free) or Dry indicates that the fuel heating value has been measured after the fuel has been dried of all inherent moisture but still retaining its ash forming minerals.
MAF (Moisture and Ash Free) or DAF (Dry and Ash Free) indicates that the fuel heating value has been measured in the absence of inherent moisture and ash forming minerals.

Higher heating value of some less common fuels
Fuel        HHV (MJ/kg)   HHV (BTU/lb)   HHV (kJ/mol)
Methanol       22.7           9,800          726.0
Ethanol        29.7          12,800        1,300.0
Propanol       33.6          14,500        2,020.0
Acetylene      49.9          21,500        1,300.0
Benzene        41.8          18,000        3,270.0
Ammonia        22.5           9,690          382.0
Hydrazine      19.4           8,370          622.0
Hexamine       30.0          12,900        4,200.0
Carbon         32.8          14,100          393.5

Tuesday, March 10, 2009

Total organic carbon (TOC)

Total organic carbon (TOC) is the amount of carbon bound in an organic compound and is often used as a non-specific indicator of water quality or cleanliness of pharmaceutical manufacturing equipment.
A typical analysis for TOC measures both the total carbon (TC) present and the inorganic carbon (IC). Subtracting the inorganic carbon from the total carbon yields TOC. Another common variant of TOC analysis involves removing the IC portion first and then measuring the leftover carbon. This method involves purging an acidified sample with carbon-free air or nitrogen prior to measurement, and so is more accurately called non-purgeable organic carbon (NPOC).
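The TC-IC bookkeeping is a simple difference; here is a minimal Python sketch, with illustrative concentrations rather than real analyzer output:

# TOC by difference: total carbon minus inorganic carbon, both in ppm.
def toc_by_difference(total_carbon_ppm, inorganic_carbon_ppm):
    return total_carbon_ppm - inorganic_carbon_ppm

print(round(toc_by_difference(12.4, 3.1), 1))  # -> 9.3 ppm organic carbon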
Whether the analysis of TOC is by TC-IC or NPOC methods, it may be broken into three main stages:
Acidification
Oxidation
Detection and Quantification
The first stage is acidification of the sample for the removal of the IC and POC gases. The release of these gases to the detector for measurement or to the air is dependent upon which type of analysis is of interest, the former for TC-IC and the latter for TOC (NPOC).
Prepared samples are combusted at 1,350 °C in an oxygen-rich atmosphere. All carbon present converts to carbon dioxide, which flows through scrubber tubes to remove interferences such as chlorine gas and water vapor. The carbon dioxide is then measured either by absorption into a strong base and weighing, or with an infrared detector. Most modern analyzers use non-dispersive infrared (NDIR) for detection of the carbon dioxide.
Alternatively, a manual or automated process injects the sample onto a platinum catalyst at 680 °C in an oxygen-rich atmosphere, and the concentration of carbon dioxide generated is measured with a non-dispersive infrared (NDIR) detector.
Photo-Oxidation (UV Light)
Ultra-violet light alone oxidizes the carbon within the sample to produce CO2. The UV oxidation method offers the most reliable, low maintenance method of analyzing TOC in ultra-pure waters.
UV/Chemical (Persulfate) Oxidation
UV light is the oxidizer but the oxidation power of the reaction is magnified by the addition of a chemical oxidizer, which is usually a Persulfate compound. The mechanisms of the reactions are as follows:
Free radical oxidants are formed:
S2O82– + hv → 2 SO4•–
H2O + hv → H+ + OH•
SO4•– + H2O → SO42– + OH• + H+
Excitation of organics:
R + hv → R*
R* + SO4•– + OH• → nCO2 + …
The UV/chemical oxidation method offers a relatively low-maintenance, high-sensitivity method for a wide range of applications. However, there are oxidation limitations to this method, including the inaccuracies associated with the addition of any foreign substance into the analyte, and samples with high amounts of particulates. By performing a "system blank" analysis, in which the amount of carbon contributed by the chemical additive is analyzed and then subtracted, these inaccuracies are lowered. However, analyses of levels below 200 ppb TOC are still difficult.
Thermo-Chemical (Persulfate) Oxidation
Thermo-chemical oxidation is also known as heated persulfate; the method utilizes the same free-radical formation as UV persulfate oxidation, except that it uses heat to magnify the oxidizing power of persulfate. Chemical oxidation of carbon with a strong oxidizer such as persulfate is highly efficient and, unlike UV, is not susceptible to lower recoveries caused by turbidity in samples. The analysis of system blanks, necessary in all chemical procedures, is especially necessary with heated persulfate TOC methods, because the method is so sensitive that reagents cannot be prepared with carbon contents low enough to go undetected.
Persulfate methods are used in the analysis of wastewater, drinking water, and pharmaceutical waters. When used in conjunction with sensitive NDIR detectors, heated persulfate TOC instruments readily measure TOC from single-digit parts per billion (ppb) up to hundreds of parts per million (ppm), depending on sample volumes.

Detection and Quantification
Accurate detection and quantification are the most vital components of the TOC analysis process. Conductivity and non-dispersive infrared (NDIR) are the two common detection methods used in modern TOC analyzers.
Conductivity
There are two types of conductivity detectors, direct and membrane. Direct conductivity provides an inexpensive and simple means of measuring CO2. This method has good oxidation of organics, uses no carrier gas, is good at the parts per billion (ppb) ranges, but has a very limited analytical range. Membrane conductivity relies upon the same technology as direct conductivity. Although it is more robust than direct conductivity, it suffers from slow analysis time. Both methods analyze sample conductivity before and after oxidization, attributing this differential measurement to the TOC of the sample. During the sample oxidization phase, CO2 (directly related to the TOC in the sample) and other gases are formed. The dissolved CO2 forms a weak acid, thereby changing the conductivity of the original sample proportionately to the TOC in the sample.

Non-dispersive infrared (NDIR)
The non-dispersive infrared (NDIR) method offers the only practical interference-free method for detecting CO2 in TOC analysis. The principal advantage of using NDIR is that it directly and specifically measures the CO2 generated by oxidation of the organic carbon in the oxidation reactor, rather than relying on a measurement of a secondary, corrected effect such as is used in conductivity measurements. A traditional NDIR detector relies upon flow-through-cell technology, in which the oxidation product flows into and out of the detector continuously. A region of absorption of infrared light specific to CO2, usually around 4.26 µm (2350 cm-1), is measured over time as the gas flows through the detector. A newer advance in NDIR technology is Static Pressurized Concentration (SPC): the exit valve of the NDIR is closed to allow the detector to become pressurized, and once the gases in the detector have reached equilibrium, the concentration of the CO2 is analyzed. This pressurization of the sample gas stream in the NDIR, a patent-pending technique, allows for increased sensitivity and precision by measuring the entirety of the oxidation products of the sample in one reading.

Combustion
In a combustion analyzer, half the sample is injected into a chamber where it is acidified, usually with phosphoric acid, to turn all of the inorganic carbon into carbon dioxide as per the following reaction:
CO2 + H2O ↔ H2CO3 ↔ H+ + HCO3- ↔ 2H+ + CO32-
This is then sent to a detector for measurement. The other half of the sample is injected into a combustion chamber heated to between 600 and 700 °C, some even up to 1,200 °C. Here, all the carbon reacts with oxygen, forming carbon dioxide. It is then flushed into a cooling chamber, and finally into the detector. Usually, the detector used is a non-dispersive infrared spectrophotometer. By finding the total inorganic carbon and subtracting it from the total carbon content, the amount of organic carbon is determined.

Chemical Oxidation
Chemical oxidation analyzers inject the sample into a chamber with phosphoric acid followed by persulfate. The analysis is separated into two steps. The first removes inorganic carbon by acidification and purging. After removal of the inorganic carbon, persulfate is added and the sample is either heated or bombarded with UV light from a mercury vapor lamp. Free radicals form from the persulfate and react with any carbon available to form carbon dioxide. The carbon from both determinations (steps) is either run through membranes which measure the conductivity changes that result from the presence of varying amounts of carbon dioxide, or purged into and detected by a sensitive NDIR detector. As with the combustion analyzer, the total carbon formed minus the inorganic carbon gives a good estimate of the total organic carbon in the sample. This method is often used in online applications because of its low maintenance requirements.

Total Carbon (TC) – all the carbon in the sample, including both inorganic and organic carbon
Total Inorganic Carbon (TIC) – often referred to as inorganic carbon (IC), carbonate, bicarbonate, and dissolved carbon dioxide (CO2); a material derived from non-living sources.
Total Organic Carbon (TOC) – material derived from decaying vegetation, bacterial growth, and metabolic activities of living organisms or chemicals.
Non-Purgeable Organic Carbon (NPOC) – commonly referred to as TOC; organic carbon remaining in an acidified sample after purging the sample with gas.
Purgeable (volatile) Organic Carbon (POC) – organic carbon that has been removed from a neutral or acidified sample by purging with an inert gas. These are the same compounds referred to as Volatile Organic Compounds (VOC) and are usually determined by purge-and-trap gas chromatography.
Dissolved Organic Carbon (DOC) – organic carbon remaining in a sample after filtering the sample, typically using a 0.45 micrometer filter.
Suspended Organic Carbon – also called particulate organic carbon (PtOC); the carbon in particulate form that is too large to pass through a filter.

DO by Winkler Test Method

The "Winkler test" is used to determine the concentration of dissolved oxygen in water samples.
Dissolved Oxygen, abbreviated D.O., is widely used in water quality studies and routine operation of water reclamation facilities.
An excess of manganese(II) salt, iodide (I–) and hydroxide (OH–) ions is added to a water sample, causing a white precipitate of Mn(OH)2 to form.
This precipitate is then oxidized by the dissolved oxygen in the water sample into a brown Manganese precipitate.
In the next step, a strong acid (either hydrochloric acid or sulphuric acid) is added to acidify the solution.
The brown precipitate then converts the iodide ion (I–) to iodine.
The amount of dissolved oxygen is directly proportional to the amount of iodine produced, which is determined by titration with a thiosulphate solution.
The test was first developed by Lajos Winkler while working on his doctoral dissertation in 1888. The amount of dissolved oxygen is a measure of the biological activity of the water masses.
In the first step, Manganese (II) sulfate (at 48% of the total volume) is added to an environmental water sample. Next, Potassium iodide (15% in potassium hydroxide 70%) is added to create a pinkish-brown precipitate. In the alkaline solution, dissolved oxygen will oxidize manganese(II) ions to the tetravalent state.
2 Mn(OH)2(s) + O2(aq) → 2 MnO(OH)2(s)
MnO(OH)2 appears as a brown precipitate. There is some confusion about whether the oxidised manganese is tetravalent or trivalent. Some sources claim that Mn(OH)3 is the brown precipitate, but hydrated MnO2 may also give the brown colour.
4 Mn(OH)2(s) + O2(aq) + 2 H2O → 4 Mn(OH)3(s)
The second part of the Winkler test acidifies the solution. The precipitate will dissolve back into solution, and the acid facilitates the conversion, by the brown manganese-containing precipitate, of the iodide ions into elemental iodine.
The Mn(SO4)2 formed by the acid converts the iodide ions into iodine, itself being reduced back to manganese(II) ions in an acidic medium.
Mn(SO4)2 + 2 I-(aq) → Mn2+(aq) + I2(aq) + 2 SO42-(aq)
Thiosulfate solution is used, with a starch indicator, to titrate the iodine.
2 S2O32-(aq) + I2 → S4O62-(aq) + 2 I-(aq)
From the above stoichiometric equations, we can find that:
1 mole of O2 → 4 moles of Mn(OH)3 → 2 moles of I2
Therefore, after determining the number of moles of iodine produced, we can work out the number of moles of oxygen molecules present in the original water sample.
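Putting the stoichiometry to work, here is a minimal Python sketch of the final calculation; since two thiosulfate ions reduce each I2, one mole of O2 corresponds to four moles of thiosulfate. The titrant concentration and volumes below are illustrative values, not a real procedure:

# Winkler DO: 1 mol O2 ~ 2 mol I2 ~ 4 mol S2O3(2-) consumed in the titration.
def dissolved_oxygen_mg_per_l(titrant_molarity, titrant_ml, sample_ml):
    moles_thiosulfate = titrant_molarity * titrant_ml / 1000.0
    moles_o2 = moles_thiosulfate / 4.0
    mg_o2 = moles_o2 * 32000.0           # 32 g/mol of O2, converted to mg
    return mg_o2 / (sample_ml / 1000.0)  # per litre of sample

print(round(dissolved_oxygen_mg_per_l(0.025, 7.5, 200), 1))  # -> 7.5 mg/L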
BOD5: For five-day biological oxygen demand (BOD5), several dilutions of a sample are analyzed for dissolved oxygen before and after a five-day incubation period at 20 degrees Celsius (68 degrees Fahrenheit) in the dark.
In some cases, bacteria are added to provide a population of microorganisms in the sample; these bacteria are known as "seed".
The difference in DO and the dilution factor are used to calculate BOD5. The resulting number (usually reported in parts per million or milligrams per liter) is useful in determining the relative organic strength of sewage or other polluted waters.

Wednesday, March 4, 2009

Incompatible Chemical Mixtures

Some chemicals shouldn't be mixed together. In fact, these chemicals shouldn't even be stored near each other on the chance that an accident could occur and the chemicals could react. Be sure to keep incompatibilities in mind when reusing containers to store other chemicals. Here are some examples of mixtures to avoid:
· Acids with cyanide salts or cyanide solution. Generates highly toxic hydrogen cyanide gas.
· Acids with sulphide salts or sulphide solutions. Generates highly toxic hydrogen sulphide gas.
· Acids with bleach. Generates highly toxic chlorine gas.
· Oxidizing acids (e.g., nitric acid, perchloric acid) with combustible materials (e.g., paper, alcohols, and other common solvents). May result in fire.
· Solid oxidizers (e.g., permanganates, iodates, nitrates) with combustible materials (e.g., paper, alcohols, other common solvents). May result in fire.
· Hydrides (e.g., sodium hydride) with water. May form flammable hydrogen gas.
· Phosphides (e.g., sodium phosphide) with water. May form highly toxic phosphine gas.
· Silver salts with ammonia in the presence of a strong base. May generate an explosively unstable solid.
· Alkali metals (e.g., sodium, potassium) with water. May form flammable hydrogen gas.
· Oxidizing agents (e.g., nitric acid) with reducing agents (e.g., hydrazine). May cause fires or explosions.
· Unsaturated compounds (e.g., substances containing carbonyls or double bonds) in the presence of acids or bases. May polymerize violently.
· Hydrogen peroxide/acetone mixtures when heated in the presence of an acid. May cause explosions.
· Hydrogen peroxide/acetic acid mixtures. May explode upon heating.
· Hydrogen peroxide/sulfuric acid mixtures. May spontaneously detonate.

Hypothesis, Theory & Law

Hypothesis
A hypothesis is an educated guess, based on observation. Usually, a hypothesis can be supported or refuted through experimentation or more observation. A hypothesis can be disproven, but not proven to be true.
Theory
A scientific theory summarizes a hypothesis or group of hypotheses that have been supported with repeated testing. A theory is valid as long as there is no evidence to dispute it; therefore, theories can be disproven. Basically, if evidence accumulates to support a hypothesis, the hypothesis can become accepted as a good explanation of a phenomenon. One definition of a theory is to say it's an accepted hypothesis.
Law
A law generalizes a body of observations. At the time it is made, no exceptions have been found to a law. Scientific laws describe things, but they do not explain them. One way to tell a law and a theory apart is to ask if the description gives you a means to explain 'why'.

Acids & Bases

Svante Arrhenius
· acids produce H+ ions in aqueous solutions
· bases produce OH- ions in aqueous solutions
Johannes Nicolaus Brønsted - Thomas Martin Lowry
· acids are proton donors
· bases are proton acceptors
Gilbert Newton Lewis
· acids are electron pair acceptors
· bases are electron pair donors
Properties of Acids
· Taste sour (don't taste them!)... the word 'acid' comes from the Latin acere, which means 'sour'
· Acids change litmus (a blue vegetable dye) from blue to red
· their aqueous (water) solutions conduct electric current (are electrolytes)
· react with bases to form salts and water
· evolve hydrogen gas (H2) upon reaction with an active metal (such as alkali metals, alkaline earth metals, zinc, aluminium)
Properties of Bases
· Taste bitter (don't taste them!)
· Feel slippery or soapy (don't arbitrarily touch them!)
· bases don't change the colour of litmus; they can turn red (acidified) litmus back to blue
· their aqueous (water) solutions conduct an electric current (are electrolytes)
· react with acids to form salts and water.

Chemical Composition of Air

What Is the Chemical Composition of Air?
Nearly all of the Earth's atmosphere is made up of only five gases: nitrogen, oxygen, water vapor, argon, and carbon dioxide. Several other compounds also are present. Although this CRC table does not list water vapor, air can contain as much as 5% water vapor, more commonly ranging from 1-3%. The 1-5% range places water vapor as the third most common gas (which alters the other percentages accordingly).
This is the composition of air in percent by volume, at sea level, at 15 °C and 101,325 Pa.
Nitrogen           N2    78.084%
Oxygen             O2    20.9476%
Argon              Ar     0.934%
Carbon Dioxide     CO2    0.0314%
Neon               Ne     0.001818%
Methane            CH4    0.0002%
Helium             He     0.000524%
Krypton            Kr     0.000114%
Hydrogen           H2     0.00005%
Xenon              Xe     0.0000087%
Ozone              O3     0.000007%
Nitrogen Dioxide   NO2    0.000002%
Iodine             I2     0.000001%
Carbon Monoxide    CO     trace
Ammonia            NH3    trace
Reference: CRC Handbook of Chemistry and Physics, edited by David R. Lide, 1997.

Dry Ice

What Is Dry Ice?
Dry ice is the common name for the solid form of carbon dioxide. Originally the term 'dry ice' was a trademark for the solid carbon dioxide produced by Prest Air Devices (1925), but now it refers to any solid carbon dioxide.
Why Is it Called Dry Ice?
It's called dry ice because it doesn't melt into a wet liquid. Dry ice sublimates, which means it goes from its solid form directly to its gaseous form. Since it's never wet, it must be dry!
How Is Dry Ice Made?
Dry ice is made by compressing carbon dioxide gas until it liquefies, which happens at about 870 pounds per square inch at room temperature. When the pressure is released, some of the liquid flashes into a gas, cooling the rest into dry ice frost or snow, which can be collected and pressed into pellets or blocks. This is similar to what happens when you get frost on the nozzle of a CO2 fire extinguisher. Solid carbon dioxide sublimes at -109.3 °F (-78.5 °C) at atmospheric pressure, so dry ice won't stay solid for long at room temperature.

Absolute Zero & Activated Charcoal

Absolute zero is the point where no more heat can be removed from a system, according to the absolute or thermodynamic temperature scale. This corresponds to 0 K or -273.15 °C. In classical kinetic theory, there should be no movement of individual molecules at absolute zero, but experimental evidence shows this isn't the case.
Temperature is used to describe how hot or cold an object is. The temperature of an object depends on how fast its atoms and molecules oscillate. At absolute zero, these oscillations are the slowest they can possibly be. Even at absolute zero, the motion doesn't completely stop.
Activated charcoal is used in water filters, medicines that selectively remove toxins, and chemical purification processes.
Activated charcoal is carbon that has been treated with oxygen. The treatment results in a highly porous charcoal.
These tiny holes give the charcoal a surface area of 300-2,000 m2/g, allowing liquids or gases to pass through the charcoal and interact with the exposed carbon. The carbon adsorbs a wide range of impurities and contaminants, including chlorine, odors, and pigments. Other substances, like sodium, fluoride, and nitrates, are not as attracted to the carbon and are not filtered out. Because adsorption works by chemically binding the impurities to the carbon, the active sites in the charcoal eventually become filled. Activated charcoal filters become less effective with use and have to be recharged or replaced.
Several factors influence the effectiveness of activated charcoal. The pore size and distribution vary depending on the source of the carbon and the manufacturing process. Large organic molecules are adsorbed better than smaller ones. Adsorption tends to increase as pH and temperature decrease. Contaminants are also removed more effectively if they are in contact with the activated charcoal for a longer time, so flow rate through the charcoal affects filtration.

Orbital Name Abbreviation

The orbital names s, p, d, and f stand for names given to groups of lines in the spectra of the alkali metals. These line groups are called "sharp, principal, diffuse, and fundamental".
The orbital letters are associated with the angular momentum quantum number, which is assigned an integer value from 0 to 3: s correlates to 0, p to 1, d to 2, and f to 3. The angular momentum quantum number can be used to give the shapes of the electronic orbitals: s orbitals are spherical and p orbitals are polar. It may be simpler to think of these two letters in terms of orbital shapes (d and f aren't described as readily).

Infrared spectroscopy


Infrared spectroscopy (IR spectroscopy) is the subset of spectroscopy that deals with the infrared region of the electromagnetic spectrum. It covers a range of techniques, the most common being a form of absorption spectroscopy. As with all spectroscopic techniques, it can be used to identify compounds or investigate sample composition.
The infrared portion of the electromagnetic spectrum is divided into three regions, the near-, mid- and far-infrared, named for their relation to the visible spectrum.
The far-infrared, approximately 400-10 cm-1 (25-1000 μm), lying adjacent to the microwave region, has low energy and may be used for rotational spectroscopy.
The mid-infrared, approximately 4000-400 cm-1 (2.5-25 μm), may be used to study the fundamental vibrations and associated rotational-vibrational structure.
The higher-energy near-IR, approximately 14000-4000 cm-1 (0.8-2.5 μm), can excite overtone or harmonic vibrations.
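The wavelength figures follow directly from the wavenumbers, since λ(μm) = 10,000 / ν(cm-1). A minimal Python sketch of the conversion:

# Convert a wavenumber in cm^-1 to a wavelength in micrometres.
def wavenumber_to_um(wavenumber_cm_inv):
    return 10000.0 / wavenumber_cm_inv

for edge in (14000, 4000, 400, 10):
    print(edge, "cm^-1 =", round(wavenumber_to_um(edge), 3), "um")
# -> 0.714, 2.5, 25 and 1000 um: the region boundaries quoted above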

Simple diatomic molecules have only one bond, which may stretch. More complex molecules have many bonds, and vibrations can be conjugated, leading to infrared absorptions at characteristic frequencies that may be related to chemical groups.

For example, the atoms in a CH2 group, commonly found in organic compounds, can vibrate in six different ways: symmetric and antisymmetric stretching, scissoring, rocking, wagging and twisting.

Spectrophotometry

In physics, spectrophotometry is the quantitative study of electromagnetic spectra. It is more specific than the general term electromagnetic spectroscopy in that spectrophotometry deals with visible light, near-ultraviolet, and near-infrared. Also, the term does not cover time-resolved spectroscopic techniques.
Spectrophotometry involves the use of a spectrophotometer. "A spectrophotometer is a photometer (a device for measuring light intensity) that can measure intensity as a function of the color, or more specifically, the wavelength of light". There are many kinds of spectrophotometers. Among the most important distinctions used to classify them are the wavelengths they work with, the measurement techniques they use, how they acquire a spectrum, and the sources of intensity variation they are designed to measure. Other important features of spectrophotometers include the spectral bandwidth and linear range.
Perhaps the most common application of spectrophotometers is the measurement of light absorption, but they can be designed to measure diffuse or specular reflectance. Strictly, even the emission half of a luminescence instrument is a kind of spectrophotometer.
There are two major classes of spectrophotometers: single beam and double beam.
A double beam spectrophotometer measures the ratio of the light intensity on two different light paths, and a single beam spectrophotometer measures the absolute light intensity.
Although ratio measurements are easier, and generally more stable, single beam instruments have advantages; for instance, they can have a larger dynamic range, and they can be more compact.

The sequence of events in a spectrophotometer is as follows:

1. The light source shines through the sample.
2. The sample absorbs light.
3. The detector detects how much light the sample has absorbed.
4. The detector then converts how much light the sample absorbed into a number.
5. The numbers are either plotted straight away, or are transmitted to a computer to be further manipulated (e.g. curve smoothing, baseline correction).

Many spectrophotometers must be calibrated by a procedure known as "zeroing." The absorbance of some standard substance is set as a baseline value, so the absorbances of all other substances are recorded relative to the initial "zeroed" substance. The spectrophotometer then displays % absorbance (the amount of light absorbed relative to the initial substance).
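A minimal sketch of steps 3-5 together with the zeroing idea, assuming hypothetical raw detector counts for the blank and the sample:

    import math

    # Hypothetical raw detector counts; real instruments handle this internally.
    blank_counts = 50000     # light reaching the detector through the blank (Io)
    sample_counts = 31500    # light reaching the detector through the sample (I)

    transmittance = sample_counts / blank_counts    # T = I / Io
    absorbance = -math.log10(transmittance)         # A = log10(Io / I)

    print(f"%T = {100 * transmittance:.1f}")
    print(f"A  = {absorbance:.3f}")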

Colorimetry

Colorimetry is the quantitative study of color perception. It is similar to spectrophotometry, but may be distinguished by its interest in reducing spectra to the tristimulus values from which the perception of color derives.
Types of Colorimeter:
Absorption colorimeter: In physical chemistry, a colorimeter is a device used to test the concentration of a solution by measuring its absorbance of a specific wavelength of light. To use this device, different solutions must be made, and a control (usually a mixture of distilled water and another solution) is first filled into a cuvette and placed inside the colorimeter to calibrate the machine. Once the calibration step is completed, the instrument can be used to find the concentrations of the solutions.
Tristimulus colorimeter: In digital imaging, colorimeters are tristimulus devices used for colour calibration. Accurate colour profiles ensure consistency throughout the imaging workflow, from acquisition to output.
Tristimulus values
The human eye has receptors (called cone cells) for short (S), middle (M), and long (L) wavelengths. Thus in principle, three parameters describe a color sensation. The tristimulus values of a color are the amounts of three primary colors in a three-component additive color model needed to match that test color. The tristimulus values are most often given in the CIE 1931 color space, in which they are denoted X, Y, and Z.
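In practice the tristimulus values are obtained by weighting the light's spectral power distribution by the CIE color-matching functions and summing. The sketch below shows the shape of the computation only; the arrays are made-up placeholders, not real CIE tables.

    # Tristimulus values as weighted sums of a spectral power distribution.
    # All numbers are illustrative placeholders, NOT real CIE 1931 data.

    S = [0.8, 1.0, 0.6]        # spectral power at 450, 550, 650 nm
    xbar = [0.34, 0.43, 0.28]  # placeholder colour-matching functions
    ybar = [0.04, 0.99, 0.11]
    zbar = [1.77, 0.01, 0.00]

    X = sum(s * x for s, x in zip(S, xbar))
    Y = sum(s * y for s, y in zip(S, ybar))
    Z = sum(s * z for s, z in zip(S, zbar))
    print(f"X = {X:.2f}, Y = {Y:.2f}, Z = {Z:.2f}")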
Spectroradiometer, Spectrophotometer, Spectrocolorimeter
The absolute spectral power distribution of a light source can be measured with a spectroradiometer, which works by optically collecting the light, then passing it through a monochromator before reading it in narrow bands of wavelength.
Reflected color can be measured using a spectrophotometer (also called a spectroreflectometer or reflectometer), which takes measurements in the visible region (and a little beyond) of a given color sample. If the custom of taking readings at 10 nanometer increments is followed, the visible light range of 400-700 nm will yield 31 readings. These readings are typically used to draw the sample's spectral reflectance curve (how much it reflects as a function of wavelength), the most accurate data that can be provided regarding its characteristics.
Color temperature meter
Photographers and cinematographers use information provided by these meters to decide what color correction should be done to make different light sources appear to have the same color temperature. If the user enters the reference color temperature, the meter can calculate the mired difference between the measurement and the reference, enabling the user to choose a corrective color gel or photographic filter with the closest mired factor.
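The mired arithmetic is straightforward: a mired is one million divided by the colour temperature in kelvin, and the required correction is the difference between the two mired values. A sketch with example temperatures:

    def mired(kelvin):
        """Micro reciprocal degree: 1,000,000 / colour temperature (K)."""
        return 1_000_000 / kelvin

    measured = 5600     # e.g. a daylight source, K
    reference = 3200    # e.g. a tungsten target, K

    shift = mired(reference) - mired(measured)
    print(f"required shift: {shift:+.0f} mired")  # positive -> warming gel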

Phosphate in Water

There is often a direct relationship between the intensity of the colour of a solution and the concentration of the coloured component (the analyte species) which it contains. This direct relationship forms the basis of the "colourimetric" technique. One might readily determine the concentration of a sample based on its colour intensity, simply by comparing its colour with those of a series of solutions of known concentration of the analyte species.
In some cases the colour of the solution may be due to an inherent property of the analyte itself, for example, a KMnO4 solution has a natural purple colour, the intensity of which can be readily measured. In many other cases, however, the solution colour is developed by the addition of a suitable reagent which interacts with the analyte species thereby forming a coloured complex.
The intensity of a coloured solution is normally measured on a spectrophotometer.
A beam of light of intensity Io is focused on the sample; a portion of the light is absorbed by the analyte species and the remainder, of intensity I, is transmitted.
The amount of light absorbed may be mathematically expressed as:

A = log(Io/I) (1)
The absorbance, A, is related to concentration by the Beer-Lambert law:

A = εcl (2)
which states that the absorbance of a solution is directly proportional to its concentration, c, as long as the solution path length, l, and the wavelength of measurement are held constant. Provided the Beer-Lambert law is obeyed, a plot of absorbance against concentration will give a straight line whose slope is εl, the molar absorptivity multiplied by the path length.
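A sketch of the calibration-curve idea: fit absorbance against standards of known concentration, then read an unknown off the line. All numbers are illustrative, not real measurements.

    # Least-squares fit of A = (epsilon * l) * c through the origin,
    # then inversion to find an unknown concentration. Illustrative data.

    standards_c = [0.5, 1.0, 2.0, 4.0]       # known concentrations, mg/L
    standards_a = [0.11, 0.21, 0.43, 0.85]   # measured absorbances

    slope = (sum(a * c for a, c in zip(standards_a, standards_c))
             / sum(c * c for c in standards_c))   # slope = epsilon * l

    unknown_a = 0.30
    print(f"slope (epsilon*l) = {slope:.3f} L/mg")
    print(f"unknown concentration = {unknown_a / slope:.2f} mg/L")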
Colourimetric techniques are useful in the analysis of a wide range of substances. One important application is the determination of the phosphate content in water samples.
The phosphate found in natural waters mainly exists as the orthophosphate species, PO43-; however, the polyphosphates P2O74- and P3O105- are frequently encountered. These polyphosphate species may be hydrolysed to orthophosphate.