Process safety focuses on preventing fires, explosions and accidental chemical releases in chemical process facilities or other facilities dealing with hazardous materials such as refineries, and oil and gas (onshore and offshore) production installations.
Occupational safety and health primarily covers the management of personal safety. Well-developed management systems also address process safety issues. The tools, techniques, and programs required to manage both process and occupational safety can sometimes be the same (for example, a work permit system) and in other cases may take very different approaches. LOPA (Layers of Protection Analysis) and QRA (Quantified Risk Assessment), for example, focus on process safety, whereas PPE (Personal Protective Equipment) is very much an individual-focused occupational safety issue.
A common tool used to explain the different but connected systems related to achieving process safety is James T. Reason's Swiss cheese model. In this model, barriers that prevent, detect, control and mitigate a major accident are depicted as slices, each having a number of holes. The holes represent imperfections in the barrier, which can be defined against specific performance standards. The better managed the barrier, the smaller these holes will be. When a major accident happens, it is invariably because the imperfections in all the barriers (the holes) have lined up. It is the multiplicity of barriers that provides the protection.
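The protective effect of multiple barriers can be illustrated numerically: if the barriers fail independently, the chance that every barrier's holes line up at once is the product of the individual failure probabilities. A minimal sketch with assumed, purely illustrative numbers:

```python
# Hypothetical per-barrier failure probabilities (illustrative only):
# prevention, detection, control, mitigation.
barrier_failure_probs = [0.1, 0.05, 0.02, 0.01]

# Under an independence assumption, a major accident requires every
# barrier to fail at once, so the probabilities multiply.
accident_prob = 1.0
for p in barrier_failure_probs:
    accident_prob *= p

print(f"{accident_prob:.1e}")  # 1.0e-06, far below any single barrier's failure rate
```

Real barriers are rarely fully independent, which is why common-cause failures receive so much attention in barrier management.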
Process safety generally refers to the prevention of unintentional releases of chemicals, energy, or other potentially dangerous materials (including steam) during the course of chemical processes, releases that can have a serious effect on the plant and the environment. Process safety involves, for example, the prevention of leaks, spills, equipment malfunction, over-pressures, over-temperatures, corrosion, metal fatigue, and other similar conditions. Process safety programs focus on the design and engineering of facilities, maintenance of equipment, effective alarms, effective control points, procedures, and training. It is sometimes useful to consider process safety as the outcome of a wide range of technical, management, and operational disciplines coming together in an organised way. Process safety chemists will examine both:
- The desired chemical reaction, using a reaction calorimeter. This allows not only the reaction heat to be measured, but also how much heat is "accumulated" during the various additions of chemicals (i.e. how much heat could be evolved if anything went wrong). The chemist will then, if necessary, vary the reaction conditions to arrive at a process that the proposed plant can control (i.e. the heat output is significantly less than the cooling capacity of the plant) and that has low accumulation (meaning that in the event of any problem, the current addition can be stopped without any danger of overheating).
- Undesired chemical reactions, using one or more specialised thermal screening instruments.
These instruments are typically used for examining crude materials that are intended to be purified by distillation; the results allow the chemist to set a maximum temperature limit for a process so that it will not give rise to a thermal runaway.
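The screening logic described above (heat output well below cooling capacity, low accumulation) can be sketched as a simple check. The function name, the 50% cooling margin, and the 10% accumulation threshold are assumptions for illustration, not values from any standard:

```python
def process_is_controllable(reaction_heat_kw, cooling_capacity_kw,
                            accumulated_fraction, margin=0.5):
    """Hypothetical screening check: heat output must stay well below the
    plant's cooling capacity, and accumulation must be low enough that
    stopping the current addition removes the overheating hazard."""
    heat_ok = reaction_heat_kw <= margin * cooling_capacity_kw
    accumulation_ok = accumulated_fraction <= 0.10  # assumed threshold
    return heat_ok and accumulation_ok

# A reaction releasing 40 kW against 100 kW of cooling, with 5% of the
# total heat accumulated, would pass this screen:
print(process_is_controllable(40, 100, 0.05))  # True
```

In practice such limits are set case by case from the calorimetry data, not from fixed fractions.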
Dangerous goods, abbreviated DG, are substances that when transported are a risk to health, safety, property or the environment. Certain dangerous goods that pose risks even when not being transported are known as hazardous materials (abbreviated as HAZMAT or hazmat).
Hazardous materials are often subject to chemical regulations. Hazmat teams are personnel specially trained to handle dangerous goods, which include materials that are radioactive, flammable, explosive, corrosive, oxidizing, asphyxiating, biohazardous, toxic, pathogenic, or allergenic. Also included are physical hazards such as compressed gases and liquids or hot materials, goods containing such materials or chemicals, and goods that have other characteristics rendering them hazardous in specific circumstances.
In the United States, dangerous goods are often indicated by diamond-shaped signage on the item (see NFPA 704), its container, or the building where it is stored. The color of each diamond indicates its hazard: flammable, for example, is indicated with red, because fire and heat are generally associated with red, and explosive is indicated with orange, because mixing red (flammable) with yellow (oxidizing agent) produces orange. A nonflammable and nontoxic gas is indicated with green, because all compressed air vessels were painted this color in France after World War II, and France is where the diamond system of hazmat identification originated.
Mitigating the risks associated with hazardous materials may require the application of safety precautions during their transport, use, storage and disposal. Most countries regulate hazardous materials by law, and they are subject to several international treaties as well. Even so, different countries may use different class diamonds for the same product. For example, in Australia, anhydrous ammonia UN 1005 is classified as 2.3 (toxic gas) with subsidiary hazard 8 (corrosive), whereas in the U.S. it is only classified as 2.2 (non-flammable gas).
People who handle dangerous goods will often wear protective equipment, and metropolitan fire departments often have a response team specifically trained to deal with accidents and spills. Persons who may come into contact with dangerous goods as part of their work are also often subject to monitoring or health surveillance to ensure that their exposure does not exceed occupational exposure limits.
Laws and regulations on the use and handling of hazardous materials may differ depending on the activity and status of the material. For example, one set of requirements may apply to their use in the workplace while a different set of requirements may apply to spill response, sale for consumer use, or transportation. Most countries regulate some aspect of hazardous materials.
The most widely applied regulatory scheme is that for the transportation of dangerous goods. The United Nations Economic and Social Council issues the UN Recommendations on the Transport of Dangerous Goods, which form the basis for most regional, national, and international regulatory schemes. For instance, the International Civil Aviation Organization has developed dangerous goods regulations for air transport of hazardous materials that are based upon the UN model but modified to accommodate unique aspects of air transport. Individual airline and governmental requirements are incorporated with this by the International Air Transport Association to produce the widely used IATA Dangerous Goods Regulations (DGR). Similarly, the International Maritime Organization (IMO) has developed the International Maritime Dangerous Goods Code ("IMDG Code", part of the International Convention for the Safety of Life at Sea) for transportation of dangerous goods by sea. IMO member countries have also developed the HNS Convention to provide compensation in case of dangerous goods spills in the sea.
The Intergovernmental Organisation for International Carriage by Rail has developed the Regulations concerning the International Carriage of Dangerous Goods by Rail ("RID", part of the Convention concerning International Carriage by Rail). Many individual nations have also structured their dangerous goods transportation regulations to harmonize with the UN model in organization as well as in specific requirements.
The Globally Harmonized System of Classification and Labelling of Chemicals (GHS) is an internationally agreed upon system set to replace the various classification and labeling standards used in different countries. The GHS uses consistent criteria for classification and labeling on a global level.
Classification and labeling summary tables
Dangerous goods are divided into nine classes (in addition to several subcategories) on the basis of the specific chemical characteristics producing the risk.
Note: The graphics and text in this article representing the dangerous goods safety marks are derived from the United Nations-based system of identifying dangerous goods. Not all countries use precisely the same graphics (label, placard or text information) in their national regulations. Some use graphic symbols, but without English wording or with similar wording in their national language. Refer to the dangerous goods transportation regulations of the country of interest.
For example, see the TDG Bulletin: Dangerous Goods Safety Marks based on the Canadian Transportation of Dangerous Goods Regulations.
The statement above applies equally to all the dangerous goods classes discussed in this article.
Class 1: Explosives
Information on this graphic changes depending on which "division" of explosive is shipped. Explosive dangerous goods have compatibility group letters assigned to facilitate segregation during transport. The letters used range from A to S, excluding the letters I, M, O, P, Q and R. The example above shows an explosive with compatibility group "A" (shown as 1.1A); the actual letter shown depends on the specific properties of the substance being transported. The Canadian Transportation of Dangerous Goods Regulations, for example, provides a description of the compatibility groups.

- 1.1 Explosives with a mass explosion hazard (e.g. TNT, dynamite, nitroglycerine).
- 1.2 Explosives with a severe projection hazard.
- 1.3 Explosives with a fire, blast or projection hazard but not a mass explosion hazard.
- 1.4 Minor fire or projection hazard (includes ammunition and most consumer fireworks).
- 1.5 An insensitive substance with a mass explosion hazard (explosion similar to 1.1).
- 1.6 Extremely insensitive articles.

The United States Department of Transportation (DOT) regulates hazmat transportation within the territory of the US:

- 1.1: Explosives with a mass explosion hazard (nitroglycerin/dynamite, ANFO).
- 1.2: Explosives with a blast/projection hazard.
- 1.3: Explosives with a minor blast hazard (rocket propellant, display fireworks).
- 1.4: Explosives with a major fire hazard (consumer fireworks, ammunition).
- 1.5: Blasting agents.
- 1.6: Extremely insensitive explosives.
Class 2: Gases
Gases which are compressed, liquefied or dissolved under pressure, as detailed below. Some gases have subsidiary risk classes (poisonous or corrosive).

- 2.1 Flammable gases: gases which ignite on contact with an ignition source, such as acetylene, hydrogen, and propane.
- 2.2 Non-flammable gases: gases which are neither flammable nor poisonous. Includes the cryogenic gases/liquids (temperatures below −100 °C) used for cryopreservation and rocket fuels, such as nitrogen, neon, and carbon dioxide.
- 2.3 Poisonous gases: gases liable to cause death or serious injury to human health if inhaled; examples are fluorine, chlorine, and hydrogen cyanide.
Class 3: Flammable Liquids
Flammable liquids included in Class 3 are assigned to one of the following packing groups:
- Packing Group I, if they have an initial boiling point of 35 °C or less at an absolute pressure of 101.3 kPa and any flash point, such as diethyl ether or carbon disulfide;
- Packing Group II, if they have an initial boiling point greater than 35 °C at an absolute pressure of 101.3 kPa and a flash point less than 23 °C, such as gasoline (petrol) and acetone; or
- Packing Group III, if the criteria for inclusion in Packing Group I or II are not met, such as kerosene and diesel.

Note: For further details, check the dangerous goods transportation regulations of the country of interest.
Packing groups are used for the purpose of determining the degree of protective packaging required for dangerous goods during transportation.
- Group I: great danger, and most protective packaging required. Some combinations of different classes of dangerous goods on the same vehicle or in the same container are forbidden if one of the goods is Group I.
- Group II: medium danger
- Group III: minor danger among regulated goods, and least protective packaging within the transportation requirement
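The Class 3 criteria above amount to a small decision rule. A sketch, simplified and ignoring the exceptions found in real regulations:

```python
def class3_packing_group(initial_bp_c, flash_point_c):
    """Assign a Class 3 packing group from the initial boiling point and
    flash point (both in degrees C, boiling point at 101.3 kPa absolute).
    Simplified sketch of the criteria; consult the regulations of the
    country of interest for the authoritative rules."""
    if initial_bp_c <= 35:
        return "I"      # any flash point, e.g. diethyl ether
    if flash_point_c < 23:
        return "II"     # e.g. gasoline, acetone
    return "III"        # e.g. kerosene, diesel

print(class3_packing_group(34.6, -45))  # diethyl ether -> I
print(class3_packing_group(56, -20))    # acetone -> II
print(class3_packing_group(180, 55))    # kerosene -> III
```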
One transport regulation requires that, as an aid during emergency situations, written instructions on how to respond be carried in the driver’s cab and be easily accessible.
A license or permit card for hazmat training must be presented when requested by officials.
Dangerous goods shipments also require a special declaration form prepared by the shipper. The information generally required includes the shipper’s name and address; the consignee’s name and address; descriptions of each of the dangerous goods, along with their quantity, classification, and packaging; and emergency contact information. Common formats include the form issued by the International Air Transport Association (IATA) for air shipments and the form issued by the International Maritime Organization (IMO) for sea cargo.
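The declaration's contents can be modelled as a simple record. The field names below are illustrative, not taken from the layout of any official IATA or IMO form:

```python
from dataclasses import dataclass, field

@dataclass
class DangerousGoodsDeclaration:
    # Parties and emergency contact, as generally required on the form.
    shipper_name: str
    shipper_address: str
    consignee_name: str
    consignee_address: str
    emergency_contact: str
    # One entry per dangerous good: description, quantity, class, packing group.
    goods: list = field(default_factory=list)

    def add_item(self, description, quantity, hazard_class, packing_group):
        self.goods.append({
            "description": description,
            "quantity": quantity,
            "class": hazard_class,
            "packing_group": packing_group,
        })

decl = DangerousGoodsDeclaration(
    "Acme Chemicals", "1 Plant Rd", "Widget Co", "2 Dock St",
    "+1 555 0100 (24 h)")
decl.add_item("Acetone", "200 L", "3", "II")
print(len(decl.goods))  # 1
```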
Occupational safety and health
Occupational safety and health (OSH), also commonly referred to as occupational health and safety (OHS), occupational health, or workplace health and safety (WHS), is a multidisciplinary field concerned with the safety, health, and welfare of people at work. These terms also refer to the goals of the field; that usage originated as an abbreviation of phrases such as "occupational safety and health program" or "occupational safety and health department".
The goal of occupational safety and health programs is to foster a safe and healthy work environment. OSH may also protect co-workers, family members, employers, customers, and many others who might be affected by the workplace environment. In the United States, the field encompasses occupational health as well as both occupational and non-occupational safety, including safety for activities outside of work.
In common-law jurisdictions, employers have a common law duty to take reasonable care of the safety of their employees. Statute law may in addition impose other general duties, introduce specific duties, and create government bodies with powers to regulate workplace safety issues: details of this vary from jurisdiction to jurisdiction.
As defined by the World Health Organization (WHO), "occupational health deals with all aspects of health and safety in the workplace and has a strong focus on primary prevention of hazards." Health has been defined as "a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity." Occupational health is a multidisciplinary field of healthcare concerned with enabling an individual to undertake their occupation in the way that causes least harm to their health. It contrasts, for example, with the promotion of health and safety at work, which is concerned with preventing harm from incidental hazards arising in the workplace.
Since 1950, the International Labour Organization (ILO) and the World Health Organization (WHO) have shared a common definition of occupational health. It was adopted by the Joint ILO/WHO Committee on Occupational Health at its first session in 1950 and revised at its twelfth session in 1995. The definition reads:
The main focus in occupational health is on three different objectives: (i) the maintenance and promotion of workers’ health and working capacity; (ii) the improvement of working environment and work to become conducive to safety and health and (iii) development of work organizations and working cultures in a direction which supports health and safety at work and in doing so also promotes a positive social climate and smooth operation and may enhance productivity of the undertakings. The concept of working culture is intended in this context to mean a reflection of the essential value systems adopted by the undertaking concerned. Such a culture is reflected in practice in the managerial systems, personnel policy, principles for participation, training policies and quality management of the undertaking.— Joint ILO/WHO Committee on Occupational Health
Those in the field of occupational health come from a wide range of disciplines and professions including medicine, psychology, epidemiology, physiotherapy and rehabilitation, occupational therapy, occupational medicine, human factors and ergonomics, and many others. Professionals advise on a broad range of occupational health matters. These include how to avoid particular pre-existing conditions causing a problem in the occupation, correct posture for the work, frequency of rest breaks, preventive action that can be undertaken, and so forth.
"Occupational health should aim at: the promotion and maintenance of the highest degree of physical, mental and social well-being of workers in all occupations; the prevention amongst workers of departures from health caused by their working conditions; the protection of workers in their employment from risks resulting from factors adverse to health; the placing and maintenance of the worker in an occupational environment adapted to his physiological and psychological capabilities; and, to summarize, the adaptation of work to man and of each man to his job."
The research and regulation of occupational safety and health are a relatively recent phenomenon. As labor movements arose in response to worker concerns in the wake of the industrial revolution, worker’s health entered consideration as a labor-related issue.
In the United Kingdom, the Factory Acts of the early nineteenth century (from 1802 onwards) arose out of concerns about the poor health of children working in cotton mills: the Act of 1833 created a dedicated professional Factory Inspectorate. The initial remit of the Inspectorate was to police restrictions on the working hours in the textile industry of children and young persons (introduced to prevent chronic overwork, identified as leading directly to ill-health and deformation, and indirectly to a high accident rate). However, on the urging of the Factory Inspectorate, a further Act in 1844 giving similar restrictions on working hours for women in the textile industry introduced a requirement for machinery guarding (but only in the textile industry, and only in areas that might be accessed by women or children).
In 1840 a Royal Commission published its findings on the state of conditions for the workers of the mining industry that documented the appallingly dangerous environment that they had to work in and the high frequency of accidents. The commission sparked public outrage which resulted in the Mines Act of 1842. The act set up an inspectorate for mines and collieries which resulted in many prosecutions and safety improvements, and by 1850, inspectors were able to enter and inspect premises at their discretion.
Otto von Bismarck inaugurated the first social insurance legislation in 1883 and the first worker’s compensation law in 1884 – the first of their kind in the Western world. Similar acts followed in other countries, partly in response to labor unrest.
Although work provides many economic and other benefits, a wide array of workplace hazards also present risks to the health and safety of people at work. These include but are not limited to, « chemicals, biological agents, physical factors, adverse ergonomic conditions, allergens, a complex network of safety risks, » and a broad range of psychosocial risk factors. Personal protective equipment can help protect against many of these hazards.
Physical hazards affect many people in the workplace. Occupational hearing loss is the most common work-related injury in the United States, with 22 million workers exposed to hazardous noise levels at work and an estimated $242 million spent annually on worker’s compensation for hearing loss disability. Falls are also a common cause of occupational injuries and fatalities, especially in construction, extraction, transportation, healthcare, and building cleaning and maintenance. Machines have moving parts, sharp edges, hot surfaces and other hazards with the potential to crush, burn, cut, shear, stab or otherwise strike or wound workers if used unsafely.
Biological hazards (biohazards) include infectious microorganisms such as viruses and toxins produced by those organisms such as anthrax. Biohazards affect workers in many industries; influenza, for example, affects a broad population of workers. Outdoor workers, including farmers, landscapers, and construction workers, risk exposure to numerous biohazards, including animal bites and stings, urushiol from poisonous plants, and diseases transmitted through animals such as the West Nile virus and Lyme disease. Health care workers, including veterinary health workers, risk exposure to blood-borne pathogens and various infectious diseases, especially those that are emerging.
Dangerous chemicals can pose a chemical hazard in the workplace. There are many classifications of hazardous chemicals, including neurotoxins, immune agents, dermatologic agents, carcinogens, reproductive toxins, systemic toxins, asthmagens, pneumoconiotic agents, and sensitizers. Authorities such as regulatory agencies set occupational exposure limits to mitigate the risk of chemical hazards. An international effort is investigating the health effects of mixtures of chemicals. There is some evidence that certain chemicals are harmful at lower levels when mixed with one or more other chemicals. This may be particularly important in causing cancer.
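One common way regulators handle mixtures of similar-acting chemicals is an additive rule: sum each component's measured exposure divided by its individual limit, and treat the mixture as over-exposed if the sum exceeds 1. This is, for example, the approach taken in the US OSHA air-contaminants rule; the concentrations and limits below are purely illustrative:

```python
def mixture_exposure_index(components):
    """Additive exposure index for a mixture of similar-acting chemicals:
    sum of (measured concentration / exposure limit) per component.
    A value above 1.0 means the combined exposure exceeds the limit even
    if every individual component is below its own limit."""
    return sum(conc / limit for conc, limit in components)

# Two solvents (illustrative ppm values), each individually below its limit:
index = mixture_exposure_index([(500, 750), (350, 750)])
print(round(index, 2))  # 1.13 -> combined exposure exceeds the limit
```

The rule captures the observation in the text that chemicals can be harmful at lower levels when mixed with others, at least for agents acting on the same organ system.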
Psychosocial hazards include risks to the mental and emotional well-being of workers, such as feelings of job insecurity, long work hours, and poor work-life balance. A Cochrane review, drawing on moderate-quality evidence, found that adding work-directed interventions for depressed workers receiving clinical interventions reduces the number of lost work days compared with clinical interventions alone. The review also found that adding cognitive behavioral therapy to primary or occupational care, and adding a "structured telephone outreach and care management program" to usual care, are both effective at reducing sick-leave days.
Specific occupational safety and health risk factors vary depending on the sector and industry. Construction workers might be particularly at risk of falls, for instance, whereas fishermen might be particularly at risk of drowning. The United States Bureau of Labor Statistics identifies the fishing, aviation, lumber, metalworking, agriculture, mining and transportation industries as among the more dangerous for workers. Similarly, psychosocial risks such as workplace violence are more pronounced for certain occupational groups such as health care employees, police, correctional officers and teachers.
Construction is one of the most dangerous occupations in the world, incurring more occupational fatalities than any other sector in both the United States and the European Union. In 2009, the fatal occupational injury rate among construction workers in the United States was nearly three times that for all workers. Falls are one of the most common causes of fatal and non-fatal injuries among construction workers. Proper safety equipment such as harnesses and guardrails, and procedures such as securing ladders and inspecting scaffolding, can curtail the risk of occupational injuries in the construction industry. Because accidents may have disastrous consequences for employees as well as organizations, it is of the utmost importance to ensure the health and safety of workers and compliance with HSE construction requirements. Health and safety legislation in the construction industry involves many rules and regulations. For example, the role of the CDM coordinator, introduced as a requirement of the UK Construction (Design and Management) Regulations, is aimed at improving health and safety on site.
The 2010 National Health Interview Survey Occupational Health Supplement (NHIS-OHS) identified work organization factors and occupational psychosocial and chemical/physical exposures which may increase some health risks. Among all U.S. workers in the construction sector, 44% had non-standard work arrangements (were not regular permanent employees) compared to 19% of all U.S. workers, 15% had temporary employment compared to 7% of all U.S. workers, and 55% experienced job insecurity compared to 32% of all U.S. workers. Prevalence rates for exposure to physical/chemical hazards were especially high for the construction sector. Among nonsmoking workers, 24% of construction workers were exposed to secondhand smoke while only 10% of all U.S. workers were exposed. Other physical/chemical hazards with high prevalence rates in the construction industry were frequently working outdoors (73%) and frequent exposure to vapors, gas, dust, or fumes (51%).
Agriculture workers are often at risk of work-related injuries, lung disease, noise-induced hearing loss, skin disease, as well as certain cancers related to chemical use or prolonged sun exposure. On industrialized farms, injuries frequently involve the use of agricultural machinery. The most common cause of fatal agricultural injuries in the United States is tractor rollovers, which can be prevented by roll-over protection structures that limit the risk of injury if a tractor rolls over. Pesticides and other chemicals used in farming can also be hazardous to worker health, and workers exposed to pesticides may experience illnesses or birth defects. As an industry in which family members, including children, commonly work alongside one another, agriculture is a common source of occupational injuries and illnesses among younger workers. Common causes of fatal injuries among young farm workers include drowning, machinery, and motor vehicle accidents.
The 2010 NHIS-OHS found elevated prevalence rates of several occupational exposures in the agriculture, forestry, and fishing sector which may negatively impact health. These workers often worked long hours. The prevalence rate of working more than 48 hours a week among workers employed in these industries was 37%, and 24% worked more than 60 hours a week. Of all workers in these industries, 85% frequently worked outdoors compared to 25% of all U.S. workers. Additionally, 53% were frequently exposed to vapors, gas, dust, or fumes, compared to 25% of all U.S. workers.
As the number of service sector jobs has risen in developed countries, more and more jobs have become sedentary, presenting a different array of health problems than those associated with manufacturing and the primary sector. Contemporary problems such as the growing rate of obesity and issues relating to occupational stress, workplace bullying, and overwork in many countries have further complicated the interaction between work and health.
According to data from the 2010 NHIS-OHS, hazardous physical/chemical exposures in the service sector were lower than national averages. On the other hand, potentially harmful work organization characteristics and psychosocial workplace exposures were relatively common in this sector. Among all workers in the service industry, 30% experienced job insecurity in 2010, 27% worked non-standard shifts (not a regular day shift), 21% had non-standard work arrangements (were not regular permanent employees).
Owing to the manual labour involved, the US Postal Service, UPS and FedEx are, on a per-employee basis, the 4th, 5th and 7th most dangerous companies to work for in the US.
Mining and oil & gas extraction
The mining industry still has one of the highest rates of fatalities of any industry. A range of hazards is present in surface and underground mining operations. In surface mining, leading hazards include geological instability, contact with plant and equipment, blasting, thermal extremes (heat and cold), and respiratory disease such as black lung. In underground mining operations, hazards include respiratory disease, explosions and gas (particularly in coal mine operations), geological instability, electrical equipment, contact with plant and equipment, heat stress, inrush of bodies of water, falls from height, confined spaces, and ionising radiation.
According to data from the 2010 NHIS-OHS, workers employed in mining and oil and gas extraction industries had high prevalence rates of exposure to potentially harmful work organization characteristics and hazardous chemicals.
Many of these workers worked long hours: 50% worked more than 48 hours a week and 25% worked more than 60 hours a week in 2010.
Additionally, 42% worked non-standard shifts (not a regular day shift). These workers also had high prevalence of exposure to physical/chemical hazards. In 2010, 39% had frequent skin contact with chemicals. Among nonsmoking workers, 28% of those in mining and oil and gas extraction industries had frequent exposure to secondhand smoke at work. About two-thirds were frequently exposed to vapors, gas, dust, or fumes at work.
Healthcare and social assistance
Healthcare workers are exposed to many hazards that can adversely affect their health and well-being. Long hours, changing shifts, physically demanding tasks, violence, and exposures to infectious diseases and harmful chemicals are examples of hazards that put these workers at risk for illness and injury.
According to the Bureau of Labor Statistics, U.S. hospitals recorded 253,700 work-related injuries and illnesses in 2011, which is 6.8 work-related injuries and illnesses for every 100 full-time employees. The injury and illness rate in hospitals is higher than the rates in construction and manufacturing, two industries that are traditionally thought to be relatively hazardous.
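Rates like the 6.8 figure are conventionally computed per 200,000 hours worked, which corresponds to 100 full-time employees working 40 hours a week for 50 weeks. A sketch of the arithmetic (the total-hours figure below is back-calculated from the quoted rate, not a published statistic):

```python
def incidence_rate(cases, total_hours_worked):
    """Injury/illness incidence rate per 100 full-time employees,
    i.e. cases per 200,000 hours worked."""
    return cases * 200_000 / total_hours_worked

# Back-calculating from the quoted numbers: 253,700 cases at a rate of 6.8
# implies roughly 7.5 billion hours worked, i.e. about 3.7 million
# full-time-equivalent hospital employees.
hours = 253_700 * 200_000 / 6.8
print(round(hours / 2_000))  # approximate number of full-time equivalents
```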
National management system standards for occupational health and safety include AS/NZS 4801-2001 for Australia and New Zealand, CAN/CSA Z1000-14 for Canada and ANSI/ASSE Z10-2012 for the United States. Association Française de Normalisation (AFNOR) in France also developed occupational safety and health management standards. In the United Kingdom, non-departmental public body Health and Safety Executive published Managing for health and safety (MFHS), an online guidance. In Germany, the state factory inspectorates of Bavaria and Saxony had introduced the management system OHRIS. In the Netherlands, the management system Safety Certificate Contractors combines management of occupational health and safety and the environment.
ISO 45001 was published in March 2018. Previously, the International Labour Organization (ILO) published ILO-OSH 2001, also titled "Guidelines on occupational safety and health management systems", to assist organizations with introducing OSH management systems. These guidelines encourage continual improvement in employee health and safety, achieved via a constant process of policy, organization, planning & implementation, evaluation, and action for improvement, all supported by constant auditing to determine the success of OSH actions.
From 1999 to 2018, the occupational health and safety management system standard OHSAS 18001 was adopted as a British and Polish standard and widely used internationally. OHSAS 18000 comprised two parts, OHSAS 18001 and 18002 and was developed by a selection of leading trade bodies, international standards and certification bodies to address a gap where no third-party certifiable international standard existed. It was intended to integrate with ISO 9001 and ISO 14001.
Professional roles and responsibilities.
The roles and responsibilities of OSH professionals vary regionally, but may include evaluating working environments, developing, endorsing and encouraging measures that might prevent injuries and illnesses, providing OSH information to employers, employees, and the public, providing medical examinations, and assessing the success of worker health programs.
In Norway, the main required tasks of an occupational health and safety practitioner include the following:
Systematic evaluations of the working environment.
Endorsing preventive measures which eliminate causes of illnesses in the workplace.
Providing information on the subject of employees’ health.
Providing information on occupational hygiene, ergonomics, and environmental and safety risks in the workplace.
In the Netherlands, the required tasks for health and safety staff are only summarily defined and include the following:
Providing voluntary medical examinations.
Providing a consulting room on the work environment to the workers.
Providing health assessments (if needed for the job).
The main influence of the Dutch law on the job of the safety professional is through the requirement on each employer to use the services of a certified working conditions service to advise them on health and safety. A certified service must employ sufficient numbers of four types of certified experts to cover the risks in the organisations which use the service:
A safety professional.
An occupational hygienist.
An occupational physician.
A work and organisation specialist.
In 2004, 37% of health and safety practitioners in Norway and 14% in the Netherlands had an MSc; 44% had a BSc in Norway and 63% in the Netherlands; and 19% had training as an OSH technician in Norway and 23% in the Netherlands.
The main tasks undertaken by the OHS practitioner in the US include:
- Develop processes, procedures, criteria, requirements, and methods to attain the best possible management of the hazards and exposures that can cause injury to people or damage to property or the environment;
- Apply good business practices and economic principles for efficient use of resources and to strengthen the safety processes;
- Encourage other members of the company to contribute by exchanging ideas and different approaches, so that everyone in the corporation possesses OHS knowledge and has a functional role in the development and execution of safety procedures;
- Assess services, outcomes, methods, equipment, workstations, and procedures by using qualitative and quantitative methods to recognise the hazards and measure the related risks;
- Examine all options for feasibility, effectiveness, reliability, and cost to attain the best results for the company concerned.
Knowledge required by the OHS professional in the US includes:
- Constitutional and case law controlling safety, health, and the environment
- Operational procedures to plan/develop safe work practices
- Safety, health and environmental sciences
- Design of hazard control systems (i.e. fall protection, scaffoldings)
- Design of recordkeeping systems that take collection into account, as well as storage, interpretation, and dissemination
- Mathematics and statistics
- Processes and systems for attaining safety through design
Some skills required by the OHS professional in the US include (but are not limited to):
- Understanding and relating to systems, policies and rules
- Applying checks and control methods for potential hazardous exposures
- Mathematical and statistical analysis
- Examining manufacturing hazards
- Planning safe work practices for systems, facilities, and equipment
- Understanding and using safety, health, and environmental science information for the improvement of procedures
- Interpersonal communication skills
Differences between countries and regions.
Because different countries take different approaches to ensuring occupational safety and health, areas of OSH need and focus also vary between countries and regions. Similar to the findings of the ENHSPO survey conducted in Australia, the Institute of Occupational Medicine in the UK found that there is a need to put greater emphasis on work-related illness in the UK. In contrast, in Australia and the US, a major responsibility of the OHS professional is to keep company directors and managers aware of the issues that they face with regard to occupational health and safety principles and legislation.
However, in some other areas of Europe, it is precisely this which has been lacking: “Nearly half of senior managers and company directors do not have an up-to-date understanding of their health and safety-related duties and responsibilities.”
Identifying safety and health hazards
Hazards, risks, outcomes
The terminology used in OSH varies between countries, but generally speaking:
- A hazard is something that can cause harm if not controlled.
- The outcome is the harm that results from an uncontrolled hazard.
- A risk is a combination of the probability that a particular outcome will occur and the severity of the harm involved.
“Hazard”, “risk”, and “outcome” are used in other fields to describe e.g. environmental damage, or damage to equipment. However, in the context of OSH, “harm” generally describes the direct or indirect degradation, temporary or permanent, of the physical, mental, or social well-being of workers. For example, repetitively carrying out manual handling of heavy objects is a hazard. The outcome could be a musculoskeletal disorder (MSD) or an acute back or joint injury. The risk can be expressed numerically (e.g. a 0.5 or 50/50 chance of the outcome occurring during a year), in relative terms (e.g. “high/medium/low”), or with a multi-dimensional classification scheme (e.g. situation-specific risks).
Hazard identification or assessment is an important step in the overall risk assessment and risk management process. It is where individual work hazards are identified, assessed and controlled/eliminated as close to the source (location of the hazard) as is reasonably possible. As technology, resources, social expectation or regulatory requirements change, hazard analysis focuses controls more closely toward the source of the hazard. Thus hazard control is a dynamic program of prevention. Hazard-based programs also have the advantage of not assigning or implying that there are “acceptable risks” in the workplace. A hazard-based program may not be able to eliminate all risks, but neither does it accept “satisfactory” – but still risky – outcomes. And as those who calculate and manage the risk are usually managers while those exposed to the risks are a different group, workers, a hazard-based approach can bypass the conflict inherent in a risk-based approach.
The information gathered from sources should apply to the specific type of work from which the hazards can arise. As mentioned previously, examples of these sources include interviews with people who have worked in the field of the hazard, history and analysis of past incidents, and official reports of work and the hazards encountered. Of these, personnel interviews may be the most critical for identifying undocumented practices, events, releases, hazards and other relevant information. Once the information is gathered from a collection of sources, it is recommended that it be digitally archived (to allow for quick searching) and that a physical set of the same information be kept so that it is more accessible. One innovative way to display complex historical hazard information is with a historical hazards identification map, which distills the hazard information into an easy-to-use graphical format.
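The "digitally archived, quickly searchable" idea above can be sketched in a few lines. This is purely illustrative: the record fields (source, location, description) are assumptions for the example, not a prescribed schema for hazard records.

```python
# Minimal sketch of a searchable digital archive of hazard records.
# Field names are illustrative assumptions, not a standard schema.
from dataclasses import dataclass

@dataclass
class HazardRecord:
    source: str        # e.g. "personnel interview", "incident report"
    location: str
    description: str

def search(records, keyword):
    """Return records whose description mentions the keyword (case-insensitive)."""
    kw = keyword.lower()
    return [r for r in records if kw in r.description.lower()]

archive = [
    HazardRecord("incident report", "tank farm", "Undocumented venting release"),
    HazardRecord("personnel interview", "workshop", "Solvent stored near welding bay"),
]

hits = search(archive, "release")   # finds the tank-farm record
```

A real archive would sit behind a database with full-text indexing, but the principle is the same: every record carries its provenance so interview-derived hazards stay distinguishable from documented incidents.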
Modern occupational safety and health legislation usually demands that a risk assessment be carried out prior to making an intervention. It should be kept in mind that risk management requires risk to be managed to a level which is as low as is reasonably practicable.
This assessment should:
- Identify the hazards
- Identify all affected by the hazard and how
- Evaluate the risk
- Identify and prioritize appropriate control measures
The calculation of risk is based on the likelihood or probability of the harm being realized and the severity of the consequences. This can be expressed mathematically as a quantitative assessment (by assigning low, medium and high likelihood and severity with integers and multiplying them to obtain a risk factor), or qualitatively as a description of the circumstances by which the harm could arise.
The assessment should be recorded and reviewed periodically and whenever there is a significant change to work practices. The assessment should include practical recommendations to control the risk. Once recommended controls are implemented, the risk should be re-calculated to determine if it has been lowered to an acceptable level. Generally speaking, newly introduced controls should lower risk by one level, i.e., from high to medium or from medium to low.
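The semi-quantitative calculation described above (integer likelihood multiplied by integer severity, then banded, with a newly introduced control expected to lower the risk by one level) can be sketched as follows. The band cut-offs are illustrative assumptions, not a standard.

```python
# Sketch of the likelihood x severity risk calculation described above.
# Scores run 1 (low) to 3 (high); band thresholds are illustrative.

LEVELS = {1: "low", 2: "medium", 3: "high"}

def risk_factor(likelihood: int, severity: int) -> int:
    """Multiply integer likelihood and severity scores (1-3 each)."""
    if likelihood not in LEVELS or severity not in LEVELS:
        raise ValueError("scores must be 1, 2 or 3")
    return likelihood * severity

def risk_band(factor: int) -> str:
    """Band a 1-9 risk factor into low/medium/high (assumed cut-offs)."""
    if factor <= 2:
        return "low"
    if factor <= 4:
        return "medium"
    return "high"

# Manual handling of heavy objects: likely (3) with moderate harm (2).
before = risk_factor(3, 2)   # 6 -> "high"
# After introducing mechanical lifting aids, likelihood drops to 1.
after = risk_factor(1, 2)    # 2 -> "low"
```

Re-running the calculation after implementing controls, as the text recommends, is just a second call with the reduced likelihood score.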
On an international scale, the World Health Organization (WHO) and the International Labour Organization (ILO) have begun focusing on labour environments in developing nations with projects such as Healthy Cities. Many of these developing countries are stuck in a situation in which their relative lack of resources to invest in OSH leads to increased costs due to work-related illnesses and accidents. The European Agency for Safety and Health at Work indicates that nations with less developed OSH systems spend a higher fraction of their gross national product on job-related injuries and illness – taking resources away from more productive activities. The ILO estimates that work-related illness and accidents cost up to 10% of GDP in Latin America, compared with just 2.6% to 3.8% in the EU. Asbestos, a notorious hazard, is still used in some developing countries, so asbestos-related disease is expected to remain a significant problem well into the future.
Augmented reality, virtual reality and artificial intelligence.
The impact of technologies on health and safety is an emerging field of research and practice. Augmented reality (AR) offers wide-ranging opportunities for improving health and safety, but for many emerging technologies the range of health and safety implications is not yet well understood, and the impact of machine learning on worker health and safety, and on the health and safety profession itself, is still becoming clear. New technologies and ways of working introduce new risks and challenges for WHS and workers’ compensation, but they also have the potential to make work safer and reduce workplace injury.
Nanotechnology is an example of a new, relatively unstudied technology. A 2006 Swiss survey of 138 companies using or producing nanoparticulate matter yielded 40 completed questionnaires; 65 per cent of the responding companies stated that they did not have a formal risk assessment process for dealing with nanoparticulate matter.
Nanotechnology already presents new issues for OSH professionals that will only become more difficult as nanostructures become more complex.
The size of the particles renders most containment and personal protective equipment ineffective. The toxicology values for macro sized industrial substances are rendered inaccurate due to the unique nature of nanoparticulate matter.
As nanoparticulate matter decreases in size its relative surface area increases dramatically, increasing any catalytic effect or chemical reactivity substantially versus the known value for the macro substance. This presents a new set of challenges in the near future to rethink contemporary measures to safeguard the health and welfare of employees against a nanoparticulate substance that most conventional controls have not been designed to manage.
Occupational health disparities.
Occupational health disparities refer to differences in occupational injuries and illnesses that are closely linked with demographic, social, cultural, economic, and/or political factors.
There are multiple levels of training applicable to the field of occupational safety and health (OSH). Programs range from individual non-credit certificates, focusing on specific areas of concern, to full doctoral programs. The University of Southern California was one of the first schools in the US to offer a Ph.D. program focusing on the field. Further, multiple master’s degree programs exist, such as that of Indiana State University, which offers a master of science (MS) and a master of arts (MA) in OSH. Graduate programs are designed to train educators as well as high-level practitioners.
Many OSH generalists focus on undergraduate studies; programs within schools, such as that of the University of North Carolina’s online Bachelor of Science in Environmental Health and Safety, fill a large majority of hygienist needs. However, smaller companies often do not have full-time safety specialists on staff, and thus appoint a current employee to the responsibility. Individuals finding themselves in positions such as these, or those enhancing marketability in the job-search and promotion arena, may seek out a credit certificate program. For example, the University of Connecticut’s online OSH Certificate provides students familiarity with overarching concepts through a 15-credit (5-course) program. Programs such as these are often adequate tools in building a strong educational platform for new safety managers with a minimal outlay of time and money. Further, most hygienists seek certification by organizations which train in specific areas of concentration, focusing on isolated workplace hazards. The American Society of Safety Engineers (ASSE), American Board of Industrial Hygiene (ABIH), and American Industrial Hygiene Association (AIHA) offer individual certificates on many different subjects from forklift operation to waste disposal and are the chief facilitators of continuing education in the OSH sector. In the U.S. the training of safety professionals is supported by the National Institute for Occupational Safety and Health through their NIOSH Education and Research Centers. In Australia, training in OSH is available at the vocational education and training level, and at university undergraduate and postgraduate level. Such university courses may be accredited by an Accreditation Board of the Safety Institute of Australia. The Institute has produced a Body of Knowledge which it considers to be required by a generalist safety and health professional, and offers a professional qualification based on a four-step assessment.
World Day for Safety and Health at Work
On 28 April each year, the International Labour Organization celebrates the “World Day for Safety and Health at Work” to raise awareness of safety in the workplace. Held annually since 2003, the day focuses on a specific area each year and bases a campaign around that theme.
Fire is the rapid oxidation of a material in the exothermic chemical process of combustion, releasing heat, light, and various reaction products. Slower oxidative processes like rusting or digestion are not included by this definition.
Fire is hot because the conversion of the weak double bond in molecular oxygen, O2, to the stronger bonds in the combustion products carbon dioxide and water releases energy (418 kJ per 32 g of O2); the bond energies of the fuel play only a minor role here. At a certain point in the combustion reaction, called the ignition point, flames are produced. The flame is the visible portion of the fire. Flames consist primarily of carbon dioxide, water vapor, oxygen and nitrogen. If hot enough, the gases may become ionized to produce plasma. Depending on the substances alight, and any impurities outside, the color of the flame and the fire’s intensity will be different.
Fire in its most common form can result in conflagration, which has the potential to cause physical damage through burning. Fire is an important process that affects ecological systems around the globe. The positive effects of fire include stimulating growth and maintaining various ecological systems.
The negative effects of fire include hazard to life and property, atmospheric pollution, and water contamination. If fire removes protective vegetation, heavy rainfall may lead to an increase in soil erosion by water. Also, when vegetation is burned, the nitrogen it contains is released into the atmosphere, unlike elements such as potassium and phosphorus which remain in the ash and are quickly recycled into the soil. This loss of nitrogen caused by a fire produces a long-term reduction in the fertility of the soil, but this fecundity can potentially be recovered as molecular nitrogen in the atmosphere is “fixed” and converted to ammonia by natural phenomena such as lightning and by leguminous plants that are “nitrogen-fixing” such as clover, peas, and green beans.
Fire has been used by humans in rituals, in agriculture for clearing land, for cooking, generating heat and light, for signaling, propulsion purposes, smelting, forging, incineration of waste, cremation, and as a weapon or mode of destruction.
Fires start when a flammable or a combustible material, in combination with a sufficient quantity of an oxidizer such as oxygen gas or another oxygen-rich compound (though non-oxygen oxidizers exist), is exposed to a source of heat or ambient temperature above the flash point for the fuel/oxidizer mix, and is able to sustain a rate of rapid oxidation that produces a chain reaction. This is commonly called the fire tetrahedron. Fire cannot exist without all of these elements in place and in the right proportions. For example, a flammable liquid will start burning only if the fuel and oxygen are in the right proportions. Some fuel-oxygen mixes may require a catalyst, a substance that is not consumed, when added, in any chemical reaction during combustion, but which enables the reactants to combust more readily.
Once ignited, a chain reaction must take place whereby fires can sustain their own heat by the further release of heat energy in the process of combustion and may propagate, provided there is a continuous supply of an oxidizer and fuel.
If the oxidizer is oxygen from the surrounding air, the presence of a force of gravity, or of some similar force caused by acceleration, is necessary to produce convection, which removes combustion products and brings a supply of oxygen to the fire. Without gravity, a fire rapidly surrounds itself with its own combustion products and non-oxidizing gases from the air, which exclude oxygen and extinguish the fire. Because of this, the risk of fire in a spacecraft is small when it is coasting in inertial flight. This does not apply if oxygen is supplied to the fire by some process other than thermal convection.
Fire can be extinguished by removing any one of the elements of the fire tetrahedron. Consider a natural gas flame, such as from a stove-top burner. The fire can be extinguished by any of the following:
- turning off the gas supply, which removes the fuel source;
- covering the flame completely, which smothers the flame as the combustion both uses the available oxidizer (the oxygen in the air) and displaces it from the area around the flame with CO2;
- application of water, which removes heat from the fire faster than the fire can produce it (similarly, blowing hard on a flame will displace the heat of the currently burning gas from its fuel source, to the same end), or
- application of a retardant chemical such as Halon to the flame, which retards the chemical reaction itself until the rate of combustion is too slow to maintain the chain reaction.
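The extinguishing methods above all remove one element of the fire tetrahedron. A toy model makes the logic explicit; this is purely illustrative bookkeeping, not a physical simulation.

```python
# Toy model of the fire tetrahedron: combustion persists only while
# fuel, oxidizer, heat and the chain reaction are all present, and
# removing any one element (as each method above does) extinguishes it.

TETRAHEDRON = {"fuel", "oxidizer", "heat", "chain_reaction"}

def is_burning(elements: set) -> bool:
    """Fire persists only if all four tetrahedron elements are present."""
    return TETRAHEDRON <= elements

fire = set(TETRAHEDRON)
assert is_burning(fire)

fire.discard("heat")     # e.g. applying water removes heat
assert not is_burning(fire)
```

Each listed method maps to one `discard`: shutting off the gas removes "fuel", smothering removes "oxidizer", water removes "heat", and a retardant such as Halon interrupts "chain_reaction".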
In contrast, fire is intensified by increasing the overall rate of combustion. Methods to do this include balancing the input of fuel and oxidizer to stoichiometric proportions, increasing fuel and oxidizer input in this balanced mix, increasing the ambient temperature so the fire’s own heat is better able to sustain combustion, or providing a catalyst, a non-reactant medium in which the fuel and oxidizer can more readily react.
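The "stoichiometric proportions" mentioned above can be made concrete with a worked example for methane burning in air (CH4 + 2 O2 → CO2 + 2 H2O). The molar masses and the ~23.2% oxygen mass fraction of air are standard values; the calculation is a sketch of the mass balance, not an engineering design method.

```python
# Stoichiometric air requirement for methane: CH4 + 2 O2 -> CO2 + 2 H2O.

M_CH4 = 16.04                 # g/mol, molar mass of methane
M_O2 = 32.00                  # g/mol, molar mass of oxygen
O2_MASS_FRACTION_AIR = 0.232  # oxygen mass fraction of dry air

# Two moles of O2 are needed per mole of CH4, so per kg of fuel:
o2_per_kg_fuel = 2 * M_O2 / M_CH4                        # ~3.99 kg O2
air_per_kg_fuel = o2_per_kg_fuel / O2_MASS_FRACTION_AIR  # ~17.2 kg air
```

Supplying fuel and air in roughly this 1:17 mass ratio is what "balancing the input of fuel and oxidizer to stoichiometric proportions" means in practice; leaner or richer mixes burn cooler.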
A flame is a mixture of reacting gases and solids emitting visible, infrared, and sometimes ultraviolet light, the frequency spectrum of which depends on the chemical composition of the burning material and intermediate reaction products. In many cases, such as the burning of organic matter, for example wood, or the incomplete combustion of gas, incandescent solid particles called soot produce the familiar red-orange glow of “fire”.
This light has a continuous spectrum. Complete combustion of gas has a dim blue color due to the emission of single wavelength radiation from various electron transitions in the excited molecules formed in the flame. Usually oxygen is involved, but hydrogen burning in chlorine also produces a flame, producing hydrogen chloride (HCl). Other possible combinations producing flames, amongst many, are fluorine and hydrogen, and hydrazine and nitrogen tetroxide. Hydrogen and hydrazine/UDMH flames are similarly pale blue, while burning boron and its compounds, evaluated in mid-20th century as a high energy fuel for jet and rocket engines, emits intense green flame, leading to its informal nickname of “Green Dragon”.
The glow of a flame is complex. Black-body radiation is emitted from soot, gas, and fuel particles, though the soot particles are too small to behave like perfect blackbodies. There is also photon emission by de-excited atoms and molecules in the gases. Much of the radiation is emitted in the visible and infrared bands. The color depends on temperature for the black-body radiation, and on chemical makeup for the emission spectra. The dominant color in a flame changes with temperature. The photo of the forest fire in Canada is an excellent example of this variation. Near the ground, where most burning is occurring, the fire is white, the hottest color possible for organic material in general, or yellow. Above the yellow region, the color changes to orange, which is cooler, then red, which is cooler still. Above the red region, combustion no longer occurs, and the uncombusted carbon particles are visible as black smoke.
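The temperature dependence of the black-body component described above can be illustrated with Wien's displacement law, λ_max = b / T, with b ≈ 2.898×10⁻³ m·K. This applies to an ideal black body; as noted above, real soot particles only approximate one.

```python
# Wien's displacement law: the peak black-body emission wavelength
# shifts to shorter ("whiter") wavelengths as temperature rises.

WIEN_B = 2.898e-3  # m*K, Wien's displacement constant

def peak_wavelength_nm(temp_kelvin: float) -> float:
    """Peak black-body emission wavelength, in nanometres."""
    return WIEN_B / temp_kelvin * 1e9

# A ~1,300 K red-glowing region peaks deep in the infrared (~2,230 nm);
# only the short-wavelength tail of the curve is visible, which is why
# cooler regions look red and hotter regions look yellow-white.
red_region = peak_wavelength_nm(1300)
hot_region = peak_wavelength_nm(2000)
```

This is also why the color sequence in the forest-fire example runs white/yellow near the hot base through orange to red higher up: each band corresponds to progressively cooler soot.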
The common distribution of a flame under normal gravity conditions depends on convection, as soot tends to rise to the top of a general flame, as in a candle in normal gravity conditions, making it yellow. In microgravity or zero gravity, such as an environment in outer space, convection no longer occurs, and the flame becomes spherical, with a tendency to become more blue and more efficient (although it may go out if not moved steadily, as the CO2 from combustion does not disperse as readily in microgravity, and tends to smother the flame).
There are several possible explanations for this difference, of which the most likely is that the temperature is sufficiently evenly distributed that soot is not formed and complete combustion occurs. Experiments by NASA reveal that diffusion flames in microgravity allow more soot to be completely oxidized after they are produced than diffusion flames on Earth, because of a series of mechanisms that behave differently in microgravity when compared to normal gravity conditions. These discoveries have potential applications in applied science and industry, especially concerning fuel efficiency.
In combustion engines, various steps are taken to eliminate a flame. The method depends mainly on whether the fuel is oil, wood, or a high-energy fuel such as jet fuel.
Temperatures of flames by appearance
Objects whose surface is at a temperature above approximately 470 °C (878 °F) will glow, emitting light at a color that indicates the temperature of that surface; see the section on red heat for more about this effect. It is a misconception, however, that one can judge the temperature of a fire by the color of its flames or of the sparks in the flames. For many reasons, both chemical and optical, these colors may not match the red/orange/yellow/white heat temperatures on the chart: barium nitrate burns a bright green, for instance, and this color is not present on the heat chart.
Typical temperatures of flames
The “adiabatic flame temperature” of a given fuel and oxidizer pair is the temperature the combustion products would reach if no heat were lost to the surroundings; real flames burn somewhat cooler.
- Oxy–dicyanoacetylene 4,990 °C (9,000 °F)
- Oxy–acetylene 3,480 °C (6,300 °F)
- Oxyhydrogen 2,800 °C (5,100 °F)
- Air–acetylene 2,534 °C (4,600 °F)
- Blowtorch (air–MAPP gas) 2,200 °C (4,000 °F)
- Bunsen burner (air–natural gas) 1,300 to 1,600 °C (2,400 to 2,900 °F)
- Candle (air–paraffin) 1,000 °C (1,800 °F)
- Smoldering cigarette:
- Temperature without drawing: side of the lit portion; 400 °C (750 °F); middle of the lit portion: 585 °C (1,100 °F)
- Temperature during drawing: middle of the lit portion: 700 °C (1,300 °F)
- The middle of the lit portion is always the hottest.
Every natural ecosystem has its own fire regime, and the organisms in those ecosystems are adapted to or dependent upon that fire regime. Fire creates a mosaic of different habitat patches, each at a different stage of succession. Different species of plants, animals, and microbes specialize in exploiting a particular stage, and by creating these different types of patches, fire allows a greater number of species to exist within a landscape.
The fossil record of fire first appears with the establishment of a land-based flora in the Middle Ordovician period, 470 million years ago, permitting the accumulation of oxygen in the atmosphere as never before, as the new hordes of land plants pumped it out as a waste product. When this concentration rose above 13%, it permitted the possibility of wildfire. Wildfire is first recorded in the Late Silurian fossil record, 420 million years ago, by fossils of charcoalified plants. Apart from a controversial gap in the Late Devonian, charcoal is present ever since. The level of atmospheric oxygen is closely related to the prevalence of charcoal: clearly oxygen is the key factor in the abundance of wildfire. Fire also became more abundant when grasses radiated and became the dominant component of many ecosystems, around 6 to 7 million years ago; these grasses provided tinder that allowed for the more rapid spread of fire. These widespread fires may have initiated a positive feedback process, whereby they produced a warmer, drier climate more conducive to fire.
The ability to control fire was a dramatic change in the habits of early humans. Making fire to generate heat and light made it possible for people to cook food, simultaneously increasing the variety and availability of nutrients and reducing disease by killing organisms in the food. The heat produced would also help people stay warm in cold weather, enabling them to live in cooler climates. Fire also kept nocturnal predators at bay. Evidence of cooked food is found from 1.9 million years ago, although fire was probably not used in a controlled fashion until 400,000 years ago. There is some evidence that fire may have been used in a controlled fashion about 1 million years ago. Evidence becomes widespread around 50 to 100 thousand years ago, suggesting regular use from this time; interestingly, resistance to air pollution started to evolve in human populations at a similar point in time. The use of fire became progressively more sophisticated, with it being used to create charcoal and to control wildlife from ‘tens of thousands’ of years ago.
Fire has also been used for centuries as a method of torture and execution, as evidenced by death by burning as well as torture devices such as the iron boot, which could be filled with water, oil, or even lead and then heated over an open fire to the agony of the wearer.
By the Neolithic Revolution, during the introduction of grain-based agriculture, people all over the world used fire as a tool in landscape management. These fires were typically controlled burns or “cool fires”, as opposed to uncontrolled “hot fires”, which damage the soil. Hot fires destroy plants and animals, and endanger communities. This is especially a problem in the forests of today where traditional burning is prevented in order to encourage the growth of timber crops. Cool fires are generally conducted in the spring and autumn. They clear undergrowth, burning up biomass that could trigger a hot fire should it get too dense. They provide a greater variety of environments, which encourages game and plant diversity. For humans, they make dense, impassable forests traversable. Another human use for fire in regards to landscape management is its use to clear land for agriculture. Slash-and-burn agriculture is still common across much of tropical Africa, Asia and South America. “For small farmers, it is a convenient way to clear overgrown areas and release nutrients from standing vegetation back into the soil”, said Miguel Pinedo-Vasquez, an ecologist at the Earth Institute’s Center for Environmental Research and Conservation. However this useful strategy is also problematic. Growing population, fragmentation of forests and warming climate are making the earth’s surface more prone to ever-larger escaped fires. These harm ecosystems and human infrastructure, cause health problems, and send up spirals of carbon and soot that may encourage even more warming of the atmosphere – and thus feed back into more fires. Globally today, as much as 5 million square kilometres – an area more than half the size of the United States – burns in a given year.
There are numerous modern applications of fire. In its broadest sense, fire is used by nearly every human being on earth in a controlled setting every day. Users of internal combustion vehicles employ fire every time they drive. Thermal power stations provide electricity for a large percentage of humanity.
The use of fire in warfare has a long history. Fire was the basis of all early thermal weapons. Homer detailed the use of fire by Greek soldiers who hid in a wooden horse to burn Troy during the Trojan war. Later the Byzantine fleet used Greek fire to attack ships and men. In the First World War, the first modern flamethrowers were used by infantry, and were successfully mounted on armoured vehicles in the Second World War. In the latter war, incendiary bombs were used by Axis and Allies alike, notably on Tokyo, Rotterdam, London, Hamburg and, notoriously, at Dresden; in the latter two cases firestorms were deliberately caused in which a ring of fire surrounding each city was drawn inward by an updraft caused by a central cluster of fires. The United States Army Air Force also extensively used incendiaries against Japanese targets in the latter months of the war, devastating entire cities constructed primarily of wood and paper houses. Napalm was first used in July 1944, towards the end of the Second World War, although its use did not gain public attention until the Vietnam War. Molotov cocktails were also used.
Use as fuel
Setting fuel aflame releases usable energy. Wood was a prehistoric fuel, and is still viable today. The use of fossil fuels, such as petroleum, natural gas, and coal, in power plants supplies the vast majority of the world’s electricity today; the International Energy Agency states that nearly 80% of the world’s power came from these sources in 2002. The fire in a power station is used to heat water, creating steam that drives turbines. The turbines then spin an electric generator to produce electricity. Fire is also used to provide mechanical work directly, in both external and internal combustion engines.
The unburnable solid remains of a combustible material left after a fire is called clinker if its melting point is below the flame temperature, so that it fuses and then solidifies as it cools, and ash if its melting point is above the flame temperature.
Protection and prevention
Wildfire prevention programs around the world may employ techniques such as wildland fire use and prescribed or controlled burns. Wildland fire use refers to any fire of natural causes that is monitored but allowed to burn. Controlled burns are fires ignited by government agencies under less dangerous weather conditions.
Fire fighting services are provided in most developed areas to extinguish or contain uncontrolled fires. Trained firefighters use fire apparatus and water supply resources such as water mains and fire hydrants, and may use Class A or Class B foam depending on what is feeding the fire.
Fire prevention is intended to reduce sources of ignition. Fire prevention also includes education to teach people how to avoid causing fires. Buildings, especially schools and tall buildings, often conduct fire drills to inform and prepare citizens on how to react to a building fire. Purposely starting destructive fires constitutes arson and is a crime in most jurisdictions.
Model building codes require passive fire protection and active fire protection systems to minimize damage resulting from a fire. The most common form of active fire protection is fire sprinklers. To maximize passive fire protection of buildings, building materials and furnishings in most developed countries are tested for fire-resistance, combustibility and flammability. Upholstery, carpeting and plastics used in vehicles and vessels are also tested.
Where fire prevention and fire protection have failed to prevent damage, fire insurance can mitigate the financial impact.
Different restoration methods and measures are used depending on the type of fire damage that occurred. Restoration after fire damage can be performed by property management teams, building maintenance personnel, or by the homeowners themselves; however, contacting a certified professional fire damage restoration specialist is often regarded as the safest way to restore fire damaged property due to their training and extensive experience. Most are listed under "Fire and Water Restoration" and they can help speed repairs, whether for individual homeowners or for the largest of institutions.
Fire and Water Restoration companies are regulated by the appropriate state's Department of Consumer Affairs, usually the state contractors license board. In California, all Fire and Water Restoration companies must register with the California Contractors State License Board. Presently, the California Contractors State License Board has no specific classification for "water and fire damage restoration." Hence, the Contractors State License Board requires both an asbestos certification (ASB) as well as a demolition classification (C-21) in order to perform Fire and Water Restoration work.
An explosion is a rapid increase in volume and release of energy in an extreme manner, usually with the generation of high temperatures and the release of gases. Supersonic explosions created by high explosives are known as detonations and travel via supersonic shock waves. Subsonic explosions are created by low explosives through a slower burning process known as deflagration.
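The detonation/deflagration distinction above can be sketched as a tiny classifier based on whether the reaction front outruns the local speed of sound. The 343 m/s figure and the sample velocities below are illustrative assumptions, not part of the original text.

```python
# Classify an explosion front by comparing its propagation speed with the
# local speed of sound. An illustrative sketch, not a safety calculation.

SPEED_OF_SOUND_AIR = 343.0  # m/s, dry air at about 20 degrees C

def classify_front(front_speed_m_s: float) -> str:
    """Return 'detonation' for supersonic fronts, 'deflagration' otherwise."""
    if front_speed_m_s > SPEED_OF_SOUND_AIR:
        return "detonation"   # shock-wave driven, e.g. high explosives
    return "deflagration"     # flame-front driven, e.g. low explosives

print(classify_front(6900.0))  # typical high-explosive detonation velocity
print(classify_front(0.5))     # slow flame propagation
```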
Explosions can occur in nature due to a large influx of energy. Most natural explosions arise from volcanic or stellar processes of various sorts. Explosive volcanic eruptions occur when magma rising from below has much dissolved gas in it; the reduction of pressure as the magma rises causes the gas to bubble out of solution, resulting in a rapid increase in volume. Explosions also occur as a result of impact events and in phenomena such as hydrothermal explosions (also due to volcanic processes). Explosions can also occur outside of Earth in the universe in events such as supernovae. Explosions frequently occur during bushfires in eucalyptus forests where the volatile oils in the tree tops suddenly combust.
Among the largest known explosions in the universe are supernovae, which result when a star explodes from the sudden starting or stopping of nuclear fusion, and gamma-ray bursts, whose nature is still in some dispute. Solar flares are an example of a common explosion on the Sun, and presumably on most other stars as well. The energy source for solar flare activity comes from the tangling of magnetic field lines resulting from the rotation of the Sun’s conductive plasma. Another type of large astronomical explosion occurs when a very large meteoroid or an asteroid impacts the surface of another object, such as a planet.
The most common artificial explosives are chemical explosives, usually involving a rapid and violent oxidation reaction that produces large amounts of hot gas. Gunpowder was the first explosive to be discovered and put to use. Other notable early developments in chemical explosive technology were Frederick Augustus Abel’s development of nitrocellulose in 1865 and Alfred Nobel’s invention of dynamite in 1866. Chemical explosions (both intentional and accidental) are often initiated by an electric spark or flame in the presence of oxygen. Accidental explosions may occur in fuel tanks, rocket engines, etc.
Electrical and magnetic
A high current electrical fault can create an ‘electrical explosion’ by forming a high energy electrical arc which rapidly vaporizes metal and insulation material. This arc flash hazard is a danger to persons working on energized switchgear. Also, excessive magnetic pressure within an ultra-strong electromagnet can cause a magnetic explosion.
Mechanical and vapor
The bursting of a sealed or partially sealed container under internal pressure is often referred to as an explosion, although it is strictly a physical process, as opposed to a chemical or nuclear one. Examples include an overheated boiler or a simple tin can of beans tossed into a fire.
Boiling liquid expanding vapor explosions are one type of mechanical explosion that can occur when a vessel containing a pressurized liquid is ruptured, causing a rapid increase in volume as the liquid evaporates. Note that the contents of the container may cause a subsequent chemical explosion, the effects of which can be dramatically more serious, such as a propane tank in the midst of a fire. In such a case, the effects of the mechanical explosion when the tank fails are compounded by the explosion resulting from the released (initially liquid and then almost instantaneously gaseous) propane in the presence of an ignition source. For this reason, emergency workers often differentiate between the two events.
In addition to stellar nuclear explosions, a man-made nuclear weapon is a type of explosive weapon that derives its destructive force from nuclear fission or from a combination of fission and fusion. As a result, even a nuclear weapon with a small yield is significantly more powerful than the largest conventional explosives available, with a single weapon capable of completely destroying an entire city.
Properties of explosions
Explosive force is released in a direction perpendicular to the surface of the explosive. If a grenade detonates in mid-air, the blast therefore expands in all directions. In contrast, in a shaped charge the explosive forces are focused to produce a greater local effect.
The speed of the reaction is what distinguishes an explosive reaction from an ordinary combustion reaction. Unless the reaction occurs very rapidly, the thermally expanding gases will be moderately dissipated in the medium, with no large differential in pressure and there will be no explosion. Consider a wood fire. As the fire burns, there certainly is the evolution of heat and the formation of gases, but neither is liberated rapidly enough to build up a sudden substantial pressure differential and then cause an explosion. This can be likened to the difference between the energy discharge of a battery, which is slow, and that of a flash capacitor like that in a camera flash, which releases its energy all at once.
Evolution of heat
The generation of heat in large quantities accompanies most explosive chemical reactions. The exceptions are called entropic explosives and include organic peroxides such as acetone peroxide. It is the rapid liberation of heat that causes the gaseous products of most explosive reactions to expand and generate high pressures. This rapid generation of high pressures of the released gas constitutes the explosion. The liberation of heat with insufficient rapidity will not cause an explosion. For example, although a unit mass of coal yields five times as much heat as a unit mass of nitroglycerin, the coal cannot be used as an explosive (except in the form of coal dust) because the rate at which it yields this heat is quite slow. In fact, a substance which burns less rapidly (i.e. slow combustion) may actually evolve more total heat than an explosive which detonates rapidly (i.e. fast combustion). In the former, slow combustion converts more of the internal energy (i.e. chemical potential) of the burning substance into heat released to the surroundings, while in the latter, fast combustion (i.e. detonation) instead converts more internal energy into work on the surroundings (i.e. less internal energy converted into heat); cf. heat and work (thermodynamics) are equivalent forms of energy. See Heat of Combustion for a more thorough treatment of this topic.
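The coal-versus-nitroglycerin comparison above is about power rather than total energy: the explosive releases far less energy per kilogram, but over microseconds rather than minutes. A rough back-of-the-envelope sketch with order-of-magnitude figures (our assumptions, not values from the text):

```python
# Total energy vs rate of release: coal stores more chemical energy per
# kilogram than nitroglycerin, but an explosive releases its energy in
# microseconds, so its power is enormously higher. All figures are rough
# order-of-magnitude assumptions for illustration only.

COAL_MJ_PER_KG = 30.0            # approximate heat of combustion
NITROGLYCERIN_MJ_PER_KG = 6.3    # approximate heat of explosion

coal_power = COAL_MJ_PER_KG * 1e6 / 600.0            # burned over ~10 min, W
nitro_power = NITROGLYCERIN_MJ_PER_KG * 1e6 / 20e-6  # detonated in ~20 us, W

print(f"coal  ~{coal_power:.0e} W per kg")
print(f"nitro ~{nitro_power:.0e} W per kg")
print(f"power ratio ~{nitro_power / coal_power:.0e}")
```

Despite the roughly fivefold energy advantage of coal, the power ratio comes out millions to one in favor of the explosive, which is the point the paragraph makes.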
When a chemical compound is formed from its constituents, heat may either be absorbed or released. The quantity of heat absorbed or given off during transformation is called the heat of formation. Heats of formations for solids and gases found in explosive reactions have been determined for a temperature of 25 °C and atmospheric pressure, and are normally given in units of kilojoules per gram-molecule. A positive value indicates that heat is absorbed during the formation of the compound from its elements; such a reaction is called an endothermic reaction. In explosive technology only materials that are exothermic—that have a net liberation of heat and have a negative heat of formation—are of interest. Reaction heat is measured under conditions either of constant pressure or constant volume. It is this heat of reaction that may be properly expressed as the "heat of explosion".
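The bookkeeping described above is Hess's law: the heat of reaction is the sum of the products' heats of formation minus that of the reactants. A minimal sketch, using standard textbook values in kJ/mol (the dictionary keys and function name are our own):

```python
# Sketch of Hess's law: heat of reaction from tabulated heats of formation.
# Standard-state values at 25 degrees C, kJ/mol; treat them as illustrative.

HEATS_OF_FORMATION = {          # kJ/mol
    "CH4(g)": -74.8,
    "O2(g)": 0.0,               # elements in their standard state are zero
    "CO2(g)": -393.5,
    "H2O(g)": -241.8,
}

def heat_of_reaction(reactants, products):
    """Sum over products minus sum over reactants; negative means exothermic."""
    dh = sum(n * HEATS_OF_FORMATION[s] for s, n in products.items())
    dh -= sum(n * HEATS_OF_FORMATION[s] for s, n in reactants.items())
    return dh

# Methane combustion: CH4 + 2 O2 -> CO2 + 2 H2O(g)
dh = heat_of_reaction({"CH4(g)": 1, "O2(g)": 2}, {"CO2(g)": 1, "H2O(g)": 2})
print(f"{dh:.1f} kJ/mol")  # negative sign: exothermic, as required of explosives
```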
Initiation of reaction
A chemical explosive is a compound or mixture which, upon the application of heat or shock, decomposes or rearranges with extreme rapidity, yielding much gas and heat. Many substances not ordinarily classed as explosives may do one, or even two, of these things.
A reaction must be capable of being initiated by the application of shock, heat, or a catalyst (in the case of some explosive chemical reactions) to a small portion of the mass of the explosive material. A material in which the first three factors (rapidity of reaction, production of gas, and evolution of heat) are present cannot be accepted as an explosive unless the reaction can be made to occur when needed.
Fragmentation is the accumulation and projection of particles as the result of a high explosive detonation. Fragments could be part of a structure such as a magazine. High velocity, low angle fragments can travel hundreds or thousands of feet with enough energy to initiate other surrounding high explosive items, injure or kill personnel and damage vehicles or structures.
Personal protective equipment (PPE)
Personal protective equipment (PPE) is protective clothing, helmets, goggles, or other garments or equipment designed to protect the wearer’s body from injury or infection. The hazards addressed by protective equipment include physical, electrical, heat, chemicals, biohazards, and airborne particulate matter. Protective equipment may be worn for job-related occupational safety and health purposes, as well as for sports and other recreational activities. "Protective clothing" is applied to traditional categories of clothing, and "protective gear" applies to items such as pads, guards, shields, or masks, and others.
The purpose of personal protective equipment is to reduce employee exposure to hazards when engineering controls and administrative controls are not feasible or effective to reduce these risks to acceptable levels. PPE is needed when there are hazards present. PPE has the serious limitation that it does not eliminate the hazard at the source and may result in employees being exposed to the hazard if the equipment fails.
Any item of PPE imposes a barrier between the wearer/user and the working environment. This can create additional strains on the wearer, impair their ability to carry out their work, and create significant levels of discomfort. Any of these can discourage wearers from using PPE correctly, therefore placing them at risk of injury, ill-health or, under extreme circumstances, death. Good ergonomic design can help to minimise these barriers and can therefore help to ensure safe and healthy working conditions through the correct use of PPE.
Practices of occupational safety and health can use hazard controls and interventions to mitigate workplace hazards, which pose a threat to the safety and quality of life of workers. The hierarchy of hazard controls provides a policy framework which ranks the types of hazard controls in terms of absolute risk reduction. At the top of the hierarchy are elimination and substitution, which remove the hazard entirely or replace the hazard with a safer alternative. If elimination or substitution measures cannot apply, engineering controls and administrative controls, which seek to design safer mechanisms and coach safer human behavior, are implemented. Personal protective equipment ranks last on the hierarchy of controls because workers remain regularly exposed to the hazard, with only a barrier of protection. The hierarchy of controls is important in acknowledging that, while personal protective equipment has tremendous utility, it is not the desired mechanism of control in terms of worker safety.
Personal protective equipment can be categorized by the area of the body protected, by the types of hazard, and by the type of garment or accessory. A single item, for example boots, may provide multiple forms of protection: a steel toe cap and steel insoles for protection of the feet from crushing or puncture injuries, impervious rubber and lining for protection from water and chemicals, high reflectivity and heat resistance for protection from radiant heat, and high electrical resistivity for protection from electric shock. The protective attributes of each piece of equipment must be compared with the hazards expected to be found in the workplace. More breathable types of personal protective equipment may not lead to more contamination but do result in greater user satisfaction.
Respirators serve to protect the user from breathing in contaminants in the air, thus preserving the health of one’s respiratory tract. There are two main types of respirators. One type of respirator functions by filtering out chemicals and gases, or airborne particles, from the air breathed by the user. The filtration may be either passive or active (powered). Gas masks and particulate respirators are examples of this type of respirator. A second type of respirator protects users by providing clean, respirable air from another source. This type includes airline respirators and self-contained breathing apparatus (SCBA). In work environments, respirators are relied upon when adequate ventilation is not available or other engineering control systems are not feasible or inadequate.
In the United Kingdom, an organization with extensive expertise in respiratory protective equipment is the Institute of Occupational Medicine. This expertise has been built on a long-standing and varied research programme, ranging from the setting of workplace protection factors to the assessment of the efficacy of masks available through high street retail outlets.
The Health and Safety Executive (HSE), NHS Health Scotland and Healthy Working Lives (HWL) have jointly developed the RPE (Respiratory Protective Equipment) Selector Tool, which is web-based. This interactive tool provides descriptions of different types of respirators and breathing apparatuses, as well as « dos and don’ts » for each type.
In the United States, the National Institute for Occupational Safety and Health (NIOSH) provides recommendations on respirator use, in accordance with NIOSH federal respiratory regulations 42 CFR Part 84. The National Personal Protective Technology Laboratory (NPPTL) of NIOSH is tasked with actively conducting studies on respirators and providing recommendations.
Occupational skin diseases such as contact dermatitis, skin cancers, and other skin injuries and infections are the second-most common type of occupational disease and can be very costly. Skin hazards, which lead to occupational skin disease, can be classified into four groups. Chemical agents can come into contact with the skin through direct contact with contaminated surfaces, deposition of aerosols, immersion or splashes. Physical agents such as extreme temperatures and ultraviolet or solar radiation can be damaging to the skin over prolonged exposure. Mechanical trauma occurs in the form of friction, pressure, abrasions, lacerations and contusions. Biological agents such as parasites, microorganisms, plants and animals can have varied effects when exposed to the skin. Any form of PPE that acts as a barrier between the skin and the agent of exposure can be considered skin protection. Because much work is done with the hands, gloves are an essential item in providing skin protection. Some examples of gloves commonly used as PPE include rubber gloves, cut resistant gloves, chainsaw gloves and heat-resistant gloves. For sports and other recreational activities, many different gloves are used for protection, generally against mechanical trauma.
Other than gloves, any other article of clothing or protection worn for a purpose serves to protect the skin. Lab coats, for example, are worn to protect against potential splashes of chemicals. Face shields serve to protect one’s face from potential impact hazards, chemical splashes or possibly infectious fluid.
Many migrant workers need training in PPE for heat-related illness (HRI) prevention. Based on study results, the research identified potential gaps in heat safety education. While some farm workers reported receiving limited training on pesticide safety, incoming groups of farm workers could also receive video and in-person training on HRI prevention. These educational programs for farm workers are most effective when they are based on health behavior theories, use adult learning principles and employ train-the-trainer approaches.
Each day, about 2000 US workers have a job-related eye injury that requires medical attention. Eye injuries can happen through a variety of means. Most eye injuries occur when solid particles such as metal slivers, wood chips, sand or cement chips get into the eye. Smaller particles in smokes and larger particles such as broken glass also account for particulate matter-causing eye injuries. Blunt force trauma can occur to the eye when excessive force comes into contact with the eye. Chemical burns, biological agents, and thermal agents, from sources such as welding torches and UV light, also contribute to occupational eye injury.
While the required eye protection varies by occupation, the safety provided can be generalized. Safety glasses provide protection from external debris, and should provide side protection via a wrap-around design or side shields.
Goggles provide better protection than safety glasses, and are effective in preventing eye injury from chemical splashes, impact, dusty environments and welding. Goggles with high air flow should be used to prevent fogging.
Face shields provide additional protection and are worn over the standard eyewear; they also provide protection from impact, chemical, and blood-borne hazards.
Full-facepiece respirators are considered the best form of eye protection when respiratory protection is needed as well, but may be less effective against potential impact hazards to the eye.
Industrial noise is often overlooked as an occupational hazard, as it is not visible to the eye. Overall, about 22 million workers in the United States are exposed to potentially damaging noise levels each year. Occupational hearing loss accounted for 14% of all occupational illnesses in 2007, with about 23,000 cases significant enough to cause permanent hearing impairment. About 82% of occupational hearing loss cases occurred to workers in the manufacturing sector. The Occupational Safety and Health Administration establishes occupational noise exposure standards. NIOSH recommends that worker exposures to noise be reduced to a level equivalent to 85 dBA for eight hours to reduce occupational noise-induced hearing loss.
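The 85 dBA, eight-hour criterion mentioned above combines with NIOSH's 3 dB exchange rate (each 3 dB increase halves the permitted duration) to give a recommended exposure-time formula. The sketch below implements that simplified relation; the function name is ours and the output is for illustration, not compliance use.

```python
# Simplified NIOSH-style allowable exposure time: 85 dBA criterion level,
# 8-hour reference duration, 3 dB exchange rate. For illustration only;
# real assessments use measured time-weighted averages.

def allowed_hours(level_dba: float, criterion: float = 85.0,
                  exchange_rate: float = 3.0) -> float:
    """Recommended maximum daily exposure at a constant noise level."""
    return 8.0 / (2.0 ** ((level_dba - criterion) / exchange_rate))

for level in (85, 88, 94, 100):
    print(f"{level} dBA -> {allowed_hours(level):.2f} h")
```

At 88 dBA the permitted time drops from eight hours to four; at 100 dBA it is only a quarter of an hour, which is why hearing protection becomes essential at high levels.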
PPE for hearing protection consists of earplugs and earmuffs. Workers who are regularly exposed to noise levels above the NIOSH recommendation should be furnished hearing protection by the employers, as they are a low-cost intervention.
Eye protection for welding is shaded to different degrees, depending on the specific operation.
Below are some examples of ensembles of personal protective equipment, worn together for a specific occupation or task, to provide maximum protection for the user.
Chainsaw protection (especially a helmet with face guard, hearing protection, kevlar chaps, anti vibration gloves, and chainsaw safety boots).
Bee-keepers wear various levels of protection depending on the temperament of their bees and the reaction of the bees to nectar availability. At minimum most bee keepers wear a brimmed hat and a veil made of fine mesh netting. The next level of protection involves leather gloves with long gauntlets and some way of keeping bees from crawling up one’s trouser legs. In extreme cases, specially fabricated shirts and trousers can serve as barriers to the bees’ stingers.
Diving equipment, for underwater diving, constitutes equipment such as a diving helmet or diving mask, an underwater breathing apparatus, and a diving suit.
Firefighters wear PPE designed to provide protection against fires and various fumes and gases. PPE worn by firefighters include bunker gear, self-contained breathing apparatus, a helmet, safety boots, and a PASS device.
Participants in sports often wear protective equipment. Studies performed on the injuries of professional athletes, such as that on NFL players, question the effectiveness of existing personal protective equipment.
Limits of the definition
The definition of what constitutes personal protective equipment varies by country. In the United States, the laws regarding PPE also vary by state. In 2011, workplace safety complaints were brought against Hustler and other adult film production companies by the AIDS Healthcare Foundation, leading to several citations brought by Cal/OSHA. The failure to use condoms by adult film stars was a violation of Cal/OSHA’s Bloodborne Pathogens Program, Personal Protective Equipment. This example shows that personal protective equipment can cover a variety of occupations in the United States and has a wide-ranging definition.
Legislation in the European Union
At the European Union level, personal protective equipment is governed by Directive 89/686/EEC on personal protective equipment (PPE). The Directive is designed to ensure that PPE meets common quality and safety standards by setting out basic safety requirements for personal protective equipment, as well as conditions for its placement on the market and free movement within the EU single market. It covers ‘any device or appliance designed to be worn or held by an individual for protection against one or more health and safety hazards’. The directive was adopted on 21 January 1989 and came into force on 1 July 1992. The European Commission additionally allowed for a transition period until 30 June 1995 to give companies sufficient time to adapt to the legislation. After this date, all PPE placed on the market in EU Member States was required to comply with the requirements of Directive 89/686/EEC and carry the CE Marking.
Article 1 of Directive 89/686/EEC defines personal protective equipment as any device or appliance designed to be worn or held by an individual for protection against one or more health and safety hazards. PPE which falls under the scope of the Directive is divided into three categories:
- Category I: simple design (e.g. gardening gloves, footwear, ski goggles)
- Category II: PPE not falling into category I or III (e.g. personal flotation devices, dry and wet suits)
- Category III: complex design (e.g. respiratory equipment, harnesses)
Directive 89/686/EEC on personal protective equipment does not distinguish between PPE for professional use and PPE for leisure purposes.
Personal protective equipment falling within the scope of the Directive must comply with the basic health and safety requirements set out in Annex II of the Directive. To facilitate conformity with these requirements, harmonized standards are developed at the European or international level by the European Committee for Standardization (CEN, CENELEC) and the International Organization for Standardization in relation to the design and manufacture of the product. Usage of the harmonized standards is voluntary and provides presumption of conformity. However, manufacturers may choose an alternative method of complying with the requirements of the Directive.
Personal protective equipment excluded from the scope of the Directive includes:
- PPE designed for and used by the armed forces or in the maintenance of law and order;
- PPE for self-defence (e.g. aerosol canisters, personal deterrent weapons);
- PPE designed and manufactured for personal use against adverse atmospheric conditions (e.g. seasonal clothing, umbrellas), damp and water (e.g. dish-washing gloves) and heat;
- PPE used on vessels and aircraft but not worn at all times;
- helmets and visors intended for users of two- or three-wheeled motor vehicles.
The European Commission is currently working to revise Directive 89/686/EEC. The revision will look at the scope of the Directive, the conformity assessment procedures and technical requirements regarding market surveillance. It will also align the Directive with the New Legislative Framework. The European Commission is likely to publish its proposal in 2013. It will then be discussed by the European Parliament and Council of the European Union under the ordinary legislative procedure before being published in the Official Journal of the European Union and becoming law.
Swiss cheese model
The Swiss cheese model of accident causation illustrates that, although many layers of defense lie between hazards and accidents, there are flaws in each layer that, if aligned, can allow the accident to occur.
The Swiss cheese model of accident causation is a model used in risk analysis and risk management, including aviation safety, engineering, healthcare, emergency service organizations, and as the principle behind layered security, as used in computer security and defense in depth. It likens human systems to multiple slices of Swiss cheese, stacked side by side, in which the risk of a threat becoming a reality is mitigated by the differing layers and types of defenses which are "layered" behind each other. Therefore, in theory, lapses and weaknesses in one defense do not allow a risk to materialize, since other defenses also exist, to prevent a single point of failure. The model was originally formally propounded by Dante Orlandella and James T. Reason of the University of Manchester, and has since gained widespread acceptance. It is sometimes called the "cumulative act effect".
Although the Swiss cheese model is respected and considered to be a useful method of relating concepts, it has been subject to criticism that it is used too broadly, and without enough other models or support.
Reason hypothesized that most accidents can be traced to one or more of four failure domains: organizational influences, supervision, preconditions, and specific acts. For example, in aviation, preconditions for unsafe acts include fatigued air crew or improper communications practices. Unsafe supervision encompasses, for example, pairing inexperienced pilots on a night flight into known adverse weather. Organizational influences encompass such things as reduction in expenditure on pilot training in times of financial austerity.
Holes and slices
In the Swiss cheese model, an organisation’s defenses against failure are modeled as a series of barriers, represented as slices of cheese. The holes in the slices represent weaknesses in individual parts of the system and are continually varying in size and position across the slices. The system produces failures when a hole in each slice momentarily aligns, permitting (in Reason’s words) "a trajectory of accident opportunity", so that a hazard passes through holes in all of the slices, leading to a failure.
Frosch described Reason’s model in mathematical terms as a model in percolation theory, which he analyses as a Bethe lattice.
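The "holes lining up" idea can be made concrete with a toy simulation: assuming each barrier fails independently with some fixed probability (the values below are invented), the chance of an accident is simply the product of the per-slice probabilities, which a quick Monte Carlo check confirms.

```python
# Toy sketch of the Swiss cheese model: an accident requires a hole in every
# slice at once. With independent barriers, the accident probability is the
# product of the per-slice hole probabilities. All numbers are invented.
import random

def accident_probability(hole_probs, trials=100_000, seed=42):
    """Monte Carlo estimate of all barriers failing simultaneously."""
    rng = random.Random(seed)
    hits = sum(all(rng.random() < p for p in hole_probs) for _ in range(trials))
    return hits / trials

slices = [0.1, 0.2, 0.1, 0.3]   # hypothetical per-barrier hole probabilities
analytic = 1.0
for p in slices:
    analytic *= p               # independent holes multiply

sim = accident_probability(slices)
print(f"analytic: {analytic:.4f}, simulated: {sim:.4f}")
```

Even with individually leaky barriers (10 to 30 percent each here), the combined failure probability is well under a tenth of a percent, which is the multiplicity-of-barriers point the model makes.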
Active and latent failures
The model includes both active and latent failures. Active failures encompass the unsafe acts that can be directly linked to an accident, such as (in the case of aircraft accidents) a navigation error. Latent failures include contributory factors that may lie dormant for days, weeks, or months until they contribute to the accident. Latent failures span the first three domains of failure in Reason’s model.
In the early days of the Swiss cheese model, from the late 1980s to about 1992, attempts were made to combine two theories: James Reason’s multi-layer defence model and Willem Albert Wagenaar’s Tripod theory of accident causation. This resulted in a period in which the Swiss cheese diagram was represented with the slices of cheese labelled as active failures, preconditions and latent failures.
These attempts to combine the two theories still cause confusion today. A more correct version of the combined theories shows the active failures (now called immediate causes), preconditions and latent failures (now called underlying causes) as the reasons each barrier (slice of cheese) has a hole in it, with the slices of cheese as the barriers.
The same framework can be applicable in some areas of healthcare. For example, a latent failure could be the similar packaging of two drugs that are then stored close to each other in a pharmacy. Such a failure would be a contributory factor in the administration of the wrong drug to a patient. Such research led to the realization that medical error can be the result of "system flaws, not character flaws", and that greed, ignorance, malice or laziness are not the only causes of error.
Lubnau, Lubnau, and Okray apply the model to the engineering of firefighting systems, aiming to reduce human errors by « inserting additional layers of cheese into the system », namely the techniques of Crew Resource Management.
This is one of many models of accident causation described in the literature.
Kamoun and Nicho found the Swiss cheese model to be a useful theoretical model to explain the multifaceted (human, organizational and technological) aspects of healthcare data breaches.
A reaction calorimeter is a calorimeter that measures the amount of energy released (exothermic) or absorbed (endothermic) by a chemical reaction. These measurements give a direct picture of the thermal behaviour of a reaction.
When considering scaling up a reaction from lab scale to large scale, it is important to understand how much heat is released. At small scale, the heat released may not cause concern; when scaling up, however, heat build-up can be extremely dangerous, because a larger vessel has less cooling surface area per unit volume of reacting mass.
Crystallizing a reaction product from solution is a highly cost effective purification technique. It is therefore valuable to be able to measure how effectively crystallization is taking place in order to be able to optimize it. The heat absorbed by the process can be a useful measure.
The energy being released by any process in the form of heat is directly proportional to the rate of reaction and hence reaction calorimetry (as a time resolved measurement technique) can be used to study kinetics.
The use of reaction calorimetry in process development has historically been limited by the cost of these devices; however, calorimetry is a fast and easy way to fully understand the reactions conducted as part of a chemical process.
Heat flow calorimetry.
Heat flow calorimetry measures the heat flowing across the reactor wall and quantifies it in relation to the other energy flows within the reactor.
Q = U · A · (Tr − Tj)
where: Q = process heating (or cooling) power (W)
U = overall heat transfer coefficient (W/(m2·K))
A = heat transfer area (m2)
Tr = process temperature (K)
Tj = jacket temperature (K)
Heat flow calorimetry allows the user to measure heat whilst the process temperature remains under control. While the driving force Tr − Tj is measured with a relatively high resolution, the overall heat transfer coefficient U, or the calibration factor UA respectively, is determined by means of calibration before and after the reaction takes place. The calibration factor UA (or the overall heat transfer coefficient U) is affected by the product composition, process temperature, agitation rate, viscosity, and the liquid level. Good accuracy can be achieved by experienced staff who know the limitations of an instrument and how to get the best results from it.
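The heat flow relation can be sketched in code as follows; the function name and the numerical values are illustrative assumptions, not from the text:

```python
def heat_flow(U, A, T_r, T_j):
    """Heat flow across the reactor wall: Q = U * A * (T_r - T_j).

    U   : overall heat transfer coefficient, W/(m2 K), from calibration
    A   : wetted heat transfer area, m2
    T_r : process temperature, K
    T_j : jacket temperature, K
    Positive Q means heat is leaving the process through the wall.
    """
    return U * A * (T_r - T_j)

# Illustrative values: U = 200 W/(m2 K), A = 0.5 m2, reactor 5 K above jacket
Q = heat_flow(200.0, 0.5, 323.15, 318.15)
print(round(Q, 6))  # 500.0 W removed through the jacket
```

Note that U and A here would come from the calibration described above; any change in fill level or viscosity during the reaction invalidates the calibrated UA.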
Calorimetry in real time is a calorimetry technique based on heat flux sensors that are located on the wall of the reactor vessels. The sensors measure heat across the reactor wall directly and thus, the measurement is independent of temperature, the properties or the behavior of the reaction mass. Heat flow as well as heat transfer information are obtained immediately without any calibrations during the experiment.
Heat balance calorimetry
In heat balance calorimetry, the cooling/heating jacket controls the temperature of the process. Heat is measured by monitoring the heat gained or lost by the heat transfer fluid.
Q = ms · Cps · (Ti − To)
where: Q = process heating (or cooling) power (W)
ms = mass flow of heat transfer fluid (kg/s)
Cps = specific heat of heat transfer fluid (J/(kg·K))
Ti = inlet temperature of heat transfer fluid (K)
To = outlet temperature of heat transfer fluid (K)
Heat balance calorimetry is, in principle, the ideal method of measuring heat since the heat entering and leaving the system through the heating/cooling jacket is measured from the heat transfer fluid (which has known properties). This eliminates most of the calibration problems encountered by heat flow and power compensation calorimetry. Unfortunately, the method does not work well in traditional batch vessels since the process heat signal is obscured by large heat shifts in the cooling/heating jacket.
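A minimal sketch of the heat balance measurement, following the inlet-minus-outlet form of the equation above (the values are illustrative, not from the text):

```python
def heat_balance(m_dot, cp, T_in, T_out):
    """Q = m_dot * cp * (T_in - T_out): heat delivered to the process
    by the heat transfer fluid. Under this sign convention a negative Q
    means the fluid carried heat away, i.e. the process was exothermic.

    m_dot : mass flow of heat transfer fluid, kg/s
    cp    : specific heat of the fluid, J/(kg K)
    """
    return m_dot * cp * (T_in - T_out)

# Water-like jacket fluid leaving 1 K hotter than it entered:
Q = heat_balance(0.2, 4180.0, 300.0, 301.0)
print(Q)  # -836.0 W: the process released heat into the jacket fluid
```

Because the fluid properties are known, no UA calibration is needed; the difficulty in batch vessels is that the jacket's own thermal shifts swamp this small signal, as noted above.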
Power compensation calorimetry.
A variation of the 'heat flow' technique is called 'power compensation' calorimetry. This method uses a cooling jacket operating at constant flow and temperature. The process temperature is regulated by adjusting the power of an electrical heater. When the experiment is started, the electrical heat and the cooling power (of the cooling jacket) are in balance. As the process heat load changes, the electrical power is varied in order to maintain the desired process temperature. The heat liberated or absorbed by the process is determined from the difference between the initial electrical power and the demand for electrical power at the time of measurement. The power compensation method is easier to set up than heat flow calorimetry, but it suffers from similar limitations, since any change in product composition, liquid level, process temperature, agitation rate or viscosity will upset the calibration. The presence of an electrical heating element is also undesirable for process operations. The method is further limited by the fact that the largest heat it can measure is equal to the initial electrical power applied to the heater.
Q = V · (I0 − I)
where: Q = process heating (or cooling) power (W)
I = current supplied to heater (A)
V = voltage supplied to heater (V)
I0 = current supplied to heater at equilibrium, assuming constant voltage/resistance (A)
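The power compensation measurement can be sketched as follows: the heater power at equilibrium is the baseline, and the deficit at any moment equals the process heat. The numbers are illustrative assumptions, as is the constant-voltage simplification:

```python
def compensation_heat(V, I_eq, I_now):
    """Q = V * (I_eq - I_now): the drop in electrical heater power
    relative to the equilibrium baseline equals the heat released by
    the process (the controller throttles the heater back when the
    reaction is exothermic). Assumes constant supply voltage.
    """
    return V * (I_eq - I_now)

# 24 V heater, 5.0 A at equilibrium, throttled to 3.5 A mid-reaction:
print(compensation_heat(24.0, 5.0, 3.5))  # 36.0 W exothermic
```

The baseline power V·I_eq is also the ceiling of the measurement, matching the limitation noted above.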
Constant flux calorimetry.
A recent development in calorimetry, however, is that of constant flux cooling/heating jackets. These use variable geometry cooling jackets and can operate with cooling jackets at substantially constant temperature. These reaction calorimeters tend to be much simpler to use and are much more tolerant of changes in the process conditions (which would affect calibration in heat flow or power compensation calorimeters).
A key part of reaction calorimetry is the ability to control temperature in the face of extreme thermal events. Once the temperature can be controlled, measurement of a variety of parameters allows an understanding of how much heat is being released or absorbed by a reaction.
In essence, constant flux calorimetry is a highly developed temperature control mechanism which can be used to generate highly accurate calorimetry. It works by controlling the jacket area of a controlled lab reactor, while keeping the inlet temperature of the thermal fluid constant. This allows the temperature to be precisely controlled even under strongly exothermic or endothermic events as additional cooling is always available by simply increasing the area over which the heat is being exchanged.
This system is generally more accurate than heat balance calorimetry (on which it is based), as changes in the delta temperature (Tout – Tin) are magnified by keeping the fluid flow as low as possible.
From the heat balance we have Q = mf · Cpf · (Tin − Tout), and from the heat flow equation we know that Q = U · A · LMTD. We can therefore rearrange this such that U = mf · Cpf · (Tin − Tout) / (A · LMTD), which allows us to monitor U as a function of time.
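The bookkeeping above can be sketched numerically: the fluid-side balance gives Q, and dividing by A·LMTD tracks U over time. The LMTD helper and all numbers are illustrative assumptions, not from the text:

```python
import math

def lmtd(T_in, T_out, T_r):
    """Log-mean temperature difference between the jacket fluid
    (running from T_in to T_out) and the process held at T_r."""
    d1, d2 = T_in - T_r, T_out - T_r
    return (d1 - d2) / math.log(d1 / d2)

def u_value(m_f, cp_f, T_in, T_out, A, T_r):
    """U = m_f * cp_f * (T_in - T_out) / (A * LMTD), in W/(m2 K)."""
    Q = m_f * cp_f * (T_in - T_out)
    return Q / (A * lmtd(T_in, T_out, T_r))

# Heating jacket: oil enters at 350 K, leaves at 348 K, process at 340 K
U = u_value(m_f=0.1, cp_f=2000.0, T_in=350.0, T_out=348.0, A=0.25, T_r=340.0)
print(round(U, 1))  # about 178.5 W/(m2 K)
```

Keeping the fluid flow low magnifies the Tin − Tout signal, which is why this approach can resolve U more sharply than a plain heat balance.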
Differential scanning calorimetry
Differential scanning calorimetry (DSC) is a thermoanalytical technique in which the difference in the amount of heat required to increase the temperature of a sample and reference is measured as a function of temperature. Both the sample and reference are maintained at nearly the same temperature throughout the experiment. Generally, the temperature program for a DSC analysis is designed such that the sample holder temperature increases linearly as a function of time. The reference sample should have a well defined heat capacity over the range of temperatures to be scanned.
The technique was developed by E. S. Watson and M. J. O'Neill in 1962, and introduced commercially at the 1963 Pittsburgh Conference on Analytical Chemistry and Applied Spectroscopy. The first adiabatic differential scanning calorimeter that could be used in biochemistry was developed by P. L. Privalov and D. R. Monaselidze in 1964 at the Institute of Physics in Tbilisi, Georgia. The term DSC was coined to describe this instrument, which measures energy directly and allows precise measurements of heat capacity.
Types of DSC:
- Power-compensated DSC, in which the sample and reference are held at the same temperature by varying the electrical power supplied to each
- Heat-flux DSC, in which the heat flowing between sample and reference through a path of known thermal resistance produces a measurable temperature difference
Detection of phase transitions.
The basic principle underlying this technique is that when the sample undergoes a physical transformation such as phase transitions, more or less heat will need to flow to it than the reference to maintain both at the same temperature. Whether less or more heat must flow to the sample depends on whether the process is exothermic or endothermic. For example, as a solid sample melts to a liquid, it will require more heat flowing to the sample to increase its temperature at the same rate as the reference. This is due to the absorption of heat by the sample as it undergoes the endothermic phase transition from solid to liquid. Likewise, as the sample undergoes exothermic processes (such as crystallization) less heat is required to raise the sample temperature. By observing the difference in heat flow between the sample and reference, differential scanning calorimeters are able to measure the amount of heat absorbed or released during such transitions. DSC may also be used to observe more subtle physical changes, such as glass transitions. It is widely used in industrial settings as a quality control instrument due to its applicability in evaluating sample purity and for studying polymer curing.
An alternative technique, which shares much in common with DSC, is differential thermal analysis (DTA). In this technique it is the heat flow to the sample and reference that remains the same rather than the temperature. When the sample and reference are heated identically, phase changes and other thermal processes cause a difference in temperature between the sample and reference. Both DSC and DTA provide similar information. DSC measures the energy required to keep both the reference and the sample at the same temperature whereas DTA measures the difference in temperature between the sample and the reference when the same amount of energy has been introduced into both.
[Figure: normalized DSC curves using the baseline as the reference (left), and the fraction of each conformational state existing at each temperature (right), for a two-state (top) and a three-state (bottom) protein; the three-state protein shows a slight, easily missed broadening of its DSC peak.] The result of a DSC experiment is a curve of heat flux versus temperature or versus time. There are two different conventions: depending on the technology used, exothermic reactions in the sample are shown with either a positive or a negative peak. This curve can be used to calculate enthalpies of transitions by integrating the peak corresponding to a given transition. It can be shown that the enthalpy of transition can be expressed using the following equation:
ΔH = K · A
where: ΔH is the enthalpy of transition, K is the calorimetric constant, and A is the area under the curve. The calorimetric constant varies from instrument to instrument, and can be determined by analyzing a well-characterized sample with known enthalpies of transition.
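The peak integration can be sketched numerically. The trapezoidal rule and the synthetic triangular peak below are illustrative assumptions; in practice the instrument software fits and subtracts the baseline:

```python
def peak_area(x, signal, baseline):
    """Trapezoidal area between a DSC signal and its baseline."""
    area = 0.0
    for i in range(len(x) - 1):
        h1 = signal[i] - baseline[i]
        h2 = signal[i + 1] - baseline[i + 1]
        area += 0.5 * (h1 + h2) * (x[i + 1] - x[i])
    return area

# Synthetic triangular melting peak: 0 -> 2 -> 0 mW over a 10 K span
T = [400.0, 405.0, 410.0]
q = [0.0, 2.0, 0.0]
base = [0.0, 0.0, 0.0]
K_cal = 1.0  # calorimetric constant, from a calibration standard
print(K_cal * peak_area(T, q, base))  # 10.0, in the instrument's units
```

The calorimetric constant K_cal would be fixed beforehand by running a standard with a known transition enthalpy, as the text describes.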
Differential scanning calorimetry can be used to measure a number of characteristic properties of a sample. Using this technique it is possible to observe fusion and crystallization events as well as glass transition temperatures Tg. DSC can also be used to study oxidation, as well as other chemical reactions.
Glass transitions may occur as the temperature of an amorphous solid is increased. These transitions appear as a step in the baseline of the recorded DSC signal. This is due to the sample undergoing a change in heat capacity; no formal phase change occurs.
As the temperature increases, an amorphous solid will become less viscous. At some point the molecules may obtain enough freedom of motion to spontaneously arrange themselves into a crystalline form. This is known as the crystallization temperature (Tc). This transition from amorphous solid to crystalline solid is an exothermic process, and results in a peak in the DSC signal. As the temperature increases the sample eventually reaches its melting temperature (Tm). The melting process results in an endothermic peak in the DSC curve. The ability to determine transition temperatures and enthalpies makes DSC a valuable tool in producing phase diagrams for various chemical systems.
Differential scanning calorimetry can also be used to obtain valuable thermodynamic information about proteins. Thermodynamic analysis of proteins can reveal important information about their global structure and about protein-ligand interactions. For example, many mutations lower the stability of proteins, while ligand binding usually increases protein stability. Using DSC, this stability can be measured by obtaining Gibbs free energy values at any given temperature. This allows researchers to compare the free energy of unfolding between the ligand-free protein and the protein-ligand complex, or between wild-type and mutant proteins. DSC can also be used in studying protein-lipid interactions, nucleotides, and drug-lipid interactions. In studying protein denaturation using DSC, the thermal melt should be at least to some degree reversible, as the thermodynamic calculations rely on chemical equilibrium.
The technique is widely used across a range of applications, both as a routine quality test and as a research tool. The equipment is easy to calibrate, using, for example, low-melting indium (melting point 156.5985 °C), and it is a rapid and reliable method of thermal analysis.
DSC is used widely for examining polymeric materials to determine their thermal transitions. Important thermal transitions include the glass transition temperature (Tg), crystallization temperature (Tc), and melting temperature (Tm). The observed thermal transitions can be utilized to compare materials, although the transitions alone do not uniquely identify composition. The identification of unknown materials may be completed using complementary techniques such as IR spectroscopy. Melting points and glass transition temperatures for most polymers are available from standard compilations, and the method can show polymer degradation by the lowering of the expected melting temperature. Tm depends on the molecular weight of the polymer and its thermal history.
The percent crystalline content of a polymer can be estimated from the crystallization/melting peaks of the DSC graph using reference heats of fusion found in the literature. DSC can also be used to study thermal degradation of polymers using an approach such as Oxidative Onset Temperature/Time (OOT); however, the user risks contamination of the DSC cell, which can be problematic. Thermogravimetric Analysis (TGA) may be more useful for determining decomposition behaviour. Impurities in polymers can be determined by examining thermograms for anomalous peaks, and plasticisers can be detected at their characteristic boiling points. In addition, examination of minor events in first heat thermal analysis data can be useful, as these apparently "anomalous peaks" can in fact be representative of the process or storage thermal history of the material, or of polymer physical aging. Comparison of first and second heat data collected at consistent heating rates can allow the analyst to learn about both polymer processing history and material properties.
DSC is used in the study of liquid crystals. As some forms of matter go from solid to liquid they go through a third state, which displays properties of both phases. This anisotropic liquid is known as a liquid crystalline or mesomorphous state. Using DSC, it is possible to observe the small energy changes that occur as matter transitions from a solid to a liquid crystal and from a liquid crystal to an isotropic liquid.
Using differential scanning calorimetry to study the stability to oxidation of samples generally requires an airtight sample chamber. Usually, such tests are done isothermally (at constant temperature) by changing the atmosphere of the sample. First, the sample is brought to the desired test temperature under an inert atmosphere, usually nitrogen. Then, oxygen is added to the system. Any oxidation that occurs is observed as a deviation in the baseline. Such analysis can be used to determine the stability and optimum storage conditions for a material or compound.
DSC makes a reasonable initial safety screening tool. In this mode the sample is housed in a non-reactive crucible (often gold or gold-plated steel) that can withstand pressure (typically up to 100 bar). The presence of an exothermic event can then be used to assess the stability of a substance to heat. However, due to a combination of relatively poor sensitivity, slower than normal scan rates (typically 2–3 °C/min, due to the much heavier crucible) and unknown activation energy, it is necessary to deduct about 75–100 °C from the initial start of the observed exotherm to suggest a maximum safe temperature for the material. A much more accurate data set can be obtained from an adiabatic calorimeter, but such a test may take 2–3 days from ambient at a rate of 3 °C increments per half-hour.
DSC is widely used in the pharmaceutical and polymer industries. For the polymer chemist, DSC is a handy tool for studying curing processes, which allows the fine tuning of polymer properties. The cross-linking of polymer molecules that occurs in the curing process is exothermic, resulting in a negative peak in the DSC curve that usually appears soon after the glass transition.
In the pharmaceutical industry it is necessary to have well-characterized drug compounds in order to define processing parameters. For instance, if it is necessary to deliver a drug in the amorphous form, it is desirable to process the drug at temperatures below those at which crystallization can occur.
General chemical analysis.
Freezing-point depression can be used as a purity analysis tool when analysed by differential scanning calorimetry. This is possible because the temperature range over which a mixture of compounds melts is dependent on their relative amounts. Consequently, less pure compounds will exhibit a broadened melting peak that begins at lower temperature than a pure compound.
A calorimeter is an object used for calorimetry, or the process of measuring the heat of chemical reactions or physical changes as well as heat capacity. Differential scanning calorimeters, isothermal micro calorimeters, titration calorimeters and accelerated rate calorimeters are among the most common types. A simple calorimeter just consists of a thermometer attached to a metal container full of water suspended above a combustion chamber. It is one of the measurement devices used in the study of thermodynamics, chemistry, and biochemistry.
To find the enthalpy change per mole of a substance A in a reaction between two substances A and B, the substances are separately added to a calorimeter and the initial and final temperatures (before the reaction has started and after it has finished) are noted. Multiplying the temperature change by the mass and specific heat capacities of the substances gives a value for the energy given off or absorbed during the reaction. Dividing the energy change by how many moles of A were present gives its enthalpy change of reaction.
q = Cv · ΔT
where q is the amount of heat according to the change in temperature, measured in joules, and Cv is the heat capacity of the calorimeter, a value associated with each individual apparatus, in units of energy per temperature (joules/kelvin).
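The two steps described above, obtaining q from the temperature change and then dividing by the moles of A, can be sketched as follows. The values are illustrative, and the sign flip reflects that heat gained by the calorimeter was released by the reaction:

```python
def calorimeter_heat(Cv, dT):
    """q = Cv * dT: heat absorbed by the apparatus, with Cv in J/K
    and dT = T_final - T_initial in kelvin."""
    return Cv * dT

def molar_enthalpy(q, n_moles):
    """Enthalpy change per mole of A; exothermic reactions come out
    negative because the heat gained by the calorimeter was lost by
    the reacting system."""
    return -q / n_moles

q = calorimeter_heat(Cv=150.0, dT=2.0)            # 300.0 J into the apparatus
print(round(molar_enthalpy(q, n_moles=0.01), 3))  # -30000.0 J/mol of A
```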
In 1761 Joseph Black introduced the idea of latent heat, which led to the creation of the first ice calorimeters. In 1780, Antoine Lavoisier used the heat from a guinea pig's respiration to melt snow surrounding his apparatus, showing that respiratory gas exchange is a combustion, similar to a candle burning. Lavoisier dubbed this apparatus the calorimeter, based on both Greek and Latin roots. One of the first ice calorimeters was used in the winter of 1782 by Lavoisier and Pierre-Simon Laplace; it relied on the heat required to melt ice to water to measure the heat released from chemical reactions.
An adiabatic calorimeter is a calorimeter used to examine a runaway reaction. Since the calorimeter runs in an adiabatic environment, any heat generated by the material sample under test causes the sample to increase in temperature, thus fueling the reaction.
No adiabatic calorimeter is fully adiabatic – some heat will be lost by the sample to the sample holder. A mathematical correction factor, known as the phi-factor, can be used to adjust the calorimetric result to account for these heat losses. The phi-factor is the ratio of the thermal mass of the sample and sample holder to the thermal mass of the sample alone.
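The phi-factor correction just described can be sketched as follows; the masses and heat capacities are illustrative assumptions:

```python
def phi_factor(m_sample, c_sample, m_holder, c_holder):
    """phi = (thermal mass of sample + holder) / (thermal mass of sample).
    phi = 1 would correspond to a perfectly adiabatic, massless holder."""
    sample = m_sample * c_sample
    return (sample + m_holder * c_holder) / sample

def corrected_rise(measured_dT, phi):
    """Scale the observed temperature rise up to the true adiabatic rise
    the sample alone would have produced."""
    return measured_dT * phi

# 50 g sample at 2000 J/(kg K) in a 20 g holder at 500 J/(kg K):
phi = phi_factor(m_sample=0.05, c_sample=2000.0, m_holder=0.02, c_holder=500.0)
print(round(phi, 3), round(corrected_rise(40.0, phi), 3))  # 1.1 44.0
```

Low-phi instruments (phi close to 1) need the smallest corrections and give the most faithful runaway data.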
A reaction calorimeter is a calorimeter in which a chemical reaction is initiated within a closed insulated container. Reaction heats are measured and the total heat is obtained by integrating heat flow versus time. This is the standard used in industry to measure heats, since industrial processes are engineered to run at constant temperatures. Reaction calorimetry can also be used to determine the maximum heat release rate for chemical process engineering and for tracking the global kinetics of reactions. There are four main methods for measuring the heat in a reaction calorimeter:
Heat flow calorimeter.
The cooling/heating jacket controls either the temperature of the process or the temperature of the jacket. Heat is measured by monitoring the temperature difference between the heat transfer fluid and the process fluid. In addition, fill volumes (i.e. wetted area), specific heat and heat transfer coefficient have to be determined to arrive at a correct value. It is possible with this type of calorimeter to do reactions at reflux, although this is much less accurate.
Heat balance calorimeter.
The cooling/heating jacket controls the temperature of the process. Heat is measured by monitoring the heat gained or lost by the heat transfer fluid.
Power compensation uses a heater placed within the vessel to maintain a constant temperature. The energy supplied to this heater can be varied as reactions require and the calorimetry signal is purely derived from this electrical power.
Constant flux calorimetry (or COFLUX as it is often termed) is derived from heat balance calorimetry and uses specialized control mechanisms to maintain a constant heat flow (or flux) across the vessel wall.
A bomb calorimeter is a type of constant-volume calorimeter used in measuring the heat of combustion of a particular reaction. Bomb calorimeters have to withstand the large pressure within the calorimeter as the reaction is being measured. Electrical energy is used to ignite the fuel; as the fuel burns, it heats up the surrounding air, which expands and escapes through a tube that leads the air out of the calorimeter. As the air escapes through the copper tube it also heats up the water outside the tube. The change in temperature of the water allows for calculating the calorie content of the fuel.
In more recent calorimeter designs, the whole bomb, pressurized with excess pure oxygen (typically at 30 atm) and containing a weighed mass of sample (typically 1–1.5 g) and a small fixed amount of water (to saturate the internal atmosphere, thus ensuring that all water produced is liquid, and removing the need to include enthalpy of vaporization in calculations), is submerged under a known volume of water (ca. 2000 ml) before the charge is electrically ignited. The bomb, with the known mass of the sample and oxygen, forms a closed system: no gases escape during the reaction. The weighed reactant placed inside the steel container is then ignited. Energy is released by the combustion, and the heat flow from this crosses the stainless steel wall, raising the temperature of the steel bomb, its contents, and the surrounding water jacket. The temperature change in the water is then accurately measured with a thermometer. This reading, along with a bomb factor (which depends on the heat capacity of the metal bomb parts), is used to calculate the energy given out by the sample burn. A small correction is made to account for the electrical energy input, the burning fuse, and acid production (by titration of the residual liquid). After the temperature rise has been measured, the excess pressure in the bomb is released.
Basically, a bomb calorimeter consists of a small cup to contain the sample, oxygen, a stainless steel bomb, water, a stirrer, a thermometer, the dewar or insulating container (to prevent heat flow from the calorimeter to the surroundings) and ignition circuit connected to the bomb. By using stainless steel for the bomb, the reaction will occur with no volume change observed.
Since there is no heat exchange between the calorimeter and its surroundings (Q = 0, adiabatic) and no work is performed (W = 0), the total internal energy change is zero:
ΔE_total = ΔE_system + ΔE_surroundings = 0
Also, since the volume is constant (ΔV = 0), the heat released by the sample is taken up by the bomb and its surroundings:
ΔE_system = −ΔE_surroundings = −Cv · ΔT
where: Cv is the heat capacity of the bomb.
Before the bomb can be used to determine the heat of combustion of any compound, it must be calibrated. The value of Cv can be estimated from the masses and specific heats of the water and the steel, which can be measured:
Cv ≈ m_water · C_water + m_steel · C_steel
In the laboratory, Cv is determined by running a compound with a known heat of combustion value:
Cv = Hc · m / ΔT
Common calibration compounds are benzoic acid and p-methylbenzoic acid, whose heats of combustion are accurately tabulated.
Temperature (T) is recorded every minute, and ΔT = T_final − T_initial.
A small factor that contributes to the correction of the total heat of combustion is the fuse wire. Nickel fuse wire is often used; it has a heat of combustion of 981.2 cal/g.
In order to calibrate the bomb, a small amount (~1 g) of benzoic acid or p-methylbenzoic acid is weighed. A length of nickel fuse wire (~10 cm) is weighed both before and after the combustion process. Mass of fuse wire burned: m_wire = m_before − m_after.
The combustion of the calibration sample (benzoic acid) inside the bomb gives:
Cv = (Hc,sample · m_sample + Hc,wire · m_wire) / ΔT
Once the Cv value of the bomb is determined, the bomb is ready to use to calculate the heat of combustion of any compound by:
Hc = (Cv · ΔT − Hc,wire · m_wire) / m_sample
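The calibration and measurement steps can be sketched together. The nickel-wire value comes from the text; the benzoic acid heat of combustion (~6318 cal/g is the commonly tabulated figure) and all masses and temperature rises are illustrative assumptions:

```python
H_WIRE = 981.2       # cal/g, nickel fuse wire (value from the text)
H_BENZOIC = 6318.0   # cal/g, commonly tabulated standard value (assumption)

def calibrate_cv(m_std, m_wire, dT, h_std=H_BENZOIC):
    """Cv = (h_std*m_std + H_WIRE*m_wire) / dT, in cal/K."""
    return (h_std * m_std + H_WIRE * m_wire) / dT

def heat_of_combustion(Cv, dT, m_sample, m_wire):
    """Hc = (Cv*dT - H_WIRE*m_wire) / m_sample, in cal/g."""
    return (Cv * dT - H_WIRE * m_wire) / m_sample

# Calibrate with 1.000 g benzoic acid and 0.010 g of burned wire:
Cv = calibrate_cv(m_std=1.000, m_wire=0.010, dT=2.80)
# Then burn 0.950 g of an unknown, observing a 2.00 K rise:
h = heat_of_combustion(Cv, dT=2.00, m_sample=0.950, m_wire=0.012)
print(round(Cv, 1), round(h, 1))
```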
Combustion of non-flammables.
The higher pressure and concentration of O2 in the bomb system can render combustible some compounds that are not normally flammable. Some substances do not combust completely, making the calculations harder as the remaining mass has to be taken into consideration, making the possible error considerably larger and compromising the data.
When working with compounds that are not as flammable (and might not combust completely), one solution is to mix the compound with a flammable compound of known heat of combustion and press a pellet from the mixture. Once the Cv of the bomb, the heats of combustion of the flammable compound (CFC) and of the wire (CW), the masses (mFC, mW and mLFC) and the temperature change (ΔT) are known, the heat of combustion of the less flammable compound (CLFC) can be calculated with:
CLFC = (Cv · ΔT − CFC · mFC − CW · mW) / mLFC
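The spiking calculation can be sketched as follows; all numerical values are illustrative assumptions, and the result is the per-gram heat of combustion of the less flammable component of the pellet:

```python
def heat_of_combustion_lfc(Cv, dT, c_fc, m_fc, c_w, m_w, m_lfc):
    """C_LFC = (Cv*dT - C_FC*m_FC - C_W*m_W) / m_LFC: subtract the heat
    contributed by the flammable spike and the fuse wire from the total
    bomb heat, then divide by the mass of the less flammable compound."""
    return (Cv * dT - c_fc * m_fc - c_w * m_w) / m_lfc

# Illustrative: Cv = 2260 cal/K, 1.5 K rise, 0.5 g spike at 6318 cal/g,
# 0.01 g wire at 981.2 cal/g, 0.4 g of the less flammable compound:
c = heat_of_combustion_lfc(2260.0, 1.5, 6318.0, 0.5, 981.2, 0.01, 0.4)
print(round(c, 1))
```

As the text warns, incomplete combustion inflates the uncertainty; in practice the residue would be weighed and the result discarded if combustion was not complete.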
Calvet-type calorimetry.
The detection is based on a three-dimensional fluxmeter sensor. The fluxmeter element consists of a ring of several thermocouples in series. The corresponding thermopile of high thermal conductivity surrounds the experimental space within the calorimetric block. The radial arrangement of the thermopiles guarantees an almost complete integration of the heat. This is verified by the calculation of the efficiency ratio, which indicates that an average of 94% ± 1% of the heat is transmitted through the sensor over the full temperature range of the Calvet-type calorimeter. In this setup, the sensitivity of the calorimeter is not affected by the crucible, the type of purge gas, or the flow rate. The main advantage of the setup is the increase of the experimental vessel's size, and consequently the size of the sample, without affecting the accuracy of the calorimetric measurement.
The calibration of the calorimetric detectors is a key parameter and has to be performed very carefully. For Calvet-type calorimeters, a specific calibration, so called Joule effect or electrical calibration, has been developed to overcome all the problems encountered by a calibration done with standard materials. The main advantages of this type of calibration are as follows:
- It is an absolute calibration.
- The use of standard materials for calibration is not necessary.
- The calibration can be performed at a constant temperature, in the heating mode and in the cooling mode.
- It can be applied to any experimental vessel volume.
- It is a very accurate calibration.
An example of Calvet-type calorimeter is the C80 Calorimeter (reaction, isothermal and scanning calorimeter).
A constant-pressure calorimeter measures the change in enthalpy of a reaction occurring in solution during which the atmospheric pressure remains constant.
An example is a coffee-cup calorimeter, which is constructed from two nested Styrofoam cups and a lid with two holes, allowing insertion of a thermometer and a stirring rod. The inner cup holds a known amount of a solvent, usually water, that absorbs the heat from the reaction. When the reaction occurs, the outer cup provides insulation. Then:
q = m · Cp · ΔT and ΔH = q · M / m
where:
Cp = specific heat at constant pressure
ΔH = enthalpy of solution (per mole of solvent)
ΔT = change in temperature
m = mass of solvent
M = molecular mass of solvent
The measurement of heat using a simple calorimeter, like the coffee cup calorimeter, is an example of constant-pressure calorimetry, since the pressure (atmospheric pressure) remains constant during the process. Constant-pressure calorimetry is used in determining the changes in enthalpy occurring in solution. Under these conditions the change in enthalpy equals the heat.
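The coffee-cup measurement can be sketched as follows; since the pressure is constant, the measured heat equals the enthalpy change. The water mass, heat capacity and temperature rise are illustrative assumptions:

```python
def solution_heat(m_solvent, cp, dT):
    """q_p = m * cp * dT: heat absorbed by the solvent at constant
    pressure, which equals the enthalpy change of the process.

    m_solvent : mass of solvent, g
    cp        : specific heat of solvent, J/(g K)
    dT        : temperature change, K
    """
    return m_solvent * cp * dT

# 100 g of water (cp ~ 4.18 J/(g K)) warming by 3.0 K; a positive q means
# the water gained heat, i.e. the dissolved process was exothermic:
q = solution_heat(100.0, 4.18, 3.0)
print(round(q, 1))  # 1254.0 J
```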
Differential scanning calorimeter.
In a differential scanning calorimeter (DSC), heat flow into a sample—usually contained in a small aluminium capsule or ‘pan’—is measured differentially, i.e., by comparing it to the flow into an empty reference pan.
In a heat flux DSC, both pans sit on a small slab of material with a known (calibrated) heat resistance K. The temperature of the calorimeter is raised linearly with time (scanned), i.e., the heating rate dT/dt = β is kept constant. This time linearity requires good design and good (computerized) temperature control. Of course, controlled cooling and isothermal experiments are also possible.
Heat flows into the two pans by conduction. The flow of heat into the sample is larger because of its heat capacity Cp. The difference in flow dq/dt induces a small temperature difference ΔT across the slab. This temperature difference is measured using a thermocouple. The heat capacity can in principle be determined from this signal:
Cp = ΔT / (K β)
Note that this formula (equivalent to Newton’s law of heat flow) is analogous to, and much older than, Ohm’s law of electric flow: ΔV = R dQ/dt = R I.
When suddenly heat is absorbed by the sample (e.g., when the sample melts), the signal will respond and exhibit a peak.
From the integral of this peak the enthalpy of melting can be determined, and from its onset the melting temperature.
Differential scanning calorimetry is a workhorse technique in many fields, particularly in polymer characterization.
A modulated temperature differential scanning calorimeter (MTDSC) is a type of DSC in which a small oscillation is imposed upon the otherwise linear heating rate.
This has a number of advantages. It facilitates the direct measurement of the heat capacity in one measurement, even in (quasi-)isothermal conditions. It permits the simultaneous measurement of heat effects that respond to a changing heating rate (reversing) and that don’t respond to the changing heating rate (non-reversing). It allows for the optimization of both sensitivity and resolution in a single test by allowing for a slow average heating rate (optimizing resolution) and a fast changing heating rate (optimizing sensitivity).
Isothermal titration calorimeter.
In an isothermal titration calorimeter, the heat of reaction is used to follow a titration experiment. This permits determination of the midpoint (stoichiometry, N) of a reaction as well as its enthalpy (ΔH), entropy (ΔS) and, of primary concern, the binding affinity (Ka).
The technique is gaining in importance particularly in the field of biochemistry, because it facilitates determination of substrate binding to enzymes. The technique is commonly used in the pharmaceutical industry to characterize potential drug candidates.
Isothermal titration calorimetry.
Isothermal titration calorimetry (ITC) is a physical technique used to determine the thermodynamic parameters of interactions in solution. It is most often used to study the binding of small molecules (such as medicinal compounds) to larger macromolecules (proteins, DNA etc.). It consists of two cells which are enclosed in an adiabatic jacket. The compounds to be studied are placed in the sample cell, while the other cell, the reference cell, is used as a control and contains the buffer in which the sample is dissolved.
ITC is a quantitative technique that can determine the binding affinity (Ka), enthalpy change (ΔH) and binding stoichiometry (n) of the interaction between two or more molecules in solution. From these initial measurements, the Gibbs free energy change (ΔG) and entropy change (ΔS) can be determined using the relationship:
ΔG = −RT ln Ka = ΔH − TΔS
where R is the gas constant and T is the absolute temperature.
For accurate measurement of binding affinity, the curve of the thermogram must be sigmoidal. The profile of the curve is determined by the c-value, which is calculated using the equation:
c = n Ka M
where n is the stoichiometry of the binding, Ka is the association constant and M is the concentration of the molecule in the cell.
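These relationships are simple enough to sketch numerically. Assuming SI units throughout (the function names and the example numbers below are purely illustrative):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def itc_thermodynamics(Ka, dH, T=298.15):
    """Derive the Gibbs free energy and entropy changes from ITC data.

    Ka: association constant (1/M); dH: enthalpy change (J/mol);
    T: absolute temperature (K).
    """
    dG = -R * T * math.log(Ka)  # Gibbs free energy change, J/mol
    dS = (dH - dG) / T          # entropy change, J/(mol*K)
    return dG, dS

def c_value(n, Ka, M):
    """Wiseman c-value; a sigmoidal thermogram needs roughly 1 < c < 1000."""
    return n * Ka * M

# Hypothetical example: Ka = 1e6 1/M, dH = -50 kJ/mol, 1:1 binding,
# 10 uM of the molecule in the cell.
dG, dS = itc_thermodynamics(1e6, -50e3)
print(dG, dS)                    # dG ~ -34 kJ/mol, dS ~ -53 J/(mol*K)
print(c_value(1.0, 1e6, 10e-6))  # c ~ 10, inside the sigmoidal range
```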
An isothermal titration calorimeter is composed of two identical cells made of a highly efficient thermally conducting and chemically inert material such as Hastelloy alloy or gold, surrounded by an adiabatic jacket. Sensitive thermopile/thermocouple circuits are used to detect temperature differences between the reference cell (filled with buffer or water) and the sample cell containing the macromolecule. Prior to addition of ligand, a constant power (<1 mW) is applied to the reference cell. This directs a feedback circuit, activating a heater located on the sample cell. During the experiment, ligand is titrated into the sample cell in precisely known aliquots, causing heat to be either taken up or evolved (depending on the nature of the reaction). Measurements consist of the time-dependent input of power required to maintain equal temperatures between the sample and reference cells.
In an exothermic reaction, the temperature in the sample cell increases upon addition of ligand. This causes the feedback power to the sample cell to be decreased (remember: a reference power is applied to the reference cell) in order to maintain an equal temperature between the two cells. In an endothermic reaction, the opposite occurs; the feedback circuit increases the power in order to maintain a constant temperature (isothermic/isothermal operation).
Observations are plotted as the power needed to maintain the reference and the sample cell at an identical temperature against time. As a result, the experimental raw data consists of a series of spikes of heat flow (power), with every spike corresponding to one ligand injection. These heat flow spikes/pulses are integrated with respect to time, giving the total heat exchanged per injection. The pattern of these heat effects as a function of the molar ratio [ligand]/[macromolecule] can then be analysed to give the thermodynamic parameters of the interaction under study. Degassing samples is often necessary in order to obtain good measurements as the presence of gas bubbles within the sample cell will lead to abnormal data plots in the recorded results. The entire experiment takes place under computer control.
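The integration step described above can be sketched as follows (synthetic data; a real instrument exports its own file format, so the array layout here is an assumption):

```python
import numpy as np

def heats_per_injection(time_s, power_w, injection_times_s):
    """Integrate the raw power signal over each injection window.

    time_s: measurement times (s); power_w: differential power (W);
    injection_times_s: start times of the injections. The signal between
    consecutive injections is integrated (trapezoid rule) to one heat (J).
    """
    edges = list(injection_times_s) + [time_s[-1]]
    heats = []
    for start, stop in zip(edges[:-1], edges[1:]):
        mask = (time_s >= start) & (time_s < stop)
        t, p = time_s[mask], power_w[mask]
        heats.append(float(np.sum(np.diff(t) * (p[:-1] + p[1:]) / 2.0)))
    return np.array(heats)

# Synthetic raw data: two rectangular 1 s pulses of 1e-6 W, one per injection.
t = np.linspace(0.0, 20.0, 2001)
p = np.where(((t >= 2) & (t < 3)) | ((t >= 12) & (t < 13)), 1e-6, 0.0)
print(heats_per_injection(t, p, [0.0, 10.0]))  # roughly 1e-6 J per injection
```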
Application in drug discovery.
ITC is one of the latest techniques to be used in characterizing binding affinity of ligands for proteins. It is typically used as a secondary screening technique in high throughput screening. ITC is particularly useful as it gives not only the binding affinity, but also the thermodynamics of the binding. This thermodynamic characterization allows for further optimization of compounds.
Thermal runaway occurs in situations where an increase in temperature changes the conditions in a way that causes a further increase in temperature, often leading to a destructive result. It is a kind of uncontrolled positive feedback.
In other words, "thermal runaway" describes a process which is accelerated by increased temperature, in turn releasing energy that further increases temperature. In chemistry (and chemical engineering), it is associated with strongly exothermic reactions that are accelerated by temperature rise. In electrical engineering, thermal runaway is typically associated with increased current flow and power dissipation, although exothermic chemical reactions can be of concern here too. Thermal runaway can occur in civil engineering, notably when the heat released by large amounts of curing concrete is not controlled. In astrophysics, runaway nuclear fusion reactions in stars can lead to nova and several types of supernova explosions, and also occur as a less dramatic event in the normal evolution of solar-mass stars, the "helium flash".
Some climate researchers have postulated that a global average temperature increase of 3–4 degrees Celsius above the preindustrial baseline could lead to a further unchecked increase in surface temperatures. For example, releases of methane, a greenhouse gas more potent than CO2, from wetlands, melting permafrost and continental margin seabed clathrate deposits could be subject to positive feedback.
Thermal runaway is also called thermal explosion in chemical engineering, or runaway reaction in organic chemistry. It is a process by which an exothermic reaction goes out of control: the reaction rate increases due to an increase in temperature, causing a further increase in temperature and hence a further rapid increase in the reaction rate. This has contributed to industrial chemical accidents, most notably the 1947 Texas City disaster from overheated ammonium nitrate in a ship’s hold, and the 1976 explosion of zoalene, in a drier, at King’s Lynn. Frank-Kamenetskii theory provides a simplified analytical model for thermal explosion. Chain branching is an additional positive feedback mechanism which may also cause temperature to skyrocket because of rapidly increasing reaction rate.
Chemical reactions are either endothermic or exothermic, as expressed by their change in enthalpy. Many reactions are highly exothermic, so many industrial-scale and oil refinery processes have some level of risk of thermal runaway. These include hydrocracking, hydrogenation, alkylation (SN2), oxidation, metalation and nucleophilic aromatic substitution. For example, oxidation of cyclohexane into cyclohexanol and cyclohexanone and ortho-xylene into phthalic anhydride have led to catastrophic explosions when reaction control failed.
Thermal runaway may result from unwanted exothermic side reaction(s) that begin at higher temperatures, following an initial accidental overheating of the reaction mixture. This scenario was behind the Seveso disaster, where thermal runaway heated a reaction to temperatures such that in addition to the intended 2,4,5-trichlorophenol, poisonous 2,3,7,8-tetrachlorodibenzo-p-dioxin was also produced, and was vented into the environment after the reactor’s rupture disk burst.
Thermal runaway is most often caused by failure of the reactor vessel’s cooling system. Failure of the mixer can result in localized heating, which initiates thermal runaway. Similarly, in flow reactors, localized insufficient mixing causes hotspots to form, wherein thermal runaway conditions occur, which causes violent blowouts of reactor contents and catalysts. Incorrect equipment component installation is also a common cause. Many chemical production facilities are designed with high-volume emergency venting, a measure to limit the extent of injury and property damage when such accidents occur.
At large scale, it is unsafe to "charge all reagents and mix", as is done at laboratory scale. This is because the amount of reaction scales with the cube of the vessel's size (V ∝ r³), while the heat-transfer area scales with its square (A ∝ r²), so the ratio of heat production to heat-transfer area scales linearly with size (V/A ∝ r). Consequently, reactions that are easily cooled fast enough at laboratory scale can dangerously self-heat at ton scale. In 2007, this kind of erroneous procedure caused an explosion of a 2,400 U.S. gallon (9,100 L) reactor used to metalate methylcyclopentadiene with metallic sodium, causing the loss of four lives and parts of the reactor being flung 400 feet (120 m) away. Thus, industrial-scale reactions prone to thermal runaway are preferably controlled by the addition of one reagent at a rate corresponding to the available cooling capacity.
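The cube–square argument can be made concrete with a small calculation (the numbers used for heat generation and the wall heat-transfer coefficient are arbitrary placeholders, chosen only to show the scaling):

```python
import math

def generation_to_removal_ratio(radius_m, q_vol=1e4, h=500.0, dT=1.0):
    """Ratio of volumetric heat generation to wall cooling for a sphere.

    q_vol: heat generated per unit volume (W/m^3)
    h:     wall heat-transfer coefficient (W/(m^2*K))
    dT:    temperature difference across the wall (K)
    Generation scales as r^3, removal as r^2, so the ratio grows as r.
    """
    volume = 4.0 / 3.0 * math.pi * radius_m**3
    area = 4.0 * math.pi * radius_m**2
    return (q_vol * volume) / (h * area * dT)

# Scaling a vessel up 10x makes the generation-to-removal ratio 10x worse:
print(generation_to_removal_ratio(1.0))
print(generation_to_removal_ratio(10.0))
```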
Some laboratory reactions must be run under extreme cooling, because they are very prone to hazardous thermal runaway. For example, in Swern oxidation, the formation of sulfonium chloride must be performed in a cooled system (–30 °C), because at room temperature the reaction undergoes explosive thermal runaway.
The UK Chemical Reaction Hazards Forum publishes analysis of previously-unreported chemical accidents to assist the education of the scientific and engineering community, with the aim of preventing similar occurrences elsewhere. Almost 150 such reports are available to view as of January 2009.
Microwaves are used for heating of various materials in cooking and various industrial processes. The rate of heating of the material depends on the energy absorption, which depends on the dielectric constant of the material. The dependence of dielectric constant on temperature varies for different materials; some materials display significant increase with increasing temperature. This behavior, when the material gets exposed to microwaves, leads to selective local overheating, as the warmer areas are better able to accept further energy than the colder areas—potentially dangerous especially for thermal insulators, where the heat exchange between the hot spots and the rest of the material is slow. These materials are called thermal runaway materials. This phenomenon occurs in some ceramics.
In combustion, Frank-Kamenetskii theory explains the thermal explosion of a homogeneous mixture of reactants, kept inside a closed vessel with constant temperature walls. It is named after a Russian scientist David A. Frank-Kamenetskii, who along with Nikolay Semenov developed the theory in the 1930s.
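A minimal numerical sketch of this thermal-explosion behaviour (closer to Semenov's zero-dimensional heat balance than to the full Frank-Kamenetskii treatment; all kinetic and cooling parameters below are made up for illustration) shows how the same Arrhenius source term either settles to a steady state or runs away, depending on the cooling:

```python
import math

def simulate_runaway(h_cool, T0=300.0, Ta=300.0, t_end=100.0, dt=0.01):
    """Explicit-Euler integration of a lumped heat balance
        dT/dt = A * exp(-E_over_R / T) - h_cool * (T - Ta)
    with illustrative kinetics A = 1e11 K/s and E_over_R = 8000 K.
    Returns (final_T, runaway), where runaway means T exceeded 600 K.
    """
    A, E_over_R = 1e11, 8000.0
    T, t = T0, 0.0
    while t < t_end:
        T += (A * math.exp(-E_over_R / T) - h_cool * (T - Ta)) * dt
        t += dt
        if T > 600.0:
            return T, True
    return T, False

# Strong cooling finds a steady state; weak cooling explodes.
print(simulate_runaway(h_cool=0.5))    # bounded near 300 K
print(simulate_runaway(h_cool=0.001))  # runaway
```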
Safety management system.
A safety management system (SMS) is a management system designed to manage safety elements in the workplace. It includes policy, objectives, plans, procedures, organisation, responsibilities and other measures. The SMS is used in industries that manage significant safety risks, including aviation, petroleum, chemical, electricity generation and others.
An SMS provides a systematic way to continuously identify and monitor hazards and control risks while maintaining assurance that these risk controls are effective. SMS can be defined as:
…a businesslike approach to safety. It is a systematic, explicit and comprehensive process for managing safety risks. As with all management systems, a safety management system provides for goal setting, planning, and measuring performance. A safety management system is woven into the fabric of an organization. It becomes part of the culture, the way people do their jobs.
For the purposes of defining safety management, safety can be defined as:
… the reduction of risk to a level that is as low as is reasonably practicable.
There are three imperatives for adopting a safety management system for a business – these are ethical, legal and financial.
There is an implied moral obligation on an employer to ensure that work activities and the place of work are safe; there are legislative requirements, defined in almost every jurisdiction, on how this is to be achieved; and there is a substantial body of research showing that effective safety management (that is, the reduction of risk in the workplace) can reduce the financial exposure of an organisation by reducing the direct and indirect costs associated with accidents and incidents.
To address these three important elements, an effective SMS should:
- Define how the organisation is set up to manage risk.
- Identify workplace risk and implement suitable controls.
- Implement effective communications across all levels of the organisation.
- Implement a process to identify and correct non-conformities.
- Implement a continual improvement process.
A safety management system can be created to fit any business type and/or industry sector.
Basic safety-management components.
International Labour Organization SMS model.
Since there are many models to choose from to outline the basic components of a safety management system, the one chosen here is the international standard promoted by the International Labour Organization (ILO). In the ILO document, the safety management basic components are:
- Policy
- Organizing
- Planning and implementation
- Evaluation
- Action for improvement
Although other SMS models use different terminology, the process and workflow for safety management systems are usually similar:
- Policy – Establish within policy statements what the requirements are for the organization in terms of resources, management commitment and occupational safety and health (OSH) targets.
- Organizing – How the organization is structured: where responsibilities and accountabilities are defined, who reports to whom and who is responsible for what.
- Planning and Implementation – What legislation and standards apply to the organization, what OSH objectives are defined and how they are reviewed, hazard prevention, and the assessment and management of risk.
- Evaluation – How OSH performance is measured and assessed, what the processes are for reporting and investigating accidents and incidents, and what internal and external audit processes are in place to review the system.
- Action for Improvement – How preventive and corrective actions are managed and what processes are in place to ensure continual improvement. There is a significant amount of detail within each of these sections, which should be examined in the ILO-OSH Guidelines document.
A SMS is intended to act as a framework to allow an organisation, as a minimum, to meet its legal obligations under occupational safety and health law. The structure of a SMS is generally speaking, not of itself a legal requirement but it is an extremely effective tool to organise the myriad aspects of occupational safety and health (OSH) that can exist within an organisation, often to meet standards which exceed the minimum legal requirement.
An SMS is only as good as its implementation. Effective safety management means that organisations need to look at all the risks within the organization as a single system, rather than maintaining multiple, competing ‘Safety Management Silos’. If safety is not seen holistically, it can interfere with the prioritization of improvements or even result in safety issues being missed. For example, the investigation into the March 2005 explosion at BP’s Texas City Refinery concluded that the company had put too much emphasis on personal safety while neglecting the safety of its processes. The antidote to such silo thinking is the proper evaluation of all risks, a key aspect of an effective SMS.
Adoption by industry sectors.
There are a number of industry sectors worldwide which have recognised the benefits of effective safety management. The regulatory authorities for these industries have developed safety management systems specific to their own industries and requirements, often backed up by regulation. Below are examples from different industry sectors from a number of varied worldwide locations.
The International Civil Aviation Organization has recommended that all aviation authorities implement SMS regulatory structures. ICAO has provided resources to assist with implementation, including the ICAO Safety Management Manual. Unlike the traditional occupational safety focus of SMS, the ICAO focus is to use SMS for managing aviation safety.
The ICAO High-level Safety Conference 2010 recommendation 2/5 proposed the development of a new Annex (19) dedicated to Safety Management. The Annex was published in February 2013 and entered into force on November 14, 2013. The benefits identified of this approach included:
- Address safety risks proactively;
- Manage and support strategic regulatory and infrastructure developments;
- Reinforce the role played by the State in managing safety at the State level, in coordination with service providers;
- Stress the concept of overall safety performance in all domains.
The United States has introduced SMS for airports through an advisory circular and other guidance.
The United States announced at the 2008 EASA/FAA/TC International Safety Conference that they would be developing regulations to implement SMS for repair stations, air carriers, and manufacturers. The FAA formed a rulemaking committee to address the implementation (known as the SMS ARC). The SMS ARC reported its findings to the FAA on March 31, 2010. The Report recognizes that many of the elements of SMS already exist in the U.S. regulations, but that some elements do not yet exist. A draft of what the US SMS rule might look like was proposed by one trade association that participated in the ARC. Currently, the FAA is supporting voluntary pilot projects for SMS.
The Federal Aviation Administration has also required that all FAA services and offices adopt a common Aviation Safety (AVS) Safety Management System (AVSSMS). This is what ICAO calls a State Safety Program (SSP).
The Federal Aviation Administration published a Notice of Proposed Rulemaking (NPRM) for the establishment of SMS for air carriers. The NPRM explains that it is intended to serve as the foundation for rules that would later be applied to Part 135 operators, Part 145 repair stations and Part 21 manufacturers. Several U.S. trade associations filed comments in response to the air carrier NPRM, including the Aviation Suppliers Association (ASA) and the Modification and Replacement Parts Association (MARPA). Among these comments were arguments for developing separate SMS regulations for other certificate holders, in order to ensure that SMS remains a usable tool for advancing safety (rather than a uniform but useless paperwork exercise). In addition, the Federal Aviation Administration has also issued an NPRM for SMS for airports, which would be separate from the rules for air carriers (consistent with the arguments of the trade associations).
The European Aviation Safety Agency (EASA) began the process of implementing Safety Management System (SMS) regulations by issuing Terms of Reference (TOR) on July 18, 2011. That was followed by a Notice of Proposed Amendment (NPA) issued on January 21, 2013. The proposed EASA regulation would apply to repair stations, but would have significant ancillary effects on other aviation industry sub-sectors.
The International Maritime Organization (IMO) is another organization that has adopted SMS. All international passenger ships and oil tankers, chemical tankers, gas carriers, bulk carriers and cargo ships of 500 gross tons or more are required to have a Safety Management System. In the preamble to the International Safety Management (ISM) Code, the IMO states, “The cornerstone of good safety management is commitment from the top. In matters of safety and pollution prevention it is the commitment, competence, attitudes and motivation of individuals at all levels that determines the end result.”
Transport Canada’s Rail Safety Directorate incorporated SMS into the rail industry in 2001. The Rail Safety Management System requirements are set out in the Railway Safety Management System Regulations. The objectives of the Rail Safety Management System Regulations are to ensure that safety is given management time and corporate resources and that it is subject to performance measurement and monitoring on par with corporate financial and production goals.
The effect of SMS in the rail industry has been contested: a 2006 Toronto Star review of Transportation Safety Board data indicated that rail accidents were soaring, and critics argued that this evidence should preclude the adoption of SMS in the aviation sector. However, Transportation Safety Board data show that the accident rate in the rail industry actually varied around the average over that 10-year period. Since the Toronto Star article was published, the accident rate has decreased. The Transportation Safety Board reported that “a total of 1,143 rail accidents were reported to the TSB in 2008, a 14% decrease from the 2007 total of 1,323 and an 18% decrease from the 2003–2007 average of 1,387” and also noted that, in 2008, rail incidents reported under the TSB mandatory reporting requirements reached a 26-year low of 215.
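The percentage figures quoted by the Transportation Safety Board can be checked directly:

```python
def pct_decrease(old, new):
    """Percentage decrease from old to new, rounded to the nearest percent."""
    return round(100.0 * (old - new) / old)

# TSB figures for rail accidents in 2008:
print(pct_decrease(1323, 1143))  # vs the 2007 total -> 14
print(pct_decrease(1387, 1143))  # vs the 2003-2007 average -> 18
```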
The Railway Safety Management System Regulations were updated in 2015.
Process safety management.
Process safety management (PSM) is a regulation promulgated by the U.S. Occupational Safety and Health Administration (OSHA). A process is any activity or combination of activities, including any use, storage, manufacturing, handling or on-site movement of highly hazardous chemicals (HHCs), as defined by OSHA and the Environmental Protection Agency (EPA).
A process safety management system is an analytical tool focused on preventing releases of any substance defined as a "highly hazardous chemical" by the EPA or OSHA. Process safety management refers to a set of interrelated approaches to managing hazards associated with the process industries and is intended to reduce the frequency and severity of incidents resulting from releases of chemicals and other energy sources (US OSHA 1999). These standards are composed of organizational and operational procedures, design guidance, audit programs, and a host of other methods.
Elements of process safety management system.
The process safety management program is divided into 14 elements, all of which are defined in the U.S. Occupational Safety and Health Administration (OSHA) standard 1910.119:
- Employee Participation
- Process Safety Information
- Process Hazard Analysis
- Operating Procedures
- Training
- Contractors
- Pre-Startup Safety Review
- Mechanical Integrity
- Hot Work Permit
- Management of Change
- Incident Investigation
- Emergency Planning and Response
- Compliance Audits
- Trade Secrets
All of these elements are interlinked and interdependent: each element either contributes information that other elements need in order to be completed, or uses information from other elements in order to be completed itself. Together they make up the entire PSM picture.
Process safety information.
Process safety information (PSI) might be considered the keystone of a PSM program, in that it tells you what you are dealing with from both the equipment and the process standpoint. To comply with the OSHA PSM regulations, the process safety information should include information pertaining to the hazards of the highly hazardous chemicals used or produced by the process, information pertaining to the technology of the process, and information pertaining to the equipment in the process.
Information pertaining to the hazards of the highly hazardous chemicals in the process should consist of at least the following:
- Toxicity information
- Permissible exposure limit
- Physical data
- Reactivity data
- Corrosivity data
- Thermal and chemical stability data
- Hazardous effects of inadvertent mixing of different materials that could foreseeably occur.
Information pertaining to the technology of the process should include at least the following:
- A block flow diagram or simplified process flow diagram
- Process chemistry and its properties
- Maximum intended inventory
- Safety upper and lower limits for such items as temperatures, pressures, flows or compositions
- An evaluation of the consequences of deviations, including those affecting the safety and health of the employees
Information pertaining to the equipment in the process should include the following:
- Materials of construction
- Piping and instrumentation diagrams (P&IDs)
- Electrical classification
- Relief system design and design basis
- Ventilation system design
- Design codes and standards employed
- Material and energy balances for processes built after May 26, 1992
- Safety systems (for example, interlocks, detection or suppression systems)
The employer should document that equipment complies with recognized and generally accepted good engineering practices (RAGAGEP).
For existing equipment designed and constructed in accordance with codes, standards or practices that are no longer in general use, the employer should determine and document that the equipment is designed, maintained, inspected, tested and operating in a safe manner.
A process includes any group of vessels which are interconnected, or separate vessels which contain highly hazardous chemicals (HHCs) that could be involved in a potential release. A process safety incident is the "unexpected release of toxic, reactive, or flammable liquids and gases in processes involving highly hazardous chemicals. Incidents continue to occur in various industries that use highly hazardous chemicals which exhibit toxic, reactive, flammable, or even explosive properties, or may exhibit a combination of these properties. Regardless of the industry that uses these highly hazardous chemicals, there is a potential for an accidental release any time they are not properly controlled. This, in turn, creates the possibility of disaster. To help assure safe and healthy workplaces, OSHA has issued the Process Safety Management of Highly Hazardous Chemicals regulation (Title 29 of CFR Section 1910.119), which contains requirements for the management of hazards associated with processes using highly hazardous chemicals."
Any facility that stores or uses a defined "highly hazardous chemical" must comply with OSHA’s process safety management (PSM) regulations as well as the quite similar United States Environmental Protection Agency (EPA) Risk management program (RMP) regulations (Title 40 CFR Part 68). The EPA has published a model RMP plan for an ammonia refrigeration facility which provides excellent guidance on how to comply with either OSHA’s PSM regulations or the EPA’s RMP regulations.
The Center for Chemical Process Safety (CCPS) of the American Institute of Chemical Engineers (AIChE) has published a widely used book that explains various methods for identifying hazards in industrial facilities and quantifying their potential severity. Appendix D of OSHA’s PSM regulations endorses the use of the methods explained in that book. AIChE and the CCPS also publish guidelines for process safety documentation, for implementing process safety management systems, and for engineering design for process safety.
In Australia, consideration of process safety management is a key consideration for the management of major hazard facilities (MHFs).