How Electric Motors Work
Flick a switch and get instant power—how our ancestors would have loved electric motors! You can find them in everything from electric trains to remote-controlled cars—and you might be surprised how common they are. How many electric motors are there in the room with you right now? There are probably two in your computer for starters, one spinning your hard drive around and another one powering the cooling fan. If you're sitting in a bedroom, you'll find motors in hair dryers and many toys; in the bathroom, they're in extractor fans and electric shavers; in the kitchen, motors are in just about every appliance from clothes washing machines and dishwashers to coffee grinders, microwaves, and electric can openers. Electric motors have proved themselves to be among the greatest inventions of all time. Let's pull some apart and find out how they work!
The basic idea of an electric motor is really simple: you put electricity into it at one end and an axle (metal rod) rotates at the other end, giving you the power to drive a machine of some kind. How does this work in practice? Exactly how do you convert electricity into movement? To find the answer to that, we have to go back in time almost 200 years.
Suppose you take a length of ordinary wire, make it into a big loop, and lay it between the poles of a powerful, permanent horseshoe magnet. Now if you connect the two ends of the wire to a battery, the wire will jump up briefly. It's amazing when you see this for the first time. It's just like magic! But there's a perfectly scientific explanation. When an electric current starts to creep along a wire, it creates a magnetic field all around it. If you place the wire near a permanent magnet, this temporary magnetic field interacts with the permanent magnet's field. You'll know that two magnets placed near one another either attract or repel. In the same way, the temporary magnetism around the wire attracts or repels the permanent magnetism from the magnet, and that's what causes the wire to jump.
The link between electricity, magnetism, and movement was originally discovered in 1820 by French physicist André-Marie Ampère (1775–1836) and it's the basic science behind an electric motor. But if we want to turn this amazing scientific discovery into a more practical bit of technology to power our electric mowers and toothbrushes, we've got to take it a little bit further. The inventors who did that were Englishmen Michael Faraday (1791–1867) and William Sturgeon (1783–1850) and American Joseph Henry (1797–1878). Here's how they arrived at their brilliant invention.
Suppose we bend our wire into a squarish, U-shaped loop so there are effectively two parallel wires running through the magnetic field. One of them takes the electric current away from us through the wire and the other one brings the current back again. Because the current flows in opposite directions in the wires, Fleming's Left-Hand Rule tells us the two wires will move in opposite directions. In other words, when we switch on the electricity, one of the wires will move upward and the other will move downward.
If the coil of wire could carry on moving like this, it would rotate continuously—and we'd be well on the way to making an electric motor. But that can't happen with our present setup: the wires will quickly tangle up. Not only that, but if the coil could rotate far enough, something else would happen. Once the coil reached the vertical position, it would flip over, so the electric current would be flowing through it the opposite way. Now the forces on each side of the coil would reverse. Instead of rotating continuously in the same direction, it would move back in the direction it had just come! Imagine an electric train with a motor like this: it would keep shuffling back and forward on the spot without ever actually going anywhere.
How an electric motor works—in practice
There are two ways to overcome this problem. One is to use a kind of electric current that periodically reverses direction, which is known as an alternating current (AC). In the kind of small, battery-powered motors we use around the home, a better solution is to add a component called a commutator to the ends of the coil. (Don't be put off by the technical name: the slightly old-fashioned word "commutation" is a bit like the word "commute". It simply means to change back and forth, in the same way that commuting means traveling back and forth.) In its simplest form, the commutator is a metal ring divided into two separate halves and its job is to reverse the electric current in the coil each time the coil rotates through half a turn. One end of the coil is attached to each half of the commutator. The electric current from the battery connects to the motor's electric terminals. These feed electric power into the commutator through a pair of loose connectors called brushes, made either from pieces of graphite (soft carbon similar to pencil "lead") or thin lengths of springy metal, which (as the name suggests) "brush" against the commutator. With the commutator in place, when electricity flows through the circuit, the coil will rotate continually in the same direction.
A simple, experimental motor such as this isn't capable of making much power. We can increase the turning force (or torque) that the motor can create in three ways: either we can have a more powerful permanent magnet, or we can increase the electric current flowing through the wire, or we can make the coil so it has many "turns" (loops) of very thin wire instead of one "turn" of thick wire. In practice, a motor also has the permanent magnet curved in a circular shape so it almost touches the coil of wire that rotates inside it. The closer together the magnet and the coil, the greater the force the motor can produce.
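To make those three levers concrete, the peak turning force on a simple flat coil is often approximated as torque = N × B × I × A (number of turns × field strength × current × coil area). The sketch below is only an illustration with made-up numbers—not a description of any particular motor—showing how doubling the field or multiplying the turns scales the torque.

```python
# Minimal sketch (illustrative numbers only): the peak torque on a flat coil in a
# uniform magnetic field is roughly tau = N * B * I * A * sin(theta), where
# N = number of turns, B = field strength (tesla), I = current (amps),
# A = coil area (m^2), and theta = angle between the coil's normal and the field.
import math

def coil_torque(turns, field_tesla, current_amps, area_m2, angle_deg=90.0):
    """Torque in newton-metres; it peaks when the coil plane lies parallel to the field."""
    return turns * field_tesla * current_amps * area_m2 * math.sin(math.radians(angle_deg))

baseline = coil_torque(turns=1, field_tesla=0.1, current_amps=0.5, area_m2=4e-4)
stronger_magnet = coil_torque(turns=1, field_tesla=0.2, current_amps=0.5, area_m2=4e-4)
more_turns = coil_torque(turns=100, field_tesla=0.1, current_amps=0.5, area_m2=4e-4)

print(f"single turn, 0.1 T: {baseline:.1e} N*m")
print(f"double the field:   {stronger_magnet:.1e} N*m")  # torque doubles
print(f"100 turns of wire:  {more_turns:.1e} N*m")       # torque scales 100x
```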
Although we've described a number of different parts, you can think of a motor as having just two essential components:
There's a permanent magnet (or magnets) around the edge of the motor case that remains static, so it's called the stator of a motor.
Inside the stator, there's the coil, mounted on an axle that spins around at high speed—and this is called the rotor. The rotor also includes the commutator.
Universal motors
DC motors like this are great for battery-powered toys (things like model trains, radio-controlled cars, or electric shavers), but you don't find them in many household appliances. Small appliances (things like coffee grinders or electric food blenders) tend to use what are called universal motors, which can be powered by either AC or DC. Unlike a simple DC motor, a universal motor has an electromagnet, instead of a permanent magnet, and it takes its power from the DC or AC power you feed in:
When you feed in DC, the electromagnet works like a conventional permanent magnet and produces a magnetic field that's always pointing in the same direction. The commutator reverses the coil current every time the coil flips over, just like in a simple DC motor, so the coil always spins in the same direction.
When you feed in AC, however, the current flowing through the electromagnet and the current flowing through the coil both reverse, exactly in step, so the force on the coil is always in the same direction and the motor keeps spinning the same way (whether that's clockwise or counterclockwise depends on how it's wired up). What about the commutator? The frequency of the current changes much faster than the motor rotates and, because the field and the current are always in step, it doesn't actually matter what position the commutator is in at any given moment.
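A quick way to see why the direction of rotation survives the AC reversals is that the torque is proportional to the product of the field (electromagnet) current and the coil current; when both flip sign together, the product keeps its sign. The toy sketch below (illustrative only, with an idealized sine-wave current) samples one AC cycle to show this.

```python
# Toy illustration: in a universal motor the instantaneous torque is proportional
# to (field current) x (coil current). On AC both currents reverse in step, so
# their product -- and therefore the direction of the torque -- never changes sign.
import math

for step in range(8):
    phase = 2 * math.pi * step / 8       # sample one AC cycle at eight points
    i_field = math.sin(phase)            # current in the electromagnet (field)
    i_coil = math.sin(phase)             # current in the rotating coil, exactly in step
    torque = i_field * i_coil            # proportional to instantaneous torque
    sign = '+' if torque >= 0 else '-'
    print(f"sample {step}: field {i_field:+.2f}, coil {i_coil:+.2f}, torque sign {sign}")
```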
In simple DC and universal motors, the rotor spins inside the stator. The rotor is a coil connected to the electric power supply and the stator is a permanent magnet or electromagnet. Large AC motors (used in things like factory machines) work in a slightly different way: they pass alternating current through opposing pairs of magnets to create a rotating magnetic field, which "induces" (creates) a magnetic field in the motor's rotor, causing it to spin around. You can read more about this in our article on AC induction motors. If you take one of these induction motors and "unwrap" it, so the stator is effectively laid out into a long continuous track, the rotor can roll along it in a straight line. This ingenious design is known as a linear motor, and you'll find it in such things as factory machines and floating "maglev" (magnetic levitation) railroads.
Another interesting design is the brushless DC (BLDC) motor. The stator and rotor effectively swap over, with multiple iron coils static at the center and the permanent magnet rotating around them, and the commutator and brushes are replaced by an electronic circuit. You can read more in our main article on hub motors. Stepper motors, which turn around through precisely controlled angles, are a variation of brushless DC motors.
By understanding how a motor works you can learn a lot about magnets, electromagnets and electricity in general. In this article, you will learn what makes electric motors tick.
An electric motor is all about magnets and magnetism: A motor uses magnets to create motion. If you have ever played with magnets you know about the fundamental law of all magnets: Opposites attract and likes repel. So if you have two bar magnets with their ends marked "north" and "south," then the north end of one magnet will attract the south end of the other. On the other hand, the north end of one magnet will repel the north end of the other (and similarly, south will repel south). Inside an electric motor, these attracting and repelling forces create rotational motion.
Inside the motor there are two magnets: The armature (or rotor) is an electromagnet, while the field magnet is a permanent magnet (the field magnet could be an electromagnet as well, but in most small motors it isn't, in order to save power).
The motor being dissected here is a simple, small electric motor that you would typically find in a toy.
You can see that this is a small motor, about as big around as a dime. From the outside you can see the steel can that forms the body of the motor, an axle, a nylon end cap and two battery leads. If you hook the battery leads of the motor up to a flashlight battery, the axle will spin. If you reverse the leads, it will spin in the opposite direction. Note the two slots in the side of the steel can—their purpose will become more evident in a moment.
The nylon end cap is held in place by two tabs that are part of the steel can. By bending the tabs back, you can free the end cap and remove it. Inside the end cap are the motor's brushes. These brushes transfer power from the battery to the commutator as the motor spins.
The axle holds the armature and the commutator. The armature is a set of electromagnets, in this case three. The armature in this motor is a set of thin metal plates stacked together, with thin copper wire coiled around each of the three poles of the armature. The two ends of each wire (one wire for each pole) are soldered onto a terminal, and then each of the three terminals is wired to one plate of the commutator.
The final piece of any DC electric motor is the field magnet. The field magnet in this motor is formed by the can itself plus two curved permanent magnets.
One end of each magnet rests against a slot cut into the can, and then the retaining clip presses against the other ends of both magnets.
To understand how an electric motor works, the key is to understand how the electromagnet works. (See How Electromagnets Work for complete details.)
An electromagnet is the basis of an electric motor. You can understand how things work in the motor by imagining the following scenario. Say that you created a simple electromagnet by wrapping 100 loops of wire around a nail and connecting it to a battery. The nail would become a magnet and have a north and south pole while the battery is connected.
Now say that you take your nail electromagnet, run an axle through the middle of it and suspend it between the poles of a horseshoe magnet. If you were to attach a battery to the electromagnet so that the north end of the nail faced the horseshoe magnet's north pole, the basic law of magnetism tells you what would happen: The north end of the electromagnet would be repelled from the north end of the horseshoe magnet and attracted to the south end of the horseshoe magnet. The south end of the electromagnet would be repelled in a similar way. The nail would move about half a turn and then stop, with its north end facing the horseshoe magnet's south pole.
You can see that this half-turn of motion is simply due to the way magnets naturally attract and repel one another. The key to an electric motor is to then go one step further so that, at the moment that this half-turn of motion completes, the field of the electromagnet flips. The flip causes the electromagnet to complete another half-turn of motion. You flip the magnetic field just by changing the direction of the electrons flowing in the wire (you do that by flipping the battery over). If the field of the electromagnet were flipped at precisely the right moment at the end of each half-turn of motion, the electric motor would spin freely.
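The role of that well-timed flip can be sketched numerically. In the toy model below (an illustration only, not a physical simulation), the torque on the coil is taken as proportional to sin(angle) times the coil current; with a fixed current the torque reverses every half turn and the coil just rocks back and forth, whereas flipping the current each half turn keeps the torque pushing the same way.

```python
# Toy model, not a physical simulation: take the torque on the coil as
# sin(angle) * current. With a fixed current the torque reverses every half
# turn, so the coil only rocks back and forth; flipping the current each half
# turn (what the commutator, or "flipping the battery", does) keeps the torque
# pushing in the same rotational direction.
import math

def torque(angle_deg, commutated):
    current = 1.0
    if commutated and math.sin(math.radians(angle_deg)) < 0:
        current = -1.0                     # current reversed on the second half turn
    return math.sin(math.radians(angle_deg)) * current

for deg in range(0, 360, 45):
    print(f"{deg:3d} deg:  fixed current {torque(deg, False):+.2f}   "
          f"flipped each half turn {torque(deg, True):+.2f}")
```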
Medical gloves in the era of coronavirus disease 2019 pandemic
The current coronavirus 2019 disease (COVID-19) pandemic has greatly changed our perspective of the risk for infection from contact, and the use of personal protective devices (PPDs) usually reserved for health care workers (HCWs) has spread to the general population, sometimes indiscriminately. As a result, medical glove stock has been depleted, but most of all medical gloves have become a source of medical concern.[1], [2], [3], [4]
The World Health Organization (WHO) has warned about the limited protective efficacy of gloves. Incorrect use carries a high risk of spreading infection and could instead favor the spread of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Regular use of gloves for daily activities may lead to a false sense of protection and to an increased risk for self-contamination. This would involve the involuntary touching of the face or the spreading of fomites to desks, phones, and computer keyboards. A study has found that viruses can survive on gloves for 2 to 4 hours.5
Hand-to-face contact has a substantial role in upper respiratory tract infections,6,7 although COVID-19's main mode of transmission remains symptomatic person-to-person spread through respiratory droplets.[1], [2], [3], [4] The Centers for Disease Control and Prevention (CDC) and the European Centre for Disease Prevention and Control (ECDC) have recently provided guidance to regulate the use of gloves both in the health care setting and in the community.2,4 In the context of the COVID-19 pandemic (Table 1), gloves are recommended when caring for confirmed or suspected COVID-19 patients, especially when there is the risk of contact with body fluids (eg, blood, wound care, aerosol-generating procedures).
Hand protection with gloves is essential in any medical procedure, because skin cleaning/disinfection alone does not remove all pathogens, especially when contamination is considerable. Nonsterile disposable gloves should be prioritized, and the ECDC notes that there is no direct evidence that glove use provides greater protection against COVID-19 than proper hand hygiene alone. Wearing gloves does not remove the need for meticulous hand hygiene with water and soap or with alcohol-based hand rub solutions.
There are many different types of gloves, depending on the level of protection, tactility, risk of allergy, or cost (Table 2). Although biohazard risk requires frequent glove changing, the extended use of gloves, decontamination with hand disinfectants, and reuse are frequent.8 All of this should be avoided, because the effects of hand sanitizers are tested on bare skin, whereas application on gloved hands affects the gloves' mechanical properties. In a recent investigation,9 the application of 70% ethanol or 63% isopropanol commercial disinfectants reduced the tensile strength of latex and nitrile gloves, with a higher impact on nitrile gloves. Elongation did not change much with latex gloves, but nitrile gloves were affected. There are additional concerns about permeability, as alcohol can permeate any type of glove after 10 minutes. Some types of disposable gloves are permeated within 2 minutes, and repeated exposure to disinfectants can increase the permeability of the gloves. Alcohol is inactivated in the presence of organic matter, which can easily remain on used gloves, thus potentially facilitating viral transmission.
Extended length gloves are not necessary when providing care to suspected or confirmed COVID-19 patients. They are not specifically recommended, except for activities with increased risk, such as submerging hands into a solution. For standard procedures, it is sufficient to cover the cuff (wrist) of the gown while donning.[1], [2], [3], [4]
Another common measure that is no longer recommended is "double gloving," except for surgical procedures that carry a high risk of disrupting the integrity of the glove. Double gloving seems to increase the incidence of dermatologic side effects, from irritation and overhydration to induction of latex allergy. The increase in skin damage as a consequence of overzealous PPD use and hand hygiene is an emerging consequence of the response to COVID-19.[10], [11], [12]
About 74.5% of front-line COVID-19 HCWs developed hand dermatitis in the Chinese experience.13 A questionnaire-based study suggested that 88.5% of skin reactions on the hands are associated with the use of latex gloves.14 Three types of adverse events might occur: latex allergy, talcum powder reactions, and irritant dermatitis. Setting aside latex allergy and powder within the gloves, the problem of excessive dryness and pruritus associated with irritant dermatitis may be aggravated by occlusion, leading to sweating and/or overhydration. This in turn may increase permeability to sanitizers or detergents, creating a vicious cycle and aggravating hand dermatitis.12
A peculiar pattern of hand dermatitis has been recognized, characterized by erythema and fine scaling on the palms and web spaces.15 This may be attributed to the depletion of surface lipids, which allows deeper penetration of detergents; progressive damage to the skin layers is a major pathogenetic mechanism. Irritant contact dermatitis is more commonly found with iodophors, chlorhexidine, chloroxylenol, triclosan, and alcohol-based products, whereas allergic contact dermatitis develops due to sensitization to quaternary ammonium compounds, iodine or iodophors, chlorhexidine, triclosan, chloroxylenol, and alcohol.
To date, there have been no verified reports of COVID-19 infection as direct consequence of skin damage. Angiotensin-converting enzyme 2 (ACE2), which is the main cell receptor for SARS-CoV-2 entry, can be expressed in the basal layer of the epidermis, hair follicles, and eccrine glands, as well as on skin blood vessels.16
Basic skincare measures should be taken to avoid the risk of SARS-CoV-2 entry through the skin.[10], [11], [12], [13], [14], [15] Careful drying of the hands and hypoallergenic hand creams/emollients may be employed to prevent trapping sanitizers in the web spaces. Emollients may also be applied at other times to correct any residual dryness and scaling; if hand dermatitis occurs, topical corticosteroids are indicated.
A final consideration is the generation of a massive amount of medical waste, caused in part by the extensive use of PPDs.17 HCWs, together with the general population, are using more gloves than ever before, whereas glove use should be limited to essential preventive measures.
Medical gloves remain an essential part of the infection-control strategy; however, caring for patients with COVID-19 has pointed out the need for more accuracy and respect of novel guidance. Prolonged use of gloves, outside of direct patient contact, might be self-defeating rather than protective. Hand dermatitis is an emerging concern. At this time, the U.S. Food and Drug Administration has not cleared, approved, or authorized any medical gloves for specific protection against the virus that causes COVID-19 or prevention of COVID-19 infection.
Broadly speaking, there are 2 types of medical gloves: examination gloves, which are ambidextrous, usually nonsterile, and come in a small range of sizes, are used for nonsterile and less dextrous tasks and also for most dental work; surgical gloves are sterile, come individually packaged in handed pairs, and are usually available in half-inch intervals of hand girth. They are used in the operating theater for a variety of dextrous tasks, ranging from microsurgery on the eye or ear to bone setting or hip replacement.
Because the majority of clinical work is not perceived to be as dextrous as surgery, less emphasis is placed on the performance of examination gloves. Until recently, both examination and surgical gloves were generally made from natural rubber latex (commonly referred to as “latex”), although alternatives were available for known cases of latex allergy. However, the lack of regulation of manufacturing processes in the early years of mass production meant that gloves often contained a high level of allergenic proteins, which led to a steady increase in the number of cases of latex allergy reported.1
Current guidelines from the National Health Service and the Royal College of Physicians2 in the United Kingdom state that “the evidence does not … support a need to ban latex completely from the workplace.” They note that nonlatex surgical gloves “have higher failure rates in use and lower user satisfaction than latex gloves.” Instead, they advocate the use of nonpowdered, low-protein latex gloves, except for employees with latex allergy, latex sensitization, or latex-induced asthma, where nonlatex alternatives are recommended. However, most primary care health care groups and hospitals in the United Kingdom have replaced latex in nonsurgical situations with less flexible alternatives3 such as nitrile to remove the risk of latex allergy in patients and practitioners.
Similarly, the American College of Allergy, Asthma, and Immunology4 recommends that “a facility-wide review of glove usage should be undertaken to determine the appropriateness of use … and thereby prevent the unnecessary use of latex gloves” and advocates nonpowdered, low-protein gloves as standard in a health care facility but also states that “hospitals need to evaluate manufacturer information on nonlatex gloves in areas of durability, barrier protection, and cost” because “latex is still considered superior with respect to barrier characteristics against transmissible diseases.” Surgeons have generally resisted moves to replace surgical gloves in the same way because of the perceived reduction in manual performance when using nonlatex alternatives.
With respect to the glove design process, there is little or no evidence that gloves are evaluated in terms of their effects on users’ manual performance. All the currently available standards5,6 focus on the barrier integrity of the gloves by defining tensile strength, freedom from holes, and tear resistance. Similarly, much of the research on medical gloves has concerned barrier integrity7,8 and adherence of practitioners to handwashing and glove handling guidelines.9,10 Clearly, because the primary role of the gloves is to prevent the spread of infection, it is important that the design brief takes these things into consideration, but achieving good barrier integrity is not necessarily incompatible with achieving the best performance.
Glove performance also has an effect on safety, particularly in a surgical environment. Surgeons using plastic gloves with less-than-optimal frictional properties, for example, may be more likely to drop instruments, to slip when performing delicate procedures, or to increase their stress levels when attempting to compensate. Similarly, practitioners who cannot feel a pulse through gloves when taking blood will be more likely to remove the gloves and increase their risk of infection. A 1994 survey of health care workers11 found that a “perceived interference with technical skills” was a common obstacle to compliance with universal precautions. There is also a subjective element to the performance that must be considered, which is that practitioners’ comfort and confidence in their gloves may affect their concentration levels and therefore their ability to perform surgery over extended periods of time.
It is vital that the glove design process includes an assessment of their effect on manual performance to ensure that practitioners can operate safely and efficiently. The first step in this process is to determine the key aspects of manual performance in medical practice and where current gloves have a significant adverse effect. The second is to design tests that are useful predictors of clinical performance. It is therefore necessary to identify the tasks that are most challenging and on which gloves are thought to have the greatest impact so that the tests can be designed to simulate relevant manual skills.
To achieve this, semistructured interviews with medical practitioners were carried out. As well as gathering information on the participants’ roles, disciplines, and glove use, a series of open-ended questions were used to identify tasks believed by users to require the most dexterity and tactility, and those most affected by glove performance, as well as any other issues related to medical gloves that might aid the study. The interviews took place within Sheffield Teaching Hospitals NHS Foundation Trust (STH) and received ethical approval from the research ethics committees of STH and The University of Sheffield, UK.
Focus groups were considered as a means of gathering data fairly quickly and stimulating discussion. However, the limited availability, particularly of senior staff, made this a difficult approach. Furthermore, it has been shown12 that, when recruitment, transcription, and analysis are included, focus groups can be much more time-consuming than individual interviews. Although focus groups are generally accepted to produce a wider range of responses, this is not always the case and depends on the nature of the questions.12,13 In this study, many of the questions were of a technical nature and specific to the individual’s specialty. There was also a concern that participants’ opinions on specific gloves would be influenced by those of their colleagues.
Interviews were therefore conducted on a one-to-one basis to increase flexibility and enable senior staff to participate at their own convenience, often between operations or appointments. The questions were designed to be sufficiently open-ended so that the participant was not led down one particular line of thought but also included prompts where information was not forthcoming. With a wide enough selection of participants, it was hoped that a consensus would be formed in at least some of the areas, which would enable judgments to be made on the most productive direction for future research.
Solvent Recovery
Solvent recovery is a form of waste reduction. In-process solvent recovery is still widely used as an alternative to solvent replacement to reduce waste generation. Like end-of-pipe pollution control, it is attractive because it requires little change in existing processes. Another attraction is the widespread commercial availability of solvent recovery equipment. The availability of equipment suitable for small operations, especially batch operations, makes in-process recovery of solvents economically preferable to raw materials substitution.
Commercially available solvent recovery equipment includes:
Carbon adsorption of the solvent, removal of the solvent from the carbon by steam, and separation of the solvent for reuse in the operation. Because the carbon must be regenerated, two or more units are required to keep the operation continuous. Hydrochloric acid formation from chlorinated solvents, carbon bed plugging by particulates, buildup of certain volatile organics on the carbon, and corrosion can be problems.
Distillation and condensation can be used to separate and recover solvent from other liquids. Removal efficiency can be very high with this process, and it can be used for solvent mixtures as well as single solvents.
Dissolving the solvent in another material, as in scrubbing (absorption). The solvent must then be recovered from the resulting solution, usually by distillation, but the removal efficiency is often not high with this method.
Adsorption processes are useful and versatile tools when it comes to waste solvent recovery, as they can be applied with high efficiency at relatively low cost in cases in which the desired component makes up either a fairly small or a fairly high proportion of the stream. The applicable adsorbents vary according to different purposes.108,109 Adsorbents with low polarity (activated carbon, etc.) tend to adsorb nonpolar compounds, whereas ones with high polarity (e.g., silica, alumina) have higher affinity for polar substances. However, some adsorbents operate via specific binding sites (e.g., molecular sieves, molecularly imprinted polymers) rather than simple hydrophilic-hydrophobic interactions. It is worth mentioning that adsorption cannot easily be installed in a continuous configuration and is usually either a one-bed batch process or a twin-bed process with one bed in the adsorption phase while the other is in the regeneration phase.
In organic solvent recycling, the most frequent issue is the removal of water content. Even traces of water can cause unexpected solubility problems, side reactions, or the decomposition of a reactant. There are various processes to recover wet solvents such as distillation methods or fractional freezing, whereas adsorptive methods are advantageous due to their low energy consumption. Molecular sieves (with pore size 3 or 4 Å), silica, and alumina are widely used for solvent drying.110,111 The polarity of the solvent affects the efficiency of water removal, which decreases with increasing polarity of the solvent. With the proper choice of adsorption technique, residual water content between 1 and 100 ppm is usually a realistic target.
In the regeneration stage of adsorption, high volumes of gas containing organic solvent are produced. Other processes in the chemical industry, such as paint drying or the drying of solid pharmaceutical intermediates or products, also generate a significant amount of solvent vapor.112 This raises another issue, as the recovery of this solvent is highly desired to minimize solvent loss and the environmental burden, as urged by the increasingly strict regulatory environment. For example, the recycling of chlorofluorocarbons has gained a lot of attention since the Montreal protocol.113–115 Incineration of solvent vapors is a widely used solution since it makes use of the solvent's latent heat. However, incineration likely needs supporting fuel to reach the required efficiency and needs continuous solvent vapor feed, not to mention that nonflammable halogenated solvent cannot be eliminated in this manner. Adsorptive systems have proved to be good alternatives. This field of adsorption is dominated by activated carbon adsorbents,116 but molecular sieve zeolites are also employed.117 Polymeric adsorbents are seldom employed in such processes, mainly because of their high price compared with activated carbon and zeolites.118 The choice of adsorbent regeneration technique has a significant effect on the quality of the recovered solvent. Examining the efficiency and applicability of various regeneration processes has been the aim of several studies.112,119 A typical system utilizing activated carbon adsorption to recover solvents from air emissions is shown in Fig. 3.15.11. Steam regeneration is employed to strip solvents from the activated carbon followed by condensation of the steam/solvent mixture through cooling. Eventually the solvent layer is separated by simple decantation.
The integrated production and recovery of ABE using glucose as a substrate and gas stripping as a means of solvent recovery has been reported by Groot et al. [39], Mollah and Stuckey [40], Park et al. [41], and Ezeji et al. [42–44]. Groot et al. produced butanol in a free cell (not immobilized) continuous reactor and removed the product in a separate stripper [39]. As a result of simultaneous product recovery, glucose utilization was improved by threefold, but the selectivity of butanol removal was low at 4 as compared to 19, which is the selectivity at equilibrium, suggesting that the stripper was not efficient. Also solvent productivity in the integrated system was 0.18 g/L h, as compared to 0.17 g/L h in the nonintegrated batch system [39]. Mollah and Stuckey used immobilized cells of C. acetobutylicum to improve productivity and recover butanol by gas stripping [40]. The cells were immobilized in calcium alginate gel and used in a fluidized bed bioreactor. This integrated system achieved a productivity of 0.58 g/L h, which is considered low for an immobilized cell continuous reactor.
Ezeji et al. tested the use of a hyper-butanol-producing strain, Clostridium beijerinckii BA101, in an integrated system with butanol produced in a free cell fed-batch reactor coupled with in situ product recovery [43]. As a result of simultaneous product recovery, the rates of fermentation (productivity) and glucose utilization improved. To compensate for the utilized glucose, a concentrated sugar solution (500 g/L) was intermittently fed into the reactor to maintain a solventogenic substrate concentration. This reactor was operated for 207 h before the culture stopped fermentation due to the accumulation of unknown inhibitory products. In this system 500 g/L glucose was used to produce 232.8 g/L ABE. ABE productivity was also improved from 0.29 g/L h in a nonintegrated batch system to 1.16 g/L h in the integrated system, a 400% increase. Given that the fed-batch fermentation stopped due to the accumulation of unknown inhibitory products, the authors devised another system in which a semicontinuous bleed was withdrawn from the reactor to eliminate or reduce the accumulation of unknown toxic by-products. As a result, the continuous reactor was operated for 21 days (504 h) before it was intentionally stopped [44]. Results from this continuous reactor suggest that ABE fermentation can be operated indefinitely in continuous mode, provided that toxic butanol is removed by gas stripping and unknown toxic products are removed by a bleed. In a 1-L culture volume, the system produced 461.3 g ABE from 1125.0 g glucose, with an ABE productivity of 0.92 g/L h, compared to 0.28 g/L h productivity in the nonintegrated batch system.
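As a quick sanity check, the reported productivity and yield for the gas-stripped continuous reactor follow directly from the totals quoted above. The short back-of-the-envelope sketch below simply re-derives them from the 1-L working volume and 504-h run time given in the text.

```python
# Back-of-the-envelope check of the figures quoted above for the integrated,
# gas-stripped continuous reactor (values taken from the text; 1-L working
# volume, 21 days = 504 h of operation).
abe_produced_g = 461.3     # total acetone-butanol-ethanol produced (g)
glucose_used_g = 1125.0    # glucose consumed (g)
volume_l = 1.0             # culture volume (L)
run_time_h = 504.0         # run length (h)

productivity = abe_produced_g / (volume_l * run_time_h)   # g/L h
solvent_yield = abe_produced_g / glucose_used_g           # g ABE per g glucose

print(f"ABE productivity: {productivity:.2f} g/L h")   # ~0.92, as reported
print(f"ABE yield:        {solvent_yield:.2f} g/g glucose")
```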
Adsorption is a physical process in which organic species are transferred onto the surface of a solid adsorbent. Adsorption is a particularly attractive control method as it can handle large volumes of gases of low pollutant concentrations. It is capable of removing contaminants down to very low levels.1 Removal efficiency is typically greater than 95%. The most frequently used adsorbent in the organic compound applications is activated carbon, although zeolites and resins are also used.
Adsorption is the most widely used solvent-recovery technique and is also used for odor control. The latter application is necessary to meet statutory air pollution control requirements. Depending on the application, adsorption can be used alone or with other techniques such as incineration.14
Solvent recovery with adsorption is most feasible when the reusable solvent is valuable and is readily separated from the regeneration agent. When steam-regenerated activated-carbon adsorption is employed, the solvent should be immiscible with water. If more than one compound is to be recycled, the compounds should be easily separated or reused as a mixture.9 Only very large solvent users can afford the cost of solvent purification by distillation.
The advantages include the availability of long-term operating data. In addition, adsorbers can handle varying flow rates or varying concentrations of organic compounds. The main disadvantage of adsorption is the formation of a secondary waste, such as the spent adsorbent, unusable recovered organic compounds, and organics in the waste water if steam is used for regeneration. Secondary waste may require off-site treatment or specialist disposal.12 (see Table 13.12)
In addition to air, moisture, and photochemical stability, thermal stability is an important aspect of improving the economics of the process. The occurrence of thermally induced polymerization or decomposition reactions results in a loss of solvent recovery potential, a need for specialized facilities for the treatment and post-purification of solvent and product streams, and poor flexibility in the optimization of the thermal profile of the process (solvent extraction and extractive distillation steps). N-Methyl pyrrolidone has been shown to be chemically and thermally stable in the Arosolvan process. Sulpholane is reported to be stable to 493 K and undergoes some decomposition at 558 K [23]. In the sulpholane process, the influence of oxygen on solvent stability in the form of minor oxidative degradation has been observed under normal operating conditions. Consequently, the exclusion of air from the feed to the extraction unit has been advocated for this process, together with the inclusion of a solvent regenerator unit. The latter operates by removing oxidized solvent from a small side-stream of the circulating solvent that is directed towards the solvent regenerator unit [16]. Ionic liquids exhibit excellent thermal stability, and their lack of sensitivity to oxygen would be advantageous with respect to the processing and recovery of the solvent.
Pfizer has redesigned the synthesis of several of its pharmaceutical products to reduce generation of hazardous waste. Changes were made in the synthetic route to sildenafil citrate (see Fig. 9.7), the active ingredient in Viagra® (Dunn et al., 2004), which resulted in a more efficient process that required no solvent extraction and recovery steps (see Fig. 9.8). The E-factor (Sheldon, 1992) for the process is 6 kg waste/kg product, which is substantially lower than an E-factor of 25–100, which is typical of pharmaceutical processes. Furthermore, all chlorinated solvents had been eliminated from the commercial process. During the medicinal chemistry stage in 1990, the solvent usage was 1816 L/kg; the optimized process used 139 L/kg solvent, which was reduced to 31 L/kg during commercial production in 1997 and to 10 L/kg with solvent recoveries. Pfizer plans to replace t-butanol/t-butoxide cyclization with an ethanol/ethoxide cyclization. Combined with other proposed improvements, this is expected to increase the overall yield from 76% to 80% and further reduce solvent usage and organic waste.
A first point of economic comparison is the variable cost requirements of each process. Here, variable costs are defined as the sum of all raw materials costs plus the utilities cost for conversion of raw materials to product. All labor, overheads and depreciation costs are not included. On a variable cost basis, both the diacetate and diphenate routes show a distinct advantage over the acid chloride route. The largest component of the cost differential results from the high cost of the acid chloride monomers relative to the free acids. The second largest component arises because the acid chloride process inherently uses greater solvent volumes than the other two routes. Solvent losses which invariably occur contribute to increased variable cost as the solvent recovery processes are not completely efficient. Variable cost differences between the diacetate and diphenate processes are not very large. Both processes can be thought of as variations to reacting free diphenol with the free diacids. In the diacetate variation, acetic anhydride is consumed in forming the diacetate, but some of this cost is recouped by selling acetic acid — the process by-product. In the diphenate route, phenol is first consumed in monomer preparation, then recovered during the polymerization. The variable cost of the diacetate route may be slightly higher than that of the diphenate route due to the conversion of anhydride to acetic acid, but this disadvantage can be mitigated depending on the phenol recovery/recycle efficiencies in the diphenate process.
Secondly, the capital investment required to construct facilities to practice each of the three process technologies can be compared. The acid chloride process is a low temperature, atmospheric pressure process and process fluid viscosities are low. Thus, standard design reaction equipment with low cost supporting utilities are used in the reaction area. However, polymer recovery would generally be accomplished by precipitation, washing and drying followed by extruder pelletization — operations which are capital intensive. Also, extensive solvent recovery is required in the acid chloride process, again leading to increased capital cost. Both the melt or solution diacetate and diphenate processes on the other hand are high temperature, high vacuum processes where process fluid viscosities reach very high values. For these processes, polymer reactors will require some special design features particularly with respect to agitation and heat transfer. Supporting utilities will be rather capital intensive. To balance these costs, however, product recovery is expected to be relatively simple, requiring only one or two melt processing operations most likely using a thin film polymer processor followed by an extruder. Solvent recovery requirements would be modest for the diacetate process but somewhat more costly for the diphenate process where large quantities of phenol (especially from monomer production) will require purification prior to recycle. Some difference in capital investment required for monomer production in the diacetate and diphenate processes is also expected. Diphenyl ester production is less attractive due to the more extreme reaction conditions required and the large phenol recycle streams. However, even with the noted differences, it is estimated that any of the three described processes could be built for approximately the same dollar amount per annual pound of polymer capacity at the 15 Mlb year−1 scale (1 kg = 2.2 lb).
Paper Machine
On the paper machine, the size press is used to apply surface size to dried paper.182,183 Starch is the most frequently used binder in surface sizing. Besides raising surface strength, starch also imparts stiffness, lowers water sensitivity, reduces dimensional changes and raises air leak density of the sheet. In conventional practice, the sheet passes through a pond of starch dispersion held above the nip between two large rotating cylinders. In the nip a high, transient, hydrostatic pressure is developed. Excess starch dispersion is drained from the ends of the nip. The surface size is transferred to paper by capillary penetration, pressure penetration and by hydrodynamic force during nip passage.
The quantity of starch transferred to paper by a size press depends on several factors: concentration of dispersed starch in the surface size; viscosity of the starch dispersion; diameter of the size press rolls; size press pond height; cover hardness of the size press rolls; size press nip loading pressure; paper machine speed; wet-end sizing of the sheet; and water content of the sheet. The concentration of starch in the surface size liquid can range from 2% to ∼15%, depending on product requirements. Frequently, pigments and other materials are added, which further increases total dispersed and suspended solids content. The viscosity ranges from water thin to several hundred cP (mPa·s).
Viscosity of the starch dispersion is the primary rate-determining parameter for dynamic sorption of starch into paper during surface sizing. Surface size penetration into the capillaries of paper proceeds in lateral and normal directions. Lateral flow takes the shape of an ellipse, according to the bias of fiber orientation in the machine direction.184 Contributions by wetting and capillary penetration decrease with increasing paper machine speed, while the contribution by hydrodynamic force increases with speed. As a consequence, starch pick-up will pass through a minimum at a specific speed. The hydrodynamic force depends on the angle of convergence (which is determined by the diameter of the rolls), on the nip length (which is influenced by the hardness of the roll covers), on the paper machine speed and on the opposing loading force between the two rolls. High liquid viscosity, large roll diameter, soft roll covers and high paper machine speed increase starch transfer, while high nip pressure counteracts these drivers. Starch cationization has no effect on pressure-driven penetration, provided the hydrostatic pressure is high and the viscosity of the dispersion is low.
The transferred liquid penetrates into the sheet according to the void space between fibers and pigment particles. During drying, starch attaches to the fibers and pigment, and reinforces the sheet by ‘spot welding’ and bridging between paper constituents. The ultimate location of the starch in the sheet can be affected by chromatographic partitioning behind a front of water that advances into the sheet. This effect will primarily occur in heavyweight paper and board and may lead to a gradient in starch concentration in the sheet from the surface to the interior and a weakening of internal bond at the ultimate location of free water. Starch application to the sheet induces some desizing due to coverage of hydrophobic patches by hydrophilic starch.
Application of surface size to paper carries with it the transfer of a substantial quantity of water. As an example, surface sizing of a 75 g/m2 (50 lb/3300 ft2) sheet (with 1% residual water content) by a 5% starch solution for a coat weight of 1.5 g/m2 (1 lb/3300 ft2/side) will raise the water content of the sheet to 43%. This large quantity of water will weaken the paper. Web breaks at the size press can occur, particularly when the sheet is also weakened as a result of edge cracks or holes.
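The ~43% figure can be reproduced with simple mass-balance arithmetic. The sketch below assumes (as seems to be intended) that the quoted coat weight is dry starch applied to each of the two sides from the 5% solution.

```python
# Mass balance behind the ~43% figure, assuming 1.5 g/m2 of dry starch is applied
# to each of the two sides of the sheet from a 5% (w/w) starch solution.
basis_weight = 75.0        # sheet weight entering the size press (g/m2)
initial_moisture = 0.01    # 1% residual water in the dried sheet
solids_fraction = 0.05     # 5% starch solution
dry_coat_per_side = 1.5    # dry starch pick-up per side (g/m2)
sides = 2

dry_starch = dry_coat_per_side * sides              # 3 g/m2 of dry starch
solution_applied = dry_starch / solids_fraction     # 60 g/m2 of 5% solution
water_applied = solution_applied - dry_starch       # 57 g/m2 of water carried in

water_total = basis_weight * initial_moisture + water_applied
sheet_total = basis_weight + solution_applied
print(f"Sheet moisture after the size press: {water_total / sheet_total:.0%}")  # ~43%
```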
Surface sizing can induce structural changes in the paper sheet185 due to the interaction of water sorption (which causes a relaxation of internal stresses) and machine direction tension (which increases anisotropy and creates additional stresses). Anisotropy can be lowered by reducing tension on the web during sheet passage through the size press and subsequent dryers, and by raising the moisture content prior to the size press.
When surface-sized paper leaves the size press, it will cling to a roll and has to be pulled off. The separation force due to film splitting depends on the free film thickness, its cohesiveness, and the rheological properties of the surface size, especially its viscoelasticity. Transfer defects, such as ribbing, orange peel, spatter or misting may result. It is important to control the starch viscosity, to use the correct take-off angle and to apply appropriate web tension. Surface-size splashing can occur due to the converging motion of paper sheet and roll surfaces in the pond and fluid rejection at the nip. Best pond stability is obtained at high or low viscosity, while intermediate viscosity is most prone to induce pond instability.
The same basic paper machines used to produce writing and printing paper are also used to form paperboard. However, modern paper machines are limited in their ability to produce a single-layer paper sheet with a grammage above 150 g m−2. There are a number of reasons for this limitation. Primarily, thicker single-layer sheets are more difficult to dewater, requiring excessive reductions in machine speed. Furthermore, the increased drainage forces applied to thicker sheets in the forming section would cause greater fines removal from the bottom of the sheet, resulting in a rougher surface. The topside of a very thick sheet would also be adversely affected since paper is formed on fourdrinier machines layer by layer from the wire side up, which would allow extra time for the fibers in the top layer to flock and produce a ‘hill and valley’ appearance. The combination of these two effects would produce an unacceptably two-sided product.
Manufacturing multilayered paperboard from separately formed sheets provides a solution to the above-mentioned problems. The forming sections of paperboard machines are composed of two, three, or even four formers that bring individual sheets together at the wet press. Paperboard machines are for this reason large and complex, having heights that are two to three times greater than single-former machines. Any one of the former sections in a multilayer machine can be either a traditional fourdrinier or a modified fourdrinier equipped with a top-wire unit for additional dewatering capacity. The use of different furnishes in each former produces a final sheet that is engineered for specific stiffness and smoothness requirements.
Although initially forming two to three separately formed sheets of paper, a multilayer machine forms a single sheet of paperboard when the individual sheets of paper are combined together in the wet press. The individual single-layered sheets prior to the wet press are ‘vacuum dewatered’ with a typical consistency of 20% (80% moisture) and are simply assemblages of fibers held together by capillary forces exerted by the continuous matrix of water surrounding the fibers. When the sheet continues its progress through the wet press and the dryers, this continuous matrix of water is decreased and the fibers are progressively drawn together through surface tension. Eventually, at the end of the drying process with a final moisture content of 4–8%, the surface tension forces between individual fibers will produce pressures sufficiently high to form fiber-to-fiber hydrogen bonds resulting in a mechanically strong sheet. During multilayer forming, a single sheet of paperboard is formed from the individual sheets of paper by merging the water matrices of each sheet into a single, hydraulically connected matrix in the wet press. The net result is that the multilayer sheet continues through the wet press and dryer section forming fiber-to-fiber bonds inside layers and between layers as if they were initially formed together. Theoretically, the fiber-to-fiber bonding between separately formed layers will be identical to fiber-to-fiber bonding within a single layer. Differences in interlayer bonding strength (measured by z-direction strength tests) will be found when the individual sheets are wet pressed at moisture contents lower than what is necessary to form a hydraulically connected matrix. (z-direction strength is the maximum tensile force per unit area which a paper or paperboard can withstand when applied perpendicularly to the plane of the test sample.)
The advantage of manufacturing a multilayer sheet is that key paper properties can be engineered into the paperboard that would not be obtainable by single-layer forming. Special top layers can be incorporated that are white and smooth, therefore, having excellent printing properties. Middle layers can be used that are bulky and thus inherently thicker producing the stiffest possible board. These middle layers can also contain recycle fibers or pulp fibers of lower quality that can be covered or masked by higher quality top and/or bottom layers.
Although starch is usually added at the wet end of the paper machine as a liquid feed directly to the furnish, other systems which place the starch directly on the formed sheet while it is still on the wire of the Fourdrinier machine or on the felt of the cylinder machine may be used. Advantages claimed are improved retention and better distribution of starch throughout the sheet, while permitting the use of low-cost unmodified starch.
In one system, a solution of cooked starch or a dispersion of starch granules is sprayed from nozzles directly onto the wet-web of fibers. By varying concentration, spray pressure, and spray location, a variety of effects can be achieved (24, 25). Three types of spray systems are in use: high-pressure air atomization, high-pressure airless atomization, and low-pressure airless atomization. With high-pressure systems, an electrostatic assist is used to prevent loss owing to misting (26).
In another system, low-density starch foam is applied directly on the wet-web immediately before it enters the wet press. The foam is mechanically broken at the press nip, and the starch is dispersed through the sheet. By controlling foam density, bubble size, and starch concentration, a wide variety of results can be achieved (27). As in the spraying system, very high retentions are possible, and low-cost unmodified starch may be used.
In another system, a thin curtain of liquid is applied to the wet-web (28) for high retention of chemicals, including starches. This system is claimed to be suitable for addition of starch to multi-ply paperboard where it increases ply bond strength.
Tubes and pipes in technical and everyday use
In the beginning was the hollowed-out tree trunk, one of the first tubes to be crafted by human hand. With a vast array of models in the plant world to inspire him, Homo sapiens had a much easier job inventing the tube than the wheel, for which, by contrast, nature had no example to offer. Bamboo and reed are just two examples of plants with hollow stalks. Nature already knew the value of the tubular form, which combines high stability with the capacity to transport substances essential for growth, such as water and nutrients, out of the earth.
In technical terms, a tube or pipe is a cylindrical, hard hollow body which usually has a round cross-section but can also be oval, square, rectangular or more complex in profile. It is used on the one hand to convey liquid, gas and solid matter and, on the other, as a construction element. Whatever its purpose, the term covers all sizes and diameters, from the smallest needle pipes right up to wind tunnels. No other profile shape with the same material cross-section has such a high flexural strength, which is what makes the tube so important as a load-bearing element in building.
Tubes for transporting purposes
In the past, people always tried to settle close to water. As the size of the settlements grew, it became increasingly difficult to get the water from the source - the spring, pond, river or lake - to the different dwellings. At first, people used open conduits - initially simple trenches, later stone canals. When the springs and sources were exhausted, aqueducts were used to carry water from the mountains into the towns. As early as around 300 B.C., the Romans were transporting water from the Campagna into their capital, and some of their impressive waterways can still be marvelled at in modern-day Europe.
Later, the open canals were covered over and used as closed conduits - and thus the pipeline was born. People were also quick to realize the benefits of closed pipes over open canals for removing waste water. Early pipe materials included wood and stoneware (fired clay), but also easy-to-work metals such as bronze, copper and lead. The first closed pipelines were made around 4,000 years ago of fired clay. The oldest metal pipelines date back to around 200 B.C., first made of bronze and later lead. Lead pipes were cast and chiefly used to transport water. Copper pipes meanwhile were made from chased copper plate which was rolled and subsequently soldered together.
The advent of an economical method of producing large quantities of cast iron in the 14th century laid the foundation for the manufacture of iron pipes. Gunsmiths and cannon-makers were amongst the first to produce iron pipes. Cast iron pipes were used as early as the 15th century to carry water - some dating back to the 16th century are still in use today. Cast iron pipes also accompanied the development of a public gas supply network, for which compression-proof pipes were a matter of safety and therefore absolutely essential.
As more economical steelmaking methods were developed, an opportunity opened up for this material to be used for pipes. The first were forge-welded out of hoop steel, a method already known to gunsmiths in the Middle Ages. Around 1880, the invention of cross-rolling by the Mannesmann brothers also made it possible to produce seamless pipes and tubes. With their thicker walls, seamless pipes offered greater stability at a relatively low weight. Oil-prospectors used such pipes to reach deeper reservoirs and by doing so were able to satisfy the growing demand for mineral oil which accompanied the early days of motorisation. The fact that mineral oil could be transported economically over long distances through a pipeline pushed up the demand for steel pipes even further. Soon, pipelines came to be the biggest market in this area, with demand reaching several million tonnes of welded and seamless pipes every year.
The crucial importance of how a pipe is made for the economic efficiency and environment-friendliness of industrial plant can be illustrated with the contemporary example of seamless boiler pipes with inner ribs. For years the power industry has been aiming to reduce fuel consumption and thereby cut CO2 emissions by stepping up efficiency. This can be done by working with higher operating pressures and temperatures. Consequently, plans have been made to set up new power plants in the first decades of the next century, which will run with pressure levels of up to 350 bar (today's maximum is 300 bar), at operating temperatures of around 700 °C (as opposed to 600 °C) and with efficiency increased from its current 40% to 50%. Operating parameters of this kind can only be used with suitable products and materials, of which seamless boiler pipes with inner ribs are one example. On account of their internal geometry, these pipes substantially improve the heat transfer between the heating medium and the vapour phase on the inside of the pipe.
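To put those efficiency figures into perspective, here is a minimal back-of-the-envelope sketch in Python; the 40% and 50% values come from the text above, and everything else is illustrative rather than plant data.

# Rough estimate of the fuel/CO2 saving from raising power-plant efficiency.
# The 40% and 50% figures come from the text; this is illustrative arithmetic only.
eta_old = 0.40   # current net efficiency
eta_new = 0.50   # planned net efficiency
# Fuel energy needed per unit of electrical output is 1/efficiency,
# so the relative saving in fuel (and roughly in CO2) is:
saving = 1 - (1 / eta_new) / (1 / eta_old)
print(f"Relative fuel and CO2 saving: {saving:.0%}")   # -> 20%

In other words, roughly a fifth less fuel has to be burned for the same electrical output, which is where the emissions saving comes from.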
Pipes made of nonferrous metals and plastics
Thanks to its good corrosion resistance, copper can be used to make pipes for the chemical industry, refrigeration technology and shipbuilding. Alongside their application for installation purposes, the usually seamless copper pipes are also used in condensers and heat exchangers. For corrosive materials, low temperatures or stringent demands on the purity of the material carried by the pipe, aluminium and aluminium alloys are used in pipe construction. Meanwhile, thanks to its high resistance to many aggressive materials, titanium is well-suited to use in chemical engineering.
Plastics belong to the group of newer pipe materials. With the development of methods for producing plastics on an industrial scale in the 1930s, it also became possible to manufacture plastic pipes economically. By the middle of the 1930s, plastics were already being used in Germany to make pressure pipelines. Among the chief advantages of plastics are their high corrosion resistance and a substantial chemical resistance to aggressive media. Moreover, the smooth surfaces mean that plastic pipes are not prone to incrustation, which can have a very detrimental effect on their conveying capacity. Pipes supplying drinking water are mostly made of polyethylene (PE) or polyvinyl chloride (PVC). Like ABS (acrylonitrile-butadiene-styrene copolymer) plastics, these two materials are also used for gas pipelines. Thermoplastic materials - alongside PE and PVC these include PP (polypropylene) and PVDF (polyvinylidene fluoride) - can also be used for industrial pipelines. Beyond these, PB (polybutene) and PE-X (cross-linked polyethylene) are also widespread in pipe-making. Plastic pipes find application in areas such as heating technology, shipbuilding, underwater pipelines (the crossing below a river floor from one bank to the other), irrigation and drainage plant, and well-building.
The right choice of material has a crucial bearing on the economic efficiency and safety of a pipe system. Materials therefore have to be selected according to the demands of each specific application. In steel boiler construction, for example, pipes must be made of steel with high temperature stability plus heat and scaling resistance, while special corrosion resistance is all-important in the chemical and foodstuffs industries. Meanwhile, the mineral-oil processing industry requires heat-proof or press-water-resistant steels for its pipes, whereas gas liquefaction and separation need materials which retain special strength at low temperatures. This broad and highly diversified range of requirements has put a fantastic array of materials to use in pipe-making. Alongside the iron and steel, nonferrous metals and plastics mentioned above, these also take in concrete, clay, porcelain, glass and ceramics.
In addition to liquids and gases, solid matter - either broken down as dust or mixed with water in slurry form - is also pumped through pipelines. Gravel, sand or even iron ore can be conveyed in this manner. Pneumatic transportation of grain, dust and chips through pipes is also a widespread practice. Pneumatic tube conveyors, which similarly work with air, are another important mode of transporting solid matter.
Pipes may be several meters in diameter and pipelines many kilometers in length. At the other end of the scale are conduits with tiny, barely perceptible dimensions. One example of their use is as cannulas in medicine - a collective term referring to instruments with a variety of applications, including infusions, injections and transfusions. Their outer diameter ranges from over 5 millimeters to as little as 0.20 millimeters. Cannulas are made of high-quality grades of stainless steel, brass, silver or nickel silver (an alloy of copper, nickel and zinc, sometimes admixed with traces of lead, iron or tin), but also plastics such as polyethylene, polypropylene or Teflon. Often, different materials are combined with one another to produce the individual components. These tiny tubes must have extremely pronounced elastic properties. They may bend but under no circumstances snap. Their surfaces are often nickel-plated and always highly polished, sometimes even on the inside. The best-known cannulas are hypodermic needles which, in their most common form as sterile disposable syringes, guarantee aseptic use without costly preparation for reutilisation.
Tubes for construction
No matter where we look in our cities today, we can be sure to see tubular steel constructions. They have become an indispensable element of modern building technology. Once again, we took the idea from nature: in tube-shaped straws, bamboo shoots, quills and bones, Mother Nature demonstrated the successful marriage of beauty and function. Yet these excellent static properties remained unexploited until the advent of welding technology made it possible to connect virtually all dimensions of pipes perfectly and with the necessary interaction of forces for use as construction elements.
As an extremely lightweight building element, steel tube combines high strength with low weight. Steel tubes are used as deck supports in shipbuilding, supports in steel superstructures and binders in building construction. They are used as tubular and lattice masts for overhead and overland transmission lines, for trains and trams, and for lighting. Bridges, railings, observation towers, diving platforms, television towers and roof constructions in halls or sports stadiums are all further examples. Steel tube is also a popular building element for constructions in temporary use, such as halls, sheds, bridges, spectator stands, podiums and other structures for public events, supporting structures and scaffolding, from the small-scale for house renovation right up to building scaffolds.
In plant engineering, steel tube is used to make ladders, shelves, work tables and subframes for machinery and plant. Steel tube also found its way as a construction element into precision components for machinery and equipment. Shafts and rolls or cylinders in hydraulics and pneumatics are just two examples. Beyond these applications, a great volume of steel tube is used in the cycle industry, camping equipment manufacture, the furniture industry, vehicle and car making and the domestic appliances industry.
Be it on water, over land or in the air, the various modes of transport would be lost without pipes & tubes. Pipes and tubular construction elements are to be found in ships, planes, trains and motor vehicles. A great variety of pipes and tubular profiles are used in car making, both in connection with the motor and with the chassis and bodywork sections. The most recent developments put them to a far more varied range of uses than before, from air suction pipes and exhaust systems through chassis components right up to side-impact tubes in doors and other safety features. One German car maker's new lightweight concept takes as its basic subassembly a three-dimensional frame made up of complex aluminium extruded sections joined together with the aid of pressure-diecast intersections.
Pipes in everyday use
We come into contact with pipes and tubes on a daily basis. It starts in the morning when we go to clean our teeth and squeeze the toothpaste from its tube, which is nothing other than a tube-shaped flexible container. We write notes with a pen, comprising one or more tubes with a smaller tube - the cartridge or refill - inside it. This is the modern equivalent of the quill, a pointed and split tube used in ancient times as a writing instrument and still used today for Arabic script.
We are surrounded everywhere we go and on a virtually constant basis by seamless pipe & tube, whether at home, on the move or at work. They take the form of lamp stands and furniture elements in chairs or shelves, curtain rails, telescopic aerials on portable and car radios, and rods on umbrellas or sunshades. And when we water the plants or hang out the washing, tubes are our constant companion - on the watering can or the clothes-horse. Pipes transport electricity, water and gas directly into our homes. Tubes protect visitors to the Duesseldorf Trade Fair Center from the rigours of the Rhineland weather. Pipe constructions are responsible for a pleasant indoor temperature and prevent the hall roofs from falling on our heads. Civil engineers and architects choose special section tube constructions for windows and doors in preference to other solutions. Tubes even have a role to play in our leisure time, providing us with bicycles, training apparatus and sports equipment.
Musical pipes
Musical instrument-making would be unthinkable without welded pipe & tube. The tuba illustrates the connection particularly well: the name of this brass instrument is nothing other than the Latin word for tube. Other brass and pipe instruments also take the tube form. The reed used in a variety of wind instruments such as the clarinet, saxophone, bassoon or oboe is a flexible piece of cane which is fixed into the mouthpiece of the instrument or acts as a mouthpiece itself. Organ pipes also rely on the tube shape to create their sound. They are made of lead and tin, zinc or copper and are still crafted today according to a centuries-old tradition.
CD stands in the shape of organ pipes make for an original link between two musical worlds. These CD stands are just under two meters in length, accommodate up to 50 CDs and, if required, can be supplied with interior lighting. Normally out of sight but critically important for good sound quality are the bass-reflex pipes found in loudspeakers. With the proper dimensions in length and diameter, these pipes help to reproduce low-pitched tones without any distortion as a result of unwanted flow noise.
Through square pipe & tube flows the lifeblood of progress, and without them our lives would not be nearly as comfortable. They make everyday life easier, safer, more attractive, more varied and more interesting. More to the point, though, they have become indispensable for our existence, shaping the development of our lives to lasting effect in the past and undoubtedly continuing to do so in the future.
All You Need to Know About Quartz Countertops
Beautiful, durable, easy-care quartz is among the most popular countertop materials available—but it is pricey. If you’re considering quartz for your kitchen or bathroom, first get the 411 on this trendy topper before you buy. This complete countertop primer will set you up with all of the necessary information on selecting and caring for quartz countertops, so you can make a smart decision and enjoy your work surface for years to come.
A visit to a kitchen showroom nowadays will show you a dazzling array of quartz countertop designs and patterns that remarkably mimic real marble and other natural stone. But quartz has come a long way! First appearing in Italy in the 1960s, these countertops were developed—by combining ground quartz particles with resins into a slab—as an alternative to stone that wouldn’t easily crack or break. While the resins added just enough flexibility to do the trick, early quartz countertops were a dull-looking cream and tan. Cutting-edge improvements in solid-surface technology have taken pure color quartz stone slab from functional to fabulous. With an abundance of finish choices and endless combinations of color and edge styles, you’ll likely find something stunning that suits your home.
Not only will you appreciate the look of quartz, you’ll find it remarkably easy to maintain—unlike marble and natural stone, which require a special sealant and can be finicky to care for. Quartz contains 90 to 94 percent ground quartz and 6 to 10 percent polymer resins and pigments, combined to produce a granite-hard slab that can duplicate the look of mesmerizing marble swirls or earthy natural stone, without the maintenance. Quartz also resists scratching and cracking to a greater degree than many natural countertops, ranking a “7” in hardness on the Mohs scale (developed in 1812 by Friedrich Mohs to rate mineral hardness). Marble, in comparison, ranks only a “3.”
A note to homeowners in the market to remodel: When exploring countertop options, make sure not to confuse quartz with quartzite. Quartz is engineered with pigments and resins, while quartzite is actually sandstone that, through natural metamorphosis, was exposed to intense heat, which caused it to solidify. Mined from large stone quarries and cut into solid slabs, quartzite is also available for countertops—but, unlike quartz, it must be sealed before use and again once or twice a year thereafter.
Thanks to its non-porous nature, quartz is mold-, stain-, and mildew-resistant, making it a breeze to keep not merely clean but also germ- and bacteria-free. Quartz also resists heat damage—up to a point. Manufacturers market quartz as able to withstand temperatures up to 400 degrees Fahrenheit (one reason it works well as fireplace surrounds). But “thermal shock” can result from placing a hot pan straight from the oven or stovetop onto a cold quartz countertop, which can lead to cracking or discoloring. And while quartz does resist staining because liquids can’t penetrate its surface, it’s not 100 percent stain-proof. Messes should be cleaned up quickly to best preserve quartz countertops’ original color.
The biggest downside to quartz, however, is cost. While a preformed or laminate countertop will set you back a few hundred dollars, quartz countertops cost between $70 and $100 per square foot, installed, comparable to the price of natural stone countertops. For a mid-size kitchen, you can easily spend a few thousand dollars for quartz.
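To see how that installed price adds up, here is a minimal Python sketch; the $70 to $100 per square foot range comes from the text, while the 40 square feet of counter space is an assumed figure for a mid-size kitchen, not a quote from any installer.

# Rough installed-cost estimate for quartz countertops.
# The per-square-foot range comes from the text; the counter area is assumed.
area_sq_ft = 40                  # assumed mid-size kitchen counter area
low, high = 70, 100              # installed cost per square foot (USD)
print(f"Estimated cost: ${area_sq_ft * low:,} to ${area_sq_ft * high:,}")
# -> Estimated cost: $2,800 to $4,000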
If you’re planning a backyard kitchen, steer clear of quartz altogether. It’s not suitable for outdoor installation, as the sun’s UV rays can break down the resin binders and degrade the countertop, leading to fading and eventual warping.
With such a vast selection, making up your mind can be a challenge! So bring home a few quartz samples from a kitchen showroom before settling on a specific color or design. Under your own lighting, and against the backdrop of your cabinets and walls, you’ll be better able to choose a pattern and design that complements your kitchen décor. It helps to have a good idea of what you want your finished kitchen to look like before you buy. You can browse through design books at any kitchen center, or get ideas from show homes and home-design magazines and websites. As you plan, keep these points in mind:
Seams: If your counter is longer than 120 inches, or if it involves a complex configuration, Marble Look Quartz Stone Slab may have to be fabricated in more than one section, which means you’ll have one or more seams. Seams are typically less visible on dark-toned quartz but can be quite noticeable on light-toned or multicolor countertops, such as those with obvious veining or marbling patterns.
Thickness: Countertop thickness ranges from ½ inch to 1-¼ inch, depending on style, brand, and size. If you’re ordering a large countertop or want an elaborate edge design, the fabricator may suggest a thicker slab. If your heart is set on a thin countertop but your kitchen is large, expect to have one or more seams. Thickness also depends on custom features, such as integrated drain boards and elaborate edge profiles.
Design Details: Custom designs in a wide array of colors are available, from neutral grays, off-whites, and subtle tans to bold blues, bright yellows, and striking solid blacks. In addition to shade, you can choose from quartz made from small particles for a smooth appearance, or from larger grains for a flecked look. The surface can be sleek and glossy or feature a flecked, pebbled, embossed, or even suede appearance.
Edge Ideas: Custom edge profiles in complex designs bring distinction to your cook space but add to the final cost. You can opt for a bold square countertop edge, a chiseled raw-edge look, or select a softer, rounded bullnose corner. A reverse waterfall edge resembles the shape of crown molding and adds a touch of traditional elegance, while contemporary edges, including slanted, mitered, or undercut create the illusion of a thinner slab. Ogee (S-shape) is a popular edge design that fits just about any decor.
Bathroom Buys: Selecting a quartz countertop for a bathroom is slightly different from buying one for your kitchen. Bathroom vanities come in standard sizes, so you can purchase pre-made vanity countertops. Many come with pre-molded sinks or pre-cut holes to accommodate drop-in sinks. Bathroom vanity quartz countertops range from $400 to $1,000 depending on length, and installation for them is more DIY-friendly.
Professional installation is highly recommended for quartz countertops in kitchens, due to the custom nature of cabinet configuration and the weight of the slabs, which often require multiple workers just to lift. To protect your investment, installers should be certified to mount the specific brand of quartz you purchase. Many quartz countertops come with 15-year or even lifetime warranties, but often only when installed by certified professionals.
In this exclusive blog section of Alicante Surfaces, we try to share as much of the information and knowledge about Quartz Countertops as we have gained in our past 20+ years of experience in the Tiles & Stones industry. Our blog articles are mainly focused on Quartz Countertop applications, their usage and the exclusive range of products that we offer.
Material - Quartz slab or Engineered quartz stone slab is a composite material made of crushed stone bound together by a polyester resin. And we at Alicante procure the best Quartz Raw Materials for the manufacturing of grain quartz stone slab. Our Quartz slabs are highly popular and mainly used on the kitchen countertops.
Composition - Our manufactured premium quartz slabs consist of 93% quartz by weight and 7% resin. The binding resins, which are available in various types, are chosen by quartz manufacturers according to their needs. Stone is the major filler, although other materials like colored glass, shells, metals, or mirrors are also added to create different kinds of designs.
Preference - Quartz slabs are becoming more and more popular day by day and are the preferred choice for Kitchen Countertops over Granite because of their anti-bacterial nature, the low maintenance they require, and their unique designs and colors which give a marble look.
Application - Alicante Quartz slabs are the perfect option for hectic kitchens and bathrooms. Also, our quartz countertops are extremely durable, practical, and low maintenance. Our products are tough, versatile, and easy to clean. The most common quartz application is kitchen countertops. They are hassle-free and very easy to clean and apply.
Size & Color Range - We offer a wide range of sizes, starting from 140" x 77", known as the Super Jumbo Size, and also 126" x 63" in 2 CM & 3 CM thicknesses. Our Quartz slabs offer the widest choice in textures, tones, veins, and finishes. A variety of designs, sizes and collections are available in sparkling quartz stone slab. Our most famous ranges are the Calacatta, Carrara, Pure White & Sparkle/Diamond Series.
USB Power Delivery is the fastest way to charge iPhone and Android devices
With the current generation of smartphones, with their much faster processors, vivid high-resolution displays, and always-on connectivity, demands on battery performance are now higher than ever.
You may have noticed that, while you are on the road, you're quickly running out of juice. If you have this problem, a portable battery and a more capable PD fast charger than the one that may have come in the box with your device may be the solution.
But not all portable batteries are the same, even though they might use similar Lithium Polymer (LiPo) and Lithium-Ion (Li-ion) cells for capacity and look very much alike. Plus, modern smartphone hardware from Apple and various Android manufacturers supports faster charging rates than what was previously supported.
If you use the charger that comes in the box of the current-generation iPhone hardware, or if you buy just any portable battery pack on the market, you're going to be disappointed. Ideally, you want to match your charger, battery, and even the charging cable to the optimal charging speeds that your device supports.
There are three different high-speed USB charging standards currently on the market. While all will work with your device using a standard legacy charge mode, you will want to match up the right technology to optimize the speed at which you can top off your phone, tablet, or even your laptop. Let's start by explaining the differences between them.
Legacy USB-A 2.0 and 3.0 charging
If your Android device or accessory still has the USB Micro B connector (the dreaded fragile trapezoid that's impossible to connect in the dark), you can fast-charge it using an inexpensive USB-A-to-USB Micro B cable.
If the device and the charge port on the 20W USB C PD fast charger white both support the USB 2.0 standard (pretty much the lowest common denominator these days for entry-level Android smartphones), you can charge at 1.5A/5V. Some consumer electronics, such as higher-end vape batteries that use the Evolv DNA chipset, can charge at 2A. A USB 3.0/3.1 charge port on one of these batteries can supply 3.0A/5V -- if the device supports it.
If you are charging an accessory, such as an inexpensive pair of wireless earbuds or another Bluetooth device, and it doesn't support either of the USB-A fast charging specs, it will slow charge at either 500mA or 900mA, which is about the same you can expect from directly connecting it to most PCs.
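To make those ratings more concrete, the small Python sketch below converts the 5V currents mentioned above into charging power and an idealised time to fill a phone battery. The 3,000mAh battery is an assumed example, and real-world charging is slower because of conversion losses and tapering near full charge.

# Convert USB-A charge currents into power and an idealised charge time.
# The 5 V bus voltage and current ratings come from the text; the battery
# size is an assumed example and real charging takes longer in practice.
battery_wh = 3.0 * 3.7          # ~11.1 Wh for an assumed 3,000 mAh / 3.7 V phone
for label, amps in [("USB 2.0", 1.5), ("USB 3.0/3.1", 3.0), ("slow accessory", 0.5)]:
    watts = 5.0 * amps
    hours = battery_wh / watts
    print(f"{label}: {watts:.1f} W, about {hours:.1f} h for an 11.1 Wh battery")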
Many of the portable batteries on the market have both USB-C and multiple USB-A ports. Some of them have USB-A ports that can deliver the same voltage, while others feature one fast (2.4A) and one slow (1A).
So, you will want to make sure you plug the device into the battery port that can charge it at the fastest rate if you want to top off the device as quickly as possible.
USB Power Delivery
USB Power Delivery (USB PD) is a relatively new fast-charge standard that was introduced by the USB Implementers Forum, the creators of the USB standard. It is an industry-standard open specification that provides high-speed charging with variable voltage up to 20V, using intelligent device negotiation to deliver up to 5A at 100W.
It scales up from smartphones to notebook computers, provided they use a USB-C connector and a USB-C power controller on the client and host.
Batteries and 3 port PD fast chargers that employ USB PD can charge devices at up to 100W output using a USB-C connector -- however, most output 30W because that is at the upper end of what most smartphones and tablets can handle. In contrast, laptops require adapters and batteries that can output at a higher wattage.
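The "intelligent device negotiation" mentioned above amounts to the charger advertising a list of voltage and current offers and the device picking the most powerful one it can accept. The Python sketch below is a simplified illustration of that idea, not the actual USB PD protocol; the offers shown are typical fixed profiles, and the 100W ceiling and 30W phone limit echo the figures in the text.

# Simplified illustration of USB PD source/sink negotiation (not the real protocol).
# The source advertises (voltage, max current) offers; the sink picks the best one.
source_offers = [(5.0, 3.0), (9.0, 3.0), (15.0, 3.0), (20.0, 5.0)]  # typical fixed profiles
sink_max_voltage = 9.0    # assumed phone that accepts up to 9 V
sink_max_power = 30.0     # watts the assumed phone can take

def negotiate(offers, max_v, max_p):
    best = None
    for volts, amps in offers:
        if volts > max_v:
            continue                      # skip offers the sink cannot accept
        power = min(volts * amps, max_p)  # sink draws no more than it can handle
        if best is None or power > best[2]:
            best = (volts, power / volts, power)
    return best

volts, amps, watts = negotiate(source_offers, sink_max_voltage, sink_max_power)
print(f"Negotiated {volts:.0f} V at {amps:.2f} A = {watts:.0f} W")  # 9 V at 3.00 A = 27 W here

An assumed phone that tops out at 9V and 30W would therefore settle on 9V at 3A, or 27W, even from a 100W-capable charger.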
Apple introduced USB PD charging with iOS devices with the launch of the 2015 iPad Pro 12.9 and with OS X laptops in the MacBook Pro as of 2016. Apple's smartphones beginning with the iPhone 8 can rapidly charge with USB PD using any USB PD charging accessory; you don't have to use Apple's OEM USB-C 29W or its 61W power adapters.
In 2019, Apple released an 18W USB-C Power Adapter, which comes with the iPhone 11 Pro and 11 Pro Max. Although Apple's charger works just fine, you'll probably want to consider a third-party wall charger for the regular iPhone 11 or an earlier model. The regular iPhone 11 and the iPhone SE only come with a 5W USB-A charger, which is woefully inadequate for getting your device charged up quickly.  And the current rumor mill seems to indicate that the iPhone 12 may not even ship with a charger in the box at all.
Fast-charging an iPhone requires the use of a USB-C to Lightning cable, which, until February 2019, needed Apple's OEM MKQ42AM/A (1m ) or MD818ZM/A (2m) USB-C to Lightning cables. Unfortunately, they're a tad expensive at around $19 to $35 from various online retailers such as Amazon.
There are cheaper third-party USB-C to Lightning cables. I am currently partial to USB-C-to-Lightning cables from Anker, which are highly durable and MFI-certified for use with Apple's devices.
It should be noted that, if you intend to use your smartphone with either Apple's CarPlay and Google's Android Auto, your vehicle will probably still require a USB-A to USB-C or a USB-A-to-Lightning cable if it doesn't support these screen projection technologies wirelessly. You can't fast-charge with either of these types of cables in most cars, and there is no way to pass-through a fast charge to a 12V USB PD accessory while being connected to a data cable, either.
Qualcomm Quick Charge
Qualcomm's Snapdragon SoCs are used in many popular smartphones and tablets. Its fast-charging standard, Quick Charge, has been through multiple iterations.
The current implementation is Quick Charge 4.0, which is backward-compatible with older Quick Charge accessories and devices. Unlike USB PD, Quick Charge 2.0 and 3.0 can be delivered using the USB-A connector. Quick Charge 4.0 is exclusive to USB-C.
Quick Charge 4.0 is only present in phones that use the Qualcomm Snapdragon 8xx, and it's present in many North American tier 1 OEM Android devices made by Samsung, LG, Motorola, OnePlus, ZTE, and Google.
The Xiaomi, ZTE Nubia and Sony Xperia devices also use QC 4.0, but they aren't sold in the US market. Huawei's phones utilize Kirin 970/980/990 chips, which use its own SuperCharge standard, but they are backward-compatible with the 18W USB PD standard. Similarly, Oppo's phones have SuperVOOC and OnePlus uses Warp Charge, and each issues its own compatible charger accessories if you want to take advantage of higher-wattage (30W/40W/100W) charge rates.
Like USB PD, QC 3.0 and QC 4.0 are variable voltage technologies and will intelligently ramp up your device for optimal charging speeds and safety. However, Quick Charge 3.0 and 4.0 differ from USB PD in that they have some additional features for thermal management and voltage stepping with the current-generation Qualcomm Snapdragon SoCs to optimize for a reduced heat footprint while charging.
It also uses a different variable voltage selection and negotiation protocol than USB PD, which Qualcomm advertises as better/safer for its own SoCs.
And for devices that use Qualcomm's current chipsets, Quick Charge 4.0 is about 25% faster than Quick Charge 3.0. The company advertises five hours of usage time on the device for five minutes of charge time.
However, while it is present in (some of) the USB C dual PD fast chargers that ship with the devices themselves, and a few third-party solutions, Quick Charge 4 is not in any battery products yet. It is not just competing with USB Power Delivery; it is also compatible with USB Power Delivery.
Qualcomm's technology and ICs have to be licensed at considerable additional expense to the OEMs, whereas USB PD is an open standard.
If you compound this with Google recommending OEMs conform to USB PD over Quick Charge for Android-based products, it sounds like USB PD is the way to go, right?
Well, sort of. If you have a Quick Charge 3.0 device, definitely get a Quick Charge 3.0 battery. But if you have a Quick Charge 4.0 device or an iOS device, get a USB PD battery for now.
Which battery should you buy?
Now that you understand the fundamental charging technologies, which battery should you buy? When the first version of this article was released in 2018, the product selection on the market was much more limited -- there are now dozens of vendors manufacturing USB PD products.
USB-C connectors have been designed hand-in-hand with USB-C Power Delivery, to handle these new high levels of power. USB-C circuit boards are specially designed to carry this increased wattage without being damaged or overheating, for enhanced safety to users and their devices.
Older connectors, such as USB-A, were first introduced in 1996, when much less power was needed than that required by today’s smartphones and tablets. This older technology is less suited to handle this increased wattage and may not have the ability to monitor heat and circuitry abnormalities.
Whether it’s a small phone or a large laptop, the USB C PD fast charger detects the connected device to deliver the right amount of power to charge that device as fast as possible. This ensures fast charging without delivering too much power which could damage circuitry.
The best gaming chairs in 2021
The best gaming chair is the perfect finishing touch to a modern PC gaming setup. It really will bring your whole desktop together and make it look supremely suave. Yet these gaming thrones are good for more than style points: the top gaming chairs will also give your weary back an ample place to rest. This also explains why they can be so costly—keeping you in one piece isn't a cheap endeavour. If you've spent thousands of dollars on an extreme gaming PC build, it's only fair to give your gaming chair just as much attention.
If you're simply looking for everyday comfort, the best gaming seats may seem over the top. With wannabe-racer bucket seats, and gaming chairs covered in satanic runes running rampant, we've made sure to include a few sleek office-style chairs in here too. Whichever route you go down, keep your posture in mind. Posture may be the last thing you think about when embarking on a ten-hour raid, but we implore you: Don't disregard ergonomics.
We've tested tens of ergonomic gaming chairs from today's most well-known companies to find luxurious and affordable places to park your rear. Check those out below. And if the chairs are a bit rich for your butt, then our cheap gaming chair roundup may be more up your street.
The Secretlab Titan Evo 2022 is everything we've been looking for in a gaming chair. That's why it's rightfully taken the top spot in our best gaming chair guide from the previous incumbent, the Secretlab Titan. It was an easy decision to make, though. The Secretlab Titan Evo 2022 does everything the Titan, and Omega below, can, except better.
User-friendly ergonomics make the Titan Evo 2022 a great fit for long nights gaming or eight hours tapping away for work, and that comes down to its superb built-in back support. It's highly adjustable, which means you can nail down a great fit with ease. There's also something to be said for the 4D armrests, comfortable seat rest, and magnetic head cushion.
You read that right, a magnetic head cushion. A simple solution to fiddly straps, the Titan Evo 2022 does away with all that with a couple of powerful magnets.
Secretlab also reckons its new Neo Hybrid Leatherette material is more durable than ever, though there's still the option for the Softweave fabric we've raved about in the past.
The chair is available in three sizes: S, R, and XL.
As a complete package, then, the Secretlab Titan Evo 2022 is the archetype of a great gaming chair. It is a little pricier than its predecessors, but we think it's worth the price tag. And anyways, that higher price tag is why we still recommend the Omega below for a cheaper option while stock lasts.
The Secretlab Omega is one of the most finely constructed chairs we've tested, and although it has largely been replaced by the Titan Evo 2022 above nowadays, the higher price tag of that chair might see the Omega remain a popular option for those looking to save a little cash.
From the casters to the base, the lift mechanism, armrests, and seat back, Secretlab has used some of the best materials available. The Omega was upgraded with Secretlab's 2020 series of improvements, which includes premium metal in the armrest mechanism, making it silky smooth to adjust and even more durable, and adding the company's ridiculously durable PU Leather 2.0.
The chair features a high-quality, cold-cured foam to provide support. It feels a little firm at first but gets more comfortable after extended use. The Omega stands out from the crowd with its velour memory foam lumbar and head pillows. These are so comfortable that we could smoothly fully recline the chair and take a nap if we wanted to. Though that's not a great look in the office... If you're looking to treat your body with a chair that will genuinely last, the Secretlab Omega is worth every penny.
Perhaps you've heard of the Herman Miller Embody. It occupied a top position in our best office chair roundup for a long time, but that has come to an end. Not for lack of comfort or acclaim, simply because the famed chair manufacturer has partnered up with Logitech to create something tailor-made to our gaming rumps.
Admittedly, the Logitech G x Herman Miller Embody doesn't differ much from its commercial cousin. That's hardly a mark against it, however. The Embody's cascading back support design and absurdly high quality make a welcome return but now comes with a few more flourishes to win over gamers. Specifically, extra cooling material designed to support a more active gaming position.
It's not so much the changes that make the Embody stand out as one of the best gaming chairs with a footrest going. It's what's been kept the same. The tried and tested Embody design is simply one of the best chairs for office work or gaming. It's incredibly comfortable over prolonged use, supports an active and healthy posture, and is easily fitted to your frame.
The warranty, too, is a standout feature. At 12 years, including labor, and rated to 24-hour use over that time, it's a chair that is guaranteed to last you over a decade, if not longer. So while the initial price tag may seem steep, and that it is, the reality is you're certain to get your money's worth in the long run. And your back will be thankful for it, too.
If you're the sort of person who prioritizes functionality over flash, the NeueChair is an excellent option. This isn't to say it's not stylish—quite the opposite; the NeueChair comes in a sleek, muted obsidian or flashy chrome/silver, both with bold, sweet curved supports on the back and an attractive black mesh. But, more importantly, the NeueChair is built to last, with a heavy, sturdy industrial construction. Even the chair's weight in the packaging indicates a solid piece of carefully constructed industrial art: it's heavy and substantial.
Assembling it is a breeze, as it comes in two discrete pieces and is simply a matter of inserting the casters and then pushing the two parts together. Almost every aspect of the seat is adjustable, from the armrests to the lumbar support system that lets you change the height and depth of the backrest. It's one of the best office chairs I've ever had the pleasure to sit in, and if you can afford the admittedly steep price tag, well worth the investment.
If you're a big and tall gamer, you might have noticed that there aren't many racing gaming chairs that can support your unique build. Whether the chair has too low a weight capacity, is too short, or feels like it'll break as soon as you sit in it, finding a chair for you might seem nearly impossible.
The AndaSeat Kaiser 2 screams large and in charge, supporting gamers up to 397lbs and 7ft tall. The Kaiser 2 is built on a solid steel frame with oversized bars to provide support.
Covered in premium PVC leather and extra thick memory foam cushioning, the Kaiser 2 manages to look more like a gaming chair for grownups. Available in black and a lovely maroon, no more will you have to stuff yourself into a tiny gaming chair and hope for the best. The Kaiser 2 delivers the function, comfort, and style you want in your premium gaming chair.
When buying a gaming chair, it's easy to forget your health. After all, most are advertised as luxurious, cushioned thrones that soothe your every ache as you smash the crap out of your foes in Apex Legends. But that isn't true, and for some, it's important to pick a chair that takes back support seriously. With some of the team having used it daily for almost a year, we can thoroughly recommend the Noblechairs Hero in PU leather. While not the most exciting of chairs, or the sportiest, it certainly does a good job of taking care of your back.
The Hero is easy to assemble, except for the bit where you attach the back to the seat, so make sure you have a buddy for that. It's firm and supportive, and extremely sturdy. As a word of warning: it is substantial, so if you prefer a softer chair that isn't as good for your lumbar, this maybe isn't for you.
Aside from that, it has a decent recline, can withstand frames of up to 330 lbs, and has fully adjustable wrist-rests. It's heavy but glides pretty easily on the supplied casters. It'll look just fine in both an office or gaming setup, so you're getting a chair that can do both. Not bad, if you can afford it.
Corsair's latest addition to its lineup of premium reclining gaming chairs, the T3 Rush, has gotten a much-needed facelift. The T3 Rush is an insanely comfy chair thanks to its memory foam lumbar pillow but, more importantly, uses a breathable soft fabric in place of faux leather. The benefit of this is that it retains less heat, keeping you fresh and comfy instead of sweating in your squeaky pleather.
The Rush also reclines to a ridiculous 180 degrees in case you wanted to lie back and take a comfy cat nap before you take on another marathon streaming session of Apex Legends or CS: GO.
The only major downside is that the T3 Rush mostly fits smaller-framed users. If you require a slightly larger seat, the T3 will be an uncomfortably tight fit. Other than that, the T3 Rush is an impressive-looking gaming chair that doesn't need a loud color to make a statement.
The DXRacer Master is a chair for people with money to spend, but it justifies the price by being an extremely luxuriant and comfortable chair. What's more, the DXRacer Master can be customized with modular parts (sold at an added cost) like mesh seat and backrests, leg rests, and even a rotating arm that bolts onto the base and can hold anything from a laptop to your phone.
Choosing not to invest in these extra parts won't compromise the chair itself, though, because DXRacer went all out on its features. Built-in lumbar support and an adjustable, rail-mounted headrest are great features, along with four-dimensional armrests. The microfiber leather is especially nice, and much of the chair is made of metal, which makes it feel sturdy.
It's clear that the DXRacer Master was built to last, and its understated look is great if you're not into the flashy designs seen on most other gaming chairs. But, boy, it will cost you for all this luxury: The DXRacer Master is still $80 more than the Secretlab Omega, our favorite chair. But it's worth considering if you want to go all out and get something with all the bells and whistles.
Useful information on External Gear Pumps
A gear pump is a type of positive displacement (PD) pump. Gear pumps use the actions of rotating cogs or gears to transfer fluids.  The rotating gears develop a liquid seal with the pump casing and create a vacuum at the pump inlet.  Fluid, drawn into the pump, is enclosed within the cavities of the rotating gears and transferred to the discharge.  A gear pump delivers a smooth pulse-free flow proportional to the rotational speed of its gears.
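That proportionality between speed and flow can be written as a one-line displacement equation, as in the rough Python sketch below; the displacement and speed values are assumed examples, and real pumps deliver slightly less because of internal slip.

# Ideal (theoretical) gear pump flow: Q = displacement per revolution x shaft speed.
# The displacement and speed below are assumed example values.
displacement_cm3_per_rev = 10.0   # swept volume per revolution
speed_rpm = 1500.0
q_theoretical_lpm = displacement_cm3_per_rev * speed_rpm / 1000.0  # litres per minute
print(f"Theoretical flow: {q_theoretical_lpm:.1f} L/min")  # -> 15.0 L/min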
There are two basic designs of gear pump: internal and external (Figure 1).  An internal gear pump has two interlocking gears of different sizes with one rotating inside the other.  An external gear pump consists of two identical, interlocking gears supported by separate shafts.  Generally, one gear is driven by a motor and this drives the other gear (the idler).  In some cases, both shafts may be driven by motors.  The shafts are supported by bearings on each side of the casing.
This article describes plastic gear pumps in more detail.
There are three stages in an external gear pump’s working cycle: filling, transfer and delivery (Figure 2).
As the gears come out of mesh on the inlet side of the pump, they create an expanded volume.  Liquid flows into the cavities and is trapped by the gear teeth as the gears continue to rotate against the pump casing.
The trapped fluid is moved from the inlet, to the discharge, around the casing.
As the teeth of the gears become interlocked on the discharge side of the pump, the volume is reduced and the fluid is forced out under pressure.
No fluid is transferred back through the centre, between the gears, because they are interlocked.  Close tolerances between the gears and the casing allow the pump to develop suction at the inlet and prevent fluid from leaking back from the discharge side (although leakage is more likely with low viscosity liquids).
External gear pump designs can utilise spur, helical or herringbone gears (Figure 3).  A helical gear design can reduce pump noise and vibration because the teeth engage and disengage gradually throughout the rotation.  However, it is important to balance axial forces resulting from the helical gear teeth and this can be achieved by mounting two sets of ‘mirrored’ helical gears together or by using a v-shaped, herringbone pattern.  With this design, the axial forces produced by each half of the gear cancel out.  Spur gears have the advantage that they can be run at very high speed and are easier to manufacture.
Gear pumps are compact and simple with a limited number of moving parts. They are unable to match the pressure generated by reciprocating pumps or the flow rates of centrifugal pumps but offer higher pressures and throughputs than vane or lobe pumps. External gear pumps are particularly suited for pumping water, polymers, fuels and chemical additives. Small external gear pumps usually operate at up to 3500 rpm and larger models, with helical or herringbone gears, can operate at speeds up to 700 rpm. External gear pumps have close tolerances and shaft support on both sides of the gears. This allows them to run at up to 7250 psi (500 bar), making them well suited for use in hydraulic power applications.
Since output is directly proportional to speed and is a smooth pulse-free flow, external gear pumps are commonly used for metering and blending operations as the metering is continuous and the output is easy to monitor. The low internal volume provides for a reliable measure of liquid passing through a pump and hence accurate flow control. They are also used extensively in engines and gearboxes to circulate lubrication oil. External gear pumps can also be used in hydraulic power applications, typically in vehicles, lifting machinery and mobile plant equipment. Driving a gear pump in reverse, using oil pumped from elsewhere in a system (normally by a tandem pump in the engine), creates a motor. This is particularly useful to provide power in areas where electrical equipment is bulky, costly or inconvenient. Tractors, for example, rely on engine-driven external gear pumps to power their services.
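For the hydraulic power applications mentioned above, the useful hydraulic power is simply the pressure rise multiplied by the volumetric flow. Here is a minimal sketch with assumed example values, purely for illustration:

# Hydraulic power delivered by a pump: P = pressure rise x volumetric flow.
# Assumed example values; 1 bar = 100,000 Pa.
pressure_bar = 200.0
flow_lpm = 15.0
pressure_pa = pressure_bar * 1e5
flow_m3_s = flow_lpm / 1000.0 / 60.0
power_kw = pressure_pa * flow_m3_s / 1000.0
print(f"Hydraulic power: {power_kw:.1f} kW")  # -> 5.0 kW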
External gear pumps can be engineered to handle aggressive liquids. While they are commonly made from cast iron or stainless steel, new alloys and composites allow the pumps to handle corrosive liquids such as sulphuric acid, sodium hypochlorite, ferric chloride and sodium hydroxide.
What are the limitations of a gear pump?
External gear pumps are self-priming and can dry-lift although their priming characteristics improve if the gears are wetted.  The gears need to be lubricated by the pumped fluid and should not be run dry for prolonged periods.  Some gear pump designs can be run in either direction so the same pump can be used to load and unload a vessel, for example.
The close tolerances between the gears and casing mean that these types of pump are susceptible to wear, particularly when used with abrasive fluids or feeds containing entrained solids. External gear pumps have four bearings in the pumped medium, and tight tolerances, so are less suited to handling abrasive fluids.  For these applications, universal gear pumps are more robust, having only one bearing (sometimes two) running in the fluid.  A gear pump should always have a strainer installed on the suction side to protect it from large, potentially damaging, solids.
Generally, if the pump is expected to handle abrasive solids it is advisable to select a pump with a higher capacity so it can be operated at lower speeds to reduce wear.  However, it should be borne in mind that the volumetric efficiency of a gear pump is reduced at lower speeds and flow rates.  A gear pump should not be operated too far from its recommended speed.
For high temperature applications, it is important to ensure that the operating temperature range is compatible with the pump specification.  Thermal expansion of the casing and gears reduces clearances within a pump and this can also lead to increased wear, and in extreme cases, pump failure.
Despite the best precautions, gear pumps generally succumb to wear of the gears, casing and bearings over time.  As clearances increase, there is a gradual reduction in efficiency and increase in flow slip: leakage of the pumped fluid from the discharge back to the suction side.  Flow slip is proportional to the cube of the clearances between the cog teeth and casing so, in practice, wear has a small effect until a critical point is reached, from which performance degrades rapidly.
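That cube-law relationship is why wear can seem harmless for a long time and then suddenly matter. The rough Python sketch below assumes slip is proportional to the cube of the clearance and calibrates it so that a new pump loses 3% of its flow; all the numbers are illustrative assumptions, not data for any particular pump.

# Illustration of the cube-law sensitivity of flow slip to clearance.
# Purely illustrative: slip is assumed proportional to clearance**3 and
# calibrated so a new pump (0.05 mm clearance) loses 3% of its flow.
q_theoretical = 15.0                 # L/min, assumed theoretical flow
clearance_new_mm = 0.05
slip_new = 0.03 * q_theoretical      # assumed 3% slip when new
k = slip_new / clearance_new_mm**3   # proportionality constant

for clearance in (0.05, 0.07, 0.10):           # mm, wear increases the clearance
    slip = k * clearance**3
    eff = (q_theoretical - slip) / q_theoretical
    print(f"clearance {clearance:.2f} mm -> volumetric efficiency {eff:.0%}")

With these assumed figures the efficiency stays near 97% until the clearance grows by about half, and then drops away sharply, which matches the "critical point" behaviour described above.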
Gear pumps continue to pump against a back pressure and, if subjected to a downstream blockage will continue to pressurise the system until the pump, pipework or other equipment fails.  Although most gear pumps are equipped with relief valves for this reason, it is always advisable to fit relief valves elsewhere in the system to protect downstream equipment.
The high speeds and tight clearances of external gear pumps make them unsuitable for shear-sensitive liquids such as foodstuffs, paint and soaps.  Internal gear pumps, operating at lower speed, are generally preferred for these applications.
What are the main applications for gear pumps?
External gear pumps are commonly used for pumping water, light oils, chemical additives, resins or solvents.  They are preferred in any application where accurate dosing is required such as fuels, polymers or chemical additives.  The output of a gear pump is not greatly affected by pressure so they also tend to be preferred in any situation where the supply is irregular.
Summary
An external gear pump moves a fluid by repeatedly enclosing a fixed volume within interlocking gears, transferring it mechanically to deliver a smooth pulse-free flow proportional to the rotational speed of its gears.
External gear pumps are commonly used for pumping water, light oils, chemical additives, resins or solvents.  They are preferred in applications where accurate dosing or high pressure output is required.  External gear pumps are capable of sustaining high pressures.  The tight tolerances, multiple bearings and high speed operation make them less suited to high viscosity fluids or any abrasive medium or feed with entrained solids.
External-gear pumps are rotary, positive displacement machines capable of handling thin and thick fluids in both pumping and metering applications. Distinct from internal-gear pumps which use “gear-within-a-gear” principles, external-gear pumps use pairs of gears mounted on individual shafts. They are described here along with a discussion of their operation and common applications. For information on other pumps, please see our Pumps Buyers Guide.
Spur gear pumps
Spur gear pumps use pairs of counter-rotating toothed cylinders to move fluid between low-pressure intakes and high-pressure outlets. Fluid is trapped in pockets formed between gear teeth and the pump body until the rotating gear pairs bring individual elements back into mesh. The decreasing volume of the meshing gears forces the fluid out through the discharge port. A relatively large number of teeth minimizes leakage as the gear teeth sweep past the pump casing.
Spur gear pumps can be noisy due to a certain amount of fluid becoming trapped in the clearances between meshing teeth. Sometimes discharge pockets are added to counteract this tendency.
Spur gear pumps are often fitted with sleeve bearings or bushings which are lubricated by the fluid itself—usually oil. Other fluids that lack oil’s lubricity generally demand more stringent pump designs, including locating bearings outside of the wetted cavities and providing appropriate seals. Dry-running bearings are sometimes used. The use of simply-supported shafts (as opposed to cantilevered arrangements seen in many internal gear designs) makes for a robust pump assembly capable of handling very thick liquids, such as tar, without concern for shaft deflection.
Helical gear pumps
Similar to the spur gear pump, the helical gear pump uses a pair of single- or double-helical (herringbone) gears. Helical gears run quieter than spur gears but develop thrust loads which herringbone gears are intended to counteract. These designs are often used to move larger volumes than spur gear pumps. Helical gears produce fewer pulsations than stainless gear pumps as the meshing of teeth is more gradual compared with spur-gear designs. Helix angles run between 15 and 30°.
Both the helical and herringbone gear pumps eliminate the problem of trapping fluid in the mesh. These designs can introduce leakage losses where the teeth mesh, however, unless very tight tooth clearances are maintained. The higher manufacturing costs associated with herringbone gear pumps must be balanced against their improved performance.
Applications
External-gear pumps can pump fluids of nearly any viscosity, but speed must normally be reduced for thicker materials. A typical helical gear pump might run at 1500 rpm to move a relatively thin fluid such as varnish but would have to drop its speed nearer to 500 rpm to pump material as thick as molasses in July.
External-gear pumps generally are unsuited for materials containing solids as these can lead to premature wear, although some manufacturers make pumps specifically for this purpose, usually through the use of hardened steel gears or gears coated with elastomer. External-gear pumps are self-priming and useful in low NPSH applications. They generally deliver a smooth, continuous flow. In theory, at least, they are bi-directional. They are available as tandem designs for supplying separate or combined fluid-power systems.
These pumps are capable of handling very hot fluids, although the clearances must be closely matched to the expected temperatures to ensure proper operation. Jacketed designs are available as well.
External-gear pumps see wide applications across many industries: food manufacturers use them to move thick pastes and syrups, in filter presses, etc.; petrochemical industries deploy them in high-pressure metering applications; engine makers use them for oil delivery. They are used as transfer pumps. Special designs are available for aerospace applications. Pumps for fluid power will conform to SAE bolt-hole requirements.
External-gear pumps are manufactured from a variety of materials including bronze, lead-free alloys, stainless steel, cast and ductile iron, Hastelloy, as well as from a number of non-metals.
External-gear pumps can be manufactured as sanitary designs for food, beverage, and pharmaceutical service. The gears can be overhung, supported by bearings outside the housing with a variety of seals and packings available. Access to these internal pump components through a cover plate makes sanitizing straightforward. Gears are commonly manufactured from composites of PTFE and stainless steel as well as other plastics. Close-coupled and sealless designs are available.
External gear pumps are the least costly of the various positive-displacement pumps but also the least efficient. Pressure imbalances between suction and discharge sides can promote early bearing wear, giving them somewhat short life expectancies.
One general disadvantage that all heat preservation gear pumps share over some other positive-displacement pump styles – vane pumps, for instance – is their inability to provide a variable flow rate at a given input speed. Where this is a requirement, a work-around is to use drives capable of speed control, though this is not always a practical solution.
Finally, while rotary, positive-displacement pumps are capable of pumping water, their primary application is in oils and viscous liquids because of the need to keep rubbing surfaces lubricated and the difficulty in sealing very thin fluids. For most applications where water is the media, the centrifugal, or dynamic-displacement pump, has been the clearer choice.
Swelling kinetic study of poly(methyl vinyl ether-co-maleic acid) hydrogels as vehicle candidates for drug delivery
This review highlights recent progress in the synthesis and application of vinyl ethers (VEs) as monomers for modern homo- and co-polymerization processes. VEs can be easily prepared using a number of traditional synthetic protocols including a more sustainable and straightforward manner by reacting gaseous acetylene or calcium carbide with alcohols. The remarkably tunable chemistry of VEs allows designing and obtaining polymers with well-defined structures and controllable properties. Both VE homopolymerization and copolymerization systems are considered, and specific emphasis is given to the novel initiating systems and to the methods of stereocontrol.
The composition of chlorophyll-precursor pigments, particularly the contents of divinyl protochlorophyllide (DV-Pchlide), in etiolated tissues of higher plants was determined by polyethylene-column HPLC (Y. Shioi, S. I. Beale [1987] Anal Biochem 162: 493-499), which enables the complete separation of these pigments. DV-Pchlide was ubiquitous in etiolated tissue of higher plants. From the analyses of 24 plant species belonging to 17 different families, it was shown that the concentration of DV-Pchlide was strongly dependent on the plant species and the age of the plants. The ratio of DV-Pchlide to monovinyl protochlorophyllide (MV-Pchlide) in high-DV-Pchlide plants such as cucumber and leaf mustard decreased sharply with increasing age. Levels of DV-Pchlide in Gramineae plants were considerably lower at all ages compared with those of other plants. Etiolated tissues of higher plants such as barley and corn were, therefore, good sources of MV-Pchlide. Absorption spectra of the purified MV- and DV-Pchlides in ether are presented and compared.
Both epoxides and vinyl ethers can be polymerized cationically albeit through different intermediates. However, in the case of epoxide-vinyl ether mixtures the exact mechanism of cationically initiated polymerization is unclear. Thus, although vinyl ethers can be used as reactive diluents for epoxides it is uncertain how they would affect their reactivity. Cationic photocuring of diepoxides has many industrial applications. Better understanding of the photopolymerization of epoxy-vinyl ether mixtures can lead to new applications of cationically photocured systems. In this work, photo-DSC and real-time Fourier Transform Infrared Spectroscopy (RT-FTIR) were used to study cationic photopolymerization of diepoxides and vinyl ethers. In the case of mixtures of aromatic epoxides with tri(ethylene glycol) divinyl ether, TEGDVE, photo-DSC measurements revealed a greatly reduced reactivity in comparison to the homopolymerizations and suggested the lack of copolymerization between aromatic epoxides and TEGDVE. On the other hand, for mixtures of 3,4-epoxycyclohexylmethyl-3',4'-epoxycyclohexane carboxylate, ECH, with TEGDVE the results indicated high reactivity of the blends. The polymerization mechanism might include copolymerization. To examine this mechanism, mixtures of the ECH with a tri(ethylene glycol) mono-vinyl ether, TEGMVE, were studied by both photo-DSC and RT-FTIR. Principal component analysis (PCA) proved to be an efficient tool in analyzing a large matrix of the spectral data from the polymerization system. PCA was able to provide insight into the reasons for the differences among replicated experiments with the same composition ratio and supported the hypothesis of copolymerization in the ECH/TEGMVE system. Thus, blends of cycloaliphatic epoxides and vinyl ethers seem to have a great potential for applications in high-productivity industrial photopolymerization processes.
Vinyl acetate is an organic compound with the formula CH3CO2CH=CH2. This colorless liquid is the precursor to polyvinyl acetate, an important industrial polymer.[3]
The worldwide production capacity of vinyl acetate was estimated at 6,969,000 tonnes/year in 2007, with most capacity concentrated in the United States (1,585,000, all in Texas), China (1,261,000), Japan (725,000) and Taiwan (650,000).[4] The average list price for 2008 was $1600/tonne. Celanese is the largest producer (ca. 25% of the worldwide capacity), while other significant producers include China Petrochemical Corporation (7%), Chang Chun Group (6%), and LyondellBasell (5%).[4]
It is a key ingredient in furniture glue.[5]
It can be polymerized to give polyvinyl acetate (PVA). With other monomers it can be used to prepare various copolymers such as ethylene-vinyl acetate (EVA), vinyl acetate-acrylic acid (VA/AA), polyvinyl chloride acetate (PVCA), and vinylpyrrolidone-vinyl acetate (VP/VA copolymer, used in hair gels).[8] Due to the instability of the radical, attempts to control the polymerization by most "living/controlled" radical processes have proved problematic. However, RAFT (or more specifically, MADIX) polymerization offers a convenient method of controlling the synthesis of PVA by the addition of a xanthate or a dithiocarbamate chain transfer agent.
Vinyl acetate undergoes many of the reactions anticipated for an alkene and an ester. Bromine adds to give the dibromide. Hydrogen halides add to give 1-haloethyl acetates, which cannot be generated by other methods because of the non-availability of the corresponding halo-alcohols. Acetic acid adds in the presence of palladium catalysts to give ethylidene diacetate, CH3CH(OAc)2. It undergoes transesterification with a variety of carboxylic acids.[9] The alkene also undergoes Diels–Alder and 2+2 cycloadditions.
Tests suggest that vinyl acetate is of low toxicity. Oral LD50 for rats is 2920 mg/kg.[3]
On January 31, 2009, the Government of Canada's final assessment concluded that exposure to vinyl acetate is not harmful to human health.[12] This decision under the Canadian Environmental Protection Act (CEPA) was based on new information received during the public comment period, as well as more recent information from the risk assessment conducted by the European Union.
In the context of large-scale release into the environment, it is classified as an extremely hazardous substance in the United States as defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), under which it "does not meet toxicity criteria[,] but because of its acute lethality, high production volume [or] known risk is considered a chemical of concern". By this law, it is subject to strict reporting requirements by facilities that produce, store, or use it in quantities greater than 1000 pounds.[13]
Quantum-chemical calculation methods have developed rapidly in recent years. As a result, it is now possible to estimate the geometry of molecules and to calculate the stability of intermediate products and transition states. Obtaining such results experimentally is difficult for most reactions, especially multi-stage ones, because intermediate stages and intermediate reaction products exist for only extremely short times.
Radical copolymerization of polyethylene glycol maleate with the di(ethylene glycol) monovinyl ether of monoethanolamine has been performed for the first time. Radical co- and terpolymerization of polyethylene glycol maleate with acrylamide and the 1,4-butanediol monovinyl ether of monoethanolamine has also been studied. The molecular weight of polyethylene glycol maleate was determined using light scattering and gel permeation chromatography. The compositions of the polymers and the copolymerization constants of the studied systems have been determined, with copolymer compositions found using gas chromatography. Kinetic curves show that with increasing molar fraction of acrylamide in the solution, the reaction rate and the swelling capacity of the copolymers increase. The composition of the terpolymers determined experimentally differs considerably from that calculated using the obtained copolymerization constants; the deviations are attributed to various intermolecular interactions in these systems. The possibility of controlling the properties of network copolymers of polyethylene glycol maleate by changing external factors has been studied, and the swelling capacity of the copolymers was measured gravimetrically.
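For readers unfamiliar with how copolymerization constants (reactivity ratios) are commonly extracted from feed and copolymer composition data, the sketch below uses the classical Fineman-Ross linearization of the Mayo-Lewis equation. The numerical ratios are invented for illustration and are not taken from the study above.

```python
# Minimal sketch: estimating reactivity ratios r1 and r2 from composition data
# via the Fineman-Ross linearization. Data values are placeholders.
import numpy as np

# x = [M1]/[M2] in the monomer feed, y = d[M1]/d[M2] in the copolymer
x = np.array([0.25, 0.67, 1.0, 1.5, 4.0])
y = np.array([0.40, 0.85, 1.1, 1.6, 3.5])

G = x * (y - 1.0) / y      # Fineman-Ross ordinate
F = x ** 2 / y             # Fineman-Ross abscissa

# The linearized Mayo-Lewis equation is G = r1 * F - r2, so a straight-line
# fit gives r1 from the slope and r2 from the (negated) intercept.
r1, neg_r2 = np.polyfit(F, G, 1)
r2 = -neg_r2
print(f"r1 = {r1:.2f}, r2 = {r2:.2f}")
```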
Hydrogels have been widely used for various biomedical and pharmaceutical applications due to their biocompatibility, high water content and rubbery nature, which resemble natural tissue. Poly(ethylene glycol) (PEG)-crosslinked poly(methyl vinyl ether-co-maleic acid) (PMVE/MA) hydrogel is widely studied as a vehicle for various types of drug delivery. The swelling and diffusion properties of hydrogels are important for their effectiveness: higher swelling of PMVE/MA hydrogel allows a greater amount of drug to be delivered. However, delivery of high molecular weight drugs such as ovalbumin and bevacizumab remains a challenge with existing PMVE/MA hydrogel formulations. This study aims to optimise PMVE/MA hydrogel formulations and determine the swelling kinetics of the different formulations.
Methods
PMVE/MA hydrogels were prepared by inducing an esterification reaction with PEG. Each hydrogel formulation contained a different concentration and molecular mass of PMVE/MA and PEG. The swelling kinetics of each formulation were studied by calculating the % swelling, and a second-order kinetic model was used to calculate the swelling rate constant (Ks) and the degree of swelling at equilibrium (Seq). The effect of different foaming agents (Na2CO3 and NaHCO3) on the swelling of the hydrogel was also studied.
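As a rough sketch of how a second-order swelling model can be fitted to gravimetric data, the integrated form t/S = 1/(Ks·Seq²) + t/Seq can be fitted as a straight line of t/S against t, with Seq taken from the slope and Ks from the intercept. The time points and swelling values below are placeholders, not the study's measurements.

```python
# Minimal sketch of a second-order swelling kinetics fit. Data are illustrative.
import numpy as np

t = np.array([10, 30, 60, 120, 240, 480], dtype=float)          # time, min
S = np.array([400, 900, 1300, 1700, 2000, 2150], dtype=float)   # % swelling

# Integrated second-order model: t/S = 1/(Ks*Seq**2) + t/Seq
slope, intercept = np.polyfit(t, t / S, 1)
Seq = 1.0 / slope                      # equilibrium degree of swelling (%)
Ks = 1.0 / (intercept * Seq ** 2)      # swelling rate constant (%^-1 min^-1)

print(f"Seq ~ {Seq:.0f} %, Ks ~ {Ks:.2e} %^-1 min^-1")
```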
Results
Our results show that hydrogels synthesised from 15% (w/w) of the higher molecular weight PMVE/MA and 7.5% (w/w) PEG 12,000 reached 2200% swelling. Swelling decreased with increasing concentrations of PMVE/MA and PEG. Hydrogel mixtures containing PEG 12,000, with its longer polymer chains, swelled more than those containing PEG 10,000. Meanwhile, foaming agent concentrations of up to 3% (w/w) had a positive effect on hydrogel swelling.
Conclusion
The hydrogel formulation containing 15% (w/w) PMVE/MA and 7.5% (w/w) PEG 12,000 in this study yielded 1.28 times greater swelling than the previously reported formulation. It is proposed that this hydrogel would serve as a better vehicle candidate for macromolecular drug delivery.
0 notes
nerdygladiatorvoid · 3 years
Text
UChicago institute helps reassemble ancient, rare art from first to 6th centuries
BAMIYAN, Afghanistan — Here is a reminder to someone with the initials A.B., who on March 8 climbed inside the cliff out of which Bamiyan’s two giant Buddhas were carved 1,500 years ago.
In a domed chamber — reached after a trek through a passageway that worms its way up the inside of the cliff face — A.B. inscribed initials and the date, as hundreds of others had in many scripts, then added a little heart.
It’s just one of the latest contributions to the destruction of the World Heritage Site of Bamiyan’s famous Buddhas.
The worst was the Taliban’s effort in March 2001, when the group blasted away at the two statues, one 181 feet and the other 125 feet tall, which at the time were thought to be the two biggest standing Buddhas on the planet.
It took the Taliban weeks, using artillery and explosive charges, to reduce the Buddhas to thousands of fragments piled in heaps at the foot of the cliffs, outraging the world.
Since then, the degradation has continued, as Afghanistan and the international community have spent 18 years debating what to do to protect or restore the site, with still no final decision and often only one guard on duty.
One recent idea came from a wealthy Chinese couple, Janson Hu and Liyan Yu. They financed the creation of a Statue of Liberty-size 3D light projection of an artist’s view of what the larger Buddha, known as Solsol to locals, might have looked like in his prime.
The image was beamed into the niche one night in 2015; later the couple donated their $120,000 projector to the culture ministry.
The local authorities bring it out on special occasions, but rarely, as Bamiyan has no city power supply, other than fields of low-capacity solar panels. The 3D-image projector is power-hungry and needs its own diesel generator.
Most of the time, the remains of the monument are so poorly guarded that anyone can buy a ticket ($4 for foreigners, 60 cents for Afghans), walk in and do pretty much whatever he wants. And many do.
Souvenir-hunters pluck pieces of painted stucco decorations from the network of chambers or take away chunks of fallen sandstone. Graffiti signatures, slogans, even solicitations for sex abound.
Anyone can, as A.B. did, crawl through the passageways surrounding the towering niches in the cliff, through winding staircases tunneled into the sandstone and up steps with risers double the height of modern ones, as if built for giants.
At the end of this journey, you arrive above the eastern niche, which housed the smaller Buddha, and stand on a ledge just behind where the statue’s head once was, taking in the splendid Buddha’s eye view of snow-capped mountains and the lush green valley far below.
The soft sandstone of the staircases crumbles underfoot, so that the very act of climbing them is at least in part a guilty pleasure — though no longer very dangerous. Twisted iron banisters set in the stone make the steep inclines and windows over the precipices more safely navigable, if not as authentically first millennium.
When the Taliban demolished the Buddhas, in an important sense they botched the job.
The Buddhas, built over perhaps a century from 550 A.D. or so, were just the most prominent parts of a complex of hundreds of caves, monasteries and shrines, many of them colorfully decorated by the thousands of monks who meditated and prayed in them.
Even without the Buddhas themselves, their niches remain, impressive in their own right; the Statue of Liberty would fit comfortably in the western one.
Unesco has declared the whole valley, including the more than half-mile-long cliff and its monasteries, a World Heritage Site.
“If the Taliban come back again to destroy it, this time they would have to do the whole cliff,” Aslam Alawi, the local head of the Afghan culture ministry, said.
Unesco has also declared the Bamiyan Buddhas complex a “World Heritage Site in Danger,” one of 54 worldwide. The larger western niche is still at risk of collapsing.
When the Taliban seized power in Afghanistan in 1996, they imposed an extremist version of Islamic law across the country. They tried to erase all traces of a rich pre-Islamic past and ordered the destruction of ancient Buddha statues, including the world's tallest standing Buddhas.
Those memories are still alive for millions of Afghans. And now they have become present concerns, as the US and Afghan government negotiate with the Taliban for a deal that could see them return to power in Afghanistan.
The BBC's Shoaib Sharifi visited the National Museum in Kabul where a team are rebuilding some of the ancient Buddha sculptures that were destroyed by the Taliban.
Some of the earliest known statues depicting the Buddha have him in startling costume—draped in the lushly folded fabric of ancient Greece or Rome. Sometimes he has Greco-Roman facial features, naturalistically rendered and muscled torsos, or is even shown protected by Hercules.
Many of these striking Buddhas hailed from Hadda, a set of monasteries in modern-day Afghanistan where Buddhism flourished for a thousand years before the rise of Islam. Located on the Silk Road, the area had frequent contact with the Mediterranean—hence the Buddha’s Hellenistic features. One of the richest collections of this unique art from Hadda was destroyed in 2001, when the Taliban ransacked the National Museum of Afghanistan and shattered the museum’s Buddha statues.
Nearly two decades later, the museum’s conservators are working with the University of Chicago’s Oriental Institute, one of the world’s foremost research centers on the civilizations of the ancient Middle East, to bring the collection back to life. Supported by cultural heritage preservation grants from the U.S. Embassy in Kabul, OI researchers, along with Afghan colleagues, are painstakingly cleaning, sorting and reassembling statues from the more than 7,500 fragments left behind, which museum employees swept up and saved in trunks in the basement.
“When they were broken, we lost a part of history—an important period of high artistic achievement—which these objects represent,” said Mohammad Fahim Rahimi, director of the National Museum of Afghanistan. “They are the only pieces remaining from the archaeological sites; Hadda was burned and looted during the 1980s, so these pieces at the museum are all we have left. By reviving them, we are reviving part of our history.”
The statues are beautiful, by all accounts. First excavated by French archaeologists in the 1930s, and spanning 500 years of Afghanistan’s history between the first and sixth centuries A.D., they are an example of a rare art form unique to the region, often called the Gandharan style. Some stand alone and others in tableaus, ranging from life-size figures to pieces that can fit in the palm of a hand. But the task of reconstructing them is more than a puzzle.
The materials these ancient artisans used were primarily limestone, schist and stucco—which tend to crumble and disintegrate under duress, rather than simply crack. “It’s more like trying to assemble pieces from 30 different jigsaw puzzles that have all been dumped together—without the pictures from the boxes,” said Gil Stein, professor at the Oriental Institute and a leading expert on the rise of social complexity in the ancient Near East.
Stein heads the project, which is part of the OI’s ongoing work with the National Museum of Afghanistan Cultural Preservation Partnership. Begun in 2012, the partnership has helped restore the museum’s infrastructure, including developing a bilingual database to document the first full inventory of the museum’s collections, as well as training conservators in the latest techniques for preserving and restoring objects.
The collection is largely from the Hadda monasteries located in eastern Afghanistan, near the modern-day city of Jalalabad. The region’s warm climate fosters citrus and pomegranate trees and helped it blossom as a center of trade on the Silk Road for centuries—thus its art was influenced by both East and West.
‘The big puzzle’
Alejandro Gallego López, the OI’s field director in Afghanistan, explained the process of restoring the statues. The first step is to assess the collection—identifying and classifying features, such as archaeological motifs, and visible parts of bodies, like legs, heads or arms. This census can help them estimate how many objects there were originally (they think it was between 350 and 500).
0 notes
nerdygladiatorvoid · 3 years
Text
How Fuel Injection Systems Work
In trying to keep up with emissions and fuel efficiency laws, the fuel system used in modern cars has changed a lot over the years. The 1990 Subaru Justy was the last car sold in the United States to have a carburetor; the following model year, the Justy had fuel injection. But fuel injection has been around since the 1950s, and electronic fuel injection was used widely on European cars starting around 1980. Now, all cars sold in the United States have fuel injection systems.
In this article, we'll learn how the fuel gets into the cylinder of the engine, and what terms like "multi-port fuel injection" and "throttle body fuel injection" mean.
For most of the existence of the internal combustion engine, the carburetor has been the device that supplied fuel to the engine. On many other machines, such as lawnmowers and chainsaws, it still is. But as the automobile evolved, the carburetor got more and more complicated trying to handle all of the operating requirements. For instance, to handle some of these tasks, carburetors had five different circuits:
Main circuit - Provides just enough fuel for fuel-efficient cruising
Idle circuit - Provides just enough fuel to keep the engine idling
Accelerator pump - Provides an extra burst of fuel when the accelerator pedal is first depressed, reducing hesitation before the engine speeds up
Power enrichment circuit - Provides extra fuel when the car is going up a hill or towing a trailer
Choke - Provides extra fuel when the engine is cold so that it will start
In order to meet stricter emissions requirements, catalytic converters were introduced. Very careful control of the air-to-fuel ratio was required for the catalytic converter to be effective. Oxygen sensors monitor the amount of oxygen in the exhaust, and the engine control unit (ECU) uses this information to adjust the air-to-fuel ratio in real-time. This is called closed loop control -- it was not feasible to achieve this control with carburetors. There was a brief period of electrically controlled carburetors before fuel injection systems took over, but these electrical carbs were even more complicated than the purely mechanical ones.
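To make the idea of closed-loop control concrete, here is a minimal, purely illustrative sketch of a proportional-integral fuel-trim loop. The sensor model, gains, and variable names are assumptions for illustration and do not represent any real ECU firmware.

```python
# Toy sketch of closed-loop air-fuel control: a PI loop trims the fuel rate
# until a simulated oxygen-sensor reading reaches the stoichiometric target.
TARGET_LAMBDA = 1.0        # normalized air-fuel ratio (1.0 = stoichiometric)
KP, KI = 0.4, 0.05         # made-up proportional and integral gains

def read_lambda(fuel_trim):
    """Stand-in for the oxygen sensor: the mixture gets richer as trim rises."""
    return 1.10 - 0.10 * fuel_trim

fuel_trim = 0.5            # correction applied to the base fuel rate
integral = 0.0
for _ in range(500):
    error = read_lambda(fuel_trim) - TARGET_LAMBDA   # positive = too lean
    integral += error
    fuel_trim += KP * error + KI * integral          # add fuel when lean

print(f"fuel trim settles near {fuel_trim:.2f}, lambda = {read_lambda(fuel_trim):.3f}")
```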
At first, carburetors were replaced with throttle body fuel injection systems (also known as single point or central fuel injection systems) that incorporated electrically controlled fuel-injector valves into the throttle body. These were almost a bolt-in replacement for the carburetor, so the automakers didn't have to make any drastic changes to their engine designs.
Gradually, as new engines were designed, throttle body fuel injection was replaced by multi-port fuel injection (also known as port, multi-point or sequential fuel injection). These systems have a fuel injector for each cylinder, usually located so that they spray right at the intake valve. These systems provide more accurate fuel metering and quicker response.
When You Step on the Gas
The gas pedal in your car is connected to the throttle valve -- this is the valve that regulates how much air enters the engine. So the gas pedal is really the air pedal.
When you step on the gas pedal, the throttle valve opens up more, letting in more air. The engine control unit (ECU, the computer that controls all of the electronic components on your engine) "sees" the throttle valve open and increases the fuel rate in anticipation of more air entering the engine. It is important to increase the fuel rate as soon as the throttle valve opens; otherwise, when the gas pedal is first pressed, there may be a hesitation as some air reaches the cylinders without enough fuel in it.
Sensors monitor the mass of air entering the engine, as well as the amount of oxygen in the exhaust. The ECU uses this information to fine-tune the fuel delivery so that the air-to-fuel ratio is just right.
In order to provide the correct amount of fuel for every operating condition, the engine control unit (ECU) has to monitor a huge number of input sensors. Here are just a few:
Mass airflow sensor - Tells the ECU the mass of air entering the engine
Oxygen sensor(s) - Monitors the amount of oxygen in the exhaust so the ECU can determine how rich or lean the fuel mixture is and make adjustments accordingly
Throttle position sensor - Monitors the throttle valve position (which determines how much air goes into the engine) so the ECU can respond quickly to changes, increasing or decreasing the fuel rate as necessary
Coolant temperature sensor - Allows the ECU to determine when the engine has reached its proper operating temperature
Voltage sensor - Monitors the system voltage in the car so the ECU can raise the idle speed if voltage is dropping (which would indicate a high electrical load)
Manifold absolute pressure sensor - Monitors the pressure of the air in the intake manifold
The amount of air being drawn into the engine is a good indication of how much power it is producing; and the more air that goes into the engine, the lower the manifold pressure, so this reading is used to gauge how much power is being produced.
Engine speed sensor - Monitors engine speed, which is one of the factors used to calculate the pulse width
There are two main types of control for multi-port systems: The fuel injectors can all open at the same time, or each one can open just before the intake valve for its cylinder opens (this is called sequential multi-port fuel injection).
The advantage of sequential fuel injection is that if the driver makes a sudden change, the system can respond more quickly because from the time the change is made, it has to wait only until the next intake valve opens, instead of for the next complete revolution of the engine.
Engine Controls and Performance Chips
The algorithms that control the engine are quite complicated. The software has to allow the car to satisfy emissions requirements for 100,000 miles, meet EPA fuel economy requirements and protect engines against abuse. And there are dozens of other requirements to meet as well.
The engine control unit uses a formula and a large number of lookup tables to determine the pulse width for given operating conditions. The equation will be a series of many factors multiplied by each other. Many of these factors will come from lookup tables. We'll go through a simplified calculation of the fuel injector pulse width. In this example, our equation will only have three factors, whereas a real control system might have a hundred or more.
Pulse width = (Base pulse width) x (Factor A) x (Factor B)
In order to calculate the pulse width, the ECU first looks up the base pulse width in a lookup table. Base pulse width is a function of engine speed (RPM) and load (which can be calculated from manifold absolute pressure). Let's say the engine speed is 2,000 RPM and load is 4. We find the number at the intersection of 2,000 and 4, which is 8 milliseconds.
From this example, you can see how the control system makes adjustments. With parameter B representing the level of oxygen in the exhaust, the lookup table for B is set up so that when there is (according to the engine designers) too much oxygen in the exhaust, the factor drops below 1 and the ECU cuts back on the fuel.
Real control systems may have more than 100 parameters, each with its own lookup table. Some of the parameters even change over time in order to compensate for changes in the performance of engine components like the catalytic converter. And depending on the engine speed, the ECU may have to do these calculations over a hundred times per second.
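Here is a minimal sketch of that pulse-width calculation, with a small base-pulse-width lookup table and bilinear interpolation between table points. Apart from the 8 millisecond value at 2,000 RPM and load 4 quoted above, the table entries and correction factors are made up for illustration.

```python
# Minimal sketch: pulse width = base pulse width (from a lookup table) x factors.
import numpy as np

rpm_axis = np.array([1000, 2000, 3000, 4000])
load_axis = np.array([1, 2, 3, 4, 5])
# Base pulse width in milliseconds, indexed by [rpm, load]; values are made up.
base_table = np.array([
    [1, 3, 5, 7, 9],
    [2, 4, 6, 8, 10],
    [3, 5, 7, 9, 11],
    [4, 6, 8, 10, 12],
], dtype=float)

def lookup(table, row_axis, col_axis, row, col):
    """Bilinear interpolation into a 2-D lookup table."""
    i = np.clip(np.searchsorted(row_axis, row) - 1, 0, len(row_axis) - 2)
    j = np.clip(np.searchsorted(col_axis, col) - 1, 0, len(col_axis) - 2)
    tr = (row - row_axis[i]) / (row_axis[i + 1] - row_axis[i])
    tc = (col - col_axis[j]) / (col_axis[j + 1] - col_axis[j])
    top = table[i, j] * (1 - tc) + table[i, j + 1] * tc
    bot = table[i + 1, j] * (1 - tc) + table[i + 1, j + 1] * tc
    return top * (1 - tr) + bot * tr

base = lookup(base_table, rpm_axis, load_axis, 2000, 4)   # 8 ms, as in the text
factor_a = 0.98   # e.g. a coolant-temperature correction (made up)
factor_b = 1.02   # e.g. an exhaust-oxygen correction (made up)
pulse_width = base * factor_a * factor_b
print(f"pulse width = {pulse_width:.2f} ms")
```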
Performance Chips
This leads us to our discussion of performance chips. Now that we understand a little bit about how the control algorithms in the ECU work, we can understand what performance-chip makers do to get more power out of the engine.
Performance chips are made by aftermarket companies, and are used to boost engine power. There is a chip in the ECU that holds all of the lookup tables; the performance chip replaces this chip. The tables in the performance chip will contain values that result in higher fuel rates during certain driving conditions. For instance, they may supply more fuel at full throttle at every engine speed. They may also change the spark timing (there are lookup tables for that, too). Since the performance-chip makers are not as concerned with issues like reliability, mileage and emissions controls as the carmakers are, they use more aggressive settings in the fuel maps of their performance chips.
For more information on fuel injection systems and other automotive topics, check out the links on the next page.
Reductions in pollution are mandated by government policies worldwide, which challenges engine manufacturers to strike an optimum balance between engine performance and emissions. With advancing fuel injection technology, however, the task has become achievable. In recent years, improving the combustion and emissions of compression-ignition engines by optimizing fuel injection strategies has been a topic of intense research. Choosing among injection strategies is a potentially effective way to reduce engine emissions, because injection characteristics strongly influence the combustion process. For example, increasing the fuel injection pressure improves fuel atomization and consequently the combustion process, resulting in higher brake thermal efficiency and lower HC, CO, and PM emissions, but more NOx emission. Pilot injection helps reduce combustion noise and NOx emissions; an immediate post-injection can aid soot oxidation; and a late post-injection helps regenerate the diesel particulate filter. This article provides a comprehensive review of fuel injection strategies, namely varying injection pressure, injection rate shapes, injection timing, and split/multiple injections, for improving engine performance and controlling emissions. Every strategy has its own merits and demerits; these are explained in detail to help researchers choose the better strategy or combination for their applications.
1 note · View note