Short Course Digoxin in Acute Heart Failure by Nouira Semir in Open Access Journal of Biogeneric Science and Research
ABSTRACT
Background Despite many critical voices regarding its efficacy and safety, digoxin may still have a role in the management of heart failure. The objective of this study was to evaluate the efficacy and safety of short-course digoxin therapy started in the emergency department, based on clinical outcomes 30 days after hospital discharge.
Methods Patients with acute decompensated heart failure (ADHF) treated between January 2016 and January 2018 were identified from the Great Tunisian registry. Patients with incomplete data were excluded. Digoxin-treated and non-treated patients were compared in a matched case-control study with respect to the primary outcomes of all-cause mortality and HF readmission. Secondary outcomes included changes in cardiac output (CO) and left ventricular ejection fraction (LVEF) 72 hours after hospital admission.
Results The study population comprised 104 digoxin-treated and 229 matched non-treated patients with a mean age of 67.4±12.8 years. At 72 hours after ED admission, there was a larger increase in CO (17.8% vs 14%; p=0.015) and LVEF (14.4% vs 3.5%; p=0.003) in the digoxin group than in the control group. At 30 days post-discharge, 34 (10.2%) patients had died and 72 (21.6%) had been readmitted. Use of digoxin was associated with a decreased risk of death and hospital readmission [odds ratio, 0.79 (95% CI, 0.71-0.89)].
Conclusion In ADHF patients, treatment with digoxin was associated with a significantly decreased risk of 30-day mortality and hospital readmission, together with improvements in cardiac output and left ventricular ejection fraction.
Key words: Acute heart failure; digoxin; mortality; rehospitalization; emergency department.
INTRODUCTION
Heart failure (HF) is a major worldwide health problem and one of the most important causes of hospital admissions [1,2]. These hospitalizations impose a substantial economic burden and are associated with high mortality rates, up to 20% following hospital discharge [3,4]. Acute decompensated HF (ADHF) management is difficult given the heterogeneity of the patient population, incomplete understanding of its pathophysiology, and a lack of evidence-based guidelines. Although the majority of patients with ADHF appear to respond well to initial therapies consisting of loop diuretics and vasoactive agents, these first-line treatments have failed to decrease post-discharge mortality and readmission rates [5,6]. Investigations of novel therapies such as serelaxin did not show a significant clinical benefit: in a recent multicenter, double-blind, placebo-controlled trial of patients hospitalized for acute heart failure, the risk of death at 180 days was no lower in patients who received intravenous serelaxin for 48 hours than in those who received placebo [7]. Numerous other clinical trials of ADHF treatments have been published, with disappointing results in terms of efficacy and/or safety [8-11]. Digoxin is one of the oldest compounds in cardiovascular medicine, but its beneficial effect remains controversial [12]. Yet digoxin has many potentially beneficial properties in heart failure, as it is the only available oral inotrope that alters neither blood pressure nor renal function. Despite its useful hemodynamic, neurohormonal, and electrophysiological effects in patients with chronic congestive HF, concerns about digoxin safety have been raised repeatedly [13]. Consequently, the use of digoxin has decreased considerably over the last 15 years [12]. Underprescribing of digoxin is problematic for several reasons.
First, it overlooks the substantial beneficial effect of digoxin in reducing hospital admissions in HF patients. Second, given its low cost, the favorable cost-effectiveness ratio of digoxin is highly desirable in low-income countries. Moreover, the question of whether a short course of digoxin is useful in ADHF has not previously been investigated in the era of newer heart failure therapies, including β-blockers, angiotensin-converting enzyme inhibitors, and angiotensin-receptor blockers [12]. The objective of this study was to assess the efficacy and safety of short-course digoxin in patients admitted to the ED with ADHF (Figures 1 and 2).
PATIENTS AND METHODS
Data Source
We conducted a retrospective matched case-control study to assess the association between digoxin treatment and 30-day outcome in patients with ADHF. ADHF patients were identified from the Great Tunisian database between January 2016 and January 2018. The patients included are residents of a community of 500,000 inhabitants in the east of Tunisia, served by 2 university hospitals (Fattouma Bourguiba Monastir and Sahloul Sousse). ADHF was defined as an acute onset of symptoms within the 48 hours preceding presentation, dyspnea at rest or with minimal exertion, evidence of pulmonary congestion on chest radiograph or lung ultrasound, and NT-proBNP ≥1400 pg/ml. The electronic medical recording system provided details on each patient admitted to the emergency department (ED) for acute undifferentiated nontraumatic dyspnea.
Study Population
Patients were included if the following data were available: demographic characteristics, comorbidities, current drug use, baseline NYHA functional class, physical examination findings, standard laboratory tests, brain natriuretic peptide levels at ED admission, echocardiographic results, bioimpedance-measured cardiac output at ED admission and at hospital discharge, digoxin daily dose, and 30-day follow-up information including ED readmission and survival status. A patient who received at least 0.25 mg of oral digoxin (1 tablet) for three days during the hospital stay was defined as a case; patients who did not receive digoxin treatment were selected as controls. The protocol used in this study was approved by the ethics committee of our institution, and all subjects gave written informed consent to be included in the database. All the listed criteria had to be fulfilled for a patient's inclusion. Exclusion criteria were ongoing treatment with digoxin, pregnancy or breast-feeding, known severe or terminal renal failure (eGFR <30 ml/min/1.73 m2), altered consciousness (Glasgow coma score <15), and the need for immediate hemodynamic or ventilatory support. Cases were matched first for sex, then for age (±2 years) and NYHA functional class: each patient under digoxin (case) was individually matched with 2 patients who did not receive digoxin (controls) for age, gender, and New York Heart Association (NYHA) class. Reviewers had access to the matching criteria only (i.e., they were blinded to 30-day outcomes) to eliminate potential sources of bias. Patients who were treated with digoxin and those who did not receive digoxin were otherwise clinically managed in the same way.
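The 1:2 individual matching described above (sex first, then age within ±2 years and NYHA class) can be sketched as follows. This is a minimal illustration with hypothetical record fields, not the registry's actual code:

```python
import random

def match_controls(cases, pool, n_controls=2, age_tol=2, seed=0):
    """For each digoxin-treated case, draw up to n_controls untreated
    patients matched on sex, age within +/- age_tol years, and NYHA class.
    Controls are sampled without replacement (hypothetical field names)."""
    rng = random.Random(seed)
    available = list(pool)
    matched = {}
    for case in cases:
        candidates = [p for p in available
                      if p["sex"] == case["sex"]
                      and abs(p["age"] - case["age"]) <= age_tol
                      and p["nyha"] == case["nyha"]]
        chosen = rng.sample(candidates, min(n_controls, len(candidates)))
        for p in chosen:
            available.remove(p)  # each control is used at most once
        matched[case["id"]] = [p["id"] for p in chosen]
    return matched
```

Sampling controls without replacement, as here, keeps the matched sets disjoint; how the registry handled cases with fewer than two eligible controls is not stated in the text.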
Outcome Measures
The main end points were death or rehospitalization within 30 days after hospital discharge and the 30-day combined death-rehospitalization outcome. Secondary end points included the change in CO from baseline and the length of hospital stay during the index episode.
Statistical Analysis
Baseline characteristics were compared between groups to detect any differences between cases and controls: independent t-tests were used for normally distributed variables, Mann-Whitney U tests for continuous non-normally distributed variables, and chi-square tests for categorical variables. Logistic regression analysis was performed to estimate the odds ratios (ORs) and 95% confidence intervals (95% CIs) for hospital readmission and/or death with respect to digoxin treatment. Data are reported as means ± standard deviations unless otherwise noted, and a two-sided p-value less than 0.05 was considered statistically significant. Data were analyzed using SPSS version 18.
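As a worked illustration of the odds-ratio arithmetic (not the authors' SPSS analysis), an OR with a Woolf (log-scale) 95% CI can be computed directly from a 2×2 exposure-outcome table:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Woolf (log) confidence interval from a 2x2 table:
    a = exposed with event, b = exposed without event,
    c = unexposed with event, d = unexposed without event."""
    or_ = (a * d) / (b * c)
    # Standard error of ln(OR) is sqrt of the summed reciprocal cell counts.
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

For example, `odds_ratio_ci(10, 90, 20, 80)` gives an OR of about 0.44 with its 95% CI; an interval lying entirely below 1 would indicate a protective association, as reported for digoxin here.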
RESULTS
The initial study population comprised 1727 participants registered in the database. From this initial population, we excluded 956 patients with a non-cardiac cause of dyspnea and 211 with incomplete data. Of the remaining patients, 104 were included in the digoxin group and 229 in the control group. Digoxin was administered orally once a day, and almost all patients received the same dose (0.25 mg, one tablet) each day for at least three days; only a few patients received a lower (0.125 mg) or a higher (0.5 mg) dose. Baseline characteristics of both groups are shown in Table 1. Demographic characteristics were comparable between the two study groups, with no relevant differences in age, sex, or NYHA class. The NYHA class recorded reflected baseline medical status (within the three months before the ongoing exacerbation). Cardiovascular medical history was comparable in both groups, and there were no significant differences between cases and controls regarding other underlying comorbidities. Fifty-two percent of the patients (47-57%) had ischaemic cardiomyopathy as the primary aetiology of their heart failure (Table 1). Principal baseline medication consisted of diuretics, angiotensin-converting enzyme inhibitors, beta-blockers, and nitrates. Mean vital signs at baseline were comparable between the 2 groups with respect to heart rate, respiratory rate, and blood pressure. NT-proBNP levels ranged from 1412 to 8615 pg/ml; 61% of the digoxin group and 59% of the control group had reduced LVEF (<45%) (p=0.77). At 72 hours after ED admission, there was a larger increase in CO (17.8% vs 14%; p=0.015) and LVEF (14.4% vs 3.5%; p=0.003) in the digoxin group than in the control group (Figure 1); NT-proBNP levels decreased in both the digoxin group (2%) and the control group (1.2%), but the difference was not significant (p=0.06). Digoxin treatment was associated with a reduced length of hospital stay (10.1±7.2 days versus 6.6± days; p<0.01).
At 30-day follow-up, the digoxin group showed significantly lower all-cause (p=0.04) and heart failure (p=0.02) hospital readmission rates compared with the control group, as well as lower mortality (11.8% versus 6.7%; p=0.03) (Table 2). Digoxin treatment significantly decreased the odds of the combined event of mortality and hospital readmission [odds ratio, 0.79 (95% CI, 0.71-0.89)]. No major side effects were observed in relation to digoxin therapy.
DISCUSSION
Our results demonstrate that digoxin is associated with a lower risk of 30-day hospital readmission among ED patients with decompensated HF. Compared with the control group, LVEF and cardiac output increased and length of hospital stay decreased significantly in the digoxin-treated group. Most available studies analyzed the long-term effect of digoxin in patients with chronic heart failure; data on the effect of a short course of digoxin on early clinical outcome and related physiological parameters in patients with acute heart failure are scarce. The concordance between physiological and clinical outcomes supports the validity of our results. Digoxin is one of the oldest drugs in cardiology practice; a few decades ago, it was prescribed to more than 60% of heart failure patients in the United States [14]. Digoxin is the only inotropic drug known to increase cardiac output and reduce pulmonary capillary pressure without increasing heart rate or lowering blood pressure, in contrast to other oral inotropes. However, despite the evidence of its beneficial effects on hemodynamic, neurohormonal, and electrophysiological parameters, great concern regarding its safety profile has been raised, and the use of digoxin has declined significantly over the past two decades [15]. Indeed, in the 2016 ESC guidelines, the indication for digoxin was limited to patients with AF and a rapid ventricular rate [16]. This is understandable given the scarcity of randomized trials specifically designed to test digoxin safety in heart failure patients. The Digitalis Investigation Group (DIG) trial, the only large randomized trial of digoxin in heart failure, reported a significant reduction in heart failure hospitalizations [17]. Most of the identified studies arguing against the use of digoxin had many potential sources of bias requiring careful assessment.
In fact, the concern over digoxin safety comes from very heterogeneous, non-experimental observational studies carrying a high risk of misinterpretation [18-20]. A recent study concluded that prescription of digoxin is an indicator of disease severity rather than the cause of a worse prognosis, meaning that a significant prescription bias may arise because sicker patients, who have a higher mortality risk, receive additional treatment with digoxin [21]. Notably, in the DIG trial there was no evidence of an increased risk with digoxin treatment. Importantly, the DIG trial demonstrated that the beneficial effects of digoxin were mainly observed in patients with HFrEF and in those with a serum digoxin concentration ≤0.9 ng/ml. Digoxin efficacy may be attributed in part to its neurohormonal-inhibiting properties, especially at lower doses; it may also be related to its synergistic effects with beta-blockers, as the pro-arrhythmic effects of digoxin would be expected to be attenuated by β-blockers [22].
Our study has several limitations. First, as this is a retrospective analysis, we must emphasize that our results describe associations, not causality. Second, our study is limited by its small sample size. Third, as in all case-control studies, bias due to unmeasured confounders remains possible; propensity-score matching would have balanced our two groups better, although most of the confounding variables influencing outcome were well balanced between the 2 groups of our study. Fourth, we had no data on post-discharge adherence to prescribed treatment, nor did we have information on serum digoxin concentration or the incidence of digoxin toxicity; such information would have valuably supported our findings by demonstrating a correlation between serum digoxin levels and clinical outcome in our patients. In addition, only 30% of our patients were receiving aldosterone antagonists and none were receiving cardiac resynchronization therapy, which may limit the generalizability of our results.
CONCLUSIONS
Our findings provide additional data supporting the association between the use of digoxin and clinical benefit in HF patients with reduced LVEF. Digoxin may serve as an inexpensive tool for reducing short-term mortality and hospital readmissions, an important objective for health systems, especially in low-income countries.
For more information regarding this article, visit OAJBGSR:
https://biogenericpublishers.com/pdf/JBGSR.MS.ID.00258.pdf https://biogenericpublishers.com/jbgsr-ms-id-00258-text/
Noise Pollution is One of the Main Health Impacts in Big Cities Today by Tamaz Patarkalashvili* in Open Access Journal of Biogeneric Science and Research
ABSTRACT
Noise pollution is today one of the biggest health risks in big cities, alongside air pollution. It must be admitted that noise pollution has lately been overlooked by scientists and city authorities. Noise pollution has adverse effects on all living organisms. Scientists confirm that noise stimulates the central nervous system, triggering the release of hormones that increase the risk of hypertension. Hypertension, in turn, is linked to many other cardiovascular and cerebrovascular diseases, such as infarction and stroke. This tendency is at last changing, and noise pollution is now often considered as harmful as air pollution, and sometimes even more so. European and North American countries have taken a number of measures to reduce noise levels in big cities. Examples of popular measures include replacing older paved roads with smoother asphalt, better management of traffic flows, reducing speed limits to 30 km per hour, and using less noisy modes of transport, such as electric vehicles, cycling, and walking.
KEYWORDS: Noise; Pollution; Health; Traffic; Aviation; Vehicle; Electric Car; Cycling; Walking
INTRODUCTION
Noise pollution is a constantly growing problem in all big cities of the world, and many people may not be aware of its adverse impacts on their health. Noise pollution is a major problem both for human health and for the environment [1,2]. Long-term exposure to noise pollution can induce a variety of adverse health effects, including increased annoyance, sleep disturbance, negative effects on the cardiovascular and metabolic systems, and cognitive impairment in children. Millions of people in big cities suffer from chronic high annoyance and sleep disturbance, and it is estimated that schoolchildren suffer reading impairment as a result of aircraft noise. Despite the fact that noise pollution is one of the major public health problems in most big cities of the world, it has tended to be underestimated, with attention focused mostly on air pollution [3].
World Health Organization (WHO) guidelines for community noise recommend less than 30 A-weighted decibels, dB(A), in bedrooms during the night for good-quality sleep and less than 35 dB(A) in classrooms to allow good teaching and learning conditions. The WHO guidelines for night noise recommend an annual average of less than 40 dB(A) (Lnight) outside bedrooms to prevent adverse health effects from night noise.
According to European Union (EU) publications:
about 40% of the population in EU countries is exposed to road traffic noise at levels exceeding 55 dB(A);
20% is exposed to levels exceeding 65 dB(A) during the daytime; and
more than 30% is exposed to levels exceeding 55 dB(A) at night.
Some groups of people are more vulnerable to noise. For example, children, who spend more time in bed than adults, are more exposed to night noise. Chronically ill and elderly people are more sensitive to disturbance, and shift workers are at increased risk because their sleep structure is under stress. Nuisance at night can lead to increased visits to medical clinics and extra spending on sleeping pills, which affects family budgets and countries' health expenditure [4,5].
FACTS AND ANALYSIS
The adverse effect of noise is defined as a change in the morphology and physiology of an organism that results in impairment of functional capacity. This definition includes any temporary or long-term lowering of the physical, psychological, or social functioning of humans or human organs. The health significance of noise pollution is described here according to its specific effects: noise-induced hearing impairment; cardiovascular and physiological effects; mental health effects; sleep disturbance; and vulnerable groups.
Noise-Induced Hearing Impairment
The International Organization for Standardization standard ISO 1999 gives a method for calculating noise-induced hearing impairment in populations exposed to all types of noise (continuous, intermittent, impulsive) during working hours. Noise exposure is characterized by LAeq over 8 hours (LAeq,8h). In the standard, the relationships between LAeq,8h and noise-induced hearing impairment are given for frequencies of 500-6000 Hz and for exposure times of up to 40 years. These relations show that noise-induced hearing impairment occurs predominantly in the high-frequency range of 3000-6000 Hz, the effect being largest at 4000 Hz [6,7].
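Because LAeq is an energy average rather than an arithmetic average of decibel values, combining sampled levels requires converting each level back to linear energy first. A minimal sketch of that calculation for equal-duration samples (an illustration of the metric, not a reproduction of ISO 1999 itself):

```python
import math

def laeq(levels_db):
    """Equivalent continuous level from equal-duration dB samples:
    average the acoustic energy (10^(L/10)), then convert back to dB."""
    mean_energy = sum(10 ** (l / 10) for l in levels_db) / len(levels_db)
    return 10 * math.log10(mean_energy)
```

Note how the louder samples dominate: averaging a 50 dB and a 70 dB interval gives roughly 67 dB, not 60 dB, which is why short loud events weigh so heavily in occupational exposure.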
Hearing impairment in young adults and children has been assessed by LAeq on a 24-hour basis [7-9], covering pop music in discotheques and rock-music concerts [8], pop music through headphones [10,11], and music played by brass bands and symphony orchestras [11,12]. There is also literature showing hearing impairment in people exposed to specific types of non-occupational noise, such as shooting, motorcycling, noisy toys used by children, and fireworks [13,14].
In Europe, environmental noise causes a disease burden that is second in magnitude only to that from air pollution. At least 113 million people suffer from traffic-related noise above 55 dB Lden, which costs the EU about €57.1 billion a year. Additionally, 22 million Europeans are exposed to railway noise, 4 million to aircraft noise, and about 1 million to industrial noise. Together, these exposures cause about 1.6 million healthy life years lost annually, about 12,000 premature deaths, and 48,000 cases of ischemic heart disease; about 22 million people suffer from chronic high annoyance and 6.5 million from sleep disturbance [15-17].
Cardiovascular and Physiological Effects
Studies of workers exposed to occupational noise and of people living on noisy streets indicate that noise can have temporary as well as permanent impacts on physiological functions. Acute noise exposure activates autonomic and hormonal systems, leading to temporary changes, while hypertension and ischemic heart disease are associated with long-term exposure to high sound pressure levels [7,11,18]. The magnitude and duration of the effects are determined in part by individual characteristics, lifestyle behaviors, and environmental conditions. Sounds also evoke reflex responses, particularly when they are unfamiliar and have a sudden onset. Most occupational and community noise studies have focused on the possibility that noise may be a risk factor for cardiovascular disease. Studies in occupational settings showed that workers exposed to high levels of industrial noise for many years at their workplaces have increased blood pressure and a higher risk of hypertension compared with workers in control areas [19,20]. Cardiovascular adverse effects are associated with long-term exposure to LAeq,24h values in the range of 65-70 dB or more, for both air and road-traffic noise.
Mental Health Effects
Environmental noise accelerates and intensifies the development of adverse mental health effects, with a variety of symptoms including anxiety, emotional stress, nervous complaints, nausea, headaches, mood changes, increased social conflict, and psychiatric disorders such as neurosis, psychosis, and hysteria [21-32]. Noise also adversely affects cognitive performance: in children, environmental noise impairs a number of cognitive and motivational parameters [20,22]. Two types of memory deficit have been identified under experimental noise exposure: incidental memory, and memory for materials the observer was not explicitly instructed to focus on during the learning period. Schoolchildren in the vicinity of the Los Angeles airport were found to be deficient in proofreading and in persistence with challenging puzzles [20]. Adverse effects on cognitive task performance have likewise been documented in workers exposed to occupational noise and following exposure to aircraft noise [21-24].
Sleep Disturbance
Annoyance in populations exposed to environmental noise varies not only with the acoustical characteristics of the noise but also with many non-acoustical factors of a social, psychological, or economic nature [7,17]. These factors include fear associated with the noise source, the conviction that the noise could be reduced by third parties, individual noise sensitivity, and the degree to which an individual feels able to control the noise.
At night, environmental noise starting at Lnight levels below 40 dB can cause negative effects on sleep such as body movements, awakenings, and sleep disturbance, as well as effects on the cardiovascular system that become apparent above 55 dB [24-27]. This especially concerns vulnerable groups such as children, the chronically ill, and elderly people. All these impacts contribute to a range of health effects, including mortality. During the COVID-19 pandemic, European cities experienced a substantial reduction in noise pollution due to reduced road traffic.
The WHO recommends reducing road traffic noise levels to 53 dB during the daytime (Lden) and 45 dB during the night (Lnight). However, the Environmental Noise Directive (END) sets mandatory reporting thresholds for noise exposure at 55 dB Lden and 50 dB Lnight [26-28]. This means we do not yet have an accurate understanding of the exact number of people exposed to harmful noise levels as defined by the WHO [5,6].
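The Lden indicator used in these thresholds weights a 24-hour exposure with evening and night penalties. A short sketch following the definition in the EU Environmental Noise Directive (Directive 2002/49/EC), assuming the standard 12 h / 4 h / 8 h day split:

```python
import math

def lden(lday, levening, lnight):
    """Day-evening-night level: 12 h of day, 4 h of evening with a
    +5 dB penalty, and 8 h of night with a +10 dB penalty, energy-averaged
    over 24 h, per the EU Environmental Noise Directive."""
    energy = (12 * 10 ** (lday / 10)
              + 4 * 10 ** ((levening + 5) / 10)
              + 8 * 10 ** ((lnight + 10) / 10))
    return 10 * math.log10(energy / 24)
```

A steady 50 dB source around the clock yields an Lden of about 56.4 dB, which illustrates why the night-weighted indicator sits several decibels above the plain average level.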
Vulnerable Groups
Vulnerable groups include people with reduced abilities or with particular diseases and medical problems: blind people and people with hearing impairment, babies and small children, and elderly people. They are less able to cope with the impairments caused by noise pollution and are at greater risk of harmful effects. People with impaired hearing are the most affected with respect to speech intelligibility, and from the age of 40 people show increasing difficulty understanding spoken messages, so a majority of this population can be assigned to the vulnerable group. Children are also included in the group vulnerable to noise exposure [29]. Monitoring should therefore be organized at schools and kindergartens to protect children from noise effects, and specific regulations and recommendations should take into account the types of exposure relevant to children, such as communication, recreation, listening to loud music through headphones, music festivals, and motorcycling.
CONCLUSIONS
Our cities witnessed a welcome period of unusual quiet during the confinement periods of the Covid-19 pandemic, but noise pollution is rising again, in some cases even above pre-crisis levels. Clearly we cannot live without sound, and reducing noise pollution to zero is unrealistic; however, we must work to ensure that noise is reduced to levels less harmful to the environment and to human health. Examples of measures include installing road and rail noise barriers, optimizing aircraft movements around airports, and urban planning measures. The most effective actions, though, reduce noise at the source: reducing the number of vehicles, introducing quieter tires for road vehicles, and laying quieter road surfaces. Even so, noise pollution is unlikely to decrease significantly in the near future, as transport demand is expected to increase and air traffic noise is predicted to grow along with city populations. Effective responses include raising awareness and changing people's behavior toward less noisy modes of transport, such as electric vehicles, cycling, and walking. Zero-emission buses must be welcomed in big cities, as must zero-emission refuse collection trucks and municipal vans. Infrastructure for safe cycling must be built in cities, together with an available public bike fleet. Motorcycles and scooters should be banned in big cities because they produce the loudest and most disturbing noise, which adversely impacts citizens. Municipalities and mayors of big cities should organize so-called quiet city areas, such as parks and other green spaces, where people can go to escape city noise.
For more information regarding this article, visit OAJBGSR:
https://biogenericpublishers.com/pdf/JBGSR.MS.ID.00257.pdf https://biogenericpublishers.com/jbgsr-ms-id-00257-text/
Effect of Qishan Formula Granules on Intervening in Obesity via Intestinal Microflora and Immune Inflammation by Wei Yan in Open Access Journal of Biogeneric Science and Research
ABSTRACT
Objective: To investigate the effects of Qishan Formula Granule on simple obesity and the intestinal microflora-inflammatory immune pathway. Methods: Eighty patients with simple obesity in our hospital were randomly divided into two groups: a traditional Chinese medicine group and a placebo group. The Chinese medicine group was treated with lifestyle intervention plus Qishan formula granules, while the placebo group was treated with lifestyle intervention plus placebo. The therapeutic effect, biochemical indexes, clinical symptoms, the number and composition of intestinal bacteria, the proportion of Th17/Treg cells, and serum inflammatory factors were measured before and after treatment. Results: After treatment, the total effective rate in the Chinese medicine group was significantly higher than that in the placebo group (P < 0.05). Compared with the placebo group after treatment, the biochemical indexes and clinical symptoms of the Chinese medicine group improved significantly. Further tests showed that Qishan Formula Granule significantly improved the intestinal bacterial abundance, species, and quantity in simple obesity patients. After treatment, the levels of IL-17, TNF-α, Th17/Treg, and LPS in the traditional Chinese medicine group were significantly lower than those in the placebo group (P < 0.05). Conclusion: Qishan Formula Granule can alleviate the clinical symptoms of simple obesity and improve treatment efficiency through the intestinal flora-inflammatory immune pathway.
KEYWORDS: Qishan formula granule; Obesity; Intestinal flora; Immune inflammatory; Th17/Treg
In recent years, with changes in people's lifestyles, the incidence of obesity has increased rapidly. Among the chronic metabolic disorders related to obesity and overweight, the prevalence of diseases such as diabetes and cardiovascular and cerebrovascular disease has increased year by year [1]. Obesity can not only lead to diabetes and a high incidence of cardiovascular and cerebrovascular events, but is also closely related to cancer, depression, asthma, apnea syndrome, infertility, osteoarthropathy, fatty liver, and many other diseases [2-5]. It has therefore become a serious threat to people's health, and it is urgent to find reasonable and effective interventions. At present, the main weight-loss drugs include non-central drugs, central drugs, and hypoglycemic drugs, which suffer from problems such as a low effective response rate, substantial side effects, and weight rebound after discontinuation [6]. The combination of diet and exercise is often difficult to adhere to over the long term, and compliance is poor [7]. Seeking effective drugs or methods to treat simple obesity has thus become an important research focus in recent years.
Intestinal flora is believed to play an important role in the regulation of immune inflammation and glycolipid metabolism [8,9]. Studies have shown that chronic low-level inflammation exists in obese people, and that chronic low-level inflammation caused by obesity may promote the occurrence and development of metabolic disorders [10]. At the same time, it has been found that the disorder of intestinal flora and its metabolites in obese patients, and the obvious imbalance of their proportions, can affect the formation and differentiation of immune cells such as Th17 and Treg cells, thereby leading to chronic low-level immune inflammation and obesity [11-13]. A number of studies have shown that traditional Chinese medicine has an important effect on intestinal flora: berberine, Gegenqinlian decoction, tonifying Chinese medicines, and others can correct bacterial dysbiosis to a certain degree [14,15]. Qishan formula granules, a modification of Gegenqinlian decoction, are the experience-based prescription of a nationally renowned senior Chinese medicine practitioner. Our previous study showed that Qishan formula reduces blood sugar, improves insulin resistance, and reduces body weight; however, the mechanism is unclear, and the simple-obesity population has not been studied. This study therefore explores the efficacy of Qishan Formula Granule in the treatment of simple obesity and its effect on the intestinal flora-immune inflammatory pathway, in order to provide a reference for traditional Chinese medicine in the treatment of obesity.
INFORMATION AND METHODOLOGY
General Information
In this study, a randomized (random number approach), double-blind (subjects, researchers, surveyors, and data analysts did not know the treatment allocation), placebo-controlled, prospective design was used, with simple obese patients recruited through health check-ups, community population screening, and outpatient visits. All patients received a lifestyle intervention: a low-sugar diet with 200-350 g of staple food per day and carbohydrates contributing 50-65% of total calories; a low-fat diet with fat intake within 50 g, about 30% of total calories; balanced protein, about 15% of total calories; encouragement of foods rich in dietary fiber and vitamins; total daily energy controlled within 100 kJ/kg; and moderate-intensity aerobic exercise (post-exercise heart rate ≈ 170 − age) at least 3-5 days a week, maintained for half a year. Those with diabetes were treated with gliclazide sustained-release tablets, and those with hypertension with amlodipine. The study included 80 patients who still met the criteria below after a 1-month washout period. A random number table was generated using Excel software, and the 80 eligible patients were randomly divided equally into two groups: the traditional Chinese medicine group (n=40) and the placebo group (n=40). In the traditional Chinese medicine group, 14 cases were male and 26 female; ages were 25-50 years, with a mean of 38.74±10.23 years, and the mean course of disease was 6.63±3.55 years. In the placebo group there were 16 men and 24 women; ages 25-50 years, mean 39.61±9.83 years; mean course of disease 6.17±3.82 years.
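The random allocation described above (a random number table splitting 80 patients evenly into two arms) can be reproduced in a few lines; this is an illustrative sketch, not the trial's actual Excel procedure:

```python
import random

def randomize(patient_ids, seed=2023):
    """Shuffle the patient IDs and split them evenly into two arms,
    mimicking allocation by a random number table. The seed stands in
    for the fixed table; any hypothetical ID list works."""
    rng = random.Random(seed)
    ids = list(patient_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"tcm": ids[:half], "placebo": ids[half:]}
```

With 80 IDs this yields two disjoint groups of 40, matching the 40/40 split reported for the trial.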
Inclusion and Exclusion Criteria
Inclusion Criteria
(1) Simple obesity (male waist circumference ≥90 cm, female ≥85 cm, and BMI ≥25 kg/m2) in accordance with the 2011 edition of the Expert Consensus on the Prevention and Control of Adult Obesity in China; (2) age greater than 25 and less than 70 years; (3) TCM syndrome differentiation of obesity with damp-heat accumulating in the spleen, scored as follows: obese body (25 kg/m2 ≤ BMI < 28 kg/m2: +1 point; 28 kg/m2 ≤ BMI < 30 kg/m2: +2 points; BMI ≥ 30 kg/m2: +3 points); abdominal fullness (+1 point); poor appetite with fatigue (+1 point); head heavy as if wrapped (+1 point); loose stools (+1 point); yellow urine (+1 point); generalized damp-heat jaundice (+1 point); enlarged tongue (+2 points) with yellow greasy coating (+2 points); slippery pulse (+2 points); (4) discontinuation of drugs affecting weight for 4 weeks; (5) signed informed consent.
Exclusion Criteria
(1) Weight gain due to drugs, endocrine diseases or other diseases; (2) severe liver or kidney dysfunction or other severe primary diseases; (3) history of acute cardiovascular or cerebrovascular events or myocardial infarction within 6 months; (4) stress state, secondary elevation of blood glucose or secondary hypertension; (5) weight-loss surgery within one year; (6) severe dyslipidemia; (7) unwillingness to cooperate (inability to comply with dietary control or to take drugs as prescribed); (8) mental illness or tumor; (9) women who were pregnant or breast-feeding, planning pregnancy, or not using contraception; (10) possible allergy to the study drugs; (11) diabetic patients already receiving hypoglycemic medication.
Treatment
(1) The Chinese medicine group was given the lifestyle intervention plus Qishan formula granules orally, one pack at a time, twice a day (composition: Pueraria root 15 g, Scutellaria baicalensis 10 g, Coptis chinensis 10 g, rhubarb 3 g, Gynostemma pentaphyllum 10 g, raw Astragalus 20 g, Huai yam 20 g, Atractylodes chinensis 15 g, Poria cocos 15 g, fried Fructus Aurantii 10 g, raw hawthorn 10 g, Chuanxiong 10 g; produced by the traditional Chinese medicine preparation room of our hospital). (2) The placebo group was given the lifestyle intervention plus an oral placebo, one pack at a time, twice daily (composition: starch, pigment and binder; produced by the traditional Chinese medicine preparation room of our hospital).
Indicator Measurements
Fasting plasma glucose (FPG), blood lipids, blood pressure, waist-to-hip ratio, body mass index (BMI), body fat content, TCM symptom score, other biochemical indicators and clinical symptoms were measured every 4 weeks. HbA1c, fasting insulin (FINS), fecal intestinal flora, the proportion of serum Th17/Treg cells, serum IL-17 and TNF-α, liver and kidney function, blood routine, urine routine and electrocardiogram were measured at weeks 0 and 12.
Determination of Flora Size and Composition
Quantitatively sampled intestinal excreta were diluted and inoculated onto BS medium (Bifidobacterium isolation, anaerobic culture for 48 h), BBE medium (Bacteroides isolation, anaerobic culture for 24 h), lactic acid bacteria selective medium (24 h), enterococcal agar (24 h), FS medium (Clostridium isolation, 72 h), KF streptococcus agar (Streptococcus isolation, anaerobic culture for 24 h), and a selective agar for Escherichia coli (24 h). After colony growth, the target bacteria were identified by colony morphology, Gram staining and biochemical reactions. The colonies on the different media were identified, the counts of each bacterium were compared with reference values, and the B/E value was calculated to evaluate the number and composition of the intestinal flora of simply obese patients after oral intervention with Qishan formula granules.
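The B/E value mentioned above is conventionally the ratio of the log counts of Bifidobacterium to Enterobacteriaceae per gram of feces; a minimal sketch of the calculation, with illustrative counts that are assumptions rather than data from this study:

```python
import math

def b_e_ratio(bifido_cfu_per_g, entero_cfu_per_g):
    """B/E value: log10 Bifidobacterium count divided by log10
    Enterobacteriaceae count (CFU per gram of feces). A ratio above 1
    is commonly read as intact colonization resistance of the flora."""
    return math.log10(bifido_cfu_per_g) / math.log10(entero_cfu_per_g)

# Illustrative counts (CFU/g), assumed values only
print(round(b_e_ratio(1e9, 1e7), 2))  # 1.29
```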
PCR-DGGE Analysis Intestinal Flora Composition
Fecal specimen collection and DNA extraction: about 1 g of feces was collected aseptically from simply obese patients into a 2 mL EP tube, and fecal genomic DNA was extracted according to the instructions of the DNA extraction kit. PCR: the V3 region of bacterial 16S rDNA was amplified with universal primers. The amplification conditions were predenaturation at 94 °C for 3 min; 36 cycles of denaturation at 94 °C for 1 min, annealing at 55 °C for 1 min and extension at 72 °C for 1 min; final extension at 72 °C for 10 min; and storage at 4 °C. PCR products were checked by 2% agarose gel electrophoresis and stored at -20 °C. DGGE: the PCR products were separated on an 8% polyacrylamide gel; after electrophoresis the gel was stained with GelRed and imaged on a GS-800 grayscale scanner, and correlation analysis of the DGGE molecular fingerprints was performed with BioNumerics software.
Quantitative Real-Time PCR Analysis of Intestinal Microflora
Fecal specimen collection and DNA extraction: about 1 g of feces was collected aseptically from simply obese patients into a 2 mL EP tube, and fecal genomic DNA was extracted according to the instructions of the DNA extraction kit. PCR primer design: primers were designed for each bacterium according to the 16S rDNA gene sequences of Bifidobacterium, Lactobacillus, Escherichia coli, Bacteroides, Clostridium and Streptococcus, and the specificity of each sequence was checked against GenBank by BLAST. Preparation of the standard curve: the PCR products of each bacterium from the control group were purified according to the instructions of the DNA purification kit, the absorbance (A value) and concentration of the purified products were determined, and the copy number per 1 μL of each standard was calculated and used to construct the standard curve.
Detection of Biochemical Indexes
Blood glucose, blood lipids and other biochemical indicators were determined on an Olympus 2000 automatic biochemical analyzer. Serum insulin and HbA1c were determined on our ADVIA Centaur XP automated chemiluminescence immunoanalyzer. Serum LPS was detected by ELISA. Determination of intestinal flora and SCFAs: 2 g of fresh feces was frozen at -20 °C in a stool collection container (containing stabilizer) provided by the Institute of Microbiology of Zhejiang Province, which was commissioned to perform the testing. Body fat content was determined with the department's own body fat analyzer. HOMA-IR was calculated as HOMA-IR = FPG × FINS / 22.5, and HOMA-IS as HOMA-IS = 1/HOMA-IR = 22.5/(FPG × FINS).
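The HOMA formulas above can be sketched directly; the input values below are illustrative assumptions, not patient data from this study:

```python
def homa_ir(fpg_mmol_per_l, fins_mu_per_ml):
    # HOMA-IR = FPG x FINS / 22.5, as given in the text
    return fpg_mmol_per_l * fins_mu_per_ml / 22.5

def homa_is(fpg_mmol_per_l, fins_mu_per_ml):
    # HOMA-IS = 1 / HOMA-IR = 22.5 / (FPG x FINS)
    return 22.5 / (fpg_mmol_per_l * fins_mu_per_ml)

# Assumed example: FPG 5.6 mmol/L, FINS 12 microU/mL
print(round(homa_ir(5.6, 12.0), 2))  # 2.99
```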
Changes of Th17/Treg Cells Before and After Intervention
Peripheral blood of the two groups of patients was collected, left standing for 1 h, and centrifuged at 2000 rpm and 4 °C for 10 min; the supernatant was collected, aliquoted and stored at -20 °C if not assayed immediately. Serum IL-17 and TNF-α were determined by double-antibody sandwich enzyme-linked immunosorbent assay (ELISA), performed strictly according to the instructions of the ELISA kits. For flow cytometry, 0.5 μg each of CD3-PE-Cy7 and CD4-PE antibodies was added, mixed by oscillation, and incubated at room temperature for 30 min, then centrifuged at 300 g for 5 min. After washing with cold PBS, 1 mL of diluted fixation/permeabilization agent was added; after 50 min of reaction, the proportions of Th17 and Treg cells were determined by flow cytometry.
Safety Evaluation and Adverse Reaction Management
If ALT rises during medication, the principles for adjusting the dose or interrupting treatment are: (1) if the ALT elevation is within 2 times the upper limit of normal, continue to observe; (2) if ALT rises to 2-3 times normal, halve the dose and continue to observe; if ALT continues to rise or remains between 80 and 120 U/L, interrupt treatment; (3) if ALT rises above 3 times normal, stop the drug. Once values return to normal after withdrawal, the drug may be resumed, with intensified liver-protective treatment and follow-up. If leukopenia occurs during medication, the principles for adjusting the dose or interrupting treatment are: (1) if the white blood cell count is not lower than 3.0 × 10^9/L, continue the medication under observation; (2) if the white blood cell count drops to between 2.0 and 3.0 × 10^9/L, halve the dose and observe; most patients recover during continued medication, but if the count on review is still below 3.0 × 10^9/L, interrupt treatment; (3) if the white blood cell count falls below 2.0 × 10^9/L, interrupt treatment.
CRITERIA FOR EVALUATION OF SYNDROME EFFICACY
Clinical recovery: TCM clinical symptoms and signs disappear or basically disappear, syndrome score reduction ≥90% and weight loss ≥15%. Remarkable effect: TCM clinical symptoms and signs are obviously improved, syndrome score reduction ≥70% and weight loss ≥10%. Effective: TCM clinical symptoms and signs are improved, syndrome score reduction ≥30% and weight loss ≥5%. Invalid: TCM clinical symptoms and signs are not significantly improved or are even aggravated, syndrome score reduction <30% or weight loss <5%. Total effective rate = (clinical recovery + remarkable effect + effective) / total number × 100%.
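The grading rules above reduce to a simple threshold cascade; a minimal sketch (function names are our own, not from the study protocol):

```python
def syndrome_efficacy(score_reduction, weight_loss):
    """Grade one patient per the criteria in the text.
    Both arguments are fractions, e.g. 0.35 means 35%."""
    if score_reduction >= 0.90 and weight_loss >= 0.15:
        return "clinical recovery"
    if score_reduction >= 0.70 and weight_loss >= 0.10:
        return "remarkable effect"
    if score_reduction >= 0.30 and weight_loss >= 0.05:
        return "effective"
    return "invalid"

def total_effective_rate(grades):
    # Total effective = (recovery + remarkable + effective) / total
    effective = {"clinical recovery", "remarkable effect", "effective"}
    return sum(g in effective for g in grades) / len(grades)

print(syndrome_efficacy(0.75, 0.12))  # remarkable effect
```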
Statistical Analysis
SPSS 17.0 statistical software was used for all analyses. Measurement data are expressed as mean ± standard deviation (x̄ ± s); normally distributed measurement data were compared with the t-test, and count data were compared with the χ2 test. p < 0.05 was considered statistically significant.
Estimation of Sample Size
The sample size was estimated by the method for comparing two sample means in clinical experimental research, using the table of sample sizes required for two-sample mean comparison. With two-sided α = 0.05 and power (1-β) = 0.9 (μ0.05 = 1.96, μ0.1 = 1.28), and, based on previous research experience, an estimated overall standard deviation σ = 16 and a between-sample mean difference δ = 3.0, the result was n = 34 per group. Allowing for a 15% dropout rate, 40 cases in the experimental group and 40 cases in the control group were determined.
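The calculation can be sketched with the textbook two-sample mean comparison formula; note that the parameters reported in the text can be read in more than one way, so the per-group function below is a generic sketch rather than a reproduction of the paper's n = 34, while the dropout inflation step does reproduce the enrolled 40 per group:

```python
import math

def n_per_group(sigma, delta, z_alpha=1.96, z_beta=1.28):
    # Textbook formula: n = 2 * (z_a + z_b)^2 * sigma^2 / delta^2 per group
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

def inflate_for_dropout(n, dropout=0.15):
    # Enrol enough extra patients so that n remain after the expected loss
    return math.ceil(n / (1 - dropout))

# The text reports n = 34 per group; allowing for 15% loss gives the 40 enrolled
print(inflate_for_dropout(34))  # 40
```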
RESULTS
Dropout rates and baseline characteristics were compared between the two groups. In the placebo group, 40 cases were enrolled, 2 dropped out and 38 were observed, a dropout rate of 5%. In the traditional Chinese medicine group, 40 cases were enrolled, 3 dropped out and 37 were observed, a dropout rate of 7.5%. There were 12 diabetic and 15 hypertensive patients in the placebo group, and 13 diabetic and 14 hypertensive patients in the traditional Chinese medicine group; the proportions of diabetic and hypertensive patients did not differ significantly between the groups. Age, gender, baseline FPG, total cholesterol, triglycerides, LDL, BMI, body fat content, HbA1c, FINS, waist-to-hip ratio, TCM syndrome score and the counts of the specific intestinal flora were comparable.
Groups Comparison of Clinical Effect of Simple Obesity Patients After Treatment
After treatment, the total effective rate of simply obese patients in the traditional Chinese medicine group was significantly higher than in the placebo group (81.1% vs 50.0%, p < 0.05). The clinical efficacy of the two groups is compared in Table 1. Further observation found that neither group developed adverse reactions such as abnormal liver function, renal function or white blood cell levels (Table 1).
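The between-group comparison of effective rates is a 2x2 chi-square test; a minimal sketch, with counts back-calculated from the reported rates (30/37 for the TCM group and 19/38 for placebo are our reconstruction, an assumption):

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# TCM: 30 effective / 7 not; placebo: 19 effective / 19 not
stat = chi2_2x2(30, 7, 19, 19)
print(stat > 3.841)  # exceeds the chi-square critical value at p = 0.05
```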
Two Groups Comparison of Biochemical Indexes of Simple Obesity Patients Before and After Treatment
Compared with before treatment, the indexes of the simply obese patients in the placebo group did not change significantly after treatment (P > 0.05). In the traditional Chinese medicine group, FPG, total cholesterol, triglycerides, low-density lipoprotein, BMI, body fat, HbA1c, FINS, waist-to-hip ratio and TCM syndrome scores were significantly lower after treatment than in the placebo group, whereas HDL was significantly higher (P < 0.05). The biochemical indexes of the two groups before and after treatment are compared in Table 2.
Two Groups Comparison of Intestinal Flora in Simple Obesity Patients Before and After Treatment
The colony counts of Bifidobacterium, Bacteroides fragilis, Lactobacillus, Enterococcus and Escherichia coli in the placebo group did not differ significantly after treatment from those before treatment (P > 0.05). In the traditional Chinese medicine group, the colony counts of Bifidobacterium, Lactobacillus and Bacteroides were significantly higher after treatment than in the placebo group (P < 0.05). A similar result was found when the bacterial copy numbers in each stool sample were detected by real-time fluorescence quantitative PCR, as shown in Table 4.
Two Groups Comparison of Inflammatory Indexes in Simple Obesity Patients Before and After Treatment
Compared with before treatment, the inflammatory indexes of the simply obese patients in the placebo group did not change significantly after treatment (P > 0.05). In the traditional Chinese medicine group, the levels of IL-17, TNF-α and Th17/Treg were significantly lower after treatment than in the placebo group (P < 0.05); the inflammatory indexes of the two groups before and after treatment are compared in Table 5. According to the relationship between the post-treatment Th17/Treg level and the normal range after treatment with Qishan formula granules, patients were divided into a normal group (Th17/Treg within the normal range) and a high-level group (Th17/Treg above the normal range). Further analysis found that the total amount of intestinal flora after treatment was significantly higher in the normal group than in the high-level group (P < 0.05, Fig. 1, Table 3).
DISCUSSION
In recent years, with rapid social development, people's lifestyle, diet and nutritional structure have undergone great changes, so the incidence of obesity remains high, and the incidence of diabetes, cardio-cerebrovascular diseases and other obesity-related diseases is increasing year by year. Several studies have shown that Qishan formula granules play an important role in reducing body weight. This study therefore examined the effect of Qishan formula granules on simple obesity and further explored its mechanism.
This study found that the total effective rate of simply obese patients increased significantly after intervention with Qishan formula granules, and that the intervention improved clinical symptoms and biochemical indexes such as blood sugar and blood lipids; the application of Qishan formula granules is thus beneficial in the treatment of simple obesity. The theory of traditional Chinese medicine holds that obesity arises from a fatty diet and inactivity, with dysfunction of the spleen in transportation and accumulation of phlegm and dampness, producing a body disease characterized by obesity and fatigue; its most common pattern is damp-heat accumulating in the spleen syndrome [16,17]. Qishan formula granules are derived from modifications of Gegenqinlian decoction and Six Gentlemen decoction. Raw Astragalus, Huai yam and Poria invigorate qi and strengthen the spleen; Scutellaria baicalensis and Coptis chinensis clear damp-heat from the middle burner; rhubarb and Gynostemma pentaphyllum remove dampness; Atractylodes aromatically resolves dampness; Fructus Aurantii and hawthorn regulate qi, promote digestion and eliminate phlegm; Pueraria and Chuanxiong clear heat, generate fluids, and move qi and blood. Together they invigorate qi and the spleen, clear heat, remove dampness and eliminate turbidity, so that the spleen's transport function is restored, body fluids are properly distributed, internal phlegm-turbidity and heat are cleared, and accumulated fat is eliminated. Glycyrrhiza was removed from the original Gegenqinlian decoction to avoid raising blood sugar and causing water and sodium retention. Accordingly, the application of Qishan formula granules significantly improved the total effective rate and clinical symptoms, blood sugar, blood lipids and other biochemical indicators; however, the underlying cellular and molecular mechanism had not been elucidated (Table 5).
Several studies have pointed out that differences in the composition of the intestinal flora are one of the most important causes of obesity, through mechanisms mainly involving activation of inflammatory responses, promotion of energy absorption and regulation of intestinal permeability [18,19]. Studies have shown that obese patients often exhibit pathological changes such as activation of inflammatory signaling pathways and immune cell infiltration [20]. Starting from the intestinal flora-inflammatory immune pathway should therefore help to elucidate the mechanism by which Qishan formula granules improve simple obesity. This study examined the number and composition of the intestinal flora and found that, after treatment with Qishan formula granules, the colony counts of lactic acid bacteria, Bifidobacterium and Bacteroides were significantly higher in simply obese patients than in the placebo group, showing that the Qishan formula can significantly affect the intestinal flora of simply obese patients. He Xuyun and colleagues found that Astragalus polysaccharide, the main ingredient of Astragalus membranaceus, significantly inhibits the development of obesity in mice and significantly restores disordered intestinal flora [21]. At the same time, several studies have pointed out that Radix Puerariae, Scutellaria, Coptis, Huai yam, Ligusticum chuanxiong, Poria cocos and Astragalus membranaceus can affect the composition and richness of the intestinal flora [22-25], further supporting the effect of Qishan formula granules on the intestinal flora. The intestinal flora plays an important regulatory role in the balance of immune cells: Fang Qian et al found a linear positive correlation between the Bifidobacterium/Escherichia coli ratio and Treg/Th17 in children with asthmatic bronchitis [26].
At the same time, numerous studies have found that berberine regulates the intestinal flora and the balance of Th17/Treg cells in rats, and that disruption of the balance between pro-inflammatory Th17 cells and inhibitory Treg cells is a key factor in many immune and metabolic diseases [27-29]. The effect of Qishan formula granules on inflammatory cells and inflammatory factors was therefore examined further. The results showed that IL-17, TNF-α and Th17/Treg levels were significantly reduced in simply obese patients after intervention with Qishan formula granules, indicating that the granules can dampen the inflammatory response in these patients. Further exploration of the relationship between Th17/Treg levels and the intestinal flora found that the total amount of intestinal flora in patients whose Th17/Treg level was within the normal range was significantly higher than in patients whose Th17/Treg level was above normal, indicating an association between the intestinal flora and Treg/Th17 values in simply obese patients. In summary, Qishan formula granules can improve the symptoms of obesity by increasing the richness and diversity of the intestinal flora of simply obese patients and inhibiting the inflammatory response.
To sum up, this study found that Qishan formula granules can alleviate the clinical symptoms of simple obesity and improve treatment efficacy through the intestinal flora-inflammatory immune pathway. By influencing the composition and richness of the intestinal flora, Qishan formula granules can regulate the proportions of Th17 and Treg cells and the secretion of inflammatory factors, thereby reshaping body shape and improving biochemical indexes to achieve the treatment of simple obesity. This study still has some limitations: the principal active components responsible for its efficacy and the way the intestinal flora affects the inflammatory response require further exploration.
More information regarding this Article visit: OAJBGSR
https://biogenericpublishers.com/pdf/JBGSR.MS.ID.00256.pdf https://biogenericpublishers.com/jbgsr-ms-id-00256-text/
Complex Kinesiological Conundrum Could Microzyman Machinations/Inhibitions Explain the Relative Tardiness of Initial Infantile Human Locomotion? Seun Ayoade* in  Open Access Journal of Biogeneric Science and Research
ABSTRACT
“The onset of walking is a fundamental milestone in motor development of humans and other mammals, yet little is known about what factors determine its timing. Hoofed animals start walking within hours after birth, rodents and small carnivores require days or weeks, and nonhuman primates take months and humans approximately a year to achieve this locomotor skill”.
Introduction
We, mankind, are the tardiest living things in terms of the age at which we start walking. This is highly embarrassing. It is embarrassing to evolutionists who declare man to be the most biologically advanced and evolved species. It is equally embarrassing to creationists who insist that man was made in the image of God. If man is the most advanced and evolved animal, why do our babies take so long to learn to walk? Why are we, the "peak of God's creation", carried around by our mothers for a year while zebras, goats and horses are proudly walking and cavorting just hours after delivery? Creationists have a ready excuse: the fall of man and his expulsion from the Garden of Eden caused man to become genetically degraded [1]. After all, they argue, the first humans, Adam and Eve, walked and talked the very day they were created. Evolutionists, on the other hand, put forth other arguments for the very embarrassing ambulatory limitations of Homo sapiens. I hereby refute these arguments.
Refuting the Gestation Argument
This argument states that humans are pregnant for 9 months, unlike those other animals that are pregnant for shorter periods. However, baby elephants walk hours after birth, and the gestation period in elephants is 18 to 22 months!
Refuting the Life Span Relativity Argument
This argument states that because horses and dogs have shorter life spans than we humans, their apparent early walking is not really that early [2]. I refute this argument in the table below by showing at what age human babies would walk if we had the life span of cats, dogs, etc. (Table 1).
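The life-span scaling behind this refutation is simple arithmetic; a sketch using rough illustrative lifespans (assumed values, not the article's Table 1 data). Even scaled to a dog's lifespan, a human would still walk at around 2 months of age, far later than the hours-after-birth onset of hoofed animals:

```python
# Rough lifespans in years; illustrative assumptions only
human_walk_months = 12
human_lifespan_years = 80

for animal, lifespan in {"cat": 15, "dog": 13, "horse": 28}.items():
    # Scale the human walking onset down to the animal's lifespan
    scaled = human_walk_months * lifespan / human_lifespan_years
    print(f"{animal}: a lifespan-scaled human would walk at about {scaled:.1f} months")
```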
Refuting the Brain Development Argument
This argument states that all animals start walking when their brains reach a particular stage of development [3]. Then why do humans reach the stage so late if we are the most evolved animal?
Refuting the Bipedal Argument
This argument states that walking on two legs involves much more balance and coordination than walking on all fours and so should take longer. If this argument were true, human babies would start crawling hours after birth! Human babies don't crawl until 4-7 months! Also, studies by Francesco Lacquaniti at the University of Rome Tor Vergata, Italy, have shown that despite Homo sapiens' unique gait, the motor patterns controlling walking in other animals are nearly identical to those in man!
Refuting the Intelligence Argument
This argument claims that since humans are more intelligent than other animals, we have to start walking later because we have so many other things to do with our minds apart from walking [4-7]. However, ravens are very intelligent birds, yet raven chicks walk and fly at one month old. Monkeys are intelligent, and yet start walking at 6 weeks!
MY HYPOTHESIS AND PROPOSAL
The key to cracking this mystery will be to do a comparative study of the cellular dust [8-10] of various animals. This is not likely to happen any time soon, however, as the mainstream scientific community continues to deny the existence of the microzymas [11].
More information regarding this Article visit: OAJBGSR
https://biogenericpublishers.com/pdf/JBGSR.MS.ID.00254.pdf https://biogenericpublishers.com/jbgsr-ms-id-00254-text/
Learning Difficulties and Reading Comprehension in the First Grades of Primary School by Theofilidis Antonis* in  Open Access Journal of Biogeneric Science and Research
ABSTRACT
This paper is a study on the concept of learning disabilities and reading comprehension. Specifically, it studies the learning difficulties and the reading ability in terms of the school performance of the students of the first grades of primary school.
Aim: In our work we try to analyze and present the following topics:
The definition of the term "Learning Disabilities", their etiology and the correlation of learning difficulties and reading.
Which learning difficulties we face in reading and writing, and how they affect the school context and students' performance.
How to diagnose learning disabilities in reading ability and interventions that need to be implemented in the classroom to reduce them.
Method: We followed the most up-to-date literature on the subject.
Conclusions: Learning difficulties can also cause emotional problems in children as they feel that they are lagging behind compared to the rest of the class. Our goal is to include children with learning disabilities in the classroom by adapting the lesson to the children and not the children in it. Our main concern should be the valid and timely diagnosis of difficulties and effective intervention to address them.
Keywords: Learning difficulties, reading comprehension, primary school
LEARNING DIFFICULTIES - DEFINITION AND ETIOLOGY
Learning disabilities are a generalized expression of some of the individual difficulties encountered by students. More specifically, the concept of learning disabilities refers to a variety of heterogeneous disorders, which result in difficulty in learning, speaking, writing, reading, information processing, mathematical computation, attention retention and in the coordination of movements. Every disorder that is part of the learning disability is differentiated in terms of the intensity of its manifestation, the nature and the symptoms of the difficulties, as well as the consequences that they have. These disorders can cause problems throughout the life of the individual [1]. An important point in the study of learning difficulties concerns the impossibility of having common symptoms of expression of these difficulties, a fact that leads to their delayed recognition, which usually occurs during school age [2].
The goal of any educational system is the success of students in their academic performance, as well as their acceptance by the school environment. Many children, however, who have learning difficulties do not have the expected school performance required. The learning difficulties that can occur to each child individually vary, which makes the role of the teacher difficult, since he is called to deal with each case separately [3].
The origin of learning difficulties seems to be mainly due to dysfunction in the central nervous system of the individual [5]. Thus, learning disabilities arise through the pathology of the individual himself. However, apart from the neurological factors that seem to play a major role in the development of learning disabilities, the great heterogeneity of their symptoms leads many researchers to conclude that the etiology of these difficulties is very likely multifactorial, and that epigenetic factors also play a role. More specifically, researchers should not neglect the study of environmental, cultural and emotional factors, which seem to play a significant role in the occurrence of learning disabilities. Such factors may be an inappropriate school environment, a difficult family environment, depression, anxiety, the child's personality, psychological neglect, etc. Still, we should not overlook the cognitive factors that emerge through low performance in almost all learning activities. In addition, there seems to be a differentiation of learning difficulties in relation to the sex of the child: in contrast to girls, boys have a higher rate of learning difficulties, especially in the behavioral field and language learning [4]. Therefore, as learning is a complex, multifactorial process, all the complex factors that may affect it should be considered. However, it has not yet been clarified in the literature whether all of the above are factors that cause learning disabilities or simply predisposing or risk factors [6].
Papadakou, Palili and Gini (2014) studied some epidemiological factors that seem to be directly related to the occurrence or not of learning difficulties. The research showed that children diagnosed with learning disabilities had at least one first-degree relative with learning disabilities. The researchers also found that learning disabilities were directly linked to sleep disorders, attention deficit hyperactivity disorder (ADHD), and a variety of other emotional and social problems, such as anxiety, depression, adaptive disorders, difficulties in social interaction, etc. The researchers emphasize that the above factors could be considered prognostic and could serve an important function in the design of targeted early interventions, with the aim of better academic and social development [7].
Finally, the correlation of learning difficulties with the various emotional problems that children may manifest is considered important. Although no precise explanation has been given, it seems that children with learning disabilities tend to develop fewer positive and more negative emotions, which reduce their willpower and prevent them from making the necessary effort in the school context (Gugumi, 2015). Behavioral problems can be characterized as internal or external. Problems such as stress, melancholy, depression, obsessive-compulsive disorder, dysthymia and social phobia, as well as disorders such as bulimia, nightmares, anorexia, shyness and isolation, are considered internal, while problems such as aggression, destructiveness, negativity, rudeness, attention deficit hyperactivity disorder, adjustment and conduct disorders, hostility and theft are considered external [8] (Bornstein, Hahn & Suwalsky, 2013). Children's intrapersonal or interpersonal adjustment is directly affected by these emotional and behavioral problems, which may be due to students' low self-esteem, stemming from the learning difficulties they face (Koliadis, 2010).
LEARNING DISABILITIES AND READING
Most children who have learning disabilities have problems in the cognitive process of reading and understanding written text. Reading is a complex cognitive task involving the processing and analysis of graphemes, phonemes and the semantic information of written language. It is closely related to a variety of the child's other cognitive functions, which must be activated for full reading comprehension, such as the degree of phonological awareness, the capacity of short-term memory, perception, concentration, attention, language and thinking, as well as sensory and motor skills such as vision [9] (Natália Jordão, Adriana de Souza Batista Kida, Danielle Dutenhefner de Aquino, Mariana de Oliveira Costa, & Clara Regina Brandão de Avila, 2019). More specifically, reading refers to the process in which the student decodes the written symbols of our language and converts them into speech. The graphic processing of these written symbols can convey phonemic, phonological and semantic information to the receiver (Porpodas, 2002).
In order for the reading process to take place correctly, it is necessary not only to decode the printed symbols, but also to understand their conceptual content. Reading is thus an important process of processing and extracting information, deeply connected to and dependent on both decoding and comprehension (Porpodas, 2002; Tsesmeli, 2012).
We understand, then, that decoding and comprehension are the two cognitive functions that play the most important role in the reading process. More specifically, decoding is the ability to recognize written symbols and automatically convert them into a phonological representation. An important role in correct decoding is played by the state of long-term memory, access to it and retrieval of the information necessary for correct letter–phoneme matching [19] (Tzivinikou, 2015; Kim, Bryant, Bryant, & Park, 2017) [11]. Comprehension, the second, equally important cognitive function of reading, presupposes recognition of the semantic content of words, which can come from knowledge of the meaning of words and understanding of their grammatical form as well as their syntactic structure (Kokkinaki, 2014; Westwood, 2016). For correct understanding of a text, cognitive strategies should be used correctly, and the words of the text should be recognized and combined with prior knowledge [12] (Westwood, 2016; Tzivinikou, 2015) [13]. According to Kaldenberg, Watt, & Therrien (2015), reading comprehension is directly related to the reader's prior knowledge: for the information obtained from a text to be generalized and understood, it must be processed and connected with the knowledge the individual already possesses. Lack of this knowledge leads to an inability to use the metacognitive strategies needed for reading (Kaldenberg et al., 2015) [14].
Reading, then, can be said to be the product of the two factors presented above, decoding and comprehension. As a result, a malfunction in even one of these factors can lead to so-called reading difficulties. Difficulty in reading comprehension can manifest itself throughout the levels of the reading process, from the simple learning of individual graphemes to the reading, comprehension and retention of the acquired textual information. The reading difficulties that children present are mainly based on neurological abnormalities and may be accompanied by delayed speech and language problems in general (Kokkinaki, 2014).
LEARNING DIFFICULTIES AND READING IN PRIMARY SCHOOL
Regarding reading comprehension in the school performance of primary school students, we must keep in mind the basic cognitive development of the reading process. A student is expected to have successfully mastered decoding by the second grade of elementary school. By the third grade, a child should not only be able to decode written text easily, but also understand the meaning of what he is reading. With this in mind, we can speak of learning difficulties related to reading comprehension only if the child has received education appropriate to his grade level. Students with learning disabilities typically lag about one to two years behind the rest of the class.
Learning disabilities seem to slow down children's performance at school. The motivation and enthusiasm that each student has for learning does not seem to exist in students with learning difficulties, resulting in a low academic level [15] (Lama, 2019). Early diagnosis and treatment of learning difficulties in the first grades of primary school is therefore vital. The main problem lies in the field of decoding, since the reader's difficulty in decoding words hinders his entry into the process of comprehension (Kokkinaki, 2014) [16].
LEARNING DIFFICULTIES AND WRITING SPEECH PROBLEMS IN PRIMARY SCHOOL
The role of writing in school is very important since, apart from being a means of communication, it is also one of the basic skills that will accompany the child throughout his school years and later life. Necessary criteria for the production of written speech are the linguistic and metalinguistic as well as the cognitive and metacognitive skills of the individual. An important role in the production of written speech is also played by the individual's existing knowledge and experiences, motivations, feelings and goals (Panteliadou, 2000; Vasarmidou & Spantidakis, 2015). Students with learning disabilities have difficulty using the metacognitive strategies that should be employed to produce written communication. These strategies would support the student in planning the writing process, help him during production, and finally allow him to check and evaluate his result and make the necessary corrections (Panteliadou, 2000; Vasarmidou, 2015). However, in addition to cognitive and metacognitive skills, difficulties may also arise in the student's mechanistic skills. These include handwriting, spelling, vocabulary development and the use of punctuation, accentuation and uppercase and lowercase letters. Difficulties in some or all of these skills create problems in writing [17] (Vasarmidou, 2015; Panteliadou, 2000). Characteristic features of written-speech difficulty can also be the reversal or confusion of letters, omissions or additions of letters, illegible letters and permutations. Identifying more serious issues related to speech and reading in combination with writing is much more complicated before the first grade of elementary school, since difficulty in organizing speech is common in preschool children (Tzouriadou, 2008) [18].
DIAGNOSIS AND TEACHING INTERVENTIONS IN CHILDREN WITH LEARNING DISABILITIES
There are many cases of students with learning difficulties who are able to keep up with the class curriculum without particular problems. In most cases, however, the difficulties are quite intense, and the content needs to be adjusted so that these students are able to follow it [20] (Tzouriadou, 2011).
Learning Disabilities are a common problem for many students, but their specific nature makes effective intervention programs possible. Intervention must be timely, both to address the difficulty at its genesis and to avoid creating negative feelings regarding children's self-esteem, self-image and confidence in their own abilities and school performance. Proper intervention follows from a valid assessment of students' dysfunctions and weaknesses and is a multifactorial process, which requires adequate knowledge of the child's weaknesses, strengths and personality, cooperation with parents, study of the child's social, family and cultural environment, and much other important information (Porpodas, 2002). Assessment of cognitive abilities should cover phonological awareness, short-term memory, decoding and, finally, comprehension of the text read (Kokkinaki, 2014).
Therefore, a necessary condition for an effective didactic intervention is the correct diagnosis. Specifically, for reading ability, starting with a well-targeted assessment based on the difficulties we detect in the child, we give a specialized approach to the teaching of reading. In order to make this assessment, we must take as a guide the level of reading ability that the student already possesses. The correct evaluation in the kindergarten and in the first grades of elementary school leads to a timely intervention and prevention of the student's difficulties [19] (Tzivinikou, S., 2015).
Today, we are given the opportunity, with the use of appropriate tools, to understand the learning difficulties that a child faces from an early age. Some of the most common screening tests that are mainly related to reading ability are:
The Predictive Assessment of Reading (PAR). It is considered one of the most reliable and valid tests for kindergarten children, through which we have the ability to predict the reading ability of children up to high school.
The Texas Primary Reading Inventory (K-2). It focuses on children from pre-school to the third grade and has the ability to recognize and evaluate the developmental stages of their reading [21].
The Dynamic Indicators of Basic Early Literacy Skills (DIBELS). It is a test that deals with the processes of phonemic awareness, the alphabetic principle and phonological awareness, the ease and accuracy of reading a text, the development of vocabulary and the process of comprehension.
Finally, the AIMS-Web test, which enables the use of RtI programs and multilevel teaching in schools. It covers children from kindergarten to high school and provides a basis for detailed curricula in mathematics and reading [21] (Tzivinikou, 2015).
So, after completing our diagnosis and evaluating the capabilities and needs of the student, we turn to the right teaching intervention. In order to eliminate the differences between the students, we focus on adapting the curriculum to the needs of the student and not the other way around. Thus, the goals we have set in a classroom do not change according to the abilities of each student, but the way they are approached and their degree of difficulty differ (Skidmore 2004). The ultimate goal is to develop learning strategies and use them within the classroom. Strategies such as group research or peer-to-peer teaching can be particularly useful, not only for students with difficulties but also for all students in the class (Tzouriadou, 2011). By learning to use cognitive and metacognitive strategies, students have the opportunity to process and use the information they will receive, to think and perform a task, and to evaluate their performance in it [22] (Luke 2006).
Students with reading difficulties can improve their reading skills through various educational approaches applied during teaching. Indicatively, we can refer to direct teaching, the formation of small groups that enhance discussion and support, vocal thinking, etc. (Tzivinikou, 2015). Also, strategies that have to do with reading are:
The Reading Analysis, Merge and Decoding strategy. It refers to students with learning difficulties aged 7 to 12 years and has to do mainly with the connection of sounds with voices.
Auditory Discrimination in Depth. It emphasizes the posture of the mouth: students learn the feeling that each sound gives at the moment of its pronunciation. Thus, students analyze words and recognize sounds according to the placement of the tongue and mouth.
Analysis for Decoding Only. A strategy implemented to teach students to analyze letter patterns in small words that they often come across. For example, the Greek word πέρα ("beyond") shares a spelling pattern with μέρα ("day"), βέρα ("wedding ring") and καλημέρα ("good morning").
The Read-By-Ratio Approach. This strategy is based on phonemes and aims at word recognition. Through shared spelling patterns of words, students are taught how to analyze and decode unknown words (Tzivinikou, 2015).
CONCLUSIONS
In recent years, there has been a rapid increase in learning disabilities. The percentage of children diagnosed with learning disabilities, which can cause serious developmental problems within and outside the school context, is increasing. Children who have been diagnosed with learning disabilities appear to have serious difficulty adapting to the classroom and thus lag behind in their academic performance. In the past there was a perception that these children were "lazy", "bad students" or "stupid", but we now know that this perception is wrong: we are not talking about bad or lazy students, but about students who, due to a disorder of their nervous system, do not have the same capabilities as the rest. Learning difficulties can also cause emotional problems in children, as they feel that they lag behind the rest of the class. Our goal is to include children with learning disabilities in the classroom by adapting the lesson to the children and not the children to it. Our main concern should be the valid and timely diagnosis of difficulties and effective intervention to address them.
More information regarding this Article visit: OAJBGSR
https://biogenericpublishers.com/pdf/JBGSR.MS.ID.00255.pdf https://biogenericpublishers.com/jbgsr-ms-id-00255-text/
Marine Drugs as a Valuable Source of Natural Biomarkers Used in the Treatment of Alzheimer’s Disease by UMA NATH U* in  Open Access Journal of Biogeneric Science and Research
ABSTRACT
Alzheimer’s disease (AD) is a multifactorial neurodegenerative disorder. Currently approved drugs may only ameliorate symptoms in a restricted number of patients and for a restricted period of time. There is thus a translational research challenge in identifying new effective drugs and their respective new therapeutic targets in AD and other neurodegenerative disorders. In this review, selected examples of marine-derived compounds in neurodegeneration, specifically in the AD field, are reported. Emphasis has been placed on the compounds and their possible relevant biological activities. The proposed drug development paradigm and current hypotheses should be investigated accurately in future AD therapy directions.
KEYWORDS: Marine drugs; Alzheimer’s disease; Mechanisms of activity
Introduction
Right now, 46.8 million persons in the world are suffering from dementia, and it is expected that this number will increase to 74.7 million in 2030 and 131.5 million in 2050. Alzheimer’s disease (AD) is the main cause of dementia in the elderly. AD is a progressive, continuous and incurable brain disorder leading to increasingly severe disability, such as memory loss (amnesia), minimal to no communication (aphasia), the inability to perform activities of daily living (ADL) (apraxia), and the impairment of sensory input (development of agnosias). In brief, AD is a multifactorial neurodegenerative disorder that affects cognition (memory, thinking and language abilities), quality of life and self-sufficiency in the elderly [2]. AD is strictly related to aging; indeed, the majority of cases (≥ 90%) are initially diagnosed among persons ≥ 65 years of age (late-onset AD, LOAD), while the remaining cases have an early onset (EOAD). In particular, genes involved in the production of the amyloid β (Aβ) peptides, such as amyloid precursor protein (APP), Presenilin 1 (PSEN1) and Presenilin 2 (PSEN2), may account for as much as 5%–10% of EOAD incidence.
BRYOSTATIN
Bryostatin 1 is a natural product derived from the marine invertebrate Bugula neritina. It has potent and broad antitumor activity. Bryostatin 1 activates protein kinase C family members, with nanomolar potency for the PKCα and PKCε isotypes.
In the central nervous system, bryostatin 1 activation of PKC boosts synthesis and secretion of the neurotrophic factor BDNF, a synaptic growth factor linked to learning and memory. The compound also activates nonamyloidogenic, α-secretase processing of amyloid precursor protein.
Preclinical work on bryostatin in nervous system diseases has come mainly from the Alkon lab. In their studies, intraperitoneal administration activated brain PKCε and prevented synaptic loss, Aβ accumulation, and memory decline in Alzheimer’s disease transgenic mice. The drug preserved synapses and improved memory in aged rats and in rodent models of stroke and Fragile X syndrome. In a different lab, bryostatin given by mouth improved memory and learning in an AD model. In a mouse model of multiple sclerosis, bryostatin promoted anti-inflammatory immune responses and improved neurologic deficits.
MACROALGAE
Acetylcholinesterase (AChE) and butyrylcholinesterase (BChE) are important enzymes involved in the regulation of acetylcholine (ACh) in the synaptic cleft of neurons to promote cognitive function. Loss or rapid degradation of acetylcholine, however, leads to cholinergic dysfunction, synaptic dysfunction and ultimately memory impairment. Hence, cholinesterase inhibitors have been developed to alleviate the cholinergic deficit by restoring ACh levels and improving cognitive function. Seaweed-derived biologically active compounds have been reported to exhibit inhibitory effects on enzymes associated with Alzheimer’s disease. One study revealed that aqueous-ethanol extracts rich in phlorotannins, phenolic acids and flavonoids from Ecklonia maxima, Gelidium pristoides, Gracilaria gracilis and Ulva lactuca exhibit acetylcholinesterase and butyrylcholinesterase inhibitory activities. Furthermore, sulfated polysaccharides obtained from Ulva rigida as well as the aforementioned algal species also showed potent inhibitory effects on BChE and AChE in vitro. Purified fractions of Gelidiella acerosa showed AChE and BChE inhibitory activity, with phytol identified as the most effective constituent of the fraction. In the same study, molecular docking analysis revealed that phytol binds tightly to an arginine residue at the active site of the enzyme, thereby changing its conformation and exerting its inhibitory effect. AChE inhibitory activity has also been reported for Codium duthieae, Amphiroa beauvoisii, Gelidium foliaceum, Laurencia complanata and Rhodomelopsis africana, while Hypnea musciformis and Ochtodes secundiramea extracts showed weak inhibitory activity (less than 30% inhibition) on AChE. Jung et al. also reported AChE and BChE inhibitory effects of methanol extracts of Ecklonia cava, Ecklonia kurome and Myelophycus simplex. A glycoprotein isolated from Undaria pinnatifida showed dose-responsive inhibitory effects on butyrylcholinesterase and acetylcholinesterase activities.
MEDITERRANEAN RED SEAWEED HALOPITHYS INCURVA
The close relationship between the amyloid aggregation process and the onset of amyloidosis constantly encourages scientific research in the identification of new natural compounds capable of suppressing the formation of toxic amyloid aggregates. For the first time, our findings demonstrated the in vitro anti-amyloidogenic role of the H. incurva, whose metabolic composition and bioactivity were strongly influenced by seasonality. This work focused on the bioactivity of H. incurva phytocomplex to evaluate the synergistic action of its various constituents, while the structure and functionality of its secondary metabolites will be the subject of further studies.
FASCAPLYSIN
Fascaplysin, a bis-indole alkaloid, was isolated from the marine sponge Fascaplysinopsis Bergquist sp. Fascaplysin is a specific kinase inhibitor of CDK4.
More information regarding this Article visit: OAJBGSR
https://biogenericpublishers.com/pdf/JBGSR.MS.ID.00253.pdf https://biogenericpublishers.com/jbgsr-ms-id-00253-text/
Forest Biodiversity Degradation: Assessment of Deforestation in Ohaji Egbema Forest Reserve, Imo State, Nigeria Using GIS Approach by Egbuche Christian Toochi in Open Access Journal of Biogeneric Science and Research
ABSTRACT
This research is focused on a spatial analysis of deforestation in a reserved forest over a period of time, using a GIS approach, in Ohaji Egbema Local Government Area, Imo State, Nigeria. It aimed at assessing and analyzing deforestation in the Ohaji Egbema forest reserve and examining the possible effects of deforestation on the forest environment. The assessment concentrated on when and where forestlands have changed in the reserve within the period 1984–2040 (forecast). The key objective was to assess the impact of land use and land cover changes on forest cover over the past 36 years, while the sub-objectives were to map out the different land cover in the Ohaji Egbema forest reserve, to assess land cover changes in the forest reserve susceptible to long-term degradation from 1984 to 2020, to evaluate forest loss in the area over the past 36 years, and to predict the state of the land cover (forest) for the next 20 years (to 2040). Primary data (200 ground truth points) were systematically collected from four different LULC classes in the study area using a geographical positioning system (GPS), and secondary data (satellite Landsat imageries of 1984, 2002 and 2020) of the study area were acquired. The imageries were processed, enhanced and classified into four LULC classes using supervised classification in Idrisi and ArcGIS software. Ground truth points were utilized to assess the accuracy of the classifications. The data collected were analyzed in tables and figures and represented with bar-chart and pie-chart graphs. Results showed that forest land, built-up, grassland and water body were the four LULC classes in the study area. Kappa coefficient values of 91%, 85% and 92% for 1984, 2002 and 2020 respectively show the accuracy of the classifications. Classifying the land uses into built-up and forest lands revealed that the built-up lands constantly rose while the forest lands kept dropping.
The built-up lands increased by 49.30% between 1984 and 2002, 50.00% between 2002 and 2020, and 28.40% between 2020 and 2040, at the expense of the forest portion of the area, which fell by 33.88% between 1984 and 2002, 46.45% between 2002 and 2020, and 49.22% between 2020 and 2040. Increase in population, per capita income and land use activities, and by extension urban expansion, were found to be the major factors causing deforestation in the forest reserve; it is likely that in the near future the remaining forest lands will be gradually wiped out and the environmental crisis consequently aggravated. Based on the findings of the study, there is a need to urgently limit and control the high rate of deforestation going on in the Ohaji Egbema forest reserve and to embark on tree-replanting campaigns without delay. It is also recommended that higher-quality satellite imagery offering up to 4 m resolution be used and that a forest relic analysis be conducted.
KEYWORDS: Biodiversity; Forest degradation; GIS; Forest Reserve; LULC; Deforestation and Satellite Imagery
Introduction
Deforestation constitutes one of the most serious threats to forest biodiversity and poses a global development challenge and long-term environmental problem at both the regional level and for the world at large. According to [1] and [2], degradation of the forest ecosystem has obvious ecological effects on the immediate environment and forested areas. Deforestation can result in erosion, which in turn may lead to desertification. The economic and human consequences of deforestation include the loss of potential wood used as fuel wood for cooking and heating, among others. The transformation of forested lands by human actions represents one of the great forces in global environmental change and is considered one of the great drivers of biodiversity loss. Forests are cleared, degraded and fragmented by timber harvest, conversion to agriculture, road construction, human-caused fire, and a myriad of other forms of degradation. According to [5], deforestation refers to the removal of trees from a forested site and the conversion of the land to another use, most often agriculture. There is growing concern over the shrinking area of forests in recent times [7]. The livelihoods of over two hundred million forest dwellers and poor settlers depend directly on food, fibre, fodder, fuel and other resources taken from the forest or produced on recently cleared forest soils. Furthermore, deforestation has become an issue of global environmental concern, in particular because of the value of forests in biodiversity conservation and in limiting the greenhouse effect [8]. Globally, deforestation of this kind has been described as the major problem facing the forest ecosystem. The extent of deforestation in any particular location or region can be viewed in terms of its economic, ecological and human consequences, as well as the scramble for land.
Forest degradation may in many ways be irreversible; because of its extensive nature, the impact of the activities altering forest condition may not be immediately apparent, and as a result it is largely ignored by those who cause it. Forest is often perceived as a stock resource, always and freely available for conversion to other uses, without consideration of the consequences for the productive services and environmental roles of the forest. As environmental degradation and its consequences become a global issue, the world is faced with the danger that renewable forest resources may be exhausted and that man risks destroying his environment if the impacts of deforestation are allowed to go unchecked. It therefore becomes important to evaluate the level of deforestation and degradation in the Ohaji Egbema forest reserve using a GIS application. The effect of deforestation and degradation of the only forest reserve in South-East Nigeria has recently become a serious problem; the drivers identified in the area are mostly the quest for fuel wood, grazing and agricultural use. One of the effects of deforestation is global warming: trees take up carbon dioxide during photosynthesis, so deforestation leads to an increase of carbon dioxide in the atmosphere, which traps heat and causes global warming. The objectives of this study were therefore to assess the impact of land use and land cover changes on forest cover over the past 36 years, to map out the different land cover classes in the Ohaji Egbema forest reserve, to assess land cover changes from 1984 to 2020, to evaluate forest loss in the area over the past 36 years, and to predict the state of the land cover (forest) for the next 20 years (to 2040). Deforestation and degradation of the forest pose a serious problem, especially in this era of global climate change.
According to the United Nations Food and Agriculture Organization [12], deforestation can be defined as the permanent destruction of forests in order to make the land available for other uses. Deforestation is said to be taking place when forest is cut down on a massive scale without a proportionate effort at replanting. Deforestation is also defined as the conversion of forest to an alternative permanent non-forested land use such as agriculture, grazing or urban development [5]. Deforestation is primarily a concern for the developing countries of the tropics [6], as it is shrinking the area of tropical forests [3], causing loss of biodiversity and enhancing the greenhouse effect [8]. Forest degradation occurs when the ecosystem functions of the forest are degraded but the area remains forested rather than cleared [9]. The available literature shows that forest deforestation and degradation are caused by expansion of farming land, logging and fuel wood collection, overgrazing, fire outbreaks, release of greenhouse gases and urbanization/industrialization, as well as the provision of infrastructure. Moreover, agents of deforestation in agricultural terms include slash-and-burn farmers, commercial farmers, ranchers, loggers, firewood collectors, etc. The Center for Biodiversity and Conservation (CBC, 1998) established remote sensing and geographic information system (RS/GIS) facilities as technologies that help to identify potential survey sites, analyze deforestation rates in focal study areas, incorporate spatial and non-spatial databases, and create persuasive visual aids to enhance reports and proposals. Change detection is the process of identifying differences in the state of an object or phenomenon by observing it at different times [11]. It is an important process in monitoring and managing natural resources and urban development because it provides a quantitative analysis of the spatial distribution of the phenomenon of interest.
Study Area
Ohaji Egbema lies in the southwestern part of Imo State and shares common boundaries with Owerri to the east, Oguta to the north, and Ogba/Egbema/Ndoni in Rivers State to the southwest. The 2006 census put the population of the study area at over 182,500 inhabitants, but recently, due to industrialization and urbanization, Ohaji/Egbema has witnessed a great deal of population influx. The study area lies within latitudes 5°11′N and 5°35′N and longitudes 6°37′E and 6°57′E. It covers an area of about 890 km².
The study area is largely drained by the Otammiri River and other Imo river tributaries. The study area belongs to a major physiographic region- the undulating lowland plain which bears a relationship with its geology. The low land areas are largely underlain by the younger and loosely consolidated Benin formation [12]. The vegetation and climate of the study area has been delineated to have 2 distinct seasons both of which are warm, these are the dry and rainy season.
Climate and Vegetation
The dry season occurs between November and March, while the rainy season occurs between April and October. The high temperatures, humidity and precipitation of the area favour quick plant growth and hence a vegetation cover characterized by the trees and shrubs of the rainforest belt of Nigeria.
Geology and Soil
The study area is located in the Eastern Niger Delta sedimentary basin, characterized by the three lithostratigraphic units of the Niger Delta. These units are the Akata, Agbada and Benin formations, in order of decreasing age [13]. The overall thickness of the Tertiary sediments is about 10,000 meters.
Method of Data Collection
Data are based on field observation and monitoring of the real situation; they are collected as facts or evidence that may be processed to give them meaning and turn them into information, in line with Heywood (1988) [14]. A geographical positioning system (GPS) was used to collect fifty (50) coordinate points for each land use land cover class, totaling 200 points for the four major land use and land cover classes identified in the study area. Landsat imageries of one season (paths 188 and 189, row 56) were acquired from the United States Geological Survey (USGS) in time series: 1984 Thematic Mapper (TM), 2002 Enhanced Thematic Mapper (ETM) and 2020 Operational Land Imager (OLI), as shown in Table 1 below.
Data Analysis and Data Processing
The acquired Landsat imageries were pre-processed for geometric correction and the removal of stripes and cloud. Image enhancement was carried out employing bands 4, 3, 2 for Landsat TM and ETM and bands 5, 4, 3 for Landsat OLI/TIRS to produce false colour composites, using Idrisi and ArcGIS software. In the resultant false colour composites, built-up areas appear cyan-blue; vegetation appears in shades of red, differentiating dense forest from grass or farm lands; water bodies range from dark blue to black; and bare lands range from white to brown [15]. This was necessary to enhance visualization and interpretability of the scenes for classification. The study area was clipped out using an administrative map of Nigeria containing Imo State and Local Government shapefiles in ArcMap (Tables 1 and 2).
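The band-to-channel assignment described above (near-infrared, red and green bands mapped to the red, green and blue display channels, so that vegetation appears red) can be sketched with plain NumPy. This is an illustrative sketch only; the band arrays and percentile stretch are assumptions, and the article's actual processing was done in Idrisi and ArcGIS:

```python
import numpy as np

def false_colour_composite(nir, red, green):
    """Stack three bands into an RGB false-colour composite for display.
    For Landsat TM/ETM+ these are bands 4, 3, 2; for OLI, bands 5, 4, 3."""
    def stretch(band):
        # simple 2-98 percentile contrast stretch into the 0..1 display range
        lo, hi = np.percentile(band, (2, 98))
        return np.clip((band - lo) / (hi - lo + 1e-9), 0.0, 1.0)
    # NIR -> red channel, red -> green, green -> blue: vegetation shows as red
    return np.dstack([stretch(nir), stretch(red), stretch(green)])

# tiny synthetic "bands": top row vegetated (high NIR), bottom row bare
nir   = np.array([[0.80, 0.70], [0.20, 0.10]])
red   = np.array([[0.10, 0.20], [0.30, 0.40]])
green = np.array([[0.10, 0.10], [0.20, 0.20]])
rgb = false_colour_composite(nir, red, green)
print(rgb.shape)  # (2, 2, 3)
```

The stretch is only for visual contrast; any radiometric correction would be applied before this step.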
Land Use Land Cover Classification
The false colour composite images were subjected to supervised classification based on ground-based information. The maximum likelihood classifier was adopted to assign areas of the Landsat images to thematic classes, as determined by maximal spectral heterogeneity, according to [16]. The maximum likelihood algorithm considers the average characteristics of the spectral signature of each category and the covariance among all categories, thus allowing precise discrimination of categories. The land cover was classified into four major land use land cover classes: built-up, forest cover, grass cover and water body. Forest vegetation comprises areas dominated by trees and shrubs; grassland comprises areas dominated by grasses, including farm lands and gardens; water body comprises areas occupied by streams, rivers and inland waters; while built-up areas are occupied by built structures, including residential and commercial buildings, schools, churches and tarred roads, as well as land surface features devoid of any vegetation cover or structures, including rocks. Four applications (ArcGIS 10.5, Idrisi, Excel and Microsoft Word) were used in this study.
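The maximum likelihood rule described above models each class by the mean vector and covariance matrix of its training pixels and assigns a pixel to the class with the highest Gaussian likelihood. A minimal sketch, assuming equal class priors and made-up two-band spectral values (not the study's data or software):

```python
import numpy as np

def train_ml_classifier(samples):
    """samples: {class_name: (n_pixels, n_bands) array of training pixels}.
    Returns the per-class mean vector and covariance matrix."""
    return {c: (x.mean(axis=0), np.cov(x, rowvar=False)) for c, x in samples.items()}

def classify(pixel, params):
    """Maximum likelihood rule with equal priors: pick the class maximising
    the Gaussian log-likelihood  -0.5 * (ln|S| + (x-m)^T S^-1 (x-m))."""
    best, best_ll = None, -np.inf
    for c, (m, S) in params.items():
        d = pixel - m
        ll = -0.5 * (np.log(np.linalg.det(S)) + d @ np.linalg.solve(S, d))
        if ll > best_ll:
            best, best_ll = c, ll
    return best

# toy two-band training signatures for two of the four LULC classes
samples = {
    "forest": np.array([[0.10, 0.80], [0.20, 0.70], [0.15, 0.75], [0.12, 0.72]]),
    "water":  np.array([[0.05, 0.05], [0.10, 0.10], [0.07, 0.06], [0.06, 0.09]]),
}
params = train_ml_classifier(samples)
print(classify(np.array([0.14, 0.74]), params))  # forest
```

Applying `classify` per pixel over a whole image is what the Idrisi/ArcGIS supervised-classification tools do at scale.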
Accuracy Assessment of The Classification
The aim of accuracy assessment is to quantitatively assess how effectively the pixels were assigned to the correct land cover classes. A confusion matrix was used for accuracy assessment of the classification, with the training samples and ground-truth points as reference; this approach has been adopted effectively in similar studies [17,18]. The accuracy of the classified maps for 1984, 2002 and 2020 was evaluated using the error matrix, which yields parameters such as per-class agreement/accuracy, overall accuracy, commission error, omission error and the Kappa coefficient. The agreement/accuracy is the probability (%) that the classifier has labelled an image pixel into the ground-truth class, i.e. the probability of a reference pixel being correctly classified. The overall accuracy is determined by dividing the total number of correctly classified pixels by the total number of pixels in the error matrix. Commission errors are pixels that belong to another class but are labelled as belonging to the class in question, while omission errors are pixels that belong to the truth class but fail to be classified into it. Finally, the Kappa coefficient (Khat) measures the agreement between the classification map and the reference data, as expressed below:
Kappa coefficient = (Observed Accuracy − Chance Agreement) / (1 − Chance Agreement)
Kappa values above 0.80 (80%) indicate good classification performance, values between 0.40 and 0.80 indicate moderate performance, and values below 0.40 (40%) indicate poor performance [19].
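The error-matrix quantities above can be sketched as follows. The confusion-matrix counts are invented for illustration and are not taken from the study's tables; rows are classified labels and columns are reference (ground-truth) labels.

```python
import numpy as np

# Hypothetical 4x4 confusion matrix for the classes
# built-up, forest, grass, water (counts are illustrative only).
cm = np.array([
    [50,  3,  2,  0],   # built-up
    [ 4, 60,  5,  1],   # forest
    [ 2,  4, 40,  0],   # grass
    [ 0,  1,  0, 28],   # water
], dtype=float)

total = cm.sum()
observed = np.trace(cm) / total                               # overall accuracy
chance = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2   # chance agreement
kappa = (observed - chance) / (1 - chance)                    # Kappa coefficient

# Commission error: off-diagonal share of each classified row;
# omission error: off-diagonal share of each reference column.
commission = 1 - np.diag(cm) / cm.sum(axis=1)
omission = 1 - np.diag(cm) / cm.sum(axis=0)
```

With these illustrative counts the overall accuracy is 0.89 and the Kappa about 0.85, which would fall in the "good" band of [19].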
Change Detection Analysis
Spatio-temporal changes in the four classified land use classes over the past 36 years were analyzed by comparing the area coverage of the classified maps. Change detection was carried out for each class to ascertain the changes over time in terms of area and percentage coverage, following [18]. This was done by computing the area coverage of each feature class in each epoch from the classified images in the IDRISI and ArcMap software, using the expressions below:
Area (ha) = (Cell Area (m²) × Count) / 10,000
Percentage cover (%) = (Class Area / Total Area) × 100
where the cell size and count were obtained from the properties of the raster attribute table.
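A minimal sketch of this area computation, assuming 30 m Landsat pixels; the per-class pixel counts are hypothetical, chosen to roughly reproduce the 1984 figures reported in the results. Dividing the square-metre area by 10,000 gives hectares; dividing by 1,000,000 gives the km² used in the tables.

```python
CELL_SIZE = 30.0               # Landsat pixel edge, metres (assumed)
cell_area_m2 = CELL_SIZE ** 2  # 900 m2 per pixel

# Raster attribute counts per class (invented for illustration)
counts = {"forest": 803622, "built_up": 142667,
          "grass": 36189, "water": 5933}

# Area per class in km2, and each class's percentage of the total
area_km2 = {c: n * cell_area_m2 / 1e6 for c, n in counts.items()}
total = sum(area_km2.values())
percent = {c: 100.0 * a / total for c, a in area_km2.items()}
```

With these counts, forest comes out at about 723.26 km² and 81.3% of the study area, matching the order of magnitude of the published 1984 figures.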
The extent of land use land cover over change, land use encroachment as well as gains and losses experienced within the study period were analyzed and presented in maps and charts.
Prediction Analysis
The classified land use imagery was subjected to land change modelling in the IDRISI software using the Cellular Automata-Markov Chain (CA-Markov) algorithm for prediction. The land cover scenario under prevailing conditions for the year 2040 was then modelled (Table 6).
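The Markov component of the CA-Markov prediction can be sketched as a single matrix step: a transition probability matrix (which IDRISI derives from cross-tabulating the 2002 and 2020 maps) is applied to the 2020 class areas. The transition probabilities below are invented for illustration, and the spatial cellular-automata filtering that IDRISI adds is omitted.

```python
import numpy as np

# Hypothetical transition matrix: P[i, j] is the probability that a pixel
# in class i at 2020 is in class j at 2040. Class order:
# forest, built-up, grass, water. Values are illustrative only.
P = np.array([
    [0.84, 0.09, 0.065, 0.005],  # forest: mostly persists, some converts
    [0.00, 1.00, 0.00,  0.00 ],  # built-up assumed irreversible
    [0.10, 0.30, 0.60,  0.00 ],  # grass
    [0.20, 0.30, 0.05,  0.45 ],  # water
])

area_2020 = np.array([589.73, 280.98, 15.53, 3.27])  # km2, from the 2020 map
area_2040 = area_2020 @ P                            # projected class areas

assert np.isclose(P.sum(axis=1), 1).all()            # rows are probabilities
```

Because each row of the matrix sums to one, the projection conserves total area while redistributing it among classes, which is why forest shrinks and built-up grows in the 2040 scenario.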
RESULTS AND DISCUSSION: LAND USE LAND COVER CHANGE CLASSIFICATION FROM 1984-2040
The results are presented starting with the land use and land cover classifications for the years 1984, 2002, 2020 and 2040, shown in Figures 4.1, 4.2, 4.3 and 4.4 below. Dark green represents forest vegetation, light green grasslands, blue water bodies, and orange built-up areas. Figure 4.1, the LULC classification for 1984, shows that the study area is largely covered with deep dark green (forest vegetation), with scattered patches of light green and orange (grassland and built-up), while blue (water bodies) covers only a small part of the study area. This depicts that the study area was predominantly forest vegetation in 1984 (Table 7).
In Figure 4.2, the dark green colour is reduced, there is a slight increase in blue, a slight reduction in light green, and the orange colour increases markedly, mainly along the Obofia, Awarra, Amafor, Ohoba, Umukani, Ohaji Egbema forest reserve and Adapalm axis. This indicates that by 2002 considerable forest land had been deforested and converted to residential, commercial, agricultural and other land uses, which could be attributed to infrastructural development, urbanization, industrialization and human population increase in the area, the drivers of deforestation (Tables 4 and 5).
In Figure 4.3, deforestation continued. More orange and light green colours are observed, especially along the Umukani and Umuakpu axis, while the dark green colour gradually decreases and the blue colour is rarely seen on the map. This indicates that, as the years passed, built-up areas and grasslands expanded while forest land was gradually degraded and converted to built-up and agricultural uses.
In Figure 4.4 below, the map is mostly covered with orange, with scattered patches of light green; the dark green colour has been largely deforested, and the blue colour is hardly seen. This shows that forest land cover has been steadily decreasing, while built-up areas and grasslands have been steadily increasing.
Area coverage, percentage cover and change detection land use and land cover 1984 - 2040
The area coverage and percentage cover of the different land use classes are presented below. Forest land covered about 723.26 km² (81.31%), the major land cover of the study area in 1984, implying that over four-fifths of the study area was under forest cover. Built-up areas covered about 128.40 km² (14.43%) in 1984. Grassland (sparse vegetation, farmland or grasses) was minimal in 1984, with an area of 32.57 km² (3.66%), while water bodies covered 5.34 km² (0.60%).
Figure 5: The land use land cover classification of 2040.
Figures 4.7 and 4.8 below show the area and percentage coverage for 2002. Over time, forest land decreased from 723.26 km² (81.31%) in 1984 to 699.68 km² (78.68%) in 2002, while built-up areas increased from 128.40 km² to 162.46 km² (18.26%). Grassland gradually decreased to 21.41 km² (2.41%) in 2002, and water bodies increased slightly from 5.34 km² to 5.80 km² (0.65%).
In 2020, forest land covered about 589.73 km² (66.30%), showing a further decrease between 2002 and 2020, while built-up areas increased to 280.98 km² (31.59%). Grassland covered about 15.53 km² (1.75%) and water bodies about 3.27 km² (0.37%). This shows that the expanding built-up areas were formerly forest land and water bodies.
For 2040, forest cover was predicted to be about 497.67 km² (55.95%), while built-up areas were predicted to increase to 334.11 km² (37.56%). Grassland was predicted to increase to 55.92 km² (6.29%); this grassland was formerly forest land, with the change occurring mainly at Ohoba, Awarra, Umuakpu, Umukani and the Ohaji Egbema forest reserve, the only forest reserve in south-east Nigeria, which has been deforested and used for agricultural purposes. Water bodies were predicted to cover about 1.82 km² (0.20%). This implies that forest land has been deforested and degraded to other land uses in the study area within the study period. All of this is shown in Figures 4.11 and 4.12.
Change Detection Observed Between (1984-2040)
The change detected in forest land from 1984 to 2002 was approximately 23.41 km² (a 33.88% share of the total change), a decrease; for built-up areas the change was -34.06 km² (49.30%), an increase; for grassland the change was 11.16 km² (16.15%), a decrease; and for water bodies the change was -0.46 km² (0.67%), an increase. (Positive values denote losses and negative values denote gains in area.)
The change detection shown in Table 2 below indicates that between 2002 and 2020 forest cover decreased sharply, by about 110.12 km² (46.45% of the total change), while built-up areas changed by about -118.53 km² (50.00%), an increase. Grassland changed by 5.88 km² (2.48%), a decrease, and water bodies by 2.53 km² (1.07%), also a decrease. Built-up area has thus increased far more than any other land class.
The 2020-2040 change detection table shows further built-up expansion: built-up areas changed by about -53.12 km² (28.40% of the total change), an increase, while forest cover decreased by 92.06 km² (49.22%). Grassland was predicted to increase, with a change of -40.39 km² (21.60%), and water bodies to decrease, with a change of about 1.45 km² (0.78%), implying that water has been lost to other land uses.
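The change-detection arithmetic behind these tables can be reproduced as follows. In this convention a negative change denotes a gain in area (epoch 1 minus epoch 2), and the percentage change is each class's share of the total absolute change; values computed from the rounded area figures differ slightly from the published table.

```python
# Class areas (km2) from the 1984 and 2002 classifications
area_1984 = {"forest": 723.26, "built_up": 128.40, "grass": 32.57, "water": 5.34}
area_2002 = {"forest": 699.68, "built_up": 162.46, "grass": 21.41, "water": 5.80}

# Change per class: positive = loss, negative = gain
change = {c: area_1984[c] - area_2002[c] for c in area_1984}

# Percentage change = each class's share of the total absolute change
total_abs = sum(abs(v) for v in change.values())
pct_change = {c: 100.0 * abs(v) / total_abs for c, v in change.items()}
```

This reproduces, for example, the built-up change of -34.06 km² (an increase of roughly 49% of the total change) reported for 1984-2002.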
Land Use Land Cover Classification Accuracy
The results of the accuracy assessment for 1984, 2002 and 2020 are presented in Tables 4 to 6 below. The overall accuracy of the 1984 classification was 94%, with an overall Kappa of 0.91; for the 2002 classification the overall accuracy was 89% and the Kappa 0.85; and for the 2020 classification the overall accuracy was 94% and the Kappa 0.92.
Discussion
Findings from the study showed that the land cover of the study area was heavily deforested and degraded over the study period (1984, 2002 and 2020), and this will continue if control measures are not taken. Forest land decreased while built-up areas and grasslands increased, in line with the findings of [16], [23] and [30]. These drastic changes in the original land cover of the study area can be linked to human population growth, unsustainable human activities, unsustainable environmental management practices and weak environmental policies. As the human population increases, more land is needed for settlements and commercial activities, gradually leading to rapid industrialization, infrastructural development and urbanization. Population growth also raises the level of anthropogenic activities such as deforestation, intensive farming and sand mining. Conversely, the large extent of forest land in 1984 can be linked to low population, low productivity and less socio-economic activity. The forest lands have been drastically converted to built-up and other land uses without regard for the many environmental services that forests provide; hence loss of biodiversity, land degradation, noise pollution, air pollution and climate change can be rooted in changes in the land cover. Over the past two centuries the impact of human activities on the land has grown enormously, altering entire landscapes and ultimately affecting the Earth's nutrient and hydrological cycles as well as its climate. The classification accuracy for the three years represents strong agreement: according to [31], values between 0.4 and 0.8 represent moderate agreement, values below 0.4 poor agreement, and values above 0.81 strong agreement.
CONCLUSION AND RECOMMENDATION
In this study, four land use land cover classes were identified and tracked through time. The results show a rapid change in the vegetation cover of the study area between 1984 and 2040. Within this period, 225.59 km² of forest land and 3.52 km² of water body were lost and converted to other land uses, whereas built-up areas and grassland increased, encroaching on the forest and water bodies. If these patterns of degradation continue, the remaining forest land is likely to be wiped out in the near future and the environmental crisis aggravated. The assessment of the level of deforestation in Ohaji Egbema using GIS is thus a vital tool for sustainable forest management and environmental planning of the area, especially for the only forest reserve in south-east Nigeria.
Based on the findings, there is a need to urgently limit and control the high rate of deforestation in Ohaji Egbema and to embark on tree-planting campaigns without delay. It is also recommended that an Environmental Impact Statement (EIS) be carried out. Furthermore, policy makers should ensure that existing and future policies on environmental and forest degradation are fully implemented. There is a need for an awareness programme for all stakeholders on the issues at hand and on adopting sustainable use of natural resources, sustainable living habits and minimal impact on the environment. Finally, since [2] documented species relics in this forest reserve, further research should be conducted using higher-quality satellite imagery offering up to 4 m resolution, as well as forest relic analysis.
More information regarding this Article visit: OAJBGSR
https://biogenericpublishers.com/pdf/JBGSR.MS.ID.00252.pdf https://biogenericpublishers.com/jbgsr-ms-id-00252-text/
Wilms’s Tumor Gene Mutations: Loss of Tumor Suppresser Function: A Bioinformatics Study by Uzma Jabbar in Open Access Journal of Biogeneric Science and Research
ABSTRACT
Introduction: Mutations in the Wilms' tumor (WT1) gene product have been detected in both sporadic and familial cases, suggesting that alteration of WT1 may disrupt its normal function. This study aims to identify the key amino acid residues in the WT1 protein by mutating them to other amino acids.
Material and Methods: A 3D modelling approach using MODELLER 9.0 was employed to build a homology model of the WT1 protein. The quality of the WT1 model was verified by predicting 10 models and selecting the best one. The stereochemistry of the model was evaluated with PROCHECK, and mutational studies were done with WHAT IF. Five human WT1 mutations were modelled: Lys371→Ala371, Ser415→Ala415, Cys416→Ala416, His434→Asp434 and His434→Arg434.
Result: The analysis was based on the active site of the WT1 protein and its role in DNA binding. No significant change was observed when Lys371 was mutated to Ala371 or Ser415 to Ala415. A significant change was observed when Cys416 was mutated to Ala416; in the Ala416 mutant, loss of coordination with the Zn metal ion was also predicted. In the His434→Asp434 mutant, coordination of the metal ion (Zn203) with Asp434 was lost. In the His434→Arg434 mutant, Zn203 coordination with Arg434 was likewise lost; His434 does not interact directly with any DNA base, whereas the mutated Arg434 is predicted to interact directly with a DNA base.
Conclusion: It is concluded that the mutations Cys416→Ala416, His434→Asp434 and His434→Arg434 may cause loss of WT1 function.
Keywords: WT1 protein, MODELLER 9.0, mutation, active site residues
Introduction
WT1 is a protein encoded in humans by the WT1 gene on chromosome 11p13 and is responsible for normal kidney development. Mutations in this gene are reported to cause tumors and developmental abnormalities of the genitourinary system. Conversion of the proto-oncogenic function of WT1 to an oncogenic one has also been documented as a cause of various hematological malignancies. (***)
The multifaceted protein of the WT1 gene has transcription factor activity [1]. It regulates the expression of the insulin-like growth factor and transforming growth factor systems, implicated in breast tumorigenesis [2]. A main function of WT1 is to regulate transcription, controlling the expression of genes involved in proliferation and differentiation [3]. In a wide range of tumors, WT1 has been shown to be a predisposing factor for cancer; it has therefore become a hot research target to find inhibitors that can be used safely for cancer treatment. It can induce apoptosis in embryonic cancer cells, presumably through withdrawal of a required growth factor survival signal [4]. WT1 is involved in normal tissue homeostasis and acts as an oncogene in solid tumors such as breast cancer [5]. Increased expression of WT1 is related to poor prognosis in breast cancer [6]. A number of hypotheses have been postulated for the relationship of WT1 with tumorigenesis. According to one hypothesis, elevated levels of WT1 in tumors may be related to increased proliferation, because WT1 normally has a role in apoptosis [7,8]. Another study proposed that WT1 can alter many genes of the BCL2 family [9,10] and also has a role in regulating the Fas death-signaling pathway [11]. Furthermore, it has been suggested that WT1 can encourage cell proliferation by up-regulating the protein cyclin D1 [12].
A group of workers hypothesized that, since WT1 has been observed in the vasculature of some tumour types [13], its expression may be related to angiogenesis, especially in endometrial cancer [14]. Another hypothesis rests on the fact that WT1 is a main regulator of the epithelial/mesenchymal balance and may have a role in the epithelial-to-mesenchymal transition of tumor cells [3]. Expression of WT1 is higher in estrogen receptor (ER)-positive than in ER-negative tumors; it is therefore possible that WT1 not only interacts with ERα but may orchestrate its expression [15]. A study on triple-negative breast cancers [7] has shown that high WT1 levels are associated with poor survival due to increased angiogenesis [16,17], altered proliferation/apoptosis [10,11], and induction of the cancer epithelial-to-mesenchymal transition [4]. In breast tumors, WT1 is mainly associated with a mesenchymal phenotype and increased levels of CYP3A4 [18]. A mutation in the zinc finger region of the WT1 protein that abolishes its DNA-binding activity has been identified in patients [19]. Mutations in the WT1 gene product have also been detected in both sporadic and familial cases, suggesting that alteration of WT1 may disrupt its normal function [20]. Bioinformatics approaches are increasingly used to resolve biological problems, starting with the prediction of 3D structures. To this end, this study was designed to view the 3D structure of the WT1 tumor suppressor protein, predicted by homology modeling, and to study the role of crucial residues in the WT1 protein by mutating them to other amino acids.
Material and Methods
The 3D structure of human WT1 was taken as the target. Figure 1 shows the normal interaction of WT1 with DNA strands, based on the crystal structure of a zinc finger protein.
The 449-amino-acid sequence of WT1 was used for homology modeling. The WT1 sequence was retrieved from the Swiss-Prot data bank in FASTA format [21], and the most suitable templates were used for 3D structure prediction: the retrieved sequence was subjected to BLAST [22], and templates were selected on the basis of query coverage and identity, as required for building the 3D structure of the target protein. The 3D structures were predicted with MODELLER 9.0 [23]. Tools including stereochemistry checks and Ramachandran plots were used for structure evaluation [24]. Template identification and sequence alignment were carried out using FASTA and BLAST. The quality of the WT1 model was verified, and the stereochemistry of the model was evaluated with PROCHECK [25]. Mutational studies were done with WHAT IF [26]. Five human WT1 mutants were modelled: Lys371→Ala371, Ser415→Ala415, Cys416→Ala416, His434→Asp434 and His434→Arg434.
Results and Discussion
The study was largely based on the active site of WT1 and its role in DNA binding. The zinc finger binding domain interacts with DNA selectively and non-covalently. This is the classical zinc finger domain, in which conserved cysteine and histidine residues coordinate a zinc ion at the active site.
Cys416→Ala416 MUTANT
A significant change was observed when Cys416 was mutated to Ala416: in the Ala416 mutant there was a reduction in the van der Waals contacts between the amino acids, and loss of coordination with the Zn metal ion was also predicted (Figure 6 A and B).
Figure 6 A and B: Wild-type (Cys416) and mutated (Ala416) WT1. The distance between Zn and residue 416 is increased in the mutated (Ala416) model. Cys416 is predicted to lie in the vicinity of His434 and His438, which are implicated in catalysis (6A), while Ala416 can interact only with His434 and not with His438 in the mutated model (6B).
Cys416 is located at the domain interface with its polar side chain completely buried (0.00 Å). Replacement of this amino acid may therefore cause considerable changes in the interior of the protein (Table 1). We predicted the possible changes arising from the Cys→Ala mutation by molecular modeling. The amino acids Pro419, Ser420, Cys421, His434 and some atoms of His438 (ND1, NE2, CD2 and CE1) are present near Cys416, and zinc (Zn203) is also present in its vicinity (1.82 Å) (Figure 6). The mutated residue Ala is likewise predicted to remain buried (0.00 Å) in the interior of the protein. A significant change is observed, however, in the surroundings of the mutated Ala416: only a few atoms of His434 (CD2 and NE2) and His438 (CE1) remain nearby, which may reduce the van der Waals contacts between the respective amino acids. Loss of coordination with the zinc metal ion was also predicted, as the distance increased from 1.82 Å to 3.12 Å. It is therefore predicted that Cys416 plays a vital role in the interactions with other amino acid residues as well as in metal coordination, and that these interactions may be lost when Cys416 is replaced.
His434→Arg434 MUTANT
In the His434→Arg434 mutant, Zn203 coordination with Arg434 was lost. His434 does not interact directly with any DNA base, whereas the mutated Arg434 is predicted to interact directly with DNA base A1. This suggests that the change might affect the DNA binding pattern (Figure 7 A and B).
Figure 7 A and B: Wild-type (His434) and mutated (Arg434) WT1. The distance between Zn and residue 434 is increased in the mutated model. Arg434 is predicted to bind DNA base A1 (B), while His434 in the original model (A) shows no bonding with any DNA base.
In the His434→Arg434 mutation, the distance between the mutated Arg and zinc (Zn203) increased from 2.28 Å to 5.00 Å, suggesting a loss of coordination with the metal ion. Mutational studies have shown that the hydrogen bonding network close to the zinc-binding motif plays a significant role in stabilizing the coordination of the zinc metal ion to the protein [23]. The mutated amino acid Arg434 also moved considerably from a buried to a relatively exposed environment (2.28 Å to 5.35 Å). The presence of a positively charged Arg on the surface could account for additional interactions of the protein with other proteins or with surrounding water molecules. His434 does not interact directly with any DNA base, whereas the mutated Arg434 is predicted to interact directly with the DNA base adenine (A1) (Figure 7), suggesting that the change might alter the DNA binding pattern.
Lys371→Ala371 and Ser415→Ala415 MUTANTS
No significant change was observed when Lys371 was mutated to Ala371 or Ser415 to Ala415. The changes arising in the overall structure and surrounding amino acid residues were examined (Table 1). Lys371 is present on the surface (accessibility = 47.04 Å) of the WT1 molecule, so the internal protein structure was not affected considerably. In the original model, Lys371 stacks against thymine and forms a water-mediated contact with the side-chain hydroxyl of Ser367. Ala371 also stacks against the same DNA base, but at a slightly altered distance, and the hydrogen bond between residue 371 and Ser367 is not predicted in the mutated model. It has been demonstrated that mutations within fingers 2 and 4 abolish sequence-specific binding of WT1 to DNA [19], and mutation of the corresponding lysine in a peptide can reduce its affinity for DNA sevenfold [27]. On the other hand, it has been reported [28] that a surface mutation would not cause a significant change in the internal structure of a protein, although the replacement of a basic polar residue with a non-polar one could reduce the polarity. The modeling of the Lys→Ala mutation does not, however, support this finding and requires further analysis.
Ser415→Ala415 was also modelled in the WT1 structure (Table 1). Ser415 is located near the active centre of WT1 and has been demonstrated to make a water-mediated contact with the phosphate of the DNA base guanine [20]. In our predicted WT1 model, Ser415 makes two water-mediated contacts (waters 516 and 568). Mutation of this Ser to Ala resulted in the loss of one of these contacts, leading to loss of binding. The replacement of the relatively polar Ser with the non-polar Ala could account for this reduced interaction, as is also evident from the slight decrease in accessibility (Ser415 = 7.96 Å; Ala415 = 7.61 Å).
His434→Asp434 MUTANT
In the His434→Asp434 mutant, coordination of the metal ion (Zn203) with Asp434 was lost, and Glu430 moved from a relatively exposed to a completely buried environment. His434 is also present at the active centre of WT1. We modelled two mutants, His434→Asp434 and His434→Arg434 (Table 1). In the His434→Asp434 mutation, the water-mediated contact is lost, and the distance between the mutated Asp and zinc (Zn203) increased from 2.28 Å to 3.57 Å, suggesting a loss of coordination with the metal ion as well. The glutamate residue near His434 also moved considerably, from a relatively exposed to a completely buried environment (14.83 Å to 0.00 Å).
Conclusion
It is concluded that mutation of the amino acid residues Cys416→Ala416, His434→Asp434 or His434→Arg434 of WT1 may abolish its ability to regulate gene expression by binding to specific parts of DNA. Along with these mutations, the roles of WT1 in cell growth, cell differentiation, apoptosis and tumor suppression may also be lost.
More information regarding this Article visit: OAJBGSR
https://biogenericpublishers.com/pdf/JBGSR.MS.ID.00250.pdf https://biogenericpublishers.com/jbgsr-ms-id-00250-text/
Gene Editing via Integrase Enzyme by Umair Masood* in Open Access Journal of Biogeneric Science and Research
Short Communication
Targeted integrase enzymes are among the most powerful tools for mediating genome alteration with high precision. The integrase directly catalyzes nucleophilic attack by the 3′-hydroxyl groups at the ends of the processed DNA on a pair of phosphodiester bonds in the target DNA or genome of interest. The integrase gene editing method comprises two parts: the gene of interest together with the integrase enzyme, and the target genome. The integrase enzyme then integrates the gene of interest into the target genome or DNA.
Results via Gel electrophoresis
To check whether the gene of interest has been integrated, gel electrophoresis can be performed. The gel contains two bands, A and B. Band A is the human insulin vector, which serves as the negative control, while band B is a human insulin vector whose gene carries some sequence mutations [1-5]. The stranded human insulin gene can be added precisely to the band B vector using the integrase enzyme; band B therefore runs at 3.0 while band A runs at 6.0, meaning band A has a higher molecular weight than band B. The marker should be 1 kb and the agarose gel 1% [6-9].
More information regarding this Article visit: OAJBGSR
https://biogenericpublishers.com/pdf/JBGSR.MS.ID.00249.pdf https://biogenericpublishers.com/jbgsr-ms-id-00249-text/
Giant Cell Tumor of the Distal Tibia and Fibula (Rare Location) by Mohamed Hamid Awadelseid* in Open Access Journal of Biogeneric Science and Research
ABSTRACT
Giant cell tumor of the distal tibia is a rare, benign and usually asymptomatic condition. It is sometimes discovered on medical imaging, through painful symptoms, or more often as a visible or palpable swelling with or without vascular and/or nerve compression. At an advanced stage, plain X-ray is of paramount importance, and complete surgical resection is part of the therapeutic approach. We present a clinical case report of a young man with a giant cell tumor of the distal tibia in Khartoum, Sudan. The patient, a 37-year-old man, presented in July 2021 with a huge painful swelling of the left distal tibia, previously treated by a bonesetter at Kassla, eastern Sudan; X-ray radiography showed a lytic lesion of the cortical bone in the lower third of the tibia. After operative resection of the tumor mass, pathological examination of the specimen confirmed the diagnosis of a giant cell tumor. A giant cell tumor is a benign condition with few symptoms, and location at the ankle is exceptional. Complete surgical resection is a viable treatment option.
Keywords: Giant Cell Tumor, Wide Surgical Resection, tibia
Introduction
Giant cell tumor (GCT) of bone is one of the commonest benign bone tumors encountered by orthopedic surgeons. The reported incidence of GCT in Oriental and Asian populations is higher than in Caucasian populations and may account for 20% of all skeletal neoplasms. Although considered a benign bone tumor, it has a well-known propensity for local recurrence after surgical treatment [1].
GCT has a relatively high recurrence rate. Metastases occur in 1% to 9% of patients with GCT, and some earlier studies have correlated the incidence of metastases with aggressive growth and local recurrence. Current recurrence rates of 10-20% with meticulous curettage, extended tumor removal using mechanized burrs and adjuvant therapy are a vast improvement on the historically reported recurrence rates of 50-60% with curettage alone. GCT of bone constitutes 20% of biopsy-proven benign bone tumors. It affects young adults between the ages of 20 and 40 years, and several authors have reported a slight predominance of women over men; however, GCT can also be seen in patients over 50 years old. Ninety percent of GCTs exhibit the typical epiphyseal location, with the tumor often extending to the articular subchondral bone or even abutting the cartilage; the joint and/or its capsule are rarely invaded. In the rare instances in which GCT occurs in a skeletally immature patient, the lesion is likely to be found in the metaphysis. The most common locations, in decreasing order, are the distal femur, the proximal tibia, the distal radius and the sacrum. Fifty percent of GCTs arise around the knee region; other frequent sites include the fibular head, the proximal femur and the proximal humerus. Pelvic GCT is rare [6]. Multicentricity, the synchronous occurrence of GCT at different sites, is known to occur but is exceedingly rare [2].
We report a case of a 37-year-old man who presented in July 2021 with a huge painful swelling of his left distal tibia, with radiography showing lysis of the cortical bone in the lower third of the tibia. After operative excision of the tumor mass, pathological examination of the specimen revealed the diagnosis of a giant cell tumor. Faced with a lytic lesion of the distal tibia and fibula in a young man on X-ray, one must think of a giant cell tumor [3].
Case Report
We report the case of a 37-year-old man with no notable pathological antecedents who presented at the orthopedic consultation for a painful swelling of the left distal tibia and fibula that had been evolving for 5 months, without any alteration of his general state. There was a change in the color and consistency of the skin overlying the tumor.
On physical examination, there was blockage of dorsiflexion of the foot because of the large volume occupied by the tumor mass and the articular destruction at the level of the distal tibiofibular joint; plantar flexion was estimated at 10˚, dorsiflexion at 3˚, inversion at 5˚ and eversion at 2˚. The X-ray showed a lesion with blurred boundaries extending into the soft tissue, not limited by a bony shell, with destruction of the cortex, invasion of soft parts and honeycomb pseudo-partitions (Figures 1-2). Finally, Magnetic Resonance Imaging (MRI) of the left leg (Figures 3-6) showed lysis of the cortex of the lower extremity of the leg, corresponding to grade 3 of the Campanacci and Merle d'Aubigné classification.
Complete surgical resection by below-knee amputation (BKA) was offered to the patient. Under spinal anesthesia, an incision was made proximally to expose a healthy portion of the leg bone, and the tumor was removed by BKA with proximal resection of the tibia and fibula about 2 cm into the healthy zone. The anatomopathological assessment (Figure 7) showed abundant mononuclear cells and discrete nuclear anomalies with marked mitotic activity, but without atypical forms. Histological examination of the bone fragments confirmed a grade 3 giant cell tumor according to the gradings of Sanerkin and of Jaffe, Lichtenstein and Portis. A CT scan was done to exclude pulmonary metastases (Figure 8). Sutures were removed at three weeks and knee physiotherapy was started. Surgical treatment with excision of the large tumor mass by BKA improved the function of the leg and the general condition of the patient.
Discussion
Giant cell tumors (GCT) account for 5%–9% of all benign and malignant bone tumors. They are considered benign but may follow a progressive, potentially malignant clinical course: GCTs recur in a high percentage of cases, may become sarcomatous, and can even produce metastases without apparent malignant changes [4].
In the literature, the recurrence rate varies considerably, depending not only on the site and extension of the lesion but also on the type of primary treatment performed [5]. Successful treatment of GCTs and the adequacy of tumor removal are influenced by tumor location, associated fracture, soft-tissue extension, and understanding of the functional consequences of resection; each option has advantages and disadvantages [6]. Resection has the advantage of preserving knee joint function and improving the general and psychological condition of the patient, and according to several authors recurrences are no more frequent than with other techniques. Excision with tumor-free margins is associated with lower recurrence rates; however, for periarticular lesions this is usually accompanied by a suboptimal functional outcome [7]. Various studies suggest that wide resection is associated with a decreased risk of local recurrence compared with intralesional curettage and may increase the recurrence-free survival rate from 84% to 100% [8]. However, wide resection is associated with higher rates of surgical complications leading to functional impairment, generally necessitating reconstruction. In the present case, the procedure resulted in good function, with sutures removed 3 weeks later. After 5 months of follow-up, there was no recurrence or functional sequela of the leg. Giant cell bone tumors generally have a good prognosis [9].
Conclusion
The giant cell tumor of the distal tibia and fibula, although rare, does not present any particularity. Plain radiography together with histology of the bone tissue confirmed the diagnosis; a CT scan or an MRI study should be used if invasion of soft parts is feared. Surgical treatment preserved joint function. This diagnosis should be considered when presented with a lytic epiphyseal bone lesion.
More information regarding this article is available at OAJBGSR:
https://biogenericpublishers.com/pdf/JBGSR.MS.ID.00248.pdf https://biogenericpublishers.com/jbgsr-ms-id-00248-text-2/
Contribution of Award-Winning Research to the Visibility of Science on a University by Raquel de la Cruz Soriano in Open Access Journal of Biogeneric Science and Research
ABSTRACT
This article presents the experience of preparing and submitting results to the call for awards of the Ministry of Science, Technology and Environment, as evidence of the impact achieved. Theoretical methods were applied, including analysis and synthesis, the historical-logical method, and the hypothetico-deductive method; among the empirical methods were observation, document review, interviews, training actions, conducting scientific sessions, giving conferences and joint elaboration. The scientific results with the potential to compete in the call were determined by reviewing documents, interviewing teacher-researchers, holding scientific sessions on the subject and giving lectures on the methodological procedure.
KEYWORDS: Science and Technological innovation; Prizes; Scientific results.
Introduction
This article highlights one of the indicators of the Science, Technology and Innovation process as one of the pillars of the management of substantive processes in municipal university centers. Since innovation is a social process, the inadequacies of the educational system and high levels of poverty and social inequality affect the development and performance of innovation systems. The links between universities and public Research and Development (R&D) centers and the productive sector are mostly based on obtaining information and training, and not on forms of interaction that reverse specific problems through the application of scientific results, that is, closing the research cycle [1]. The evaluation of the impact of science and technology constitutes a strategic need, as a way to verify the development of a country, its scientific policy, and its management in terms of society and the human beings who live in it [9]. Given the new development perspectives at the local level, [8] indicates the need for greater integration of the Municipal University Centers with the productive sector, increasing the connections between the different actors outside the municipality, which enhances the development of learning and innovation through increased scientific debate in the locality and greater rigor in the analysis of problems. [7] sustains that research supports a broad innovation approach, or “DUI innovation mode” (doing, using, interacting), in which learning is key [5-6]. He argues in favor of innovation systems that favor social inclusion and care for the environment: technological trajectories should benefit the human groups involved, expand their knowledge, and improve their quality of life, among other aims [3].
Achieving relevance in science and technology starts from the needs of economic, social and cultural development of the territory; these guide the substantive processes of science, technology and postgraduate education carried out in the Municipal University Centers (CUM). In these centers, inquiry is encouraged based on the need to possess knowledge, and knowledge drives action. Learning needs determine the high-level continuing training activities, based on the demand of the professionals graduated in a territory, who become the main clients of the postgraduate studies of their university. This activity enhances interaction with the university environment and provides new knowledge to those involved in research and development projects; in turn, it is a search for new research needs, opportunities and projects, so that an ascending, cyclical and interactive process is established.
On many occasions, quality in education centers has been related to terms such as: prestigious centers or centers of excellence, certified or accredited, with good economic resources and good infrastructures or good facilities, centers with excellent academic results, with good teachers and great leaders, with the satisfaction of parents and students and with evaluation of all kinds: of the system, of the educational processes, of the results.
As part of the dynamics of science and technological innovation management, scientific visibility indicators express the relevance and pertinence of the research results. That is why achieving territorial recognition is a goal of the Cabaiguán CUM, an aspect that has been projected in the strategic planning of the institution at different stages and that has required the execution of different training actions for the faculty.
This article sets out the experience of preparing and submitting results to the call for awards of the Ministry of Science, Technology and Environment, as evidence of the impact achieved.
Conclusion
The increase in results submitted to the calls made since 2012 is significant, driven mainly by the research results of teachers completing master's theses, undergraduate students' final projects, and innovations in companies of the territory and in university processes, as part of research projects or the problem bank of local institutions. The experience presented shows the viability of the flow diagram for the management of science and innovation on a university campus, where the training provided to the faculty and the method of jointly preparing the files submitted to the award calls make it possible to materialize the results presented.
More information regarding this article is available at OAJBGSR:
https://biogenericpublishers.com/pdf/JBGSR.MS.ID.00247.pdf https://biogenericpublishers.com/jbgsr-ms-id-00247-text-2/
Management of Hospital Overcrowding during the Second Wave of COVID-19 Pandemic in Pisa (Italy) before Vaccination Campaign: from Medical Stays to Low and Intermediate Cares by Angelo Baggiani in  Open Access Journal of Biogeneric Science and Research
ABSTRACT
Background: After the summer season, a second wave of the COVID-19 pandemic involved all the Italian regions. From September to December 2020, COVID-19 cases in the Tuscany Region increased from 15,000 to 110,000.
Methods: This occurrence led to a sharp rise in hospitalized patients at the Azienda Ospedaliero-Universitaria Pisana (AOUP) (Pisa, Tuscany), a highly specialized, 1082-bed teaching hospital. In this perspective, we describe the application of a structural plan in AOUP for the management of hospital overcrowding during the second wave of the COVID-19 pandemic.
Results: From November 16th, the AOUP COVID hospital has been organized into different areas: Intensive Care Units intended for critical patients; Medical Stays intended for moderately critical patients; a Low Care structure intended for low-criticality patients needing continuous care; an Intermediate Care structure intended for patients needing nursing care; and a COVID hotel intended for still-positive patients being discharged.
Conclusion: This strategy may improve COVID patient flow during the epidemic, allowing quick bed release and a continuous patient path from one level of care to another.
Keywords: COVID-19, low cares, intermediate cares, second wave
Introduction
In Italy, the COVID-19 emergency evolved into a first wave between February and May 2020, with over 200,000 cases [1]. Italian hospitals managed the increase in hospitalizations by dividing facilities into COVID and non-COVID areas. In North West Tuscany, the Azienda Ospedaliero-Universitaria Pisana (AOUP) began its preparedness planning. The AOUP is a highly specialized, tertiary, 1082-bed hospital. COVID clinical wards were divided into infectious disease and pulmonology units. Further clinical wards and operating rooms were repurposed to create 160 additional beds in COVID medical stays and 83 COVID beds in Intensive Care Units (ICUs) [2,3]. This response forced the suspension of scheduled surgical activities, which were partly resumed from June 2020.
After the summer season, a second wave of the COVID-19 emergency hit Italy. From September 1st to November 24th, COVID-19 cases in the Tuscany Region increased from 14,827 to 96,990 [4]. This occurrence led to a sharp rise in hospitalized patients at AOUP, from 11 to 214 in almost 50 days. During the first wave, the peak of hospitalized COVID patients was reached on March 30th, with 187 patients.
Methods and Results
Considering this epidemiological trend, from October 15th a regional task force was set up to coordinate a new preparedness plan for the AOUP health services, providing a new procedure for hospital reorganization.
Initially, we dedicated 23 beds in the infectious disease unit, 19 beds in the pulmonology ward and 20 beds in a new COVID-19 pavilion [5].
In this second wave, a critical point was the difficulty of converting operating rooms and their ICUs into COVID areas, as was done in March 2020. This limit was due to the slowdowns in surgical activities caused by the first wave. AOUP is a highly specialized hospital, where surgical activities cover almost 65% of all healthcare services. In this plan, AOUP could guarantee all highly specialized surgeries (transplants, oncologic and cardiac surgeries). Emergency interventions and 80% of all other surgeries were maintained. From October 15th, additional clinical wards (endocrinology, geriatrics, urology, internal medicine) were gradually converted into COVID areas, for a total of 217 beds (132 beds in medical stays; 43 beds in sub-ICUs and 42 beds in ICUs).
This new reorganization proved insufficient, and beds were exhausted within a few days. Considering the choice not to further reduce surgeries and clinical wards, and considering the fast pace of hospitalizations in COVID medical stays, on November 16th the task force implemented a plan to integrate intermediate and low care in AOUP. The COVID-19 hospital emergency needs an “intermediate structure” suitable for patients discharged from medical stays who need a protected environment with medical devices and continuous nursing surveillance [6].
These care levels, whose importance during the COVID-19 pandemic is described by the Regional Decree [7], may be useful for:
avoiding inappropriate hospitalization;
ensuring continuity of care;
promoting patient discharge and homecare.
From November 16th, the AOUP COVID hospital has been organized into different areas (Figure 1), including:
42 beds in ICUs and 43 beds in sub-ICUs, intended for critical patients (including those needing CPAP therapy);
132 beds in Medical Stays, intended for moderately critical patients (including those needing CPAP therapy), and an Operating room unit;
16 beds in a “Low care structure” intended for low-criticality patients needing continuous care (from November 16th);
32 beds in an “Intermediate care structure” intended for patients needing nursing care (from November 16th);
90 beds in a “COVID hotel” intended for still-positive patients being discharged.
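The allocation above can be sketched as a simple capacity table. This is only an illustrative representation of the figures listed in the text; the area names and the Python structure are our assumptions, not part of the hospital's systems:

```python
# Illustrative capacity table for the AOUP COVID areas described above
# (names and structure are assumptions, not from the hospital's systems).
covid_areas = {
    "ICU": 42,
    "sub-ICU": 43,
    "medical_stays": 132,
    "low_care": 16,
    "intermediate_care": 32,
    "covid_hotel": 90,
}

# Total COVID capacity across every level of care
total_beds = sum(covid_areas.values())
print(total_beds)  # 355
```

Summing the levels gives 355 COVID beds in total, of which 217 (ICU, sub-ICU and medical stays) correspond to the clinical wards converted from October 15th.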
Intermediate and low care structures were implemented after evaluation of structural requirements (number of beds, ambulatories), organizational requirements (continuous nursing care) and technological requirements (equipment, medical devices).
The organizational model provides a “Low Care Team” composed of:
Medical staff (internist, geriatrician and anesthetist) with a 24/7 service;
Nursing staff (nurse and a social health operator) with a 24/7 service;
Rehabilitation staff (physiotherapist).
On a daily basis, medical and nursing staff check the COVID Medical Stays in order to detect patients who may be transferred to the low care area. These evaluations follow the requirements defined by the Regional Decree [8]: blood oxygen saturation >94% over 48 h; slow relaxed diaphragmatic breathing at <22 breaths per minute; absence of dyspnea; absence of non-invasive ventilation in 78 h; hemodynamic stability.
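The decree criteria above amount to a simple conjunction of checks. The sketch below is a hypothetical helper (the function and field names are ours, not from the decree), showing how the daily screening could be expressed:

```python
def eligible_for_low_care(spo2_min_48h: float,
                          breaths_per_minute: int,
                          dyspnea: bool,
                          hours_without_niv: float,
                          hemodynamically_stable: bool) -> bool:
    """Apply the transfer criteria summarized in the text: SpO2 > 94%
    over 48 h, < 22 breaths/min, no dyspnea, no non-invasive
    ventilation (NIV) in the last 78 h, and hemodynamic stability."""
    return (spo2_min_48h > 94
            and breaths_per_minute < 22
            and not dyspnea
            and hours_without_niv >= 78
            and hemodynamically_stable)

# A patient meeting every criterion is eligible:
print(eligible_for_low_care(96, 18, False, 80, True))   # True
# A single failed criterion (SpO2 too low) blocks the transfer:
print(eligible_for_low_care(93, 18, False, 80, True))   # False
```

Because every criterion must hold simultaneously, failing any one of them keeps the patient in the medical stay.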
The same evaluation is performed in the low care structure, in order to identify patients needing only continuous nursing assistance in the intermediate care structure.
Conclusion
These implementations improve COVID patient flow during the epidemic, allowing quick bed release and a continuous patient path from one level of care to another. Patient discharge may be enhanced through the addition of different healthcare levels, from the high- to the low-care units present in AOUP. In this way most surgical activities are guaranteed and the risk of COVID hospital overcrowding may be reduced.
Acknowledgments
The authors acknowledge the efforts of healthcare workers and essential workers during the COVID-19 pandemic.
Conflict of Interest
All authors report no conflicts of interest relevant to this article.
More information regarding this article is available at OAJBGSR:
https://biogenericpublishers.com/pdf/JBGSR.MS.ID.00245.pdf https://biogenericpublishers.com/jbgsr-ms-id-00245-text/
Nutritional Status of Subjects with Celiac Disease (CD) at a Tertiary Care Hospital in Saudi Arabia - A Retrospective Cross-Sectional Study by Kavita Sudersanadas
ABSTRACT
Introduction: Celiac disease (CD) is a chronic gluten-sensitive enteropathy. Due to the inflammatory reactions it causes, it is known to lead to malnutrition. Objective: The study aimed to assess the nutritional status of subjects with CD by anthropometric and biochemical methods.
Subjects and Methods: The study followed a retrospective cross-sectional design, covering seventy-one subjects with CD registered in a tertiary care teaching hospital from 2008 to 2018. Data concerning demography, clinical manifestations, and the biochemical and iron profiles of the selected subjects were collected using an IRB-approved data collection form from the Best Care hospital information system. The data were analyzed using SPSS Version 22. Categorical variables were expressed as frequencies and percentages and continuous variables as mean ± SD; mean values were compared with Student's t-test. The IRB of KAIMRC approved the study.
Results: The age of the subjects ranged from 3 to over 60 years. The majority (78.9%) were female. Moderate and severe underweight were found among female and male children of 3-12 years, respectively. Adequate nutrition was reported among those aged 13-18 years. According to BMI-for-age percentile, the prevalence of malnutrition among children was 8.7%, whereas 26.1% of the children with CD were either overweight or obese. The BMI of adult subjects indicated that 29.6% had energy deficiency and 32.4% were overweight or obese. Iron deficiency was the most prevalent micronutrient deficiency found among the subjects, especially among females.
Conclusion: Those with CD are at risk of iron deficiency anemia and loss of lean mass. Loss of lean mass and severe undernutrition result in growth retardation. The study indicates the importance of nutrition monitoring, follow-up and nutrient supplementation for those with CD.
KEYWORDS: Celiac disease; Nutritional status; Anthropometry; Biochemical profile; Iron profile
Introduction
Celiac disease, also known as gluten-sensitive enteropathy, is a chronic autoimmune inflammatory disease of the small intestine. It is characterized by permanent gluten intolerance and a malabsorption syndrome. The etiology of CD involves environmental factors such as the ingestion of gluten and genetic factors such as HLA and the tTG auto-antigen; CD therefore affects genetically susceptible individuals. Gluten is a protein found in the prolamin fractions of barley (hordein), wheat (glutenin and gliadin), and rye (secalin) [1-5].
CD can be associated with autoimmune diseases such as type 1 diabetes and hypothyroidism. Over previous decades, the prevalence of celiac disease in different parts of the world was underestimated at roughly 1 in 1000 individuals; it was considered an uncommon disease that mainly affects children, with typical gastrointestinal symptoms. The gastrointestinal manifestations are chronic diarrhea, vomiting, bloating, abdominal pain, abdominal distention, and steatorrhea. The recent introduction of sensitive serological tests has led to increased screening. Subjects with CD are diagnosed by serological tests such as tTG, anti-gliadin, and EMA, and the diagnosis can be confirmed by small intestinal mucosal biopsy, which is regarded as the gold standard for CD diagnosis [5-11]. Accordingly, the estimated prevalence of CD has increased continuously, to 1 in 100 or 200 individuals. Higher prevalence rates were reported among females than males (2.8:1) [2,4,12-14]. In KSA, a higher prevalence of CD among females than males has likewise been reported [12].
CD is classified into typical (with gastrointestinal manifestations), atypical (with extra-intestinal manifestations), and asymptomatic forms [1,2,14-16]. Extra-intestinal manifestations include fatigue, dermatitis herpetiformis, bone problems such as osteopenia, and hematological abnormalities such as anemia, leukopenia, thrombocytopenia, and thrombocytosis.
The inflammatory reaction causes morphological changes in the proximal small intestine, namely villous atrophy, abnormal surface epithelium, typical flat mucosa, and hyperplastic crypts. These morphological changes occur primarily in the duodenum and jejunum and cause malabsorption, which leads to nutrient deficiencies [2,4,13,14,17,18]. There is much evidence that macronutrient and micronutrient deficiencies among individuals with CD are more frequent than among those without CD [4,18]. Iron deficiency anemia (IDA) is the main hematological manifestation found in subclinical cases of CD and can be the only manifestation observed in CD subjects. Iron deficiency can result from iron malabsorption, gastrointestinal bleeding, or iron loss via diarrhea or steatorrhea. IDA is usually considered a presenting feature of CD, and 0.5%-6% of IDA cases result from CD.
Numerous studies have documented the impact of nutrient malabsorption caused by CD in both children and adults. Although gluten sensitivity is temporary and resolves with the healing of the small intestine, the additional restrictions of a gluten-free diet increase the risk of overall nutritional deficiencies. This study aimed to assess the nutritional status of subjects with CD as depicted by biochemical and iron profiles.
SUBJECTS AND METHODS
The study was conducted at the gastroenterology department of King Abdulaziz Medical City (KAMC) from 2008 to 2018. KAMC, located in Riyadh, is one of the biggest hospitals in KSA; it is a 690-bed tertiary care teaching hospital.
Subjects
All patients, irrespective of gender, registered in King Abdulaziz Medical City and referred to the gastroenterology department with a diagnosis of celiac disease were included in this retrospective cross-sectional study. Exclusion criteria were pregnancy or lactation; apparent blood loss not caused by celiac disease, such as hypermenorrhea, melena, or hemoptysis; gastrointestinal conditions such as irritable bowel syndrome, chronic liver disease, chronic kidney disease, and Crohn's disease; febrile diseases such as tuberculosis; cancer of the intestine, colon, or any organ of the gastrointestinal tract; and metabolic disorders. Accordingly, 71 patients were selected for the study using the convenience sampling technique.
Data Management and Analysis
Data were collected from the hospital information system, Best Care, of KAMC. Data concerning demography, clinical manifestations, and the biochemical and iron profiles of the subjects were extracted using IRB-approved data collection charts.
The nutritional status of the subjects was assessed by anthropometric variables and by variables related to the biochemical and iron profiles. For children, Body Mass Index (BMI)-for-age percentile based on the CDC growth chart was used to interpret anthropometric data: those below the 5th percentile were considered underweight, the 5th to <85th percentile as normal BMI, the 85th to <95th percentile as overweight, and ≥95th percentile as obese [19]. In addition, the subjects' weight and height were used to calculate BMI, classified as per the universally accepted BMI classification. Ideal body weight and percent of ideal body weight were calculated using the MediCalc online calculator [20].
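The percentile and BMI cutoffs above translate directly into classification rules. The sketch below is illustrative only; the child cutoffs follow the CDC percentile bands quoted in the text, while the adult cutoffs assume the standard WHO classification, which the text calls "universally accepted":

```python
def classify_child_bmi_percentile(percentile: float) -> str:
    """CDC BMI-for-age percentile cutoffs, as described in the text."""
    if percentile < 5:
        return "underweight"
    if percentile < 85:
        return "normal"
    if percentile < 95:
        return "overweight"
    return "obese"  # at or above the 95th percentile

def classify_adult_bmi(weight_kg: float, height_m: float) -> str:
    """BMI = weight / height^2, with the standard WHO cutoffs
    (assumed here; the paper does not spell them out)."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "underweight"
    if bmi < 25:
        return "normal"
    if bmi < 30:
        return "overweight"
    return "obese"

print(classify_child_bmi_percentile(90))  # overweight
print(classify_adult_bmi(70, 1.75))       # normal (BMI ~22.9)
```

Each subject thus falls into exactly one category, which is how the prevalence percentages in the Results section are obtained.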
The collected data were entered in MS Excel. After proper cleaning, the data were exported to SPSS version 20 for further statistical analysis. Prior to analysis, the normality of the data was tested with the Shapiro-Wilk test. Categorical variables were presented as frequencies and percentages, while continuous variables were presented as mean ± SD. Mean values were compared using an independent t-test, with statistical significance assumed at a p-value of <0.05 [21].
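The group comparison used here is a two-sample Student's t-test with pooled variance. A minimal standard-library sketch (with made-up illustrative numbers, not study data) is:

```python
import math
from statistics import mean, variance

def students_t(a, b):
    """Two-sample Student's t statistic with pooled variance, the test
    used for the male/female comparisons (illustrative implementation)."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(pooled * (1 / na + 1 / nb))

# Hypothetical hemoglobin values (g/dL) for two small groups:
females = [11.2, 10.8, 12.1, 11.5, 10.9]
males = [13.4, 12.9, 13.8, 13.1, 13.6]

t = students_t(females, males)
# |t| far exceeds the 5% two-tailed critical value for 8 degrees of
# freedom (2.306), so this difference would be significant at p < 0.05.
print(abs(t) > 2.306)  # True
```

In practice SPSS (or `scipy.stats.ttest_ind`) reports the p-value directly; the hand computation just makes the decision rule explicit.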
RESULTS
A total of 71 subjects with celiac disease confirmed by duodenal biopsy and serological tests were identified and included in the study. Table 1 details the demographic characteristics of the subjects. The majority were in the age group of 30-60 years (35.2%), followed by those aged 18-29 years (29.6%), and more than three-quarters of them (78.9%) were female. The important gastrointestinal manifestations reported were diarrhea (29.6%), abdominal pain (26.8%), and vomiting (25.5%). In addition, around 26.8% were diagnosed with extra-intestinal manifestations such as skin lesions.
The nutritional status of the subjects based on anthropometric data is given in Table 2. There was significant gender variation in height, current body weight, and ideal body weight among subjects aged 19-50 years. The percent of ideal body weight (%IBW) was 21.4 for adult females and 24.3 for adult males, a significant gender difference, and a significant difference in IBW was also observed between females and males aged 13-18 years. Severe underweight was observed among subjects in the 2-12-year and 19-50-year age groups. CD is one of the etiological factors for malnutrition among adults and children.
It was found that the prevalence of energy deficiency, as indicated by Body Mass Index (BMI), among children aged 2-19 years with CD was 8.7%. Most child subjects (65.2%) had normal BMI; however, 17.4% were overweight and 8.7% were obese. Among the adults, 29.6% had energy deficiency, whereas 38% had normal BMI (Figure 2). The biochemical profile of the subjects with CD is given in Table 3; it indicated that, compared with males, female subjects are at risk of low biochemical values. Many subjects presented with low blood urea (25%) and low serum creatinine (25.35%), and there was a significant difference between the blood urea and serum creatinine values of females and males. Table 4 details the iron profile of subjects with celiac disease. The results indicated that, compared with male subjects, female subjects are at greater risk of iron deficiency: around 41.07% of females presented low hemoglobin levels (Table 4), with a significant difference between the hemoglobin values of male and female subjects (p=0.001), and the majority (73.21%) of female subjects showed low serum iron levels. Low serum ferritin levels were found in 33.33% of males and 19.64% of females, with a significant difference (p=0.012) between genders. A higher percentage of females (78.57%) than males (73.33%) with CD had high total iron-binding capacity.
DISCUSSION AND CONCLUSION
Celiac disease (CD) is a chronic autoimmune inflammatory disease. In the present study, adults formed a significant proportion of the sample (64.8%), followed by children (31%). The gender-wise distribution of the study subjects showed that females (78.9%) were more affected by CD. In addition, meta-analyses based on seropositive diagnosis of CD from KSA showed that seropositive females are more common than males [22].
CD causes morphological changes in the duodenum and jejunum, which are the main sites of nutrient absorption. Gastrointestinal manifestations such as diarrhea (29.6%), abdominal pain (26.8%), and vomiting (25.4%) were reported by the subjects of the current study. These clinical manifestations interfere with the digestion and absorption of food and nutrients, resulting in nutritional deficiencies. Many studies support the evidence that macronutrient and micronutrient deficiencies among individuals with CD are more frequent than among those without CD [4,10]. The main finding of our study is that the subjects experienced malnutrition due to chronic energy and iron deficiency.
Based on %IBW, children 3-12 years of age were more affected by malnutrition than other age groups, being categorized as moderately (females) to severely (males) underweight. The other age groups under study fell into the good, overweight, or obese categories of %IBW.
BMI-for-age percentile was low for 8.7% of the children. By contrast, in a retrospective study conducted in Iran, the prevalence of malnutrition among child subjects with CD was 43% [23]. The BMI of those above 18 years indicated that most (32.4%) were obese or overweight, whereas 29.6% were underweight. It has been reported that overweight and obesity can co-exist with CD [24].
The loss of villi and surface epithelium due to CD increases plasma protein leakage in such patients [25]. About 9.86% of the subjects of the study presented with low serum albumin levels, indicative of protein-losing enteropathy. Moreover, 35.21% of the subjects had low serum urea, representing protein malnutrition or a low-protein diet. Low muscle mass due to malnutrition is the leading cause of low creatinine levels; in the present study, 25.35% of the subjects had low serum creatinine values. Iron is the major micronutrient depleted in subjects with CD, through iron malabsorption, reduced duodenal iron absorption, gastrointestinal blood loss, autoimmune diseases, and microcytic anemia. The frequency of iron deficiency anemia (IDA) among those with CD ranges from 12% to 69% [25]. This study found that 38.03% of subjects had low hemoglobin levels and 67.61% had low serum iron levels. Low serum ferritin, the marker of iron stores in the body, was observed in 22.54% of the subjects. It has been stated that depleted body iron stores and reduced hemoglobin levels are observed among patients with celiac disease [26].
With the reduction of body iron stores, there is an elevation in the iron-binding capacity of the iron-transporting protein transferrin. We found that 77.46% of the study subjects had elevated Total Iron-Binding Capacity (TIBC). In addition, females with CD are at higher risk of body iron depletion than males.
The strength of our study is that it is the first study in KSA to describe the nutritional status of subjects with CD by anthropometric, biochemical and iron profiles across all ages. However, the study had certain limitations: its design was retrospective, and because samples with incomplete file records were excluded, the sample size was drastically reduced and was very low for specific age and gender groups.
Celiac disease is a chronic gluten-sensitive enteropathy with varying clinical manifestations. Because of the inflammatory reactions and morphological changes at the sites of micro- and macronutrient absorption, patients are at risk of nutrient deficiencies. Furthermore, nutrition and food consumption patterns in Saudi Arabia depend largely on cereals, with wheat as the most popular staple food. Improper nutritional management of CD may therefore leave the risk of complications from multiple nutrient deficiencies unchanged or even increased. In this regard, the changes in biochemical and iron profiles due to iron depletion require closer attention and follow-up. The results of our study also underline the importance of nutritional and laboratory assessment, a Gluten-Free Diet, dietary counseling, iron supplementation, and continuous follow-up and monitoring of interventions in CD subjects during nutrition rehabilitation.
For more information on this article, visit OAJBGSR:
https://biogenericpublishers.com/pdf/JBGSR.MS.ID.00243.pdf https://biogenericpublishers.com/jbgsr-ms-id-00243-text/
Structuring Virtual Teams by Ecler Jaqua* in Open Access Journal of Biogeneric Science and Research
ABSTRACT
The primary purpose of a virtual team is to address a specific problem from different locations, whether local, regional, or international. While working from diverse settings, it is vital for leadership to create a favorable environment in which every member has the opportunity to showcase their skills and knowledge while also demonstrating that they can be relied upon to accomplish organizational purposes.
Introduction
A virtual team is created to share information about an organization's operations and to review its performance, ensuring that it continues on its success trajectory while eliminating foreseeable challenges. In this regard, the team's togetherness is best described through the linear approach, the intuitive approach, and finally the componential approach.
In the linear approach, the virtual team members accept cultural diversity, enabling a creative process of defining and clarifying the work and the solution to the challenge. The team's togetherness is created by top-level management who, through induction, introduces the team's objectives and elaborates the steps guiding the development of the solution. Togetherness is thus enhanced through three crucial steps: problem finding, fact-finding, and problem defining. The intuitive approach, in turn, encourages creativity among the team (Nemiro, 2004) [4]. It is one of the critical stages that can either hold the team together or easily break it: togetherness here rests on each member being able to contribute to the discussion without being judged. Open communication eliminates doubt within the team, making each member feel comfortable and facilitating the attainment of the team's purpose. When virtual team members can communicate without being judged, they experience a sense of belonging and remain committed to organizational aims.
In the same way, the team's creativity is demonstrated through the componential approach, which emphasizes the social psychology of creativity. This approach rests on three critical factors: factual knowledge, creativity, and task motivation. As demonstrated, the team's togetherness is promoted by setting rules, clarifying the purpose, showing respect and trust, and communicating effectively.
In contrast, the team's apartness arises from personal differences rooted in personal issues, cultural barriers, time zones, and a lack of effective leadership. These issues cause an absence of unity of purpose: each member focuses on their own problem, and the resulting conflicts of interest make attainment of the virtual team's objective nearly impossible. Cultural barriers are a challenge that requires virtual team leadership to provide an approach for attaining the primary goal; this could entail engaging a professional translator to break down communication barriers and make information accessible to everyone. Because the team works from different geographical areas, working hours differ, making it hard for the team to stick together and attain its objective within the specified time. Leadership should therefore create a timetable at which all members can participate in the virtual meeting, enabling the group's goals. Team apartness also manifests as a lack of effective communication, which reflects a lack of leadership and leaves the team without direction on how to execute its objectives; there is then no sequence for idea generation, development, finalization, and closure.
The Options for Work Design and Leadership of Virtual Teams
The available options for work design and leadership in the virtual team comprise the wheel, modular, and iterative approaches, which guide and shape the team members from idea generation to development, finalization, and closure. The wheel approach mainly entails the leader communicating to the whole team what will be accomplished, how, and when. The leader acts as the hub through which all information from lower-level members must pass, which eliminates communication gaps and inconsistencies and assures the whole team an efficient flow of information. Team members propose ideas to the team leader, who tables the various views before the whole team; the matter is discussed extensively, enabling the leader to make the right choices.
On the other hand, the modular approach involves the team members meeting together to decide on the task, the need, and the purpose to be pursued. After agreeing on the goal, the members divide the work based on expertise. Once each member has completed their portion, the team consolidates and revises the work before finalizing it for implementation. This approach to work design plays an instrumental role in delivering high-quality work and steadily improved outcomes on the tasks assigned by management.
The iterative approach, in contrast, relies on e-mail. It requires adequate time, an open communication system, honesty, work-sharing technology, and frequent task interaction, alongside the willingness to accept feedback and continue cooperating (Delice, Rousseau & Feitosa, 2019) [3]. Team members are given tasks to accomplish within specific timelines. Once a member has worked on a task within the specified time, they share it with management, receive feedback, revise the areas flagged, and then resubmit. This creates a favorable environment that encourages creativity and improves the work for the whole project. In essence, the leadership of a virtual team is instrumental during work design: leadership creates a supportive environment and culture in which all employees feel valued and useful for accomplishing corporate objectives. A leader therefore encourages creativity, employs open communication, embraces cultural diversity, and, above all, offers direction while regulating behavior within the team.
How Task Requirements and Team Characteristics Affect Work Design and Leadership
Often, the virtual team works on a project that must be accomplished within a specified time, and from this project the leader gauges the team members' capability based on the outcomes they deliver. Arguably, imposing a hierarchy within the virtual team may significantly impair its ability to deliver on its mandate. Furthermore, because team members work from different places, reaching the team leader may be a challenge across time zones; decision-making becomes impaired, which in turn affects the organization's ability to attain its objectives.
Likewise, some tasks may prove too challenging for some members, requiring rotation, which can become a source of bad blood within the team. Often, the member whose work is transferred has already spent considerable time on it; still, because they cannot complete it as anticipated, the task passes to a more capable member. This affects the work design and the ability to lead the whole team toward accomplishing its purposes.
The issue of different time zones also affects the leadership's ability to design the work effectively: when some members are awake, others may be asleep. This variance in time undermines performance, wasting time and resources and inconveniencing the whole team (Davidaviciene, Majzoub & Meidute-Kavaliauskiene, 2020) [1]. Team members also hold different attitudes toward each other, and with the team working virtually, consolidating effort around the task at hand becomes difficult, hindering the attainment of corporate goals. Moreover, given that the team should work to standardized procedures, following those procedures virtually can itself be challenging, illustrating a further obstacle to the group's purposes.
Assessment of the Effectiveness of the Structures/Practices Covered by the Readings in this Module for Virtual Teaming
Given the task requirements and team characteristics above, it is easy to argue that virtual teaming is a flop: the inability to build the physical human relations needed within the team, the communication barriers occasioned by different time zones, and personal differences all work against it. Yet these challenges can be met with effective communication and a flat communication hierarchy in which all members have access to open communication. The team should first run an induction process that outlines the expectations of each member and sets out the rules and regulations the team will follow. Once the team receives training and development, communication and mutual acceptance improve, changing the direction the team takes. All this makes continued open communication possible while enhancing cultural diversity across the whole team, which promotes a sense of purpose.
Conclusion
In this regard, I am convinced that virtual teaming works, provided there is effective leadership that offers communication, training, and development and, importantly, outlines the steps the team should follow in its relations while encouraging open communication and subsequent feedback (Malhotra, Majchrzak & Rosen, 2007) [2]. Regarding timeframes, the team leader should communicate a meeting time that favors everyone and is fair to all team members. Since the COVID-19 outbreak, virtual teaming has proved effective in practice, and an immense literature supports the view that it is practical. Virtual teaming requires building trust, appreciating diversity, managing the work cycle adequately, and enhancing the visibility of team members. It also requires adequate monitoring of the team members, ensuring that each member benefits from the whole group.
For more information on this article, visit OAJBGSR:
https://biogenericpublishers.com/pdf/JBGSR.MS.ID.00242.pdf https://biogenericpublishers.com/jbgsr-ms-id-00242-text/
The Importance of the Cold Chain Logistics in the Pandemic: The Transport of Covid-19 Vaccines by Mesut Selamoglu*
ABSTRACT
Pharmaceutical and medical products, especially temperature-sensitive ones, play a major role in the cold chain and its logistics. Cold chain logistics refers to supply chain systems consisting of a series of protocols spanning multi-level processing, transportation, storage, distribution, and retail sale of products. All these procedures are indispensable for maintaining a temperature-controlled environment for pharmaceutical products such as drugs and vaccines, minimizing deterioration, and maintaining quality standards. With advancing technology in the industrial sector, cold chain logistics has made remarkable progress.
Keywords: Pandemic, Vaccines, Covid-19, Logistics, Cold chain, Cold chain logistics, Smart health technology
Introduction
Human history has witnessed many pandemics and the significant impacts they cause on health, the economy, and even global security. Pandemics have infected millions of people and caused massive disease outbreaks and deaths; they also threaten global security, which directly affects economic stability and lives. Mortality and morbidity rates can be controlled by a prompt, efficient, and effective emergency response that reduces long-term social and economic impacts. The term 'pandemic' refers to the outbreak of a fatal disease at the global level, irrespective of region and climate, as with the Black Death, HIV/AIDS, SARS, and plague. In the 21st century, avian influenza (bird flu) and SARS emerged from Asia, causing millions of deaths and infecting people around the world. The emergence of new and virulent viral strains causes global pandemics, and such infectious viral diseases spread easily. They lead to high rates of transmission, mortality, and morbidity because the human population has no immunity against the virus [1].
Outbreaks of infectious disease spread very quickly and cross borders easily, threatening not only economies but also regional stability, as witnessed in the pandemics and epidemics of H1N1, H5N1, HIV, and SARS. Such health emergencies threaten public health as well as global stability. During December 2019, many pneumonia-like cases caused by an unidentified viral strain were reported in Wuhan, China. The disease clinically resembled a new type of viral pneumonia and flu. After isolation of the virus and several analyses of its genomic sequence, a novel strain of coronavirus was identified and designated 'severe acute respiratory syndrome-related coronavirus 2', or SARS-CoV-2. The respiratory disease caused by this newly discovered virus was later named 'Coronavirus disease 2019' (Covid-19) by the World Health Organization [2-4]. The virus originated in bats and was then transmitted to human beings. The Covid-19 pandemic has been threatening human health around the world, and its chain of transmission has not yet been broken. The situation has highlighted the worth of laboratory medicine: clinicians, laboratories, and medical scientists played a critical part and responded very promptly to the SARS-CoV-2 emergency. Laboratory medicine services are now required to provide rapid diagnosis of the infection, serological and biochemical monitoring of infected and hospitalized patients, and epidemiological surveillance. Antibody assays against the pathogen detect the immune responses of post-infection patients, and serological investigations are pivotal in determining the efficiency of vaccines and in evaluating patients' immune responses. Notably, the federal drug authority takes an average of 12 years to approve a drug or vaccine for any disease.
However, great efforts are under way worldwide to develop vaccines against Covid-19. There are 5 different platforms for Covid-19 vaccines and about 16 different vaccines developed. The platforms, with example vaccines on each, are as follows [5]:
RNA-based vaccine, e.g. Pfizer, Moderna, CureVac
Viral vector (adenovirus, non-replicating), e.g. AstraZeneca, CanSino, Sputnik, Janssen
Inactivated virus, e.g. Sinopharm, Sinovac, etc.
Protein subunit, e.g. Novavax
DNA-based vaccine, e.g. Zydus Cadila
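As an illustrative sketch, the five platforms listed above can be captured as a simple lookup structure; the groupings come directly from the list, while the dictionary name and helper function are hypothetical.

```python
# Illustrative only: platform-to-example-vaccine mapping, taken
# directly from the five platforms listed above. The names
# COVID19_VACCINE_PLATFORMS and platform_of are our own.
COVID19_VACCINE_PLATFORMS = {
    "RNA-based": ["Pfizer", "Moderna", "CureVac"],
    "Viral vector (adenovirus, non-replicating)": [
        "AstraZeneca", "CanSino", "Sputnik", "Janssen",
    ],
    "Inactivated virus": ["Sinopharm", "Sinovac"],
    "Protein subunit": ["Novavax"],
    "DNA-based": ["Zydus Cadila"],
}

def platform_of(vaccine: str) -> str:
    """Look up which platform a given vaccine belongs to."""
    for platform, vaccines in COVID19_VACCINE_PLATFORMS.items():
        if vaccine in vaccines:
            return platform
    raise KeyError(f"unknown vaccine: {vaccine}")
```

For example, `platform_of("Moderna")` returns `"RNA-based"`, which matters downstream because, as discussed later in this paper, the RNA-based platforms impose the most demanding cold chain requirements.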
There has been rapid progress and great scientific effort to develop vaccines for Covid-19, and from the initial trials to approval the hope stayed alive of breaking the chain of the virus and stopping the pandemic. Global distribution of the coronavirus vaccines required a huge transport setup, and the logistic challenges in air transportation are fundamental. This paper encompasses the key considerations regarding vaccination and its role in the recovery of international travel. Worldwide transportation of Covid-19 vaccines requires stringent and safe means. After successful trials, the Covid-19 vaccines raised hopes around the world of escaping the pandemic and returning to normal life. Global transportation of vaccines involves international travel, and the process must be standardized and consistent to minimize complexity and dose wastage. Moreover, a clear roadmap is needed to implement and manage the vaccination process, especially during roll-out periods, since different countries move at different paces in accessing vaccine supplies. Maintaining the vaccination process is crucial while testing and vaccination overlap [6]. The transportation and handling of life-saving pharmaceutical products demand stringent and careful handling conditions: during transportation, potent medicines may lose their potency and become ineffective. The most critical issue when transporting pharmaceutical products is maintaining quality, which cannot be compromised. The impacts of logistic constraints are mitigated by dedicated pharmaceutical logistics strategies. The operational challenges of transporting vaccines across borders are linked to standardized prerequisites that must be fulfilled. Training of transportation staff is highly necessary, as is their prior knowledge of the issues involved.
Risk assessment, and review to address any risk identified, is the primary requirement, and it must cover the types of equipment and the infrastructure involved [7].
Different economic and technical indicators aim to ensure product control, because quality losses cannot be restored. The final quality of the product is evaluated together with the time, temperature, and tolerance of the product during storage. An effective shipment requires good coordination and time management. Any delay in the product's logistics, or a change in temperature, will financially damage the enterprise and, more importantly, deteriorate cold chain pharmaceutical products, adversely affecting health and potentially leading to fatal consequences. To ensure that products do not deteriorate or suffer damage during the cold supply chain process, the pharmaceutical and medical sectors require cold chain technology more and more. In this context, this study focuses on the importance of cold chain logistics in the transportation of Covid-19 vaccines during the pandemic.
Logistics
A network of entities that produces and distributes services or goods from suppliers to end-users is designated a supply chain. Logistics is the complex process of organizing and implementing any such operation. Logistics and supply chain management are interlinked through planning, control mechanisms, implementation, the effectiveness of forward and reverse flows, and the final storage of services and goods. Information flows continuously from the point of origin to consumers in order to fulfil customer requirements. The main task of distribution logistics is delivering products from the manufacturer to distributors and finally to consumers, through basic operational steps such as order processing, warehousing, and transportation to the destination. Distribution logistics is very important for maintaining process standards and depends on the time, place, and quantity of production and consumption. The resources managed in logistics may be tangible goods (materials, supplies, equipment, etc.), food, or other consumable products [8,9].
Cold Chain Logistics
The term 'cold chain' refers to the series of actions and the equipment required to maintain product quality within a specific low-temperature range from production to consumption. The supply chain in a cold chain setup is strictly temperature-controlled and requires uninterrupted refrigeration during production, storage, and distribution. Other activities are associated with the equipment and logistics needed to maintain a low temperature within a specific range. The whole process ensures the quality, preservation, and extended shelf life of consumer products including agricultural produce, frozen food, seafood, photographic films, pharmaceutical products, and other chemicals. The transport and transient storage of such products at low temperature is sometimes termed cool cargo. Moreover, the cold chain is considered a science, a technology, and a process. It is a 'science' because understanding the biological and chemical processes associated with product perishability is essential; a 'technology' because it relies on physical means to ensure the desired thermal conditions; and a 'process' because it requires a series of tasks in manufacturing, storage, transportation, and temperature monitoring for all sensitive products [8].
Cold chains are common in the pharmaceutical and food industries and are also required for chemical shipments. The most common low-temperature range in pharmaceutical processing units is 36 to 46 °F (2 to 8 °C), but the range can be adjusted to the needs of the products being shipped. Other parameters are also considered when shipping fresh products, including air quality in terms of oxygen, carbon dioxide, and humidity; these requirements complicate the operation and maintenance of cold chain processes [10]. Unlike other merchandise, cold chain goods are prone to perishability and damage, so they always require safe and quick transportation to end-users. Because these goods travel in a temporarily maintained low-temperature environment, they are also termed cold cargo throughout the logistics process [11].
Cold chain logistics in the transport of the Covid-19 Vaccines
Some 80 potential vaccines for Covid-19 have been launched so far, and many research programmes are still under way. To transport these vaccines from the production site to their destinations, the air freight industry has had to respond with efficient global transportation and delivery. Transporting and handling Covid-19 vaccines adds new dimensions to supply chains: these highly sensitive and valuable vaccines require not only a temperature-controlled environment but also compliance with the international temperature-control regulations published in the EU Good Distribution Practices, by the US Federal Drug Administration, and by the World Health Organization (WHO) [12].
Capacity in airfreight logistics is being generated to meet all existing programmes for transporting vaccines around the world. Both resources and infrastructure are critical, as every country prepared to respond with mass vaccination against Covid-19, the virus having affected all territories to differing extents. Supply chain stakeholders face challenges in planning and executing a global delivery mechanism for Covid-19 vaccines. Collaboration is much needed in this scenario to build trust and confidence, and the integrity of the sensitive vaccines requires strict maintenance throughout transportation. As worldwide demand for vaccines increases, the 'traditional manufacturing' approach could be replaced with 'distributed manufacturing'. Such decentralization would create multiple manufacturing units and distribute the load among them, which would also serve end-users better and reduce the constraints on supply chain logistics. For Covid-19 vaccines, this goal can only be met if the newly established companies ensure access to their products and compounds. At present the situation is very challenging because government authorities have imposed export restrictions on Covid-19 vaccines, and such life-saving, essential medical supplies require different approvals and certification policies [13].
Covid-19 Vaccines and Smart Health Technologies in the Pandemic
The Covid-19 outbreak is putting greater emphasis on long-running technological and medical advances. Firms that cannot provide a rich experience to their employees and clients are at risk as they traverse the new work-from-home reality. Cloud computing and cybersecurity have become key parts of organizational infrastructure as companies navigate this reality. In medicine, attempts to find a cure or better therapies are refocusing emphasis on some of the most recent advances in genetics and immunology. Significantly, innovations arising from the present pandemic are not transient; entrepreneurs should broaden their horizons to see the long-term implications of current issues. Covid-19's unpredictability drives healthcare and technology innovation [7].
Vaccinations are a significant new tool in the fight against Covid-19, and the fact that so many vaccines are proving effective and are being developed is quite promising. Scientists around the globe are researching and creating as swiftly as they can to deliver diagnostics, cures, and vaccinations that will save lives and ultimately end this pandemic [14]. There are different platforms for Covid-19 vaccines, and many different vaccines have been developed, each with its own conditions (Figure 1). To speed healthcare innovation and combat the coronavirus, organizations all over the world are deploying tried-and-tested technologies as well as inventing new ones. The goal of smart healthcare is to transport data rather than people. All that remains is to hope that this future answer arrives soon, and that the promise afforded by the interplay of technological advances is fully realized [7].
Every year, vaccines protect millions of lives. Vaccines work by training and strengthening the body's natural defenses, the immune system, to detect and combat the viruses and bacteria they target. If the body is exposed to these disease-causing microorganisms after vaccination, it is prepared to kill them right away, minimizing sickness. Effective and reliable vaccination will improve the future, but for the time being we should continue wearing masks, keeping a safe distance, and avoiding crowds. Being vaccinated does not mean we may disregard prudence and put ourselves and others at risk, especially since the extent to which vaccination protects not just against sickness but also against infection is still unknown [14].
The temperature of most vaccines must be kept within 1 degree Fahrenheit of their optimal temperature. Traditional immunizations are normally kept between 35 and 46 degrees Fahrenheit. Although most Covid-19 vaccines must be maintained below 32 degrees Fahrenheit, several of the most prominent Covid-19 vaccines must be kept far colder still: Pfizer's vaccine candidate needs a preservation temperature of minus 94 degrees Fahrenheit, while Moderna's vaccine needs minus 4 degrees Fahrenheit. Keeping these temperatures consistent is not simple [14].
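Since the text quotes Fahrenheit while cold chain regulations (and the freezer specifications cited later) use Celsius, a quick conversion sketch confirms the figures above are consistent; the function name is our own.

```python
def f_to_c(deg_f: float) -> float:
    """Convert a temperature from degrees Fahrenheit to degrees Celsius."""
    return (deg_f - 32.0) * 5.0 / 9.0

# Storage temperatures quoted in the text, converted for reference:
#   traditional vaccines: 35 to 46 F  ->  about 1.7 to 7.8 C (the 2-8 C band)
#   Pfizer candidate:     -94 F       ->  -70.0 C
#   Moderna:              -4 F        ->  -20.0 C
```

The conversion shows why the Pfizer candidate is the outlier: at minus 70 degrees Celsius it sits far below the standard 2 to 8 degrees Celsius pharmaceutical cold chain band.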
Once a Covid-19 vaccine is manufactured, it will most likely be delivered by vehicle to the nearest suitable airport. Because a Covid-19 vaccine is so important and time-sensitive, it will almost certainly be flown across the nation or around the world. After these planes are unloaded, the vaccines will be transported by truck to suitable warehouse storage facilities for distribution, and several vaccines may be shipped straight from the warehouses to the health-care institutions where they will be administered [7].
The very first stage is to establish where vaccines will be manufactured. If manufacturing is mostly done elsewhere, industries may have to use vehicles and airplanes both for transportation inside their own nations and for broader distribution to others. There is also some doubt about which Covid-19 vaccine will be licensed first; depending on the vaccine, different temperatures and handling protocols may be required, so personnel across the cold chain would need distinct instruction in how to manage each vaccine [7]. Getting the right vaccines into the right people at the right time during a global pandemic is, unsurprisingly, proving a logistical challenge. Numerous large logistics businesses, such as UPS and DHL, are making significant investments in cold chain processing facilities. Near its aviation hubs in Louisville, Kentucky, and Atlanta, Georgia, and in the Netherlands, UPS is establishing freezer farms with 600 freezers capable of reaching minus 80 degrees Celsius. Each freezer can hold 48,000 vaccine vials and can store either the Pfizer or the Moderna vaccine at the required low temperatures [7]. In several localities, establishing freezers suited to the cold temperatures required by the Pfizer vaccine is not feasible, so systems must be put in place to ensure that such locations receive a sufficient quantity of that vaccine. Aviation and logistics firms are still assessing how they will handle this demand; the outcome remains to be seen. Every vaccine created has the potential to save lives and bring the globe closer to routine, but delivering the vaccines to where they are needed will be difficult. Establishing and strengthening storage conditions for vaccine delivery will help the world avoid wasting vaccines and help people overcome the pandemic sooner [7,14].
Nowadays, the globe can produce and distribute around 6.4 billion flu vaccines each year. Analysts estimate that firms will manufacture roughly 9 billion Covid-19 vaccine doses in 2021, and cold storage will need to be capable of managing this massive increase on top of the vaccines that are already supplied each year. According to research published in 2019, 25% of vaccines are damaged by the time they reach their ultimate destinations. When a vaccine is exposed to temperatures beyond its normal operating range and this is discovered, the vaccines are often discarded. Occasionally a temperature error goes undetected and one of these vaccines is administered. According to researchers, such vaccines have no additional side effects, but they may provide less immunity and require an individual to be revaccinated. The vast majority of individuals in the United States and billions throughout the world may eventually require a coronavirus vaccination – maybe two doses. This massive immunization campaign will necessitate a complex vaccine cold chain on a never-before-seen scale. The present vaccine cold chain is inadequate, and expanding the distribution network will be difficult [7].
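The scale of potential wastage implied by these figures can be illustrated with back-of-the-envelope arithmetic. This is only a sketch using the numbers quoted above (6.4 billion flu doses, a projected 9 billion Covid-19 doses, and a 25% spoilage rate); actual spoilage will vary by route and vaccine.

```python
# Figures quoted in the article (assumptions for this rough estimate)
annual_flu_capacity = 6.4e9    # flu vaccine doses produced and distributed per year
projected_covid_doses = 9.0e9  # analysts' projection of Covid-19 doses in 2021
spoilage_rate = 0.25           # share of vaccines damaged in transit (2019 study)

# Combined load on the cold chain and the doses exposed to spoilage risk
total_doses = annual_flu_capacity + projected_covid_doses
doses_at_risk = total_doses * spoilage_rate

print(f"Combined annual doses: {total_doses / 1e9:.1f} billion")
print(f"Doses at risk at a 25% spoilage rate: {doses_at_risk / 1e9:.2f} billion")
```

Even if the true spoilage rate for Covid-19 shipments were much lower, the absolute number of doses lost would still be in the hundreds of millions, which is why the article stresses strengthening the cold chain.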
So, how do businesses and government entities deliver vaccines to those who need them?
The solution is the vaccine cold chain, a distribution network that can keep vaccines at precise temperatures from the time they are manufactured until the time they are delivered to a person. A further concern is how often delivery to sites of care will be required. This will be determined by the refrigeration capacity of healthcare institutions and hospitals, personnel resources, vaccine distribution sites, and a variety of other criteria, as well as the vaccine's storage life. Finally, there is the straightforward issue of how to increase transport and storage capacity [7]. Regular restaurant freezers have a temperature range of 5 to minus 10 degrees Fahrenheit, which is insufficient to meet the requirements of the Pfizer vaccine; specialized equipment is necessary. Studies of vulnerable product distribution networks in the pharmaceutical business, from an operations management perspective, show how closely handling conditions relate to product quality. Considering the billions of vaccine doses required to combat the pandemic, a high spoilage rate could result in a tremendous loss of revenue as well as a significant delay in immunizations, perhaps leading to fatalities and a lengthier worldwide closure. Covid-19 vaccine requirements are estimated to be in the range of 12 billion to 15 billion doses globally, according to experts. Vaccines are perishable products that must be stored at extremely low temperatures. The bulk of Covid-19 vaccines in production, such as those developed by Moderna and Pfizer, are RNA-based vaccines; they will spoil if they become too hot or too cold, and a spoiled vaccine, like spoiled seafood, must be discarded. WHO is collaborating with Gavi and UNICEF to guarantee that the infrastructure and technical assistance are in place to ensure that Covid-19 vaccines are securely given to all individuals who require them [7,14].
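The gap between a standard freezer and the ultra-cold requirement can be checked with a simple unit conversion. This is a minimal sketch using the values quoted above: a restaurant freezer range of 5 to -10 °F versus the -80 °C ultra-cold target mentioned for the Pfizer vaccine.

```python
def f_to_c(fahrenheit):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (fahrenheit - 32) * 5 / 9

# Typical restaurant freezer range quoted in the article: 5 to -10 degrees F
warm_end = f_to_c(5)    # about -15 degrees C
cold_end = f_to_c(-10)  # about -23.3 degrees C

ultra_cold_target_c = -80  # ultra-cold storage figure cited for freezer farms

# Even the coldest end of a standard freezer falls far short of the target
shortfall = cold_end - ultra_cold_target_c
print(f"Restaurant freezer range: {warm_end:.1f} to {cold_end:.1f} degrees C")
print(f"Shortfall versus the -80 degrees C target: {shortfall:.1f} degrees C")
```

The roughly 57-degree gap is why ordinary commercial refrigeration cannot simply be repurposed and dedicated ultra-low-temperature equipment is required.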
As of February 18, 2021, at least seven distinct vaccines across three platforms had already been rolled out in various nations. Vaccination is prioritized for vulnerable groups in all nations. Simultaneously, more than 200 other vaccine candidates are being developed, with more than 60 of them in clinical trials. COVAX is a component of the ACT Accelerator, which WHO and partners introduced in 2020. COVAX, the vaccines pillar of the ACT Accelerator co-led by CEPI, Gavi, and WHO, intends to end the acute phase of the Covid-19 pandemic by accelerating the production of safe and effective Covid-19 vaccines, facilitating the development of production facilities, and collaborating with governments and producers to guarantee that vaccines are distributed fairly and equitably to all nations – the only global program to do so. As the Covid-19 pandemic sweeps the globe, WHO and collaborators are scrambling to develop and implement safe and effective vaccines while working together on the response: monitoring the epidemic, advising on essential measures, and sending vital medical supplies to individuals in need [7].
Pharmaceutical and biochemical engineering products must be handled and transported under strict guidelines, otherwise the drug may lose its efficacy and become useless. This adds to the present constraints of restricted aviation freight capacity and worldwide connectivity resulting from the cancellation of roughly two-thirds of the commercial network. Handling and delivering vaccines adds a new layer to supply chain logistics: it is not simply a box! Such high-value and delicate items may necessitate not just a temperature-controlled environment, but also adherence to international regulatory criteria outlined in the Temperature Control Regulations [14].
The majority of temperature errors in cold storage facilities are caused by ineffective transportation practices, with yearly losses approximated at $34.1 billion. That figure does not include the cost (both financial and physical) of any sickness that could have been avoided if high-quality vaccines had been delivered on time. Aircraft, trucks, and cold storage facilities are all necessary components of the cold chain; the vaccine manufacturing sites and demand points determine how this infrastructure is interconnected and used [7]. Today's air cargo logistical capacity is geared to match each country's present scheduled immunization programs. As governments prepare for a huge vaccine response to Covid-19 that will affect all nations and territories, scaling infrastructure and manpower will be important. The supplier stakeholders' next task is to create and implement a worldwide distribution strategy for the Covid-19 vaccines, which has never been done before. Would this temperature-controlled distribution network be able to handle, store, and transfer such a significant increase in vaccine quantities? Certain transporters, ground handlers, and forwarders may be unsure how to manage temperature-sensitive items successfully. Furthermore, because temperature-controlled life science medical supplies may not be acceptable for travel in the passenger cabin, pharmaceutical producers may be reluctant to have their precious goods handled in this fashion. As a result, before accepting or handling vaccine shipments, all suppliers must acquaint themselves with the overall criteria for securely processing vaccine deliveries. It may be necessary to dedicate special or supplementary resources within their networks, as well as assign additional and/or legally compliant vaccine storage capacity. Industry retraining and compliance certifications may also have to be expanded [7,14].
Conclusion and Suggestions
The vaccine manufacturing industries are largely influenced by the contribution of intermediate industries near end users' destinations. Since the major portion of vaccines is imported from foreign countries, bottleneck strategies may prevent pharmaceutical industries from maximizing their impact along the supply chains. In such a situation, domestic production could be boosted if policymakers execute supportive policies, which would in turn reduce the import of vaccines. This would also enable the pharmaceutical industry to flourish and increase its production manifold. Expectedly, the producers of vaccines and vaccine services are the two most important industries that have globally influenced vaccine processing, as they are major users of the supply chain during a pandemic. Cold chain logistics is the lifeline of pharmaceutical products such as Covid-19 vaccines in the pandemic.
The continuous supply of vaccines can only be ensured by cooperation, collaboration, and communication. Prompt action is needed to initiate an industrial transformation that can only be achieved through an efficient, integrated, and collaborative supply chain mechanism, whether in alignment with globally harmonized standards or through industrial collaborative efforts such as digitization, data sharing, risk assessment, or tracking-and-tracing strategies. This improved approach will lead to growing expectations, transparency, and high standardization across supply chains. Airfreight logistics is ambitious and must follow workable plans for resilient solutions.
There has been collaboration between the logistics and pharmaceutical industries for many decades to improve the quality of products within realistic capabilities. Logistics in general, and cold chain logistics in particular, will continue their vital contribution in supplying life-saving products to help fight the coronavirus and diminish its impacts on humanity. It is widely believed that laboratory scientists, with support from medical professionals, industries, and public health authorities, will eventually overcome this pandemic.
For more information regarding this article, visit OAJBGSR:
https://biogenericpublishers.com/pdf/JBGSR.MS.ID.00241.pdf https://biogenericpublishers.com/jbgsr-ms-id-00241-text/
Entrepreneurship Development through Agro-Processing Centers in Production Catchment for Secured Farmers Income by Mohammed Shafiq Alam*
Short Communication
An agro-processing industry or centre refers to the subset of manufacturing that processes raw materials and intermediate products derived from the agricultural sector. Agro-processing industry thus means transforming products originating from agriculture, forestry and fisheries. A very large part of agricultural production undergoes some degree of transformation between harvesting and final use. The industries that use agricultural, fishery and forest products as raw materials comprise a very varied group. They range from simple preservation and operations closely related to harvesting to the production, by modern, capital-intensive methods, of such articles as textiles, pulp and paper.
The potential for agro-industrial development in the developing countries is largely linked to the relative abundance of agricultural raw materials and low-cost labour in most of them. The most suitable industries in such conditions make relatively intensive use of these abundant raw materials and unskilled labour and relatively less intensive use of presumably scarce capital and skilled labour.
At present, farming alone no longer seems to be a viable and remunerative venture for small and medium farmers unless properly coupled with value addition through efficient primary processing operations carried out, in the production catchments themselves, by farmers transformed into processors. Rural entrepreneurship through suitable agro-processing models/complexes is essential to increase the income of farmers, provide significant employment opportunities to rural youth, and reduce the huge post-harvest losses, especially in cereals, spices, cotton and oil-seeds. Based on the resource availability in an agro-climatic zone, agro-processing models for various regions of Punjab were developed and validated in a research project under the AICRP on Post Harvest Engineering & Technology.
The concept of Agro Processing Centres (APCs) was to process grains at the village level to substantially enhance the income of farmers. Moreover, the primary processed products from wheat, paddy, oilseeds, pulses, etc. are of daily use in every household, so there lies great potential for the primary processing of these durable food grains. These agro-processing complexes have been found to be technically feasible, economically viable and socially acceptable models. Some farmers who established these complexes have earned to such an extent that they have given up farming and involved their family members in the processing activity. These complexes consist of two or more machines for processing at the farm/village level: a mini rice mill, baby oil expeller, small and large atta chakkies with a scouring machine, masala grinder, penja, cleaner and feed mill, with construction and installation costing approximately Rs. 20-25 lakh.
A covered space of approx. 200-300 sq. yards is required for the installation of all these machines. Presently, more than 300 such APCs have been installed by farmers/rural youth under the direct or indirect guidance of PAU and are running successfully in different parts of the state. The state has a lot of potential for installing similar units all over Punjab. Presently, a 15-35% subsidy on machinery, based on categories, is provided by KVIC.
The following advantages have been experienced through APC’s in rural areas:
Employment generation in the production catchments
Migration check of unemployed rural youth to urban areas
Processing of farm product locally
Increased farm income
Reduced transportation costs of raw materials/products
Adulteration free product
Ease of disposal of waste
Minimizes pollution from agro-processing in cities
The following agro industrial models have been developed which may be adopted in specific areas depending upon the crop produce of the area:
These models consist of two or more machines for processing at the farm/village level. A number of such complexes have been adopted by farmer-entrepreneurs at various places in Punjab. These are technically feasible and economically viable units. Details of agro-processing machinery along with costs are given in Table 1. After deciding the technologies, the components of the agro-industrial complex can be finalized and a specific agro-industrial model selected for installation.
a. Extra cost of Rs 6,00,000 (for construction of the APC shed along with installation of machinery) b. Cost of packaging material = Rs. 150/- to Rs. 275/- per kg (depending upon the thickness); printing charges = Rs. 0.50/package
Selection of suitable agro industrial technology/equipment
For selecting a suitable agro industrial technology/equipment in rural areas, the following important points should be given due consideration:
Crop production & pattern i.e. availability of raw material
Technology/process to be used for processing
Volume of production
Identification of suitable technologies, plant and machinery for desired volume of production.
Facility for storage and marketing
Apart from proper selection of equipment, assessment of availability of raw material and market potential, it is also necessary to exercise the following steps:
Collect benchmark information of selected village and cluster of villages on crops grown, processing of farm produce, and the demand for processed products;
Identify corresponding post-harvest equipment/technologies based on the crop production scenario of the cluster and study their techno-economic feasibility;
Complete formalities for creating infrastructural facilities;
Get license from appropriate authorities of the government to take up processing activities.
Install processing equipment and introduce process technologies in APC;
Constitute a local governing committee to oversee safety and maintenance of agro-processing centre;
Periodically observe the functional performance and monitor processing machines;
Develop modalities for credit assistance from banks to procure additional processing equipment as and when needed;
Develop strategies to add processing activities that enable the APC to run for more than nine months in a year;
Involve Self-Help Groups of Women in the value-added activities as well as marketing of processed products; and
Obtain technical advice from the competent source to update and keep pace with the changing environment in value-addition activities.
APCs provide a platform to process raw produce into assured good-quality finished products and are technically feasible and economically viable units. Entrepreneurs who have already installed APCs under our guidance are running them successfully and earning a monthly profit of approx. Rs 0.50 to 1.00 lakh, apart from providing employment to 2-5 persons. The state has a lot of potential for installing similar units all over Punjab.
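Using the figures quoted in this article (machinery and installation of approx. Rs 20-25 lakh, an extra Rs 6 lakh for shed construction, and a monthly profit of Rs 0.50-1.00 lakh), a rough payback period for an APC can be estimated. This is only a sketch; actual returns depend on crop mix, utilization, and local demand.

```python
def payback_months(investment_lakh, monthly_profit_lakh):
    """Months needed for cumulative profit to cover the initial investment."""
    return investment_lakh / monthly_profit_lakh

# Machinery and installation: approx. Rs 20-25 lakh; shed adds approx. Rs 6 lakh
low_investment = 20 + 6
high_investment = 25 + 6

# Reported monthly profit range: approx. Rs 0.50-1.00 lakh
best_case = payback_months(low_investment, 1.00)    # lowest cost, highest profit
worst_case = payback_months(high_investment, 0.50)  # highest cost, lowest profit

print(f"Estimated payback period: {best_case:.0f} to {worst_case:.0f} months")
```

On these assumptions an APC recovers its capital in roughly two to five years, consistent with the article's claim that the units are economically viable.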
For more information regarding this article, visit OAJBGSR:
https://biogenericpublishers.com/pdf/JBGSR.MS.ID.00240.pdf https://biogenericpublishers.com/jbgsr-ms-id-00240-text/
The use of Zoom Videoconferencing for Qualitative Data Generation: A reflective account of a research study by Temitope Labinjo
ABSTRACT
Historically, the face-to-face interview has been the method of choice for undertaking interviews in qualitative research. However, the introduction of communication technologies (for example, Zoom) has resulted in qualitative researchers re-thinking how they generate data.
This article presents the experience of a Ph.D. researcher who used Zoom videoconferencing to interview participants, including its benefits and limitations, and suggestions for future research. The article demonstrates that although videoconferencing tools like Zoom are not meant to replace traditional face-to-face interviews, they are a helpful addition to the researcher's choice of methods. Although there are some technical limitations to using this tool, these can be overcome by familiarization and training. This is particularly useful during the COVID-19 pandemic, when restrictions on face-to-face interaction have disrupted many researchers' work. However, future research is needed to determine Zoom's suitability and security, especially in health and social care research.
KEYWORDS: Interviews; Qualitative research; Zoom Video conferencing; Data generation; Online communication
INTRODUCTION
Interviewing is the most used form of data collection in qualitative research (Creswell, 2007). Historically, face-to-face interviews have been the preferred method for generating data. In contrast, online interviews are conducted using computer-mediated communication (CMC) (Salmons, 2014). There are two types of online interviews: synchronous (real-time) and asynchronous (non-real-time). Asynchronous online interviews are usually in the form of emails, discussion groups, etc. (Hooley et al., 2012), while synchronous online interviews generally take the form of text-based chat rooms, videoconferencing, instant messaging, etc. (Steiger and Gortiz, 2006). More recently, with advances in communication technology and internet usage, videoconferencing has become widely used as an alternative to traditional means of interviewing in qualitative research.
Video conferencing is defined as a communication technology that allows real-time, online simultaneous conversations to occur with audio-visual information (Salmons, 2012). Videoconferencing involves the use of instant feedback and non-verbal cues such as facial expressions and voice gestures. However, there is a paucity of research exploring digital technology as a data collection tool (Archibald et al., 2019).
The use of digital technology in research has a lot of potential benefits such as convenience and cost-effectiveness of online methods compared to in-person interviews, especially when researching a large geographical area (Hewson, 2008; Horrell, Stephens & Brehany, 2015; Braun, Clarke and Gray, 2017; Cater, 2011; Deakin & Wakefield, 2014 & Archibald et al., 2019). Furthermore, online methods are beneficial in many research contexts where there is a need to communicate with multiple stakeholders in geographically dispersed areas with limited resources (Archibald et al., 2019). Therefore, online interviews can be 'conducted' in an environment more relaxed, comfortable, and familiar to the participant (Irani, 2018).
A significant advantage of online interviews is that they allow researchers to identify non-verbal cues to build trust and encourage engagement while collecting rich textual data (Hesse-Biber and Griffin, 2013). Studies found that researchers reported being able to respond to nonverbal cues like facial expressions and gestures, which improved engagement and built trust while promoting natural and relaxed conversations. Researchers also reflected that their ability to view and respond to a participant's body language improved when the participant was familiar with videoconferencing technology, allowing for the generation of rich qualitative data.
However, there were also concerns about mistrust because, in face-to-face communication, lack of eye contact is sometimes seen as a sign of distrust or deception. Bekkering (2004) created a method to compare trust perceptions of email messages, voice messages, and video messages recorded at different angles. Regardless of the angle at which the video was recorded, all participants in the study saw and heard the same messages. The study also sought to determine whether participants' behaviour would differ due to a higher or lower level of perceived trust, and found that the level of trust was determined by the richness of the communication channel, i.e., a combination of audio and video components.
Sometimes videoconferencing may not be appropriate for sensitive research studies, or where a participant expresses emotion and the researcher cannot comfort and build appropriate rapport with the participant (Irani, 2018). However, some participants might find online interviewing an advantage in studies on sensitive topics. For example, Mabragana et al. (2013) used videoconferencing to obtain a sexual history from participants in a vaginal microbicide study; participants reported that they would have felt too embarrassed to discuss their sexual behaviour face to face with the interviewer (Mabragana et al., 2013).
Another significant aspect is the nonverbal cue of eye contact, which is absent in non-real-time online interviews. Eye contact may not be visible during videoconferencing interviews due to the camera location. Vertegaal et al. (2002) attempted to resolve this by developing the Gaze-2 system, in which a tracker selects the camera closest to the eye position. Using this system, the current speaker is viewed full-frontal, and the images of the listeners are rotated towards the speaker's image. Videoconferencing also limits the ability to fully assess the participant's environment, which may sometimes be crucial during the analysis phase.
Zoom as a data collection tool.
Zoom is a web-based conferencing tool with a local desktop client and a mobile application that allows users to meet online (Maldow, 2013). Zoom users can record sessions, engage in projects, and share each other's screens using an easy platform. In addition, Zoom offers quality video, audio, and wireless-sharing performance (Keanu, no date).
Security while using the application to undertake research is a priority. No meetings were allowed to begin before the host (researcher) arrived, and all meetings were one on one and scheduled with the participant's approval. Due to the Covid pandemic, Zoom updated its features to improve its security, for example by having account holders create passwords for all meetings and giving hosts the ability to control meetings and remove unwanted guests. A further key feature was end-to-end encryption, which makes all communication between the user and the other people in the chat or session available only to those parties (Zoom Privacy Statement, 2021).
Zoom categorically states that it has no access to meetings, sessions, or interviews, including all audio and video files, except when authorised by the account holder or required for legal, safety, or security reasons. Therefore, only the account holder and any authorised third party have access to data used on the Zoom platform (Zoom Privacy Statement, 2021). Additionally, for international data transfers (as was the case with this study), Zoom operates globally, meaning data can be moved, stored, or processed outside of the country where it was collected.
Confidentiality is an essential aspect of all research, and the use of Zoom as a tool adds another consideration. To help mitigate data protection issues, an account was created specifically for the research study; on completion of the research, the account is closed and all data removed. Based on these benefits and data security features, Zoom videoconferencing was used as a tool to generate data from study participants resident in Nigeria about the sensitive topic of mental health.
Archibald et al. (2019) explain that, beyond the advantages of VoIP technologies such as Zoom in comparison to face-to-face interviews, the outcome of these experiences rests on the researcher's subjective assessment of the quality of the interview data generated. Reflecting on Zoom as a platform for qualitative data generation can therefore guide researchers' decisions and help develop strategies to overcome contact and platform barriers, supporting positive relationships between researchers and participants.
Zoom video conferencing has been successfully used to generate qualitative data, supervise work teams, and provide guidance and supervision to junior medical officers (Archibald et al., 2019; Bolle et al., 2009; Cameron, Ray, and Sabesan, 2015).  A recent study (Archibald et al., 2019) asked researchers and research participants about their Zoom experiences as a research tool. On the whole, respondents were optimistic about their experiences and recommended Zoom as an alternative to face-to-face, telephone, and other videoconferencing service platforms. Furthermore, the study suggests using Zoom as a qualitative data collection tool due to its ease of use, cost-effectiveness, data management features, and security options (Archibald et al., 2019).
Regarding qualitative interviews, a study by Gray et al. (2020) found that Zoom contributes to high-quality and in-depth qualitative interviews when face-to-face interviews are not possible. The tool was also developed to overcome long distances and promote international communication, thereby reducing travel costs. Above all, participants in the study described using Zoom as a positive experience, with benefits such as convenience, the ability to discuss personal issues, accessibility from electronic devices, and saving time, especially where no travel is needed. This is particularly useful during recent times, when the COVID-19 pandemic has restricted face-to-face contact, and has enabled qualitative researchers to conduct safe and secure interviews (Davis et al., 2020).
Using Zoom in qualitative research: an example
METHODOLOGY
The research is a qualitative phenomenological study that was recently conducted to explore the experiences of mental health among internal migrants in three states in Nigeria. Interviews were used as the method of generating data for the study. Serrant-Green (2005) describes interviews as a way of encouraging researcher and participant involvement and an inclusive approach to exploring experiences. One-on-one, face-to-face interviews were the recommended method as they allowed participants to describe their experiences, especially on a sensitive topic like mental health.
Therefore, open-ended and minimally structured interviews were chosen because this method elicits first-person verbal descriptions (Huberman and Miles, 2002). The approach considered participants' comfort, convenience, and available resources, including the interview venue (Zoom videoconferencing). There were no language issues, as all the participants were educated and spoke English, which is the official language of Nigeria.
All participants were informed of and agreed to the use of Zoom videoconferencing to conduct the interviews. This allowed the interviews to take place without traveling to Nigeria, which was initially considered; the use of Zoom also enabled the project to be successfully undertaken following the emergence of the recent worldwide pandemic. The participants were resident in Nigeria, while the researcher was resident in Sheffield, UK. The questions were judged (supported by the supervision team) to be suitable for an online interview using Zoom videoconferencing. The video interviews lasted between 30 and 45 minutes on average.
LIMITATIONS
One limitation of videoconferencing technology like Zoom is that lack of access to computers might create a barrier for potential participants. However, the participants in this study were educated and conversant with computers and the internet. Nigeria has 99.05 million internet users; fifty-four percent access the internet daily, 12 percent have active social media accounts, and individuals spend an average of 3 hours 17 minutes on social media (Clement, 2019; Udodiong, 2019).
Another limitation is technical difficulty due to poor internet connection. To mitigate connection issues, the researcher ensured that participants familiarised themselves with the tool by discussing the tool checklist before the proposed interview. Collecting demographic data during a preliminary video conversation before the main interviews (appendix 1) also helped participants become familiar with the tool and allowed its uninterrupted use during the interviews.
However, due to poor connectivity issues in some areas in Nigeria, a few interviews had to be rescheduled. For the same reason, a few participants (n=5) opted for the telephone option of the application. Therefore, it would also be helpful to explore the impact of digital literacy on qualitative data generation.
All participants were literate and educated, spoke English fluently, and were resident in urban centres. People with less education and lower English proficiency, or those resident in rural areas with poor internet connectivity, are likely to have a different outcome and would most likely prefer the traditional face-to-face mode. Research to determine the suitability of Zoom for various users is necessary to create specific strategies and improve participation and digital literacy.
Future studies should determine the degree of consensus or dissent about the merits or demerits of using Zoom video conferencing among both researchers and study participants. This will involve differences in data quality, sampling, and recruitment. Finally, future research should also encourage the improvement of future applications of video conferencing technology in areas of context, user satisfaction, and data quality and integrity (Archibald et al., 2019).
CHECKLIST FOR COMMON PROBLEMS WITH ZOOM
Some of the common issues are:
Video/ Camera not working
If the participant's camera is not showing up in Zoom settings or not showing the video:
Test your video to confirm that the correct camera is selected and adjust video settings.
Test the video before the meeting by clicking settings, click the video tab; a preview of the camera is shown, and can choose a different camera.
When in meeting:
Click the arrow next to start video/ stop video
Select video settings- Zoom will display your camera’s video and settings.
If you don’t see your camera’s video, click the drop-down menu and select another camera.
Audio is not working
Speaker issues: if you cannot hear the other speaker in a zoom meeting, follow these steps:
Click ‘Test speaker/microphone’; when the new window pops up, click ‘Test speaker’. If you hear a test sound, the speaker is working; if not, the wrong output device is selected.
Echoes sound: This occurs due to multiple devices in the room joining the same meeting. ‘mute your microphone and turn down the speaker volume.
The image is skipping or shaking
This usually happens when the internet connection is poor and lacks the bandwidth to deliver the video stream. You can diagnose the issue by running a speed test. If the problem occurs on a mobile device or an older computer, it may instead be due to inadequate memory or CPU; to resolve this, close other applications to devote more resources to the meeting.
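For researchers comfortable with a little scripting, a speed-test result can be interpreted with a simple rule of thumb before a scheduled interview. The sketch below is illustrative only: the Mbps thresholds are assumptions chosen for the example, not official Zoom requirements, and the function name is hypothetical.

```python
# Rough interpretation of a speed-test result for group video calling.
# The Mbps thresholds are illustrative assumptions, not official Zoom
# requirements; consult Zoom's own documentation for current figures.

def connection_quality(download_mbps: float, upload_mbps: float) -> str:
    """Classify a measured speed-test result for a video interview."""
    if download_mbps >= 3.0 and upload_mbps >= 3.0:
        return "good"      # HD group video should be smooth
    if download_mbps >= 1.0 and upload_mbps >= 1.0:
        return "marginal"  # expect occasional skipping or frozen frames
    return "poor"          # consider audio-only or a wired connection

print(connection_quality(25.0, 10.0))  # good
print(connection_quality(1.5, 1.2))    # marginal
print(connection_quality(0.4, 0.3))    # poor
```

A "marginal" or "poor" result before the session is a cue to fall back to the telephone option, as some participants in this study did.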
Wireless (Wi-Fi) Connection Issues
If you are experiencing latency, a frozen screen, poor-quality audio, or meetings getting disconnected while using a home or non-enterprise Wi-Fi connection, try the following:
Watch a video about Wi-Fi connectivity
Check your Internet bandwidth using an online speed test
Try connecting directly via a wired connection (if your internet router has wired ports)
Try bringing your computer or mobile device closer to the Wi-Fi router or access point in your home or office
Upgrade your Wi-Fi router firmware. Check your Wi-Fi router vendor support site for firmware upgrade availability.
Adapted from Zoom Blog (2013) and Zoom Help Centre (2019).
CONCLUSION
The availability and advancement of communication technology have significant implications for qualitative research (Irani, 2018). The new and continued use of online communication technologies like Zoom has important consequences for research practice and data-generation tools (Archibald et al., 2019). Given Zoom's flexibility and convenience, it and similar technologies can contribute significantly to qualitative research while yielding rich, high-quality data (Archibald et al., 2019).
Although videoconferencing research is not meant to replace traditional interview methods, it can be a valuable cost and time-saving tool in qualitative research. Existing research has shown that Zoom is a reliable and effective tool in collecting qualitative data, even on sensitive topics like mental health (Mabragana et al., 2013). Although there are some technical limitations in using Zoom, these can be overcome by familiarization with the platform and training.
Research has found Zoom a promising tool that can complement and extend qualitative researchers' options for generating rich data (Archibald et al., 2019). It is particularly useful in the health sector for capturing a diversity of participants' experiences. Archibald et al. (2019) recommend that researchers include an evaluation of both participant and researcher experiences.
In this study, the generation of rich data from the participants' lived experiences, together with the researcher's objective assessment, made the tool appropriate for the study.
For more information regarding this article, visit OAJBGSR:
https://biogenericpublishers.com/pdf/JBGSR.MS.ID.00238.pdf https://biogenericpublishers.com/jbgsr-ms-id-00238-text/