
To avoid meshing the tile perforations in detail, the momentum-source simplification presented by Abdelmaksoud et al. [8] was used in the present study. The method adds a body-force field above the tiles to correct the momentum deficit of flow through a fully open tile so that it accurately resembles a perforated tile. The required body force is given by Eq. (1), where V is the body-force computational volume, ρ is the air density, Q is the fully open tile air flow rate, σ is the opening ratio of the perforated tile, and the remaining symbol in Eq. (1) denotes the tile's fully open area. A 0.534×0.534×0.01 m computational volume above each perforated tile was considered and the body force was calculated using Eq. (1). The flow through each server is modeled by specifying a fixed mass flow rate in and out of the server. A typical perforation factor of 0.35 was assumed for each rack. Similar to the perforated tiles, a momentum source was considered behind each server, with a 0.610×0.500×0.0763 m computational volume for the server body force. Gambit software was used to mesh the model with tetrahedral elements, with refined mesh elements around the servers, perforated tiles and room outlets.
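Since Eq. (1) is not reproduced in this excerpt, the following sketch assumes the momentum deficit equals the difference between the jet momentum flux through the perforations and through the fully open tile area, spread over the source volume V. The function name and the formula are assumptions for illustration, not the paper's verbatim equation.

```python
def tile_momentum_source(rho, Q, sigma, A_open, V):
    """Assumed momentum source (N/m^3) added above a fully open tile to
    mimic a perforated tile: the deficit F = rho*Q**2/A_open*(1/sigma - 1)
    (perforation jet momentum minus fully open jet momentum) divided by
    the body-force computational volume V."""
    F = rho * Q**2 / A_open * (1.0 / sigma - 1.0)
    return F / V

# Example: 0.534 m x 0.534 m tile with a 0.01 m thick source volume,
# as in the study; sigma = 0.25 is an illustrative opening ratio.
A = 0.534 * 0.534
V = A * 0.01
S = tile_momentum_source(rho=1.2, Q=0.5, sigma=0.25, A_open=A, V=V)
```

A fully open tile (sigma = 1) yields a zero source term, as expected, since no correction is then needed.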

Mathematical formulation

Grid independence study and model validation
A grid refinement study was conducted to find the minimum grid size that does not affect the numerical solution and to ensure grid-independent results. The overall performance parameters (SHI, RHI and RTI) were calculated from the results of different mesh sizes to identify the grid-independent size. The results of the grid independence study are presented in Table 2.
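A sketch of how the performance indices could be computed from simulated temperatures, assuming the commonly used formulations of SHI/RHI (after Sharma et al.) and RTI; the paper's exact definitions may differ.

```python
def shi(t_rack_in, t_rack_out, t_supply):
    """Supply Heat Index: fraction of the heat picked up by the air
    before it reaches the rack inlet (recirculation indicator)."""
    return (t_rack_in - t_supply) / (t_rack_out - t_supply)

def rhi(t_rack_in, t_rack_out, t_supply):
    """Return Heat Index; by definition SHI + RHI = 1."""
    return 1.0 - shi(t_rack_in, t_rack_out, t_supply)

def rti(t_return, t_supply, dt_equip):
    """Return Temperature Index (%): 100% indicates neither
    recirculation nor bypass; dt_equip is the equipment delta-T."""
    return 100.0 * (t_return - t_supply) / dt_equip

# Illustrative temperatures (deg C): supply 15, rack inlet 17, rack outlet 35
s = shi(17.0, 35.0, 15.0)
r = rti(25.0, 15.0, 10.0)
```

With these numbers SHI is small (little recirculation) and RTI is exactly 100%, the balanced case.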
To validate the present numerical model, the results were compared with those of the numerical work of Fernando [28]. The temperature distribution obtained from the present simulation was compared with the values obtained by Fernando [28] at 40 different points on an XY plane above the servers' tops in each rack, as shown in Fig. 4. The comparison is illustrated in Fig. 5. As shown in the figure, fair agreement exists between the present model and the results of Fernando [28].

Results and discussion

Numerical investigations of the performance of air-cooled up-flow data centers using perforated air tiles have been carried out for different rack and CRAC layouts and different configurations of side and roof-top cold aisle containments. The temperature distribution, the air flow characteristics (particularly air recirculation and bypass) and the thermal management of the data centers are evaluated in terms of the measurable overall performance parameters (RTI, SHI, RHI and RCI). The results showed that:

The operation of thermal and engineering devices is accompanied by irreversible losses that increase entropy and decrease system efficiency. It is therefore important to identify the factors that minimize entropy generation and maximize flow system efficiency. To analyze the irreversibilities in the form of entropy generation, the second law of thermodynamics is applied. The factors responsible for irreversibility are heat transfer across finite temperature gradients, the characteristics of convective heat transfer and viscous dissipation. Many energy-related applications, such as the cooling of modern electronic systems, solar power collectors and geothermal energy systems, are strongly influenced by entropy generation. Several investigations [1–3] were carried out on entropy generation under various flow configurations.
Fluid flow and heat transfer inside channels with simple geometry and different boundary conditions is one of the fundamental areas of research in engineering, with a wide range of applications such as thermal insulation engineering, electronics cooling and water movement in geothermal reservoirs. A substantial literature on fluid flow, heat transfer and entropy generation in various channels has recently developed. Havzali et al. [4] investigated the entropy generation of incompressible, viscous fluid flow through an inclined channel with isothermal boundary conditions. They observed that the entropy generated in a small section dominates the total entropy produced in the entire system. Komurgoz et al. [5] discussed the entropy generation of porous-fluid layers contained in an inclined channel using the differential transform method. Karamallah et al. [6] studied the effect of differentially heated isothermal walls on the entropy generation of a vertical square channel saturated with porous media. Liu and Lo [7] performed a numerical analysis of the entropy generation within mixed-convection magnetohydrodynamic (MHD) flow in a parallel-plate vertical channel. They observed that the minimum entropy generation number and the maximum Bejan number occur in the centerline region of the channel under asymmetric heating conditions. Makinde and Eegunjobi [8] addressed the irreversibility analysis of the flow of a couple stress fluid through a porous medium. Damseh et al. [9] studied the effect of a transverse magnetic field on entropy generation due to laminar forced convection flow in a channel. Das and Jana [10] investigated the combined effects of Navier slip, suction/injection and magnetic field on entropy generation in a porous channel, considering a constant pressure gradient. Chen and Liu [11] numerically investigated the viscous dissipation effect on entropy generation due to mixed-convection nanofluid flow within a vertical channel.
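As a concrete illustration of the entropy-generation bookkeeping discussed above, a minimal sketch for a single point in a 1-D channel cross-section, assuming the standard volumetric form (a thermal term plus a viscous dissipation term) and the usual definition of the Bejan number; the property values are illustrative.

```python
def local_entropy_generation(k, mu, T0, dT_dy, du_dy):
    """Volumetric entropy generation rate for 1-D convective channel flow:
    thermal part k*(dT/dy)^2 / T0^2 plus viscous part mu*(du/dy)^2 / T0,
    evaluated at a reference temperature T0."""
    s_thermal = k * dT_dy**2 / T0**2
    s_viscous = mu * du_dy**2 / T0
    return s_thermal, s_viscous

def bejan_number(s_thermal, s_viscous):
    """Bejan number: fraction of irreversibility due to heat transfer.
    Be -> 1 means heat-transfer dominated, Be -> 0 friction dominated."""
    return s_thermal / (s_thermal + s_viscous)

# Water-like properties at a point near a heated wall (illustrative values)
s_t, s_v = local_entropy_generation(k=0.6, mu=1e-3, T0=300.0, dT_dy=50.0, du_dy=10.0)
be = bejan_number(s_t, s_v)
```

For these values the thermal term dominates, so the Bejan number is close to one, the regime several of the cited studies identify near heated walls.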


Bird exposure may also induce sensitization to avian serum albumins present in feathers as well as in egg yolk, which in turn may result in a secondary egg allergy known as bird-egg syndrome. In agreement with two previous reports mentioning double sensitization to millet and bird feathers, two of the eight bird keepers in our study had concomitant bird-egg syndrome, supporting the view that pre-existing sensitization to millet allergens facilitates subsequent co-sensitization to avian serum albumins (or vice versa). Accordingly, potential egg allergy should always be addressed during the work-up of millet-allergic subjects.
Current knowledge concerning cross-reactivity between millet and other cereals is inconsistent and rests upon a very limited number of patients. While some studies observed broad cross-reactivity, others did not. These divergent findings may be partly due to the use of distinct and sometimes poorly validated diagnostics, but also to experimental data that are often incomplete, especially with regard to rice and corn. According to the present study, cross-sensitization to other cereals appears to be extensive in millet allergy, since most of our patients also showed positive ImmunoCAP results for rice, wheat and corn, the latter often revealing binding scores comparable to those for millet. This fits well with recent genetic studies in grasses demonstrating a particularly close phylogenetic relationship between millets and corn. Importantly, this cross-reactivity was clinically relevant in more than half of our patients. Though symptoms were mostly mild, all of these patients permanently avoided eating these cereals. It should also be emphasized that all patients were evidently CCD-negative, confirming that the observed in-vitro reactivity with millet and other cereals is protein-specific and not due to clinically irrelevant antibodies against CCDs.

In addition to their conventional purposes, namely the definitive diagnosis of a food allergy or the confirmation of tolerance to an allergen, oral food challenges (OFCs) are performed to determine the threshold dose of food allergens for risk assessment or for guidance on minimal avoidance. More severe reactions tend to be provoked in these OFC settings. We have already reported a model for predicting severe allergic reactions provoked in boiled egg challenges. In the present study, we developed a model for predicting a severe reaction to a milk OFC, via a method similar to that used for the egg challenge.
An open milk OFC was performed in accordance with the Japanese Guideline for Food Allergy 2014 throughout the study period. Raw cow's milk was administered in incremental doses (typically 4 to 5 doses drawn from 0.2-0.5-1-2-5-10-20 ml) every 30–40 min. The challenge was stopped if the patient exhibited an objective allergic reaction corresponding to ≥5 points on the total score (TS) of the Anaphylaxis Scoring Aichi (ASCA). To quantify the overall severity of the OFC result, the TS/Pro was applied, calculated by dividing the TS by the cumulative protein dose (Pro) of milk (3.3% of whole milk) administered before the appearance of symptoms. Patients in the development dataset were divided into two groups (severe and non-severe cases) based on the median TS/Pro of the development dataset.
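The TS/Pro computation described above can be sketched directly. The 0.033 g/ml protein fraction follows from the stated 3.3% protein content of whole milk; the dose values and score in the example are illustrative, not patient data.

```python
def ts_pro(total_score, cumulative_ml, protein_fraction=0.033):
    """TS/Pro: total ASCA score divided by the cumulative milk protein (g)
    ingested before symptom onset; whole milk is 3.3 g protein per 100 ml."""
    protein_g = cumulative_ml * protein_fraction
    return total_score / protein_g

# Illustrative case: symptoms appeared after the 0.2, 0.5 and 1 ml doses,
# with a total ASCA score of 5.
severity = ts_pro(5, 0.2 + 0.5 + 1.0)
```

A patient reaching the same score only after a larger cumulative dose gets a proportionally lower TS/Pro, which is what lets the median split separate severe from non-severe reactions.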

Thalidomide (Thal) has been shown to be effective in treating multiple myeloma (MM), erythema nodosum leprosum (ENL), and various autoimmune diseases through various mechanisms such as inhibition of growth factor secretion, activation of cytotoxic T and natural killer cells, and suppression of angiogenesis. One report stated that leukocytoclastic vasculitis (LCV) associated with cryoglobulinemia was successfully treated with Thal. In contrast, Thal has been reported to cause various adverse effects, including LCV. We herein describe for the first time a patient who developed LCV with eosinophilic infiltration during Thal treatment for MM.


Topology selection criteria
The boost and buck converters are the most promising topologies for cascaded systems, as mentioned in [22]. However, the choice of converter topology depends on whether the cascaded system connection is series or parallel. For example, the boost topology is inappropriate for series connection: the series connection mandates equal output currents, while in the boost cell Iin>Iout is an operational constraint. Therefore, if a PV generator is shaded, its current drops significantly and so does the power of the entire system. The boost topology can work satisfactorily in parallel-cascaded systems. In the parallel operation of a boost-based cascaded system, the system output current is the summation of the output currents of the parallel modules. Thus, if a PV generator encounters shading, its current drops and hence the output current of the faulty module only, while the remaining modules continue to function normally; the system output power/current, albeit reduced, is maintained, Fig. 1.
The buck topology suits the series connection of cascaded PV systems. Each channel, a PV generator coupled to a buck converter, is entirely decoupled from the remaining channels. Furthermore, in buck topologies the semiconductor arrangement ensures continuous system operation under a fault disabling a PV generator, as the freewheeling diode provides an alternative path for the current under PV generator failure. This scenario is shown in Fig. 2. Therefore, the system advised here is based on a series connection of buck converters coupled to PV generators, Fig. 3.
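To illustrate why the series buck cascade tolerates shading, a minimal sketch, assuming lossless converters and a stiff DC bus: with a common string current, each channel's output voltage settles in proportion to the power its PV generator delivers, so a shaded channel sheds voltage, not string current.

```python
def channel_output_voltages(pv_powers_w, bus_voltage_v):
    """Series cascade of buck channels: the string current is common to
    all channels, so (assuming lossless converters) each channel's output
    voltage is its PV power divided by the string current, and the
    voltages sum to the bus voltage."""
    total = sum(pv_powers_w)
    i_string = total / bus_voltage_v
    return [p / i_string for p in pv_powers_w], i_string

# Three channels on a 400 V bus; the third PV generator is shaded
# down to 80 W while the others deliver their rated 200 W.
v_out, i_s = channel_output_voltages([200.0, 200.0, 80.0], 400.0)
```

The shaded channel simply operates at a lower output voltage while the healthy channels keep exporting full power, matching the decoupling argument above.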

System layout
Fig. 3 shows the system under concern. It consists of three PV generators, each coupled to a half-bridge buck converter. The unit composed of a PV generator, buck converter and the associated controller is defined here as a channel (see Fig. 4).
The half-bridge is used in the system under concern due to its availability, modularity and ease of packaging. Moreover, the half-bridge allows the deployment of a modulation strategy that reduces the generated harmonics and the filter size without compromising the efficiency significantly [30]. According to [30], Fig. 7.30, the half-bridge permits the application of unipolar PWM switching, which cannot be realized with the basic buck cell. This modulation strategy not only reduces the ripple magnitude in the output voltage but also pushes the ripple to twice the switching frequency, which optimizes the filter's volumetric dimensions. However, the half-bridge may have higher conduction and switching losses than the basic cell, which could reduce its efficiency slightly [30]. The half-bridge buck cell also has bidirectional power-flow capability, which could be utilized to interface different types of energy storage devices to the DC bus. Such energy storage elements are usually deployed in cascaded systems to optimize system performance and ensure uninterruptible system operation.

Converter performance
The performance of the cascaded PV system under static, dynamic and failure operating conditions is thoroughly investigated in the following. Innovative remedial strategies for anticipated fault scenarios of the multi-level cascaded PV system are advised. This section is divided into three distinct sub-sections, as follows:

The operation of the multi-level cascaded PV system under different normal/abnormal operating scenarios was thoroughly investigated. The main conclusions extracted from the article are as follows:

Data clustering [1] is a discovery process that groups a set of objects into disjoint clusters such that intra-cluster similarity is maximized and inter-cluster similarity is minimized. The resulting clusters can explain characteristics of the underlying data distribution, which can serve as a foundation for other data mining and analysis techniques. Data clustering is used in a wide range of applications. For example, in marketing, clustering is used to find groups of customers with similar behaviors [2]. In biology, it is used to find similar plants or animals given their features [3]. In search engines, clustering is used to group similar documents to facilitate user search and topic discovery [4], while in networking, clustering is used in the analysis and classification of network traffic [5]. Given the range of clustering applications, there are challenges associated with the clustering process. These include the choice of a suitable clustering algorithm, the choice of the most representative object features and the choice of an appropriate distance/similarity measure [6]. Other challenges include dealing with outliers [7], the ability to interpret clustering results (selection of cluster representatives and cluster summarization) [8] and dealing with huge numbers of dimensions and distributed data [9].
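As a minimal illustration of the clustering objective described above (maximize intra-cluster similarity, minimize inter-cluster similarity), a plain k-means sketch; the algorithm choice here is illustrative, not a method from the cited works.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: alternately assign each point to its nearest
    centroid (squared Euclidean distance) and recompute each centroid
    as the mean of its assigned points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        for j, cl in enumerate(clusters):
            if cl:  # keep the old centroid if a cluster emptied
                centroids[j] = tuple(sum(x) / len(cl) for x in zip(*cl))
    return centroids, clusters

# Two well-separated groups of 2-D points
pts = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
cents, cls = kmeans(pts, 2)
```

On this toy data the two nearby pairs end up in separate clusters, which is exactly the intra-/inter-cluster trade-off the definition describes.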


Figs. 2a–2c illustrate the influence of the porosity (K) on the boundary layer variables u, θ and ϕ, respectively. Increasing the porosity of the porous medium clearly enhances the flow velocity (Fig. 2a), i.e. accelerates the flow. This effect is accentuated close to the surface, where a peak in the velocity profile arises. With further distance transverse to the surface, the velocity profiles all decay into the free stream. An increased porosity corresponds to a reduced presence of matrix fibers in the flow regime, which therefore offers a lower resistance to the flow and in turn boosts the momentum. The time t required to attain the steady state also falls considerably; as such, the steady state is achieved faster for higher values of K. Conversely, with increasing porosity, the temperature profile (Fig. 2b) in the regime is found to decrease, i.e. the boundary layer is cooled. A reduction in the volume of solid particles in the medium implies a lower contribution via thermal conduction, which serves to decrease the fluid temperature. As with the velocity field (Fig. 2a), the time required to attain the steady-state condition decreases substantially with a positive increase in K. In Fig. 2c a similar response is observed for the concentration field as for the temperature distributions: the values of ϕ decrease continuously with positive increases in K, and the steady state is reached progressively faster.
Figs. 3a–3c present the effects of the chemical reaction parameter on the velocity (u), temperature (θ) and concentration (ϕ) profiles, respectively. Physically, the mass transfer Eq. (13) can be adjusted to represent a destructive chemical reaction (endothermic, i.e. heat is absorbed) when the parameter is positive, or a generative chemical reaction (exothermic, i.e. heat is generated) when it is negative. Endothermic reactions cannot occur spontaneously; work must be done for them to occur. As endothermic reactions absorb energy, a temperature drop is measured during the reaction; they are characterized by a positive heat flow (into the reaction) and an increase in enthalpy. Exothermic reactions may occur spontaneously and result in higher randomness or entropy of the system; they are denoted by a negative heat flow (heat is lost to the surroundings) and a decrease in enthalpy. There is a clear increase in the velocity values at the wall as the chemical reaction parameter rises from −1.5 through −1.0, 0.0 and 1.0 to 1.5, i.e. the flow is accelerated throughout the boundary layer. Irrespective of the value of the parameter or of t, it is important to highlight that there is no flow reversal, i.e. no back flow in the boundary layer regime: the velocity u sustains positive values throughout the flow regime. With an increase in the parameter, the time taken to attain the steady-state condition does not follow a monotonic trend: for a parameter value of −1.5, t=10.51, decreasing to 9.27 at −1.0, then increasing to 9.67 at 0.0, to 10.71 at 1.0 and finally to 11.01 at 1.5. The steady-state condition is therefore achieved fastest at −1.0. Conversely, with an increase in the parameter, the temperature θ, as shown in Fig. 3b, increases continuously through the boundary layer.
Again, the steady-state condition is attained fastest at −1.0 and slowest at 1.5. Fig. 3c indicates that a rise in the chemical reaction parameter strongly suppresses the concentration levels in the boundary layer regime. All profiles decay monotonically from the surface (wall) to the free stream. The concentration boundary layer thickness is therefore considerably greater at −1.5 than at 1.5.
In Figs. 4a–4c, the influence of the radiation parameter on the steady-state velocity, temperature and concentration distributions with distance transverse to the surface (i.e. with the y-coordinate) is presented, respectively. The parameter defines the ratio of the thermal radiation contribution to the thermal conduction contribution. At a value of 1, the thermal radiation and thermal conduction contributions are equivalent; above 1 the thermal radiation effect is dominant over thermal conduction, and vice versa below 1. An increase in the parameter from 0 (non-radiating) through 0.5 (thermal conduction dominant over radiation), 1.0, 3.0 and 5.0 to 8.0 (radiation dominant over thermal conduction) causes a significant decrease in the velocity with distance into the boundary layer, i.e. decelerates the flow. The velocities in all cases ascend from the surface, peak close to the wall and then decay smoothly to zero in the free stream. It is also noted that with increasing values of the parameter, the time taken to attain the steady-state condition is reduced. It is therefore concluded that the thermal radiation flux has a de-stabilizing effect on the transient flow regime. This is important in polymeric and other industrial flow processes, since it shows that the presence of thermal radiation, while decreasing temperatures, will affect flow control from the surface into the boundary layer regime. As expected, the temperature values are also significantly reduced with an increase in the parameter, since stronger radiation transports heat away from the regime. All profiles show monotonic decays from the wall to the free stream. The maximum reduction in temperature is witnessed relatively close to the surface, since thermal conduction effects are prominent closer to the surface rather than further into the free stream. The concentration (ϕ) profile is conversely boosted with an increase in the parameter (i.e. an increase in the thermal radiation contribution). The parameter does not arise in the species conservation Eq. (13); the concentration field is therefore influenced only indirectly, via the coupling of the energy Eq. (12) with the momentum Eq. (11), the latter also being coupled with the convective acceleration terms in the species Eq. (13). As with the velocity and temperature fields, an increase in the parameter decreases the time that elapses before the steady-state condition is achieved. Therefore, while greater thermal radiation augments the diffusion of species in the regime, it also brings the system to the steady state more quickly.


The sharing function is calculated using the objective function value as the distance metric: Sh(d) = 1 − (d/σshare)^α if d < σshare, and Sh(d) = 0 otherwise. The parameter d is the distance between any two solutions in the population and σshare is the sharing parameter, which signifies the maximum distance between two solutions before they can be considered to be in the same niche. The function takes a value in [0,1] depending on the values of d and σshare. If α=1 is used, the effect reduces linearly from one to zero.
The normalized distance between any two solutions i and j can be calculated as d(i,j) = sqrt(Σk [(fk(i) − fk(j))/(fk,max − fk,min)]²), where fk,max and fk,min are the maximum and minimum values of the kth objective function.
As the fitness of each solution is reduced by dividing the assigned fitness by its niche count, the fitness values are scaled so as to keep the average fitness of the solutions in a rank the same as before sharing. This procedure continues until all ranks are processed. Thereafter, tournament selection, BLX-α crossover [29] and non-uniform mutation operators are applied to create a new population.
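A minimal sketch of the sharing mechanism, assuming the standard Goldberg-Richardson form Sh(d) = 1 − (d/σshare)^α for d below the sharing radius and zero beyond it; names are illustrative.

```python
def sharing_value(d, sigma_share, alpha=1.0):
    """Sharing function: 1 at d = 0, falling to 0 at d = sigma_share,
    and exactly 0 for any distance beyond the sharing radius."""
    if d < sigma_share:
        return 1.0 - (d / sigma_share) ** alpha
    return 0.0

def shared_fitness(fitness, distances, sigma_share):
    """Divide the raw fitness by the niche count, i.e. the sum of the
    sharing values to every solution in the rank (including the
    solution itself, which contributes Sh(0) = 1)."""
    niche_count = sum(sharing_value(d, sigma_share) for d in distances)
    return fitness / niche_count

# A solution at distances 0 (itself), 0.25 and 0.6 from rank members,
# with sharing radius 0.5 and raw fitness 10:
f_shared = shared_fitness(10.0, [0.0, 0.25, 0.6], 0.5)
```

Crowded solutions accumulate a larger niche count and are penalized more, which is what spreads the population across niches before the scaling step described above.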

Results and discussion
The proposed voltage stability enhancement technique is implemented on the IEEE 30-bus and IEEE 57-bus test systems and the results are presented. The generators are modeled as PV buses with reactive power limits and the loads are represented as constant PQ loads. The power system is stressed through (N−1) contingency analysis and load increase. The (N−1) contingency analysis considers the outage of lines and selects the worst contingency based on the voltage stability index, Lmax.
The IEEE 30-bus system [35] consists of 6 generators, 24 loads, 4 transformers with off-nominal tap ratios and 41 transmission lines. The lower voltage limit is 0.95 pu for all buses, while the upper limits are 1.1 pu for generator buses and 1.05 pu for the remaining buses. The active and reactive system loads are 283.4 MW and 126.2 MVAR, respectively. As a preliminary computation, the (N−1) contingency analysis is carried out; according to its results, the outage of line 28–27 is the most severe contingency in the IEEE 30-bus system.
The IEEE 57-bus system [36] was chosen as the second test system to demonstrate the method's usefulness on a larger system. It has 4 generators, 3 synchronous condensers, 50 load buses, 80 transmission lines and 16 tap-changing transformers. From the contingency analysis, the outage of line 46–47 is found to be the most severe case, with an Lmax value of 0.4598. The simulation studies were carried out on a Pentium IV 2.4 GHz system in the MATLAB 7.1 environment.
In this case, power generation rescheduling is carried out to further reduce power flows and hence increase system security and reliability. The two objective functions considered here are the minimization of the L-index and the minimization of the deviation of the real power control variables from the base case to the contingency state. The generation cost function is lower than in the approach without generation rescheduling, giving a more economical corrective control action. In the first approach of corrective control including generation rescheduling, the decision variables are the real power settings of the generators in the post-contingency state, the voltage magnitudes of the generators, the tap settings of the transformers and the reactive power settings of the capacitor banks. The transformer tap range is discretized with a step size of 0.025, and each capacitor is in the range of 0–5 MVAR. The best solutions of the L-index and the control-variable adjustment, optimized individually for the IEEE 30-bus system under the line 28–27 outage at 125% loading, are presented in Table 1. Further, the best compromise solution among the overall nondominated solutions, obtained from the fuzzy decision-making strategy, is also presented. This demonstrates the effectiveness of the proposed approach, as the best solutions of both objectives along with a set of nondominated solutions can be obtained in a single run. Fig. 4 depicts the Pareto-optimal set of case 1 for the IEEE 30-bus system. It can be seen that MOGA is efficient in terms of both optimality and the distribution of solutions on the trade-off surface. From the best compromise solution, it is worth mentioning that the inclusion of power generation rescheduling in the VSC-OPF problem reduces the production cost by 16.5% in the severe contingency state and the L-index by 35% compared with the pre-optimization values.
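The fuzzy decision-making step used to pick the best compromise solution from a nondominated set can be sketched as follows, assuming the commonly used linear membership function for minimization objectives; the front values below are hypothetical, not the paper's Table 1 data.

```python
def best_compromise(pareto, f_min, f_max):
    """Fuzzy best-compromise selection over a Pareto front of
    minimization objectives: each objective gets a linear membership
    (1 at its best value, 0 at its worst); the solution with the
    largest total membership is chosen."""
    def membership(f, lo, hi):
        if f <= lo:
            return 1.0
        if f >= hi:
            return 0.0
        return (hi - f) / (hi - lo)

    totals = [sum(membership(f, lo, hi) for f, lo, hi in zip(sol, f_min, f_max))
              for sol in pareto]
    return totals.index(max(totals))

# Hypothetical two-objective front: (L-index, control adjustment)
front = [(0.10, 0.9), (0.12, 0.5), (0.15, 0.2)]
idx = best_compromise(front, f_min=(0.10, 0.2), f_max=(0.15, 0.9))
```

The extreme solutions each score 1.0 (perfect on one objective, zero on the other), so the middle solution, good on both, is selected as the compromise.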


This paper evolved from a master's thesis, “Crime and urban planning in Egypt”, on the relationship between crime patterns within the Greater Cairo Region and different urban planning aspects, both social and physical [1]. The objectives were to do the following:
The study was bounded in several ways. Firstly, due to its wide scope, it was not possible to conduct the analysis in all of the region's districts; therefore, the main agglomeration districts were selected for analysis. Secondly, the study period was set to the five years from 2004 to 2008 for the following reasons:
Finally, the research studies only crimes that are related to geography and linked to a specific physical environment, including the following:

Crime and the built environment
Earlier crime and crime prevention studies started in the 1800s, as the industrial revolution changed the urban–rural relationship and reshaped the urban structure; this change ultimately caused many social problems. Early sociologists focused on how these social problems led to the occurrence of crime, a view that persisted until the early 1960s. In the 1960s, researchers drew attention to the relation between the built environment and crime. Jane Jacobs' book “The Death and Life of Great American Cities” [2] was the first influential work to suggest that active street life could cut down opportunities for crime. She focused on the role that “eyes on the street” played in maintaining social control. Jacobs' thesis was simple: people, not police, are the guardians of public space [3]. Her critique centered on the physical design of urban America, which emphasized high-rise apartment buildings separated by public space without any specific guardianship. Office areas became vacant after supper, which led to a cessation of informal surveillance and to a reduction in the residents' sense of safety. According to Jacobs, city streets were unsafe because they were deserted [2]. She frequently cites New York City's Greenwich Village as an example of a vibrant urban community, and of how well-used streets were more likely to be safe from serious crime. She found that natural surveillance was essential to the feeling of safety, and that it could be achieved by increasing the number of people using a particular area through encouraging a diversity of uses and creating opportunities for positive social interactions [2]. The early 1970s saw a surge of studies building on the previous work of Jane Jacobs. In 1971, Oscar Newman published the paper “Architectural Design for Crime Prevention”, and in 1973 the book “Defensible Space: Crime Prevention through Urban Design” [4].
He argued that an area is safer when people feel a sense of ownership of and responsibility for that part of a community. Newman studied crime rates in low-income housing projects. In particular, he observed the development of an eleven-story, 2740-unit public housing complex named “Pruitt-Igoe”. Pruitt-Igoe was supposed to be an ideal housing community for low-income families. The idea was to keep the grounds and the first floor free for community activity. Each building was given communal corridors on every third floor to house a laundry, a communal room and a garbage room. The outside areas of each building were also common areas. According to Newman, because all the grounds and common areas were disassociated from the units, residents could not feel responsibility toward them and the areas became unsafe. The corridors, lobbies, elevators and stairs were dangerous places to walk; they became covered with graffiti and littered with garbage and human waste, and women had to move about in groups. The project never achieved more than 60% occupancy, failed miserably and was demolished about 10 years later [4]. However, across the street from Pruitt-Igoe was an older, smaller row-house complex occupied by an identical population, called “Carr Square Village”. It remained fully occupied and trouble-free throughout the construction, occupancy and decline of Pruitt-Igoe. With the social variables constant across the two developments, Newman began to investigate which physical factors differed between the two complexes such that one could thrive while the other had to be torn down. One of the first things Newman looked at was building type. He noticed that:


Research is the process of seeking knowledge through systematic investigation and of establishing novel facts by scientific methods. Medical research is of great importance, as it is much needed to add to knowledge and thereby improve health care [1]. It can be pursued through various methods, ranging from drug trials to surveys, and from analyses of trends over past years to unusual cases in the hospital.
Even though a variety of approaches is available, the scenario of medical research in India is not encouraging. Undergraduate medical research is still a rare phenomenon. Reports are available presenting facts and figures about research at the level of modern medical undergraduate students and discussing the potential reasons behind this apathy [2,3]. The situation of undergraduate medical research globally is far better. Considering the global scenario, the Medical Council of India is focusing more on the improvement of medical research and emphasizing it even at the undergraduate level.


The career options prioritized by the interns are summarized in Table 1. Of the 40 interns, 36 preferred clinical practice as their first career priority, whereas 4 selected teaching. Research was chosen as the second priority by 18 interns and as the third priority by 22.

For students of modern medicine (MBBS), various schemes and programmes are available, such as the ICMR Short Term Studentship (STS), KVPY (Kishore Vaigyanik Protsahan Yojana), conferences for medical students [4] and so on. MBBS students are eligible for an SRF, through which they can work on a research project for three years and can extend the work by registering for a PhD. It is offered by the Indian Council of Medical Research (ICMR), the Council of Scientific and Industrial Research (CSIR) and the University Grants Commission (UGC). Such provisions should be made available for Ayurvedic students as well [1].
We further feel that if students are exposed to research methodology in their undergraduate days, they will be in a better position to conduct full-fledged research studies during their postgraduate tenure. This will certainly help improve the standard of MD/MS dissertations and PhD theses. The Vaidya-Scientist fellowship programme [5], which trains candidates in the Shastras, science and medicine along with exposure to appropriate research methodology, is an ambitious programme that not only needs to be continued but also strengthened by increasing its intake capacity.
To sum up, the career preferences given by the interns in our study indicate a “conventional” approach towards medical education. There is a need to change their attitudes through appropriate training [6] and by introducing different programmes to foster a research culture.

Source of support

Conflict of interest

Gastric ulcer is one of the most widespread diseases in the world and occurs with stress, nonsteroidal anti-inflammatory drugs, Helicobacter pylori infection, and alcohol ingestion [1]. Ulceration occurs when there is an imbalance between aggressive factors (acid-pepsin secretions) and protective factors such as mucus secretion, the mucosal barrier, cell regeneration, blood flow, and prostaglandins [2]. Most of the drugs used for the treatment of gastric ulcers show numerous adverse effects [3]. In the search for new drugs, metabolites derived from plants used in traditional medicine provide an alternative source of therapeutic agents [4]. Plant extracts containing a wide variety of antioxidants, such as phenolic and flavonoid compounds, are among the most attractive sources of new drugs and have been shown to produce promising results in the treatment of gastric ulcers [5].
Gmelina arborea (GA), Gambhari in Sanskrit, a popular commercial timber, grows naturally in the warm temperate regions of the Mediterranean and South Asia [6]. The plant is widely used in Ayurveda, one of the major traditional forms of medicine in India. The root of the plant is a member of the “brihat panchamoola,” which is a major constituent of many ayurvedic preparations [7] used for treating chronic fever, hemorrhages, urinary tract infections, anuria, etc. The plant is one of the ingredients of Dashamoolarishta, a reputed restorative tonic, and of Shriparnyadi Kwath, prescribed in bilious fever [8]. The bark is bitter, tonic, and stomachic and is useful in treating fever and dyspepsia [9]. The Ayurvedic Pharmacopoeia of India recommends the use of the bark and stem in treating inflammatory diseases and edema [10].

Activation energy was calculated for each degradation

Activation energy was calculated for each degradation interval of 10% from α=0.1 to α=0.6 (where α=0.1 denotes the 10% degradation region, α=0.2 the 20% region, and so forth). This range was selected because, after combustion of more than 50% of the sample mass, the proportion of residues becomes significant. Values of heating rate and absolute temperature were converted as required by the energy expression, shown in Table 2, in order to draw a graph between the two parameters [3].
The graph of logβ against 1/T, shown in Fig. 3, presents six straight lines, each corresponding to a certain degradation region (α). The slope of each line was substituted into the Flynn and Wall expression to obtain the activation energy for each weight fraction, as listed in Table 3.
It is evident from the table that, as degradation proceeded, the value of activation energy increased up to α=0.5 [3], except for its first value, whose higher magnitude was responsible for initiating thermal destruction. As degradation exceeded 50% weight loss, the Ea value decreased, thereby increasing the degradation rate.
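To illustrate how the slope of logβ versus 1/T is converted into an activation energy, the sketch below applies the Flynn–Wall (Ozawa) relation, in which that slope equals −0.457·Ea/R under Doyle's approximation. The heating rates and temperatures are hypothetical placeholders, not data from this study:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def activation_energy_fwo(heating_rates, temperatures):
    """Estimate Ea (kJ/mol) at a fixed conversion level via the
    Flynn-Wall expression: slope of log10(beta) vs 1/T = -0.457*Ea/R."""
    xs = [1.0 / t for t in temperatures]         # 1/T in 1/K
    ys = [math.log10(b) for b in heating_rates]  # log10 of heating rate
    n = len(xs)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    # least-squares slope of log10(beta) against 1/T
    sxy = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
    sxx = sum((x - x_mean) ** 2 for x in xs)
    slope = sxy / sxx
    return -slope * R / 0.457 / 1000.0  # convert J/mol to kJ/mol

# hypothetical: three heating rates (K/min) and the temperatures (K)
# at which the same conversion (e.g. alpha = 0.3) was reached
print(round(activation_energy_fwo([5.0, 10.0, 20.0], [600.0, 615.0, 631.0]), 1))
```

Repeating this fit once per α value reproduces the kind of Ea-versus-conversion trend discussed above.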
Formation of different products was confirmed by characterisation through FT-IR and XRD. The peaks at 1092.48cm−1 in Figs. 4–6 show Si–O bonding, i.e. silica was present after each thermal treatment. The peaks at 801.278cm−1 in Figs. 4 and 5 depict Si–C bonding, i.e. silicon carbide was formed during pyrolysis in argon and nitrogen atmospheres [4,5]. No absorption peak matched clearly the Si–N–O or Si–N bonding type, which may be due to the limited IR data available on inorganic compounds.
X-ray diffraction patterns (with 2θ on the abscissa and counts on the ordinate) are shown in Fig. 7(a)–(c) for the products of Ar pyrolysis, N2 pyrolysis and pyrolysis in the closed tube, respectively. Pyrolysis under an argon atmosphere produced a considerable quantity of silicon carbide, as evident from the peaks at 2θ=35.7, 60.1 and 71.8, whereas the peak at 26.4 shows the presence of tridymite.
Pyrolysis under a nitrogen atmosphere produced silicon oxynitride rather than pure silicon carbide, as can be seen in Fig. 7(b), wherein the peaks at 2θ=20.2, 33.2 and 37.2 are characteristic of silicon oxynitride. The XRD pattern of the closed-tube pyrolysis products shows the presence of silicon nitride at 2θ=64.8. For quantification, all pyrolysed products were treated with HF solution to remove the silica content. In the case of pyrolysis under argon, a considerable yield of silicon carbide (27.3%) was achieved, very close to that obtained by Janghorban and Tazesh [5] under the same atmosphere and pyrolysis temperature. A mixture of carbide and oxynitride was obtained after pyrolysis under nitrogen, and closed-tube pyrolysis produced about 15% silicon nitride.

Rice husk contains silica that can be utilised to produce various useful silicon-based materials, including silicon carbide, oxynitride and nitride. Acid treatment of RH converted carbohydrates into complex amino acids containing C–N and NH2 groups, because of which, when exposed to controlled heating, the material undergoes decomposition earlier than raw husk does. It is more useful to calculate the activation energy over a range of temperatures, because its value does not remain constant at all stages of degradation. It was found that the first Ea value was greater, that Ea abruptly decreased once degradation started and then gradually rose up to 50% weight loss, and that it decreased again beyond 50% weight reduction, thereby increasing the rate of decomposition.

Appropriate bioactive, biocompatible, strong and inexpensive synthetic materials are essential for biomedical applications. Hydroxyapatite (HAp) and β-tricalcium phosphate (β-TCP) are extensively used as potential bioceramics for both dental and orthopedic applications due to their close chemical similarity with the inorganic component of bone and tooth mineral [1]. HAp has received more attention as a bone-replacement material in orthopedic surgery because of its inherent ability to bond with hard tissue [2]. However, it has a low resorption rate, and bone growth occurs only at the interface [3]. Stoichiometric synthetic HAp does not degrade completely, and remnants of it are found long after implantation [4]. By contrast, β-TCP exhibits an optimal level of solubility in biological fluid, and its chemical composition is similar to the apatite present in bone tissue. Hence, this material has been used extensively for bone grafting [5]. β-TCP is an attractive biomedical material owing to its excellent biocompatibility and the nontoxicity of its chemical components. It also serves as a potential precursor for Ca and PO43− ions and allows new bone formation [6].

Compressive strength was determined on day

Compressive strength was determined on 28-day cylindrical samples of 60mm height using an electro-hydraulic press (M&O, type 11.50, No. 21) with a capacity of 60kN. The samples were subjected to a compressive force at an average rate of 3mm/min until failure, in accordance with the ASTM C 109 standard test method. The compressive strength was calculated using the following equation: σ=F/A, where F is the failure force and A is the cross-sectional area of the sample.
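As a minimal numerical sketch of this strength calculation, the snippet below evaluates F/A for a cylindrical specimen; the failure load and sample diameter used are hypothetical, since the sample diameter is not stated here:

```python
import math

def compressive_strength_mpa(failure_force_kN, diameter_mm):
    """sigma = F / A for a cylindrical sample.
    Converting the force from kN to N gives the result
    directly in N/mm^2, which equals MPa."""
    area_mm2 = math.pi * (diameter_mm / 2.0) ** 2
    return failure_force_kN * 1000.0 / area_mm2

# hypothetical: a 60 mm diameter cylinder failing at 45 kN
print(round(compressive_strength_mpa(45.0, 60.0), 2))  # -> 15.92
```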

Results and discussion

The aim of this study was to evaluate the influence of thermal treatment on the availability of Mg from talc in the preparation of cementitious products. The structural evolution of talc upon firing indicates that, under thermal treatment, talc is converted into enstatite from 800°C and this conversion is maximal at 900°C. The formulation of cementitious products using talc fired from 400°C to 1000°C shows the following:

The Ministry of Higher Education of Cameroon is gratefully acknowledged for a travel grant to Mr. Ngally Sabouang for a research stay at the University of Liège (Belgium).
The University of Liège (Belgium) is acknowledged for providing analytical facilities to Ngally Sabouang during his stay in Liège.

Even though many new antibiotics

Even though many new antibiotics have been developed in the last few decades, none has been found with better activity against multidrug-resistant bacteria. It is therefore essential to plan better healing strategies, including novel antibiotics. Recently, metal oxide nanoparticles have been used effectively for the delivery of therapeutic agents, in chronic disease diagnostics, to reduce bacterial infections in skin and burn wounds, to prevent bacterial colonization on medical devices, and as antimicrobial agents in the food and clothing industries. Because of their unique mode of action and potent antimicrobial activity against gram-positive and gram-negative bacteria, the prospect of developing a new generation of antibiotics makes metal oxide nanoparticles an attractive alternative for overcoming the drug-resistance problem [21]. ZnO and TiO2 nanoparticles show antibacterial properties, but there is little satisfactory literature on the antibacterial behaviour of nanocrystalline ZnTiO3 ceramic. This drew our attention to preparing nanocrystalline ZnTiO3 and studying its antibacterial properties. In this study we present the preparation and characterization of ilmenite-type nanocrystalline ZnTiO3 ceramic and study its effectiveness in the adsorption of the hazardous dye MG. We also evaluated its antibacterial activity against different pathogenic bacteria by the agar well diffusion method.


Characterization techniques
The SCS-derived product was characterized by PXRD. Powder X-ray diffraction patterns were collected on a Shimadzu XRD-700 X-ray diffractometer with CuKα radiation over the diffraction angle range 2θ=20–80°, operating at 40kV and 30mA. The product was morphologically characterized by HR-TEM analysis performed on a Hitachi H-8100 (accelerating voltage up to 200kV, LaB6 filament). FE-SEM was performed on a ZEISS scanning electron microscope equipped with EDS at a voltage of 5kV. A Malvin zeta profiler was used to study the surface morphology with a Z range of 25μm, a step size of 0.124μm and a field of view of 474μm×356μm. FT-IR studies were performed on a Perkin Elmer spectrometer (Spectrum 1000) using the KBr pellet technique in the range 400–4000cm−1. To calculate the optical energy band gap, the UV–vis spectrum was recorded using an Elico SL-159 UV–Vis spectrophotometer. A Kemi centrifuge was used to separate the dye solution from the adsorbent.

Results and discussion

Adsorption studies
Adsorption experiments were performed using the hazardous organic cationic dye MG. MG is a basic triphenylmethane dye with a molecular weight of 327. The IUPAC name of MG is [4-[(4-dimethylaminophenyl)-phenylmethylidene]-1-cyclohexa-2,5-dienylidene]dimethylazanium, with molecular formula C23H25N2+. MG is highly soluble in acidic organic solvents but less so in water [29]. The chemical structure and UV–vis spectrum of malachite green dye are shown in Fig. 8.
Batch experiments were carried out at different contact times, doses, pH values, and initial dye concentrations. 100ml of dye solution (5ppm, 7.5ppm or 10ppm) was mixed with different doses (5–65mg) of adsorbent in a 250ml beaker at laboratory temperature. The dye solution containing the adsorbent was stirred magnetically (in the absence of light) to increase the contact between the dye solution and the adsorbent. After the desired time, the adsorbent was separated from the solution by centrifugation at 1800rpm for 5min. The residual concentration of dye in the supernatant was estimated spectrophotometrically by monitoring the absorbance at 618nm (λmax).
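Residual concentrations measured this way are typically converted into a removal efficiency and an equilibrium uptake. The helper functions and numbers below are standard batch-adsorption bookkeeping with illustrative values, not results from this study:

```python
def removal_percent(c0_ppm, ct_ppm):
    """Dye removal efficiency: (C0 - Ct) / C0 * 100."""
    return (c0_ppm - ct_ppm) / c0_ppm * 100.0

def adsorption_capacity_mg_per_g(c0_ppm, ce_ppm, volume_L, mass_g):
    """Equilibrium uptake q_e = (C0 - Ce) * V / m in mg/g;
    ppm is taken as mg/L for dilute aqueous solutions."""
    return (c0_ppm - ce_ppm) * volume_L / mass_g

# illustrative run: 10 ppm initial, 2 ppm residual,
# 100 mL of solution, 50 mg adsorbent dose
print(round(removal_percent(10.0, 2.0), 1))                             # -> 80.0
print(round(adsorption_capacity_mg_per_g(10.0, 2.0, 0.100, 0.050), 1))  # -> 16.0
```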

Antibacterial studies
Antibacterial activity was screened by the agar well diffusion method [34] against four bacterial strains: the gram-negative Klebsiella aerogenes NCIM-2098, Pseudomonas desmolyticum NCIM-2028 and Escherichia coli NCIM-5051, and the gram-positive Staphylococcus aureus NCIM-5022. Mueller-Hinton agar was used to culture the bacteria. Nutrient agar plates were prepared and swabbed, using a sterile L-shaped glass rod, with 100μl of 24h mature broth culture of each bacterial strain. Wells (6mm) were made in each Petri plate using a sterile cork borer. Varied concentrations of nanocrystalline ZnTiO3 (1000 and 1500μg/well) were used to assess the activity. The compound was dispersed in sterile water, which served as the negative control, while the standard antibiotic Ciprofloxacin (5μg/50μl) (Hi Media, Mumbai, India) was tested as the positive control against the bacterial pathogens. The plates were then incubated at 37°C for 24–36h, and the zone of inhibition of every well was measured in millimetres and recorded. Triplicates were maintained at every concentration, and average values were taken as the final antibacterial activity.