
Thus, for a pressure variation of 100 GPa and a typical value of the bulk modulus for a solid (say 150 GPa), a calculation gives around 3 eV, i.e. almost two orders of magnitude higher than the corresponding energy shift at ambient pressure (i.e. 90 meV).
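The arithmetic above can be checked with a back-of-the-envelope estimate. The sketch below assumes (this is not stated in the text) that the relevant energy per atom scales as the elastic energy density P²/(2B) times a typical atomic volume of ~10 Å³:

```python
# Order-of-magnitude estimate of the pressure-induced energy scale per atom.
# Assumptions (not from the text): energy per atom ~ P^2/(2B) * V_atom, with a
# typical atomic volume of ~10 A^3; P and B follow the values in the text.
P = 100e9        # pressure variation, Pa (100 GPa)
B = 150e9        # bulk modulus, Pa (150 GPa)
V_atom = 10e-30  # assumed atomic volume, m^3 (10 A^3)

E_J = P**2 * V_atom / (2 * B)  # elastic energy stored per atom, J
E_eV = E_J / 1.602e-19         # convert to electron-volts

print(f"{E_eV:.1f} eV")  # a few eV, consistent with the ~3 eV quoted above
```

Under these assumptions the estimate lands at a couple of eV, the same order as the value quoted in the text.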
To focus on the acoustic properties, measuring the sound velocity v versus pressure (and thus the elastic moduli C_ij, via the Christoffel equation) makes it possible to probe with high sensitivity the repulsive part of the interatomic potential, i.e. the least known part of the internal energy U. In the adiabatic case (ΔS = 0), the Maxwell definition of an elastic constant gives C_{ijkl} = (1/V)(∂²U/∂ε_{ij}∂ε_{kl})_S, where ε is the strain tensor. This last equation illustrates why the acoustic properties of solids and liquids are very sensitive to subtle changes in local or long-range order, and why measurements of phonon velocities under high pressure are considered one of the most useful probes of variations in interatomic potentials.
Elasticity of stressed materials also provides crucial insight into the thermodynamics of condensed matter through the determination of structural stability, the pressure dependence of the density, the melting curve, piezoelectric properties, or mechanical properties, to give a few examples. However, for many years the inherent problems of carrying out elastic measurements at high pressure and high temperature prevented acoustic experiments under extreme thermodynamic conditions. Consequently, little is known about the acoustic properties of liquids and solids at high density, data of major interest for physics, chemistry and planetology (Fig. 1). Concerning the last case, the argument is straightforward: given that the deepest core sample has been extracted from a depth of only a few kilometers, more than 99% of the Earth's interior remains to be investigated by reproducing its thermodynamic conditions in the laboratory. For example, determining the composition of the Earth's interior involves comparing velocity–depth models derived from seismic data with sound velocities measured under extreme conditions (hundreds of GPa) in the laboratory.

Measuring techniques

Data analysis and sound velocity measurements
This section is concerned primarily with the problem of determining the sound velocity of liquids and solids at extreme conditions by means of acoustic travel-time measurements. Two types of measurement are emphasized here. In the so-called "temporal method", v is determined using a technique similar to the pulse-echo one; note that here knowledge of the sample thickness e is required. As this is one of the most frequently used methods in the ultrasonics community, it will not be considered at length in the present paper. The "imagery method" will, however, be described in more detail. Mainly inspired by the acoustic wavefront imaging technique developed in the 1990s [17], we particularly stress our development of a new type of analysis, which enables the determination of both v and e at the same time and for each pressure.
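As a minimal illustration of the temporal (pulse-echo-like) analysis, where the thickness e must be known beforehand: the numbers below are purely illustrative assumptions, not data from the paper.

```python
# Pulse-echo-style estimate of sound velocity: successive echoes are separated
# by the round-trip time through the sample, so v = 2e / dt.
# Both values below are illustrative assumptions, not measured data.
e = 5e-6     # assumed sample thickness, m (5 um)
dt = 1.8e-9  # assumed delay between successive echoes, s

v = 2 * e / dt  # round trip covers twice the sample thickness
print(f"v = {v:.0f} m/s")
```

The imagery method removes the need to know e in advance by providing two independent observables, from which v and e are extracted simultaneously at each pressure.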

Measurement of sound wave velocity of polycrystalline iron at high pressure
Experiments at the conditions of planetary cores remain extremely challenging, and important topics addressed include the density, the magnetism and, of course, the sound velocity at ultrahigh pressures. Among these, the elastic properties of iron, particularly with reference to a better understanding of the Earth's core, have been the subject of several papers. However, a huge discrepancy exists between non-equilibrium shock-wave experiments [29] and indirect measurements of sound velocity under static compression [30].
In this study [31], the diamond anvil cell combined with the technique of picosecond ultrasonics is demonstrated to be an adequate tool for measuring the acoustic properties of iron up to 152 GPa, i.e. one order of magnitude higher than previously published ultrasonic data. A disk of iron 10 μm in diameter was deposited on a silica platelet 5 μm thick (see right part of Fig. 3). After loading in a DAC (diamonds with a 100 μm culet and a 300 μm bevel), the quality of the iron sample was first checked by X-ray diffraction, which gave homogeneous diffraction rings (i.e. excellent polycrystallinity) and the expected density ρ = 7.85(1) g cm−3. The grain size of the polycrystalline iron sample was less than 1 μm.


Numerical results and discussion
As an example, consider the AlN-on-Si filter with Mo electrodes in [17,18]. The thicknesses of the top electrode, the AlN film, the ground electrode, and the Si layer are Mo (100 nm)/AlN (1 μm)/Mo (100 nm)/Si (5 μm). The widths of the input and output electrode fingers are a1 = a2 = a = 10 μm. The spacing between the electrode fingers is b = 5 μm. These are the same as in [17,18]. However, in [17,18] the dimensions of the unelectroded parts at the left and right edges were not specified. We use c = 30 μm, which is about five times the total plate thickness, in order to exhibit the energy-trapping behavior of the modes of interest. For our parametric study below we consider the case when the input and output electrodes each have four fingers. This corresponds to P = 17. As will be seen below, P = 17 is sufficient to show the basic behaviors of the filter, while in real devices a larger P is sometimes used (P = 41 in [17,18]).
Fig. 3 shows the seven trapped modes found in the frequency interval determined by (10) and (12), in order of increasing frequency. The frequencies of thickness-extensional modes are mainly determined by the plate thickness [26,27]. Therefore they increase only slightly and slowly in Fig. 3, because n = 4 is already fixed for the 4th-order thickness-extensional modes in [17,18]. The first mode, in Fig. 3(a), does not have a nodal point (zero) along x1, and all of the top electrode fingers vibrate in phase, which is ideal for the operating mode of the filter. Notice that in the mode in Fig. 3(a) the vibration decays to zero exponentially near the plate edges, which is the desired energy trapping. The mode in Fig. 3(a) also has small oscillations, with peaks corresponding to the fingers of the top electrode and valleys corresponding to the spacing between the electrode fingers. This, too, is a consequence of the exponentially decaying behavior of the vibration away from the edges of the electrode fingers. As the frequency increases, from (b) to (g), the modes have one to six nodal points, respectively. When there are nodal points, the fingers of the input (or output) electrode may vibrate out of phase, causing cancellations of the charges on different fingers of the input (or output) electrode, which may be undesirable. The modes in Fig. 3 are alternately symmetric or antisymmetric about the middle of the plate. This symmetry or antisymmetry about the plate center and the successive increase in nodal points are typical of plate thickness-extensional modes with in-plane variations, and are as expected. These are long plate modes compared with the plate thickness. Because of the presence of the electrode fingers, these long plate modes are modulated by the fingers, which are responsible for the small oscillations of the modes.
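The statement that thickness-extensional frequencies are set mainly by the plate thickness can be illustrated with the textbook estimate f_n ≈ n·v/(2h). The wave speed below is an assumed effective value for the layered stack, not a parameter taken from [17,18]:

```python
def f_te(n, v, h):
    """nth-order thickness-extensional resonance of a plate: f_n ~ n*v/(2h).
    Ignores electrode mass loading and layering, both of which lower f
    slightly, as discussed in the text."""
    return n * v / (2.0 * h)

h = 6.2e-6  # total stack thickness: 0.1 + 1 + 0.1 + 5 um (from the text)
v = 9000.0  # assumed effective longitudinal wave speed, m/s (illustrative)
print(f"f_4 ~ {f_te(4, v, h) / 1e9:.2f} GHz")  # n = 4, as in [17,18]
```

Because h is fixed, the in-plane finger geometry only perturbs these frequencies slightly, which is why the trapped modes in Fig. 3 are so closely spaced.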
When plotting Fig. 4, we increased the electrode finger width a from 10 μm to 20 μm, with all other parameters kept the same as those used for Fig. 3. In this case there are 11 trapped modes; the first two, shown in Fig. 4, are sufficient for our purpose. Comparison of Fig. 4 with Fig. 3(a) and (b) shows that the frequencies have become slightly lower, because wider electrode fingers have more inertia. The widths of the small peaks are related to the electrode finger width and have become wider.
In Fig. 5 the electrode finger spacing b is changed from 5 μm to 10 μm. All other parameters are the same as those used in Fig. 3. In this case there are eight trapped modes; only the first two are shown in Fig. 5. Comparing Fig. 5 with Fig. 3(a) and (b), it can be seen that the frequencies increase slightly when b increases. This is because larger electrode finger spacing effectively means less electrode inertia on the crystal plate, and hence higher frequencies.
In Fig. 6 we increase c, the dimension of the two unelectroded edge parts, from 30 μm to 50 μm, while keeping all other parameters the same as in Fig. 3. Again only the first two trapped modes are shown. To five significant figures the frequencies remain the same. This is reasonable, because there is not much vibration in the edge parts. The main difference between Fig. 6 and Fig. 3(a) and (b) is that near the plate edges the modes in Fig. 6 are flatter, showing stronger energy trapping.

Introduction

Since its invention, transmission electron microscopy (TEM) has been an invaluable addition to the materials science toolbox. It has been routinely used in fields such as catalysis, semiconductors and photonics as illustrated in a number of review papers [1–7]. No other tool provides the versatility of spatial information at high resolution along with the simultaneous acquisition of spectroscopic information. Since the early days of TEM, non-vacuum imaging has been pursued. For more than half a century the capability of imaging features at elevated temperatures in non-vacuum conditions using electrons has been available [8–11]. However, it is only within the last decade that commercially available TEMs have offered this option [12–16]. Scientists are now routinely exposing samples to heat, gas, liquid, stress, and light while performing electron microscopical investigations.
In order to expose samples to a gaseous environment, certain modifications have to be made. Gas can be introduced either via the microscope column or via the sample holder. In the former case, the microscope column itself is fitted with a gas inlet through the objective lens and differential pumping apertures in the upper and lower pole pieces. This setup is known as the differentially pumped column and is described by Boyes et al. [11]. The other option is to inject gases via the sample holder, as described by Kishita et al. [17]. In either case, microscopes capable of exposing samples to a gaseous environment are now known as environmental transmission electron microscopes, or ETEMs.
In order to simulate the working environment of, e.g., an industrial catalyst, a gas atmosphere along with the ability to heat the sample is necessary. Heating is typically done using a heating holder. However, this also represents a challenge to conventional knowledge of TEM experimentation. Conventionally, heating experiments in the TEM have been carried out in high vacuum. This means that very little power is needed to heat the thermally isolated sample region (traditionally a metal grid 3 mm in diameter), as there is virtually no gas atmosphere to remove heat from the sample. In an ETEM, however, there is a flow of gas and thus a heat sink around the sample. Hence additional heat needs to be supplied in order to maintain a given set temperature. Keeping the power constant when introducing gas can result in a significant temperature drop (typically a hundred degrees under the conditions evaluated in the present work). Furthermore, as heat (in traditional heating holders) is supplied via the periphery of the 3 mm metal grid, there may be a temperature gradient across the sample, leaving the center of the 3 mm grid at a lower temperature than is measured (typically by thermocouples) at the furnace supplying the heat. However, to the best of the authors' knowledge, the level of non-uniformity of the sample holder temperature field in a gaseous atmosphere has not been investigated previously.
Using a simplified model of the inside of an objective lens, the steady-state gas flow, temperature and pressure fields inside the ETEM chamber are calculated. The simulation was made using the Weakly Compressible Navier–Stokes and General Heat Transfer packages of a commercial multiphysics software program.

CFD model
The CFD model was defined and solved using a commercial multiphysics modeling and simulation software package. A Gatan 652 double-tilt heating holder in an FEI Titan 80-300 differentially pumped ETEM was used as the basis for the model. Fig. 1(a), (b) and (c) show two-dimensional cross-sections of the modeled geometry in the x–y, y–z and x–z planes, respectively.
The model is divided into thirteen regions, denoted CX, where X identifies a specific region. Table 1 describes the function and material of each of these regions.
A sample holder is inserted from the left of Fig. 1(b). This consists of three sections: the holder barrel (C4), the holder tip carrying the furnace and sample (C5), and the furnace (C6). When using the ETEM, the sample is placed in the center of the furnace on a grid (C13); the center of this grid is used as the origin in the drawings of Fig. 1. A cold trap (C8) is placed around the holder, and beneath the holder is the objective aperture, which consists of an objective aperture barrel (C9) and the objective diaphragm (C10). A sniffer (C7) connected to a quadrupole mass spectrometer is placed parallel to the gas flow with its orifice facing the sample. The upper and lower pole pieces (truncated cones, C2 and C3) are located above and below the sample. Each pole piece has a Pt diaphragm (C11 and C12) at the cone center. C2 and C3 were assumed to be equivalent to 304 steel, in the absence of better material knowledge and considering that the relevant material properties do not change much between different types of steel.



Results and discussion

The impact of detector dead-time on the quantitative analysis of nominally pure boron (B), Si, and B-implanted Si (NIST SRM 2137) with a retained 10B dose of about 1×10^15 atoms cm−2 was investigated. Ion correlation analyses were used to demonstrate graphically the impact of detector dead-time on multi-hit detection events. Given the important role of detector dead-time as a signal-loss mechanism, three different methods for estimating the detector dead-time were presented. The following findings resulted from this research:


Polycrystalline films are of great importance to many technological applications, for example as protective coatings [2], device interconnects [3], sensors [4], surface acoustic wave devices [5] or transparent electrodes [6]. The polycrystalline nature of the films has a strong effect on film properties, which may therefore differ significantly from those of their single-crystal counterparts. It is thus important to establish growth models that explain how processing parameters influence grain size, shape and orientation in these films.
In many cases polycrystalline film growth starts from non-epitaxial nucleation of closely spaced nano-sized grains forming a layer of nuclei at the substrate. If, during film thickening, the atomic mobilities are low enough for the grain boundaries in the bulk to remain immobile, growth competition between neighboring grains occurs at the free surface. Grains with their fastest growth axis close to the film growth direction overgrow otherwise oriented grains and so coarsen as the film thickens. This leads to a film with a preferential orientation composed of dominant, columnar or V-shaped grains [7]. The understanding of this growth competition and the resulting microstructure evolution during film thickening has been an important topic in polycrystalline film growth, and several models and simulations have been proposed [7–10]. To validate such models and simulations for a given film, it is necessary to have quantitative data on how the film's microstructure and preferential orientation develop throughout its thickness.
Scanning electron microscopy (SEM) and atomic force microscopy (AFM) imaging of the surface of a film series of varying thicknesses are common methods for characterizing microstructural evolution. However, the precise and reliable quantification of grain size from images of the free surface is often not simple and sometimes not even possible, and no direct information on preferential orientation is obtained. X-ray diffraction (XRD) is often used to evaluate preferential orientation, but with the diffracting volume usually extending through the entire film thickness, it is not straightforward to correlate the measurements with the microstructure. Alternatively, conventional transmission electron microscopy (TEM) provides coupled information on preferential orientation and microstructure even at the nanoscale, as required for ultrafine-grained materials. However, quantifiable observations, for instance of grain size, are often only possible with tedious and time-consuming manual outlining, owing to the presence of complicated image contrast. A possible way to solve this problem is automated crystal orientation mapping (ACOM). In recent years there has been a surge of orientation-mapping techniques working at nanoscale resolution [11–13]. One very successful method is based on scanning nano-beam diffraction followed by matching of the recorded diffraction patterns to pre-calculated templates [13–16]. While powerful, TEM-based ACOM requires a minimal amount of grain overlap in the specimen, a condition that places stringent demands on sample preparation, especially when studying nano-sized grains.
Here we use TEM-based ACOM to quantify the microstructure and preferential orientation evolution of polycrystalline low pressure metal–organic chemical vapor deposition (LP-MOCVD) grown ZnO thin films, currently used as transparent electrodes in photovoltaic applications [17]. First, in order to illustrate how the amount of grain overlap is affected by the chosen specimen geometry, we compare ACOM on standard cross-section and plan-view sample geometries and show the advantages of plan-view sections for quantification of a film’s microstructure.

Introduction


Atom Probe Tomography (APT) [1–5] offers enormous potential for probing the sub-nanometer character of materials. However, recovering this information from data generated at such high spatial resolution naturally presents the concomitant challenge of interpreting very high-volume, high-density data [6,7]. Even small data collections involve ~10^7 atoms, and the standard procedure of visualizing such a data set to isolate features can allow important ones, such as precipitates, to be easily lost within the high volume of data. Feature extraction typically requires drawing iso-concentration surfaces [8,9] at a particular concentration threshold, then visually exploring the data space to probe for various features, and repeating the procedure over an entire range of threshold values. Following up on our earlier work on rendering such high-volume APT data to aid feature extraction [10–12], we now provide an alternative, data-driven approach for objectively classifying different phases, such as precipitates, by mapping the topology of the APT data set using concepts from algebraic topology, namely simplicial homology [13–15].
Topology is inherently a classification system that deals with qualitative geometric information. This includes the study of what the connected components of a space are and their connectivity information in different dimensions of space [16]. Metric properties such as the position of a point, the distance between points, or the curvature of a surface, are irrelevant to topology. Thus, a circle and a square have the same topology although they are geometrically different. Such topological invariants can be represented by simplicial complexes, which are combinatorial objects that can represent spaces and separate the topology of a space from its geometry [14]. Examples of simplices include a point (0-dimensional simplex), a line segment (1-dimensional simplex), a triangle (2-dimensional simplex) and a tetrahedron (3-dimensional simplex).
Simplicial homology provides information about a simplicial complex through the number of cycles (a type of hole) it contains. One of its informational outcomes is the set of Betti numbers, which record the topological invariants of an object: the number of connected components, holes, tunnels, or cavities [17]. While a structure can take infinitely many shapes, many of which cannot be quantified, it can have only a limited number of topological features depending on its dimension. For example, in three dimensions (3D), a structure can be simply connected, or connected such that a tunnel passes through it, or connected to itself such that it encloses a cavity, or it can remain unconnected. Thus, we can characterize the topology of a structure by counting the numbers of simply connected components, tunnels and cavities, denoted by the Betti numbers β0, β1 and β2.

When dealing with point cloud data representing physical structures, such as APT data, the number and type of topological invariants clearly depend on the degree of connectivity between the various points, established through some metric such as distance. The determination of which points to connect can be addressed by defining a sphere of radius ɛ around each point and connecting it to all other points that lie within this sphere. Again, there could be a measure of arbitrariness in determining the appropriate value of ɛ. A small change in ɛ for randomly distributed points can quickly change the underlying topology because of statistical noise, thus changing the Betti numbers of the structure. The challenge is to determine the value of ɛ that corresponds to a meaningful feature. A powerful technique to overcome this problem is persistent homology [18], so termed because it is based on the idea that Betti numbers arising from randomly distributed data points and noise cannot persist as ɛ varies. The value of ɛ is gradually increased from zero, and the numbers of different topological components that appear and disappear are tracked as ɛ changes. This process is called filtration. Only those topological invariants that represent true features in the underlying data remain unaffected by small changes in ɛ.
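The β0 part of this filtration can be sketched in a few lines with a union-find pass over pairwise distances. This is only an illustration of the persistence idea for connected components on a toy point set; real analyses build full simplicial (e.g. Vietoris–Rips) complexes and track β1 and β2 as well.

```python
import itertools
import math

def beta0(points, eps):
    """Number of connected components (Betti number b0) when points closer
    than eps are joined - the 0-dimensional part of a Rips-style filtration."""
    parent = list(range(len(points)))
    def find(i):  # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for (i, p), (j, q) in itertools.combinations(enumerate(points), 2):
        if math.dist(p, q) < eps:
            parent[find(i)] = find(j)
    return len({find(i) for i in range(len(points))})

# Two well-separated clusters: b0 stays at 2 over a wide (persistent) range
# of eps, then drops to 1 once eps is large enough to bridge the clusters.
pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1)]
print([beta0(pts, e) for e in (0.05, 0.2, 1.0, 8.0)])  # -> [6, 2, 2, 1]
```

The value β0 = 2 persists from ɛ ≈ 0.15 up to ɛ ≈ 7, so it reflects a true feature of the data, whereas the β0 = 6 regime at tiny ɛ is just the unconnected raw points.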


Lastly, even after reducing the size of P, the labelling may remain ambiguous. Indeed, in the case of overlaps it is by definition impossible to reduce P to a single candidate. In the absence of a more thorough method of reducing P, heuristic methods can be employed to assign a "score" to each possible member of P, providing a ranked list of candidates. Such a ranked list allows more rapid rejection of unlikely combinations, such as mass 2 being suggested as (²H)₂²⁺ rather than the more probable H₂⁺.
For the purposes of this work, a highly simplistic weighting scheme is used to roughly separate highly unlikely from possible elements. An assumed bulk composition multiplied by the natural abundance is used to assign a relative weight to the occurrence of each isotope, and the product of the isotopic scores is the score for the final molecular ion. As an example, for an Fe–Mn alloy with 20 at% Mn, the score for a 56Fe54Fe molecular ion would be (0.8×0.917)(0.8×0.058)=0.034, assuming no contaminant species. For elements typically not present in the bulk but present in the analysis as contaminants (e.g. H and O), a weighting factor must be assigned based on the propensity of the material (in the case of H) to be present in the APT dataset – unfortunately, estimates for this can be quite arbitrary. However, as the ranking is only heuristic and not exact in nature, inaccuracies below order-of-magnitude level often do not change the relative ranking of the elements of P.
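The scoring rule above is simple enough to state directly in code. This is a sketch of the scheme as described, using the Fe–Mn example values from the text; the dictionary layout and function name are illustrative choices, not from the original work.

```python
# Heuristic molecular-ion scoring: the score of a candidate ion is the product,
# over its constituent isotopes, of
#   (assumed bulk atomic fraction) x (natural isotopic abundance).
composition = {"Fe": 0.8, "Mn": 0.2}  # assumed bulk composition (20 at% Mn)
abundance = {("Fe", 56): 0.917, ("Fe", 54): 0.058, ("Mn", 55): 1.0}

def score(ion):
    """ion is a list of (element, mass number) tuples, e.g. 56Fe-54Fe."""
    s = 1.0
    for elem, a in ion:
        s *= composition[elem] * abundance[(elem, a)]
    return s

print(round(score([("Fe", 56), ("Fe", 54)]), 3))  # -> 0.034, as in the text
```

Contaminant species such as H would need an extra, necessarily arbitrary, entry in `composition`, which is exactly the limitation noted above.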
Whilst it is now possible to generate a suggestion set P and roughly rank it using compositional data, additional data regarding peak positions is required for the reduction step. This places an additional burden on operators; however, this too can be partially automated. Unlike the peak-identification stage, the fundamentals of APT mass-spectral peak detection are not too dissimilar to those of other mass-spectral methods. It has long been considered that peak-detection methods can be effective in correctly extracting peaks from a mass-spectral signal [10], specifically in the related mass-spectral imaging technique of MALDI-TOF.
Such peak-extraction methods compute the total peak area without a priori assumptions about peak shape. Indeed, considerable work has been conducted in this area, with comprehensive reviews of the relative strengths and weaknesses of the automated approaches [11]. In techniques such as MALDI-TOF, mass spectra are highly complex [12], and peaks can be present in high-mass regions, around ~10,000 Da [13].
The MALDI-TOF tool MALDIquant was selected for use in this work for peak and background extraction [14], as it has been extensively developed. Comparative reviews of various signal-processing techniques (such as wavelets [15] and MEND [16]) are discussed in detail elsewhere [17]. The optimisation of the peak extraction and identification steps for the context of APT is outside the scope of this work – peak detection here is used only to demonstrate the complete processing chain.
Similar to the method of Andreev [16], MALDIquant was used to process time-domain signals rather than the m/z domain, owing to the non-linearity of the transform between the two domains, which artificially alters peak and background shapes. The output from MALDIquant relevant to this work is a set of detected peaks and a background spectrum.
For this work, the method used was the wavelet "TopHat" mode [14], with a fixed half-width of 0.3 amu^1/2. The cutoff amplitude for thresholding the wavelet-processed signal was set by first manually ranging, then reducing the cutoff until the automated detector identified the same number of ranges (within a small tolerance, ~2 peaks) as the manual ranging. In a full implementation, pre-calibrated thresholds could be used. Automatic identification was performed on the peak positions, and the identity was assigned as the highest-ranked species from the set of suggestions for each peak.
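The cutoff-tuning step described above can be sketched as a simple loop. The peak detector here is a toy stand-in for the wavelet-processed output (a real pipeline would use MALDIquant); the function names, the shrink factor and the toy spectrum are all illustrative assumptions.

```python
def detect_peaks(signal, cutoff):
    """Stand-in detector: indices that exceed the cutoff and are local maxima
    (a real pipeline would threshold the wavelet-processed signal instead)."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > cutoff
            and signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]]

def tune_cutoff(signal, n_manual, tol=2, start=1.0, shrink=0.9):
    """Lower the cutoff until the automatic peak count matches the manually
    ranged count within tolerance, mirroring the procedure in the text."""
    cutoff = start
    while cutoff > 1e-6:
        if abs(len(detect_peaks(signal, cutoff)) - n_manual) <= tol:
            return cutoff
        cutoff *= shrink
    return cutoff

sig = [0, 5, 0, 3, 0, 2, 0, 1, 0]  # toy spectrum with four peaks
c = tune_cutoff(sig, n_manual=4, tol=0)
print(round(c, 3))
```

In a full implementation this search would be replaced by pre-calibrated thresholds, as noted above.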


High-frequency ultrasound as a non-invasive procedure allows direct visualization of the A1 pulley (Boutry et al. 2005). However, there is no conclusive sonographic technique including reliable measurement parameters to represent the pathophysiological mechanism in trigger finger (Sato et al. 2012; Sampson et al. 1991). Recently, an increase in flexor tendon thickening under the A1 pulley has been shown (Sato et al. 2012). Another approach is to use the decreased elasticity of the pathologic A1 pulley in sonoelastography as a diagnostic feature (Miyamoto et al. 2011). Currently, the most frequently used sonographic measurement in the literature is the dorsopalmar thickness of the A1 pulley at the level of the metacarpophalangeal joint (Guerini et al. 2008; Kim and Lee 2010; Miyamoto et al. 2011; Tagliafico et al. 2011). The histopathological equivalent of the above-mentioned sonographic feature is the fibrocartilaginous (or chondroid) metaplasia of the A1 pulley (Sampson et al. 1991; Sbernardori and Bandiera 2007; Drossos et al. 2009) and flexor tendon thickening in stenosing tenosynovitis (Sampson et al. 1991).
Guerini et al. (2008), Kim and Lee (2010) and Miyamoto et al. (2011) compared sonographic measurements of A1 pulley thickness in both healthy volunteers and patients with trigger finger (Table 1). Furthermore, it was shown that the A1 pulley of a contracted proximal interphalangeal joint was significantly thicker than that of non-contracted joints (Sato et al. 2014), and that the thicknesses and areas of trigger digits were significantly greater than those of non-involved contralateral digits (Chiang et al. 2013).
To our knowledge, a correlation between the sonographically and in vivo measured thickness of the A1 pulley in stenosing tenosynovitis has yet to be demonstrated.

Materials and Methods


Boutry et al. (2005) described the examination of the finger pulley system with high-frequency ultrasonography using a 17.5 MHz transducer, with excellent visualization of the A1 pulley. Despite the growing literature on sonographic measurements (Chiang et al. 2013; Khoury et al. 2007; Kim and Lee 2010; Lee and Healy 2005; Sato et al. 2014), no real comparison of sonographic values with in vivo A1 pulley measurements has been published. For the increasingly popular (Ryzewicz and Wolf 2006) percutaneous release of the A1 pulley, especially when sonographically assisted (Chern et al. 2005; Jou and Chern 2006; Rojo-Manaute et al. 2010), it is important to have reliable thickness values of the A1 pulley in stenosing tenosynovitis.
Our study shows a very strong, linear correlation between the sonographically and intra-operatively measured thickness of the A1 pulley. It can therefore be concluded that the sonographic measurement reflects the true extent of the thickening of the A1 pulley. Histologically, the thickening corresponded in each case to extensive chondroid metaplasia, as described in several papers (Sampson et al. 1991; Sbernardori and Bandiera 2007; Liu et al. 2013), underlining the clinical and sonographic diagnosis of stenosing tenosynovitis. Furthermore, we defined a simple-to-use cut-off value of pulley thickness of 0.62 mm to distinguish between healthy and diseased A1 pulleys. This value can be applied regardless of age, sex, BMI and height in adults. In children with pediatric trigger thumb, no definite ultrasound abnormality of the A1 pulley has been found (Verma et al. 2013); we therefore do not recommend using this cut-off value in children.
Moreover, we were able to confirm the statistically significant thickening of the A1 pulley in patients with trigger finger, in agreement with other research (Chiang et al. 2013; Guerini et al. 2008; Kim and Lee 2010; Miyamoto et al. 2011; Sato et al. 2012; Sato et al. 2014).
A comparison of our pulley thickness values with those in the literature demonstrates good consistency. The fact that the pulley thickness of affected fingers tends to lie in the lower range compared with the literature may be explained by several factors. First, eight of the 15 involved patients had received corticosteroid injections pre-operatively (2–17 mo before sonographic measurement, mean 9.5 mo). Guerini et al. (2008) and Kim and Lee (2010) did not mention previous local steroid injections. Miyamoto et al. (2011) excluded all patients with a history of steroid injection within the previous three months, and showed that the thickness of the pulley was reduced after corticosteroid injection. Secondly, there may be different methods of measuring pulley thickness. Our measurements were made at the most palmar (median) point of the A1 pulley, following Miyamoto et al. (2011), whereas Guerini et al. (2008) used the location with the most predominant thickening in the transverse view, which is not always equal to the most volar point of the pulley. Thirdly, technical factors may influence the measurements. Kim and Lee (2010) used the same 17.5 MHz transducer that we used, whereas Guerini et al. (2008) used a 15 MHz transducer and Miyamoto et al. (2011) a 14 MHz transducer. The technical advantages of a high-frequency transducer (i.e., ≥ 17 MHz) are obvious. For example, Boutry et al. (2005) detected the A4 annular pulley with the 17.5 MHz transducer in all cases, whereas Hauger et al. (2000) found the A4 pulley less frequently (67%) using a 12 MHz transducer. Rojo-Manaute et al. (2010) reported difficulties visualizing the A1 pulley by ultrasonography using a 5–11 MHz transducer.

Introduction

Toxoplasma gondii is one of the most prevalent zoonotic parasites in the world (Tenter et al., 2000). T. gondii has three infectious stages, namely the tachyzoite, bradyzoite and sporozoite (Weiss and Kim, 2007). Tachyzoites are able to infect all cell types and proliferate intracellularly by endodyogeny (Hill et al., 2005). Members of the family Felidae are the final hosts for T. gondii, but many animal species, including cats and man, can act as intermediate hosts (Dubey, 2010). Some hosts are highly susceptible to T. gondii infection (Dubey, 2010). Sheep are among the most susceptible hosts: when ewes are infected during pregnancy, T. gondii may cause serious pathology such as early embryonic death, abortion and stillbirth, depending on the stage of pregnancy (Dubey, 2010). Whilst cattle can be infected with T. gondii, the parasite is often eliminated without causing clinical signs – perhaps because of innate immunity (Dubey, 2010).
Innate immunity is a primary defence mechanism that provides rapid protection against pathogens entering the body (Wilson, 2012). Its primary role is to contain pathogens in the area of initial infection and thus prevent systemic spread (Papayannopoulos and Zychlinsky, 2009; Mesa and Vasquez, 2013). Innate immunity is a complex phenomenon involving several mechanisms (Kumar and Sharma, 2010). A major component of the innate immune system is the neutrophil, which is produced in the bone marrow and, although short lived, rapidly congregates in the area of infection and uses a variety of strategies to fight pathogens (Mantovani et al., 2011). The primary function of the neutrophil is phagocytosis; after uptake, the pathogen is killed in the phagolysosome by reactive oxygen species and antimicrobial proteins (Brinkmann and Zychlinsky, 2012). Some of these factors may also be released by exocytosis into the extracellular environment (Wakelin, 1996). Netosis is a novel defence strategy employed by neutrophils and plays an important role in the host's early immune response (Brinkmann et al., 2004).
Netosis leads to nuclear and cytoplasmic changes in neutrophils (Guimaraes-Costa et al., 2012). Reactive oxygen species are produced when neutrophils encounter pathogens (Kaplan and Radic, 2012). The neutrophil nucleus then loses eu- and heterochromatin discrimination and its characteristic lobular form disappears; the nuclear membrane swells and granule membranes break, so that nuclear, cytoplasmic and granular contents become mixed with one another (Papayannopoulos and Zychlinsky, 2009). Finally, extracellular traps decorated with myeloperoxidase (MPO), neutrophil elastase (NE) and histones (H1, H2A, H2B, H3, H4) are extruded from neutrophils into the extracellular space (Papayannopoulos et al., 2010; Kaplan and Radic, 2014). Netosis is different from other cytotoxic mechanisms such as necrosis and apoptosis (Fuchs et al., 2007). After NETs were first discovered by Brinkmann et al. in 2004, they were reported in many species including man, mice (Abi Abdallah et al., 2011), cattle (Munoz-Caro et al., 2014a), goats (Silva et al., 2014), cats (Wardini et al., 2010), dogs (Jeffery et al., 2015; Wei et al., 2016), sheep (Pisanu et al., 2015), fish (Palic et al., 2007), chickens (Chuammitri et al., 2009) and shrimps (Koiwai et al., 2016). Extracellular traps can also be produced by other cells, including eosinophils (Yousefi et al., 2012) and monocytes (Kaplan and Radic, 2014; Munoz-Caro et al., 2014b; Reichel, 2015). Some authors call this process etosis instead of netosis (Guimaraes-Costa et al., 2012). Netosis occurs both in vitro (Silva et al., 2014; Munoz-Caro et al., 2014a; Reichel et al., 2015; Wei et al., 2016) and in vivo (Abi Abdallah et al., 2011; Munoz-Caro et al., 2016).
Some bacteria, viruses, fungi and parasites are known to trigger netosis (Kaplan and Radic, 2012; Branzk and Papayannopoulos, 2013). Plasmodium falciparum was the first parasite reported to trigger NET formation (Baker et al., 2008). NET formation was subsequently observed in other protozoan parasite species such as Leishmania amazonensis, Eimeria bovis, T. gondii, Besnoitia besnoiti, Eimeria arloingi, Neospora caninum, Cryptosporidium parvum and Entamoeba histolytica (Guimaraes-Costa et al., 2009; Behrendt et al., 2010; Abi Abdallah et al., 2011; Munoz-Caro et al., 2014a; Silva et al., 2014; Munoz-Caro et al., 2015b; Wei et al., 2016; Avila et al., 2016), and also in some helminth species (Chuah et al., 2014; Bonne-Annee et al., 2014; Munoz-Caro et al., 2015a).


Few studies have specifically explored the use of antibiotics in patients with an elevated PSA level. Saribacak et al compared 50 patients who received a fluoroquinolone prior to TRUSBP with an equal-sized control group. A significant PSA decrease was observed in the treatment group at repeat measurement, and the drop was more pronounced in patients without PCa than in those with PCa. In another study, 215 culture-positive prostatitis patients with a high PSA received 2 months of treatment with levofloxacin and were followed with another reading. A significant reduction in PSA level was noted, to the extent that TRUSBP was not performed in 53 of the 215 patients. However, seven (4%) of the remaining 162 patients who underwent TRUSBP had PCa. Azab et al conducted a prospective study on 142 patients with an elevated PSA and positive expressed prostatic secretions. All patients received antibiotic treatment for 6 weeks and were followed with another PSA reading. Despite a significant decrease in PSA level of up to 41%, PCa was detected even in patients with PSA levels less than 2.5 ng/mL.
Schtricker et al failed to find any advantage from administration of antibacterial therapy. Of 65 men who received antibiotics and 70 men who did not, PSA reduction was noticed equally in 40% of patients in both groups. This study failed to find any statistically significant difference in PSA levels between patients receiving antibiotics and patients not receiving antibiotics (Table 1). Additionally, in patients receiving antibiotics, PSA reduction was statistically significant in those who were found to have PCa (Table 2). The significance of the latter finding suggests that a drop in PSA after antibiotic treatment may not necessarily rule out cancer. In fact, a link between prostate inflammation and PCa has been established, which may explain why PSA may respond to antibiotic therapy in PCa patients. A large, multiracial study reported an increased risk of PCa in patients with a history of prostatitis (relative risk = 1.3, 95% confidence interval: 1.10–1.54). Furthermore, a recent study by Stark et al suggested that chronic inflammation may be associated with PCa progression. Unfortunately, inflammation is commonly underreported when cancer is detected in tissue samples, an obstacle that prevented us from confirming such an association in this study.
This is the first study to correlate PSA change with Gleason score. Although no statistically significant difference in ΔPSA was noticed between PCa patients in Groups 1 and 2, overall a significantly higher Gleason score was found in patients with the greatest reduction in PSA level (Table 3). This counterintuitive result is not necessarily uncommon. In a recent study by Izumi et al evaluating 642 patients who underwent TRUSBP, the percentage of Gleason score 8–10 tumours in patients with a PSA level of < 3.5 ng/mL was higher, although not statistically significantly so, than in patients with a PSA level between 3.5 ng/mL and 10 ng/mL. In another study evaluating outcomes of patients with Gleason score 8–10 PCa, patients with a PSA level ≤ 2.5 ng/mL had proportionately worse outcomes than their counterparts with higher PSA levels.


Multicystic dysplastic kidney (MCDK) is a common nonhereditary developmental anomaly (1:4300 live births) believed to be a consequence of either an early in utero urinary tract obstruction or a failure of the union between the ureteric bud and the metanephric blastema. The obstructive theory has been investigated by early ligation of the ureter in animal models, resulting in mild to severe multicystic dysplastic changes. Segmental (partial) MCDK is a lesser-known entity that is increasingly encountered as imaging modalities improve, and it is probably attributable to the same pathogenesis.

Methodology

Spatial datasets on urban morphology and surface characteristics were provided as 2D polygons (ESRI shapefiles) and as a 3D city model (City Geography Markup Language, level of detail 2 (CityGML LOD-2)) by the Department of Environment and Health of the City of Munich and the Bavarian Land Surveying Office. Tabular statistics on population at block level were provided by the Bavarian State Statistics Department. The available statistical data include demographics itemized by age group, age of buildings, building characteristics, statistics on the quantity and size of accommodation units, and usage of the ground floor. Building density and population density are calculated accordingly. Spatial statistics were calculated using ESRI ArcGIS and Quantum GIS (QGIS) as geographic information systems (GIS). Spatial and tabular statistics were joined in the GIS via the existing numbering of blocks or houses, respectively, to provide an integrated overview and computation of all available data. Level surface area calculation was performed as a standard task in GIS for the area inside the block courtyards and for roof surfaces. To retrieve the slope angles of all roofs in Maxvorstadt, the CityGML file was processed in FME Desktop, formerly known as "Feature Manipulation Engine", whose assets are powerful transformation tools that allow for manipulation of data structure and content. To calculate façade area, façade length is multiplied by the number of storeys, assumed to be 3.5 m each. Hanging systems are the most suitable option for façade greening in terms of a "low-tech low-intervention" approach, although the limited adaptability of these systems to façade variability has to be considered when calculating their potential length.
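The façade-area rule described above (length × storeys × 3.5 m per storey) can be sketched as a minimal calculation; the function name and the sample dimensions are illustrative, not taken from the study:

```python
# Sketch of the façade-area estimate described in the text.
# Assumption from the text: a uniform storey height of 3.5 m.

STOREY_HEIGHT_M = 3.5  # storey height assumed in the study

def facade_area_m2(facade_length_m: float, storeys: int) -> float:
    """Façade area = façade length x (number of storeys x 3.5 m)."""
    return facade_length_m * storeys * STOREY_HEIGHT_M

# Hypothetical example: a 40 m long block front with 5 storeys
# (5 storeys being typical for a Maxvorstadt block).
print(facade_area_m2(40.0, 5))  # 700.0
```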
Therefore, the building polygons underlying the façades were simplified via the "Simplify Buildings" and "Integrate" commands in ArcGIS by omitting avant-corps of <1 m and "filling" gaps of <0.5 m between adjacent building polygons, which are mainly so-called "slivers" resulting from poor data quality. These values were chosen empirically after iteratively testing their effect on the distortion of the resulting streamlined façades. To exclude polygon edges lying between directly adjoining buildings, the polygons were spatially merged at block level via the "Dissolve" command and thereafter converted to lines ("Polygons to Lines"). The resulting enclosing lines were split at the vertices ("Split Lines at Vertices") marking the edges of buildings, to cope with possibly different heights of neighbouring buildings. The orientation of façades, essential for indicating suitable, well-lit zones, was retrieved by calculating the "Directional Mean" for every façade subsection. Building fronts were grouped by cardinal and intercardinal direction, and the surface area for every class was calculated. Landmarked building fronts were excluded from the calculation since, by law, no alteration to them is allowed. To estimate the potential for implementing green infrastructure in Maxvorstadt, the following assumptions were made:
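As a side illustration of the directional grouping described above, the sketch below computes the bearing of a single façade segment and assigns it to one of the eight cardinal/intercardinal classes. ArcGIS's "Directional Mean" tool averages many segments; this single-segment version only shows the classification step, and all names are illustrative:

```python
import math

def bearing_deg(x1: float, y1: float, x2: float, y2: float) -> float:
    """Bearing of a line segment, in degrees clockwise from north (0-360)."""
    return math.degrees(math.atan2(x2 - x1, y2 - y1)) % 360.0

# Eight cardinal and intercardinal classes, each spanning 45 degrees.
SECTORS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]

def compass_sector(bearing: float) -> str:
    """Group a bearing into one of the 8 compass classes (N centred on 0)."""
    return SECTORS[int(((bearing + 22.5) % 360.0) // 45.0)]

# A segment running from (0, 0) to (10, 0) points due east:
print(compass_sector(bearing_deg(0, 0, 10, 0)))  # E
```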
The urban district Maxvorstadt, used as a model in this study, covers an area of 430 ha. In this model district, land use is distributed as follows: the urban fabric type "block" covers 195 ha, or 45% (Fig. 1). In Maxvorstadt, a typical block has 5 storeys. There is a significant amount of haphazard building development inside these block courtyards, owing amongst other things to the acute housing shortage in the wake of World War II (Fig. 1). Outside the blocks, 25% of the district is roads and pavement surfaces, <10% is green area, including parks and water surfaces, and the remainder is other types of urban fabric, including public buildings such as museums, art galleries and universities, and also high-rise and row-house structures. According to the data provided by the City of Munich, the population of Maxvorstadt residing in the blocks in 2011 was 48,474. Although blocks cover only 45% of the area, 94% of the total population lives in blocks, because most of the other buildings in this district are non-residential. Maxvorstadt has 136 blocks, and the average size of a block is 1 ha. Inside these blocks, 80% of the surface is sealed, mainly by buildings. Green area inside blocks constitutes about 20% of the area. In terms of the type of buildings inside blocks, one-storey buildings cover 14 ha and two-storey buildings cover 6 ha. The other buildings are three storeys or more. The distribution of land use type is summarized as follows (Table 2):
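The headline figures above can be cross-checked with simple arithmetic. All inputs below are quoted directly from the text; the total district population is not stated and is derived here from the 94% share, so it is an estimate:

```python
# Back-of-envelope check of the district statistics quoted in the text.
district_ha = 430.0   # total area of Maxvorstadt
block_ha = 195.0      # area covered by the "block" urban fabric type
block_pop = 48474     # 2011 population residing in blocks

block_share = block_ha / district_ha   # fraction of district covered by blocks
total_pop_est = block_pop / 0.94       # estimated total population (94% share)

print(round(block_share * 100), round(total_pop_est))  # 45 51568
```

The block share reproduces the stated 45%, and the implied district total is roughly 51,600 residents.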