Tuesday, January 28, 2020

Computed Tomography

1. Introduction

One of the most widely used techniques in the medical imaging field is Computed Tomography (CT), a method of acquiring slices of the body based on the attenuation of X-rays. This monograph compiles the most important information about CT, namely its history, physical principles, fundamental instrumentation, data acquisition and processing techniques, as well as its applications. Firstly, a brief tour through the history of the technique will be taken, mentioning some of its most important achievements: the starting point will be the discovery of X-rays, then the creation of the first CT scanner and the development of data analysis and processing algorithms. Then, a concise review of the evolution of the scanners will be given, delineating the different generations of scanners and the key features of each one. In order to understand how an object can be scanned by this technique, the physical concepts that constitute the basis of CT will be reviewed. More precisely, we will discuss the attenuation of radiation while passing through objects, with a short description of how X-rays interact with matter and of the concept of linear attenuation coefficient. The instrumentation needed for CT will be briefly covered, in particular the most important components of a CT scanner. As the data acquired by the scanners are not displayed in the way they are obtained, we will afterwards explain the most used methods to process and analyze the great amount of information acquired by the CT detectors. The process of creating a scale to represent the data (the CT numbers) will subsequently be overviewed, in order to understand how images are created and shown to doctors, together with a description of how CT allows different anatomical structures to be distinguished and how it permits seeing just the structures we want.
After that, some of the many clinical applications of CT will be enumerated, knowing from the start that it is impossible to list them all, which is why only a few will be mentioned. This is not the main goal of this monograph, although it is essential to understand the crucial importance of CT in the field of medicine. Finally, we will try to conjecture about the future of CT, specifically what can be improved, what the current challenges for this technique are, and how they can be overcome. This monograph is part of the Hospital and Medical Instrumentation course and is intended as an overall view of CT, which is why there is no exhaustive detail in each section (for more detail on the topics approached, please read the references). 3-Dimensional reconstruction techniques will not be discussed because they are the topic of another group. Invasive instrumentation will not be explored because it is not covered in the course either.

2. Historical Background

The history of CT started with the discovery of X-rays in 1895 by Wilhelm Conrad Roentgen, which earned him the Nobel Prize in Physics in 1901. In 1917, the Austrian mathematician Johann Radon demonstrated that, by making several projections of a material in different directions and recreating its associated pattern, it was possible to obtain a slice in which the different densities of the material could be characterized. The idea of using these mathematical methods to create images of slices of the human body on radiographic film was proposed by the Italian radiologist Alessandro Vallebona in 1930. Between 1956 and 1963, the physicist Allan Cormack developed a method to calculate the distribution of absorbed radiation in the human body based on transmission measurements, which allowed smaller variations in absorption to be detected.
[2], [3], [4] In 1972, Sir Godfrey Hounsfield (who won the Nobel Prize in Physiology or Medicine in 1979, shared with Cormack) invented the first CT scanner in the United Kingdom while working at the EMI Company, which, at the time, was actually best known for its connection to the music world. The original prototype, called the EMI Scanner, recorded 160 points for each projection at 180 different angles (in steps of 1°), and each slice took 5 minutes to acquire. A 180×160 matrix was then constructed with these data, which took two and a half hours to be analyzed before the final 2D images could be visualized. The first types of scanners required the patient's head to be immersed in a water-filled container in order to reduce the difference in X-ray attenuation between the rays that crossed the skull and the ones that only crossed the surroundings, because the detector could only measure a small range of intensities. [5], [6] During the subsequent years, CT scanners increased in complexity and, based on that evolution, we can distinguish five generations of machines, which will be discussed in the next section (Section 3). Later, in 1989, a new technique was developed in which data acquisition was done continuously (spiral CT scanning), using the movement of the platform on which the patient was lying. [4] Nowadays, CT machines have obviously superior performance compared to the prototypes of the 70s. In fact, several rows of detectors have been added, which now allows multiple slices to be registered at the same time (multislice scanners). These improvements allow data to be represented in 1024×1024 matrices, i.e. a resolution of about 1 megapixel. [7], [8]

3. Evolution of CT Scanners

Over time, the fundamentals of data acquisition and the key characteristics of the machines changed in many ways. This allows us to split the evolution of CT scanners into five generations.
3.1 First Generation - Parallel Beam

The first technique implemented in commercial CT machines consisted of the emission of a parallel X-ray beam that passed through the patient until it reached a detector located on the opposite side. Both the X-ray source and the detector were placed on the edge of a ring with the patient at the center. The X-ray source, as well as the detector, underwent a linear translation motion to acquire data along all the required directions. Then, the X-ray tube and the detector were rotated by about 1°, with the patient as isocenter, a new beam was emitted and the translation movement restarted. This process was repeated until 180° had been covered and, for each cycle of emitted beams, 160 projections of the material under analysis were recorded. The highly collimated beam provided excellent rejection of radiation scattered in the patient. At this point, the most used image reconstruction technique was backprojection. Later in this work (Section 6) we will explain the techniques used in reconstruction. The time needed for data acquisition was extremely long (5 minutes per slice), due to technological limitations. [8]

3.2 Second Generation - Fan Beam

In the second generation, the collimated beam was replaced by a fan X-ray beam and the single detector was replaced by a linear array of detectors. This advance resulted in a shorter scan time, although this technique still used a coupled source-detector translation motion. At the same time, the algorithms used to reconstruct the slice images became more complicated. Because of the vast amount of time needed to acquire data, both the first and second generations of scanners were limited to head and extremity scans, because those were the regions of the body that could remain immobilized during the long scan time. [9], [2], [8]

3.3 Third Generation - Rotating Detectors

The third generation of scanners emerged in 1976.
In this generation, the fan beam was large enough to completely contain the patient, which made the translation movement redundant: the scanner performed only the rotational movement. Like the fan beam, the detectors also became big enough to record all the data of each slice at a time. The detector consisted of a line of hundreds of independent detectors that, as in the second generation, rotated attached to the X-ray source; up to 5 seconds were required to acquire each slice. The power supply was now provided by a slip ring system placed on the gantry, which allowed it to rotate continually without the need to reverse the rotating motion to untwist the power cables, as was needed after each rotation in the first and second generations. [2], [8]

3.4 Fourth Generation - Fixed Detectors

This generation was implemented in the late 70s and its innovation was a stationary ring of detectors that surrounded the patient. In this case, only the X-ray beam moved. The ring consisted of 600 to 4800 independent detectors that sequentially recorded the projections, so detector and source were no longer associated. However, the detectors were calibrated twice during each rotation of the X-ray source, providing a self-calibrating system (third generation systems were calibrated only once every few hours). In fourth generation systems, two detector geometries were used: in the first, a rotating fan beam lies inside the fixed ring of detectors, while in the second the fan beam is outside the ring. These technological advances provided a reduction of scan times to 5 s per image and slice spacing below 1 mm. Both third and fourth generation machines are available on the market and both are successful in medical practice. [8], [2]

3.5 Fifth Generation - Scanning Electron Beam

The innovation of the fifth generation of CT scanners (early 80s) was a new X-ray source system.
While the ring of detectors remains stationary, a new semicircular tungsten strip and an electron gun, aligned with the patient, were added. By directing this electron beam at the anode formed by the tungsten strip, the release of X-ray radiation is induced. This method results in a system with no moving parts, i.e. no mechanical motion is needed to record data, because the detectors completely surround the patient and the electron beam is steered electronically. The four target rings and the two detector banks allow eight slices to be acquired at the same time, which reduces the scan time and, consequently, the motion artifacts. This led to a reduction of the scan time to between 33 and 100 ms, which is sufficient to capture images of the heart during its cardiac cycle, which is why this generation is the most used in the diagnosis of cardiac disease. For that reason, it is also called Ultrafast CT (UFCT) or Cardiovascular CT (CVCT). Because of the continuous scan, special adjustments in the algorithm are needed to reduce image artifacts. [2], [8], [9]

3.6 Spiral Scanners

The idea of creating a spiral CT came with the need for scans of 3-Dimensional images. This system for acquiring 3-Dimensional CT images was born in the early 90s and consists of a continuous translation movement of the table that supports the patient. This technique is based on the third generation of machines and allows scan times of the abdomen to be reduced from 10 minutes to 1 minute, which reduces the motion artifacts. Besides, a 3-Dimensional model of the organ under study can be reconstructed. The most complex innovation of this technique lies in the data processing algorithms, because they must take into account the spiral path of the X-ray beam around the patient. Technically, this was possible only due to the slip ring system implemented in the third generation of scanners.
[9], [8], [10]

3.7 Cone Beam

After the development of new techniques, detectors, methods and algorithms, nowadays the question is: how many slices can we acquire at the same time? The answer lies in the placement of several rows of detectors and the transformation of the fan X-ray beam into a 3-Dimensional cone beam. Nowadays, manufacturers already place 64 rows of detectors (multislice systems) and image quality has reached high levels. Moreover, the complete scan of a structure now takes about 15 seconds or even less. [2]

4. Physical Principles

The basic principle of CT is measuring the spatial density distribution of a human organ or a part of the body. It is similar to conventional X-ray imaging, in which an X-ray beam of uniform intensity is directed at the patient and the image is generated by the projection of the X-rays onto a film. The X-rays are emitted with a certain intensity I0 and they emerge on the other side of the patient with a lower intensity I. The intensity decreases while crossing the patient because the radiation interacts with matter. More precisely, the X-ray tubes used in CT operate at around 120 kV and, at those energies (up to 120 keV), the X-rays interact with tissues mainly by the photoelectric effect (mostly at lower energies) and the Compton effect (at higher energies), although they can also interact by coherent scatter, also called Rayleigh scatter (5% to 10% of the total interactions). The photoelectric effect consists of the emission of an electron (photoelectron) from the irradiated matter, caused by the absorption of the X-ray's energy by an inner electron of the medium. In the Compton effect, an X-ray photon interacts with an outer electron of matter and deviates from its trajectory, transferring part of its energy to the electron, which is then ejected. In coherent scatter, the energy of the X-ray is absorbed by the tissue, causing the electrons to gain harmonic motion, and is then reradiated in a random direction as a secondary X-ray.
[10], [11], [12], [13], [14] CT X-rays are not monoenergetic but, for now, to simplify the understanding of this concept, we will consider them monoenergetic. When an X-ray beam (like other radiation) passes through a material, part of its intensity is absorbed in the medium and, as a consequence, the final intensity is lower than the initial one. More precisely, Beer's Law states that the intensity transmitted through a homogeneous medium depends on the linear attenuation coefficient of the material, µ, and on the thickness of the material, x, according to the following expression:

I = I0 · e^(−µx)     (1)

The problem with conventional radiographs is that they only provide an integrated value of µ along the path of the X-ray, which means that we have a 2-Dimensional projection of a 3-Dimensional anatomy. As can easily be understood, all the structures and organs at the same level will appear overlapped in the image. As a consequence, some details cannot be perceived and some organs may not be entirely seen. For example, it is very hard to see the kidneys in a conventional radiograph because the intestines appear in front of them. [15], [16], [11] Moreover, as there are many values of µ (typically one for each point of the scanned part of the body), it is not possible to calculate them all from a single measurement. However, if measurements of the same plane are made from many different directions, all the coefficients may be calculated, and that is what CT does. As Figure 4 shows, a narrow X-ray beam is produced by the source in the direction of a detector, which means that only a narrow slice of the body is imaged, and the value of intensity recorded by the detector depends on all the material crossed by the X-ray on its way. That is the reason why it is called tomography: the name derives from the Greek tomos, which means cut or section.
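As a quick numerical illustration of Beer's Law, the sketch below (Python; the numeric values are illustrative assumptions, not taken from the text) computes the transmitted intensity for a homogeneous medium and for a path crossing several materials:

```python
import math

def transmitted_intensity(i0, mu, x):
    """Beer's law for a homogeneous medium: I = I0 * exp(-mu * x)."""
    return i0 * math.exp(-mu * x)

def transmitted_through_path(i0, segments):
    """Heterogeneous path: I = I0 * exp(-sum(mu_i * x_i)).
    segments is a list of (mu, thickness) pairs along the ray."""
    total = sum(mu * x for mu, x in segments)
    return i0 * math.exp(-total)

# Illustrative (assumed) value: mu of water is roughly 0.2 /cm at
# CT energies, so 10 cm of water transmits exp(-2), about 14%.
i = transmitted_intensity(1000.0, 0.2, 10.0)
```

Splitting the same 10 cm of water into two 5 cm segments gives the same answer, which is exactly the additivity of µ·x that reconstruction relies on.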
Many data on X-ray transmission through a plane of an object (an organ or a part of the body) from several directions are recorded and then used to reconstruct the object by signal processing techniques. These techniques will be discussed later in this monograph (Section 6). The tightly collimated X-ray beam ensures that no significant scatter is present, in order to assure a high signal-to-noise ratio (SNR), a necessary premise for obtaining a faithful image of the scanned object. For that reason, unlike conventional tomography, in CT the patient's structures located outside the area being imaged do not interfere. [17], [9], [12]

5. Instrumentation

The X-ray system is composed of an X-ray source, collimators, detectors and a data-acquisition system (DAS). The X-ray source is undoubtedly the most important part, because it is what determines the quality of the image. [10], [8]

5.1 The X-ray Source

The principle of the X-ray source (called an X-ray tube) is to accelerate a beam of electrons between two electrodes against a metal target, as shown in Figure 5. The cathode is a coiled tungsten filament crossed by a current that causes the filament to heat up. At high temperatures (2220 °C), the tungsten releases electrons, a process called thermionic emission. A potential difference of 15 to 150 kV is applied between the cathode and the anode, which forces the released electrons to accelerate towards the anode. [10] When the electrons hit the anode, they produce X-rays in two ways. On the one hand, when an electron passes near a tungsten nucleus, it is deflected by an attractive electric force (because the nucleus is positively charged and the electron has a negative charge) and loses part of its energy as X-rays. As there is an enormous number of possible interactions and each one leads to a partial loss of kinetic energy, the produced X-rays have a great range of energies, as Figure 5 shows. This process is called bremsstrahlung (i.e. braking radiation).
On the other hand, if an electron from the cathode hits and penetrates an atom of the anode, it can collide with one of its inner electrons, causing that electron to be ejected and leaving the atom with a hole, which is filled by an outer electron. The difference in binding energy between these two electrons is released as an X-ray. This process is called characteristic radiation, because its energy depends on the binding energy of the electrons, which is characteristic of a given material. [10], [9], [15] The tube current represents the number of electrons that pass from the cathode to the anode per unit of time. Typical values for CT range from 200 up to 1000 mA. The potential difference between the electrodes is generally 120 kV, which produces an energy spectrum ranging from 30 to 120 keV. The tube output is the product of the tube current and the voltage between the electrodes, and high values are desired because they permit a shorter scan time, which reduces the artifacts due to movement (as in heart scans). [10], [8] The production of X-rays in these tubes is an inefficient process and most of the power supplied to the tube is converted into heating of the anode, so a heat exchanger, placed on the rotating gantry, is needed to cool the tube. Spiral CT in particular requires high cooling rates of the X-ray tube and a high heat storage capacity. [8]

5.2 Collimators

The X-ray beam released from the source is a dispersed beam, normally larger than the desired field-of-view (FOV) of the image. Usually, the fan beam width is set to 1 to 10 mm (although recent CT scanners allow submillimetric precision), which determines the width of the imaged slice. The collimator is placed between the source and the patient and is composed of lead sheets that restrict the beam to just the required directions.
An X-ray beam larger than the FOV means that more X-rays are emitted than needed for the scan, and that causes two problems: the radiation dose given to the patient is increased unnecessarily, and the amount of Compton-scattered radiation increases. [10], [8]

5.3 Antiscatter Grids

An ideal CT system in which only primary radiation (X-rays emitted from the source) reaches the detector does not exist: Compton scatter is always present. As this scatter is randomly distributed and carries no useful information about the density distribution of the scanned object, it only contributes to the reduction of image contrast and should be minimized as much as possible (unlike the photoelectric effect, the Compton effect provides low contrast between tissues). As mentioned above, collimators are useful to limit the X-ray beam to the FOV. However, even with a collimator, 50% to 90% of the radiation that reaches the detector is secondary radiation. To reduce the Compton scatter, antiscatter grids can be placed between the detector and the patient. [10] An antiscatter grid consists of strips of sheets oriented parallel to the direction of the primary radiation, combined with an aluminum support, which drastically reduces the scattered radiation that does not share the direction of the primary radiation, as illustrated in Figure 6. In order not to lower the image quality because of the grid shadow, the strips should be narrow. There is, however, a tradeoff between the reduction of scattered radiation (which improves the image contrast) and the dose that must be given to the patient to keep the same number of detected X-rays. [10]

5.4 Detectors

In the beginning, single-slice CT scanners with just one source and one detector were used. However, these took a long time to acquire an image, which is why the evolution brought us single-source, multiple-detector machines and multislice systems.
The third and fourth generations added a wider X-ray fan beam and a larger number of detectors on the gantry (typically from 512 to 768), which permitted more information to be acquired in less time. The detectors used in CT must be highly efficient, to minimize the dose given to the patient, have a large dynamic range, and be very stable over time and against the temperature variations inside the gantry. Three factors contribute to the overall efficiency: geometric efficiency (the fraction of the total detector area that is sensitive to radiation), quantum efficiency (the fraction of incident X-rays that are absorbed and contribute to the signal) and conversion efficiency (the ability to convert the absorbed X-rays into an electrical signal). These detectors can be of two types (shown in Figure 7): solid-state detectors or gas ionization detectors. Solid-state detectors consist of an array of scintillating crystals and photodiodes, while gas ionization detectors consist of an array of compressed gas chambers to which a high voltage is applied to gather the ions produced by radiation inside the chamber. The gas is kept under high pressure to maximize the interactions between X-rays and gas molecules, which produce electron-ion pairs. [10], [8]

5.5 Data-Acquisition System

The transmitted fraction of the incident X-ray intensity (I/I0 in equation 1) can be as small as 10^-4, which is why the DAS must be very accurate over a great range. The role of the DAS is to acquire these data, encode them into digital values and transmit them to the computers so that reconstruction can begin. The DAS makes use of many electronic components, such as precision preamplifiers, current-to-voltage converters, analog integrators, multiplexers and analog-to-digital converters. The logarithmic step needed in equation 3 to get the values of µi can be performed with an analog logarithmic amplifier.
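The logarithmic step mentioned above can also be sketched digitally. Assuming, as is standard (equation 3 itself is not reproduced in this excerpt), that the step recovers the line integral of attenuation, Σ µi·xi = −ln(I/I0), a minimal Python version is:

```python
import math

def attenuation_line_integral(i, i0):
    """DAS logarithmic step: recover sum(mu_i * x_i) along a ray
    from the detector reading I and the incident intensity I0."""
    return -math.log(i / i0)

# Round trip with Beer's law: a ray with total attenuation 2.0
# produces a reading of I0 * exp(-2); the log step recovers 2.0.
reading = 1000.0 * math.exp(-2.0)
total_attenuation = attenuation_line_integral(reading, 1000.0)
```

One such value per ray, over all directions, is the raw input to the reconstruction techniques of Section 6.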
Data transfer is a crucial step in assuring the speed of the whole process, and it used to be done by a direct connection between the DAS and the computer. However, with the appearance of rotating scanners in the third and fourth generations, this transfer rate, which is as high as 10 Mbytes/s, is now accomplished by optical transmitters placed on the rotating gantry that send the information to fixed optical receivers. [8]

5.6 Computer System

The acquisition of the projection data, the reconstruction of the signal, the display of the reconstructed data and the manipulation of the tomographic images are made possible by the computer systems used to control the hardware. Current systems consist of 12 processors which achieve 200 MFLOPS (million floating-point operations per second) and can reconstruct an image of 1024×1024 pixels in less than 5 seconds. [8]

6. Signal Processing and Analyzing Techniques

As data are acquired in several directions (e.g. with increments of 1° or even less) and each direction is split into several distinct points (e.g. 160 or more), at least 28 800 points are stored, which means that there must be efficient mathematical and computational techniques to analyze all this information. A square matrix representing a 2-Dimensional map of the variation of X-ray absorption with position is then reconstructed. There are four major techniques to analyze these data, which we will discuss subsequently. [12]

6.1 Simultaneous Linear Equations

As mentioned above (Section 4), there is a value of µ for each pixel, which means that modern CT scanners deal with 1 048 576 unknowns for each slice (nowadays the matrices used are 1024×1024). As a result, to generate the image of one single slice, a system of at least 1 048 576 equations would have to be solved (one equation for each unknown variable), which makes this technique totally unusable.
In fact, when Hounsfield built the first CT scanner in 1967, it took 9 days to acquire the data of a single slice and 21 hours to compute the equations (and, at the time, the matrix had only 28 000 entries). Besides, nowadays CT scanners acquire about 50% more measurements than strictly needed, in order to reduce noise and artifacts, which would require even more computational resources. [16], [11], [8]

6.2 Iterative

These techniques try to calculate the final image by making small adjustments based on the acquired measurements. Three major variations of this method can be found: the Algebraic Reconstruction Technique (ART), the Simultaneous Iterative Reconstruction Technique (SIRT) and the Iterative Least-Squares Technique (ILST). These variations differ only in the way corrections are made: ray-by-ray, pixel-by-pixel or for the entire data set simultaneously, respectively. In ART, as an example, the data of one angular position are divided into equally spaced elements along each ray. Then, these data are compared with the analogous data from another angular position and the differences in X-ray attenuation are added equally to the corresponding elements. Basically, for each measurement, the system tries to find out how each pixel value can be modified to agree with the particular measurement being analyzed. In order to adjust the measurements to the pixel values, if the sum of the entries along one direction is lower than the experimental measurement for that direction, all the pixels are increased; otherwise, if the sum of the entries is higher than the measured attenuation, the pixel values are decreased. By repeating this iterative cycle, the error in the pixels progressively decreases until an accurate image is obtained. ART was used in the first commercial scanner in 1972, but it is no longer used because iterative methods are usually slow. Besides, this method implies that all data must be acquired before the reconstruction begins.
[9], [16]

6.3 Filtered Backprojection

Backprojection is a formal mathematical technique that reconstructs the image based only on the projections of the object onto image planes in different directions. Each direction is given the same weight, and the overall linear attenuation coefficient is generated by summing the attenuation along each X-ray path that intersects the object from the different angular positions. In a simpler manner, a backprojection can be constructed by smearing each view of the object back through the image plane in the direction in which it was registered. When this process is finished for all the elements of the anatomic section, one obtains a merged image of the linear attenuation coefficients, which is itself a crude reconstruction of the scanned object. An illustration of this technique is represented in Figure 8. From its analysis, it is also clear that the final image is blurred, which means that this technique needs an improvement, which is given by filtered backprojection. [12], [9], [16] Filtered backprojection is therefore used to correct the blurring resulting from simple backprojection. It consists of applying a filter kernel to each of the 1-Dimensional projections of the object, which is done by convolving a deblurring function with the X-ray transmission data before they are backprojected. The filter removes from the data the spatial frequencies responsible for most of the blurring. As we can see in Figure 8, the filter has two significant effects. On the one hand, it levels the top of the pulse, making the signal uniform within it. On the other hand, it introduces negative spikes at the sides of the pulse, and these negative neighborhoods neutralize the blurring effect. As a result, the image produced by this technique is consistent with the scanned object, provided an infinite number of views and an infinite number of points per view are acquired.
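The smearing idea can be shown on a toy 2×2 grid with just two views (plain, unfiltered backprojection from row sums and column sums; the geometry is invented for illustration): a single bright pixel comes back blurred along its whole row and column, which is precisely the artifact the filtering step is there to remove.

```python
def backproject(row_sums, col_sums):
    """Unfiltered backprojection on an n x n grid from two views:
    each view is smeared back uniformly along its rays."""
    n = len(row_sums)
    return [[row_sums[r] / n + col_sums[c] / n for c in range(n)]
            for r in range(n)]

# True image: one bright pixel in the bottom-right corner.
# The backprojection is blurred: the corner is brightest, but its
# row and column each pick up half of its intensity.
img = backproject([0.0, 1.0], [0.0, 1.0])
# img == [[0.0, 0.5], [0.5, 1.0]]
```

With many views the bright spot dominates, but the halo only disappears once each 1-D view is convolved with a deblurring (e.g. ramp) filter before being smeared back.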
[16], [9] Compared with the two previous methods, this process also has the advantage that the reconstruction can begin at the same time as the data are being acquired, and that is one of the reasons why it is one of the most popular methods nowadays. [9]

6.4 Fourier Reconstruction

The last signal processing technique that will be discussed in this monograph is Fourier reconstruction, which consists of analyzing the data in the frequency domain instead of the spatial domain. For this, one takes each angular orientation of the X-ray attenuation pattern and decomposes it into its frequency components by taking its 1-Dimensional Fast Fourier Transform (FFT). In the frequency domain, the scanned image is seen as a 2-Dimensional grid, over which we place a dark line for the spectrum of each view, as Figure 9 shows. According to the Fourier Slice Theorem, each view's spectrum is identical to the values of one line (slice) through the image spectrum, where, in the grid, each view keeps the same angle at which it was originally acquired. Finally, the inverse FFT of the image spectrum is used to achieve a reconstruction of the scanned object.

7. Data Display

As said earlier (Section 6), the linear attenuation coefficients give us a crude image of the object. They can be expressed in dB/cm but, as they depend on the incident radiation energy, CT scanning does not use the attenuation coefficients themselves to represent the image; instead it uses integer numbers called CT numbers. These are occasionally, but unofficially, called Hounsfield units and have the following relation with the linear attenuation coefficients:

CT number = 1000 × (µ − µw) / µw     (5)

where µ is the linear attenuation coefficient of each pixel and µw is the linear attenuation coefficient of water. The CT number therefore clearly depends on the medium.
For human applications, we may consider that the CT number varies from -1000 for air to 1000 for bone, with a CT number of 0 for water, as is easily seen from equation 5. [9], [13], [4], [12] The CT numbers of the scanned object are then presented on the monitor as a grey scale. As shown in Figure 10, CT numbers have a large range and, as the human eye cannot distinguish so many shades of grey, a window is usually used to show a smaller range of CT numbers, depending on what it is desired to see. The Window Width (WW) defines the range of CT numbers and consequently alters the contrast (as Figures 11 and 12 show), whereas the Window Level (WL) sets the centre of the window and, therefore, selects which structures are seen. The lowest CT number of the window, which corresponds to the lowest density tissue, is represented in black, and the highest CT number (highest density tissue) is represented in white.

8. Radiation Dose

As can easily be understood, the radiation dose given to the patient depends on the resolution of the scanner and its contrast, as well as
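The CT number of equation 5 and the WW/WL display mapping can be sketched together (Python; the µ of water and the specific window values below are illustrative assumptions):

```python
def ct_number(mu, mu_water):
    """Equation 5: CT number = 1000 * (mu - mu_water) / mu_water."""
    return 1000.0 * (mu - mu_water) / mu_water

def window_to_gray(ct, level, width):
    """Map a CT number to an 8-bit grey value given WL and WW.
    CT numbers below the window show as black (0), above as white (255)."""
    low = level - width / 2.0
    high = level + width / 2.0
    if ct <= low:
        return 0
    if ct >= high:
        return 255
    return int(round(255.0 * (ct - low) / (high - low)))

mu_water = 0.2                          # illustrative value, in 1/cm
print(ct_number(mu_water, mu_water))    # water -> 0.0
print(ct_number(0.0, mu_water))         # air (mu ~ 0) -> -1000.0
# An assumed soft-tissue window, WL=40 and WW=400: air clips to
# black, dense bone clips to white, soft tissue spans the greys.
print(window_to_gray(-1000, 40, 400))   # 0
print(window_to_gray(1000, 40, 400))    # 255
```

Narrowing WW stretches a small CT-number range over the full grey scale (more contrast), while shifting WL picks which tissues fall inside that range.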

Monday, January 20, 2020

Stress in College: What Causes It and How to Combat It

Many first-year college students face problems as they enter a new educational environment that is very different from that of high school. A common problem is that many first-year students become stressed. For many students, college is supposed to be the most fun time of their life; however, that fun can be restricted if it is limited by stress and other mental illnesses. According to the National Health Ministries (2006), stress is caused by "greater academic demands," the feeling of being independent from family, "financial responsibility," homesickness, being exposed to meeting new people, peer pressure, "awareness of one's own sexual identity," and the abuse of drugs and alcohol (p. 2). However, the causes of first-year students' stress mainly include academic demands, parents, finances, and peer pressure. Stress is an important problem faced by many college students, especially first-year students, and it can have a large impact on college freshmen. For example, according to Hirsch and Keniston (1970), about half of first-year students do not graduate from college due to dropping out (p. 1-20). Also, David Leonhardt (2009) agrees that the United States excels at putting "teenagers in college, but only half of students who enroll end up with a bachelor's degree" (p. 1). In addition, the level of stress seems to increase each year. For instance, the National Health Ministries (2006) claim that many college students have become "more overwhelmed and stressed" than the student generation of the last fifteen years (p. 2). Also, the percentage of first-year students feeling stressed is greater than thirty percent (National Health Ministries 2006). If the problem of stress is not resolved properly, th... ...-funding

Lederman, Doug (2005). Pressure on College Prices. Inside Higher Ed. http://www.insidehighered.com/news/2005/04/20/access

Lehigh University. Challenges in College.
http://www.lehigh.edu/~incso/challenges.shtml Leonhardt, David (2009). Colleges Are Failing in Graduation Rates. New York Times, p. 1. http://www.nytimes.com/2009/09/09/business/economy/09leonhardt.html?_r=1 Lipman, Marc. Personal Interview. March 21, 2010. Marano, Hara E (2004). The Pressure from Parents. Psychology Today. Reviewed on January 24, 2007. http://www.psychologytoday.com/articles/200405/the-pressure-parents National Health Ministries (2006). Stress & The College Student. The University of Illinois at Chicago. http://www.uic.edu/depts/wellctr/docs/Stress%20and%20the%20College%20Student.pdf Zinsser, William. College Pressures. Norton-Simon Publishing, 1978.

Sunday, January 12, 2020

Could The Cold War Have Been Avoided

A medical doctor had assisted a lady in labour. Later facts revealed that, a few months into the child's growth, the parents discovered a very worrying problem. They learnt from a professional that their child had deficiencies which limited the mobility of his left arm and left leg.

Duty of Care and Breach of Duty: The first issue is the standard of care against which the doctor will be judged, and he will be judged by the reasonable standards of a competent doctor. The question arises: how do you test whether an act or failure is negligent? The doctor is to be judged on the state of knowledge at the time of the incident. In this case special skill is required in this field, and the doctor's conduct would be judged against that of a competent doctor exercising that particular art. There are many issues in the case, with arguments for and against. Assuming the doctor present at the time was a junior doctor: if at the time of childbirth he was able to assist successfully in the birth of the child and requested the advice and help of a superior doctor, then it is most likely that he satisfied the Bolam test, even though he or she may have made a mistake. We could therefore draw the line as follows: if the doctor had their work checked by a reasonably competent doctor who believed their actions were reasonable, then the judge may find the doctor had not been negligent. However, if the doctor had properly accepted his post in a hospital in order to gain necessary experience, then he should only be held liable for acts or omissions which a careful doctor with his qualifications and experience would not have committed.

Causation: Causation requires showing that a health care professional was negligent in some form or another. It must also be shown that the doctor, at the time of the labour, caused the patient or victim any injuries.
The test used here would be the 'but for' test, which simply asks whether the patient would have suffered any injuries but for the doctor's mistake, for example by wrongfully applying pressure to the child's arm or leg. If it had been possible for the doctor to deliver the child without complication, then he may have acted negligently and caused the child's condition.

Damages: Once the claimant has proved that there was a duty of care, and a breach of that duty which caused the patient's condition, the patient is entitled to damages. The basic principle in tort is that the claimant should be put back in the position he was in before the negligent act was committed. The aim of damages is not to punish the defendant but to compensate the claimant. The patient can claim damages for: pain and suffering (relatively small), loss of amenity, extra costs (for example, private care), loss of earnings, and future losses. Compensation can be reduced where there is contributory negligence, for example where the patient does not disclose information.

Could The Cold War Have Been Avoided

Could the Cold War have been avoided? Discuss with reference to the key schools of thought on the origins of the Cold War. The Cold War was the product of confrontation between the US and the USSR, reflected in conflicts of interest in the political, ideological, and military spheres, among others (Baylis et al. 2010, p. 51); it lasted nearly half a century and ended with the dissolution of the Soviet Union. There have been many debates about its origin, and some people argue that the Cold War might have been avoided. However, this essay will argue that the Cold War was inevitable, discussing the orthodox and revisionist views of its origins and focusing on the third view, the post-revisionist. The orthodox or traditional view holds that aggressive Soviet expansion created American insecurity, and it was dominant among historians in the US until the 1960s.
They argued that Stalin went against the principles agreed at Yalta, employed a policy of "expansionism" in Eastern Europe, and tried to spread communism all over the world, while the loss of China to communism, the Korean War, and the rise of McCarthyism created a strong anti-communist sentiment in the West (Bastian, 2003). This painted an image in which the US hoped to maintain peace and cooperation with the Soviets but, faced with a Soviet Union that controlled the press and radio and suppressed personal freedom, America had no choice but to react in defense of its own security and its principle of freedom. After the 1960s, when the US got involved in the Vietnam War, some other historians began to challenge the orthodox view and question the motives of US capitalism. These so-called revisionist or left-leaning historians argued that the expansion of US capitalism created insecurity for the Soviet Union. Their representative is William Appleman Williams (1972), who saw US capitalism as aggressively requiring huge foreign markets, investment, and sources of raw materials, and US foreign policy as designed to ensure there was an "Open Door" for American trade and to build a US-dominated international capital market. By contrast, the Soviet Union merely did the same things other countries did to protect its national interests, and reacted defensively out of fear of American global capitalist expansion. However, the third view, the post-revisionist, did not simply blame the Cold War on either side, but showed that its causes lay with both countries. The post-revisionist ideas are more convincing in explaining the inevitability of the Cold War.
Post-revisionists tended to accept the revisionists' idea that the Soviet Union tried to maintain its own security and create a sphere of influence in Eastern and Central Europe out of safety concerns. However, John Lewis Gaddis (2005), one of the most important post-revisionists, argued that Western countries could not be sure what the Soviet Union was up to, and that actions taken to protect Soviet security could still be regarded as threats to Western interests, so the worries about the Soviets were legitimate and understandable. Mutual misunderstanding and reaction therefore reinforced the conflict step by step, as seen in Churchill's 'Iron Curtain' speech and the Marshall Plan, followed by the Berlin blockade and so on. Moreover, Whelan's article (2011) adopted the theory of the Thucydides Trap to further explain this situation. The Thucydides Trap illustrates how the growth of Athenian power, and the fear it inspired in Sparta, made war inevitable. Whelan suggested that the Cold War could not have been avoided once the US had successfully built the atomic bomb and used it on Hiroshima and Nagasaki. This sent a strong signal to the Soviet Union that the United States had the means and the intention to use nuclear bombs again if necessary. That fear drove the Soviet Union to develop its own nuclear weapons, and the explosion of the Soviet atomic bomb in turn posed a threat that pushed the US toward more aggressive military action. It became a cycle that never ended, dragging the Soviets and the US into an endless arms race. In addition, the conflict of ideologies made negotiation even more difficult. According to Gaddis (2005), historians have underestimated the clash of ideologies, which played an essential role in the Cold War. After October 1917, a new ideology, Marxism-Leninism, was born with the Russian Revolution. Marxist-Leninists believed that history was the contradiction of classes and that capitalism was the exploitation of the working class.
But eventually, with the rising consciousness of the working class, revolution would come and bring capitalism to its end. This ideology was therefore by nature a rival to US capitalism and seemed to be a threat to the liberal democracies of the West (Bastian, 2003). Although the ideological rivalry became less important in the 1920s and 1930s, because the Soviet Union and the US were both dealing with Fascism, it increased dramatically by 1945. For instance, in 1946 George Kennan's famous "Long Telegram" suggested that the Soviets' ideology was deeply hostile to US interests and had to be contained. Furthermore, Whelan's article (2011) indicated that the fear, paranoia, and propaganda created by the ideological conflict made it very difficult to see the opposing side's point of view, which left almost no room for the communication and negotiation needed to stop the arms race. He suggested directly that the Cold War was inevitable because "the Cold War had already commenced in October 1917, the start of the Russian revolution". In conclusion, this essay has briefly introduced the orthodox and revisionist views of the origins of the Cold War, and has concentrated on an analysis of post-revisionist thought. From the perspective of the post-revisionists, misunderstanding and reaction caused insecurity in both the Soviet Union and the US, while nuclear weapons reinforced that insecurity into an inevitable Cold War. At the same time, the huge rivalry in the nature of the two nations' ideologies nearly eliminated communication and made the Cold War even more unavoidable.

Friday, January 3, 2020

Johann Friedrich Struensee Biography

Though he was an important figure in Danish history, the German physician Johann Friedrich Struensee is not particularly well known in Germany. The period he lived in, the late 18th century, is known as the Age of Enlightenment. New schools of thought were introduced, and revolutionary ideas made their way to courts, kings, and queens. Some of the policies of European rulers were strongly shaped by the likes of Voltaire, Hume, Rousseau, or Kant. Born and schooled in Halle, Struensee soon moved close to Hamburg. He studied medicine and, just like his grandfather, was to become personal physician to a Danish king, Christian VII. His father Adam was a high-ranking cleric, so Struensee came from a very religious home. After finishing his university career at the age of twenty, he chose to become a doctor for the poor in Altona (today a quarter of Hamburg, Altona was a Danish city from 1664 to 1863). Some of his contemporaries criticized him for using new methods in medicine and for his rather modern worldview, as Struensee was a strong supporter of many enlightened philosophers and thinkers. Because Struensee had already been in contact with the royal Danish court, he was picked as the personal physician to King Christian VII while the latter traveled through Europe. Throughout their journey, the two men became close friends. The King, one in a long line of Danish kings with severe mental issues, was known for his wild antics without regard for his young wife, Queen Caroline Mathilde, sister of the English King George III. The country was more or less ruled by a council of aristocrats, which made the King sign every new law or regulation. When the travel party returned to Copenhagen in 1769, Johann Friedrich Struensee joined them and was appointed the permanent personal physician to the King, whose escapades got the best of him once more. Just as in any good movie, Struensee got to know Queen Caroline Mathilde, and they fell in love.
After he saved the crown prince's life, the German doctor and the royal family became very close. Struensee managed to rekindle the King's interest in politics and started influencing him with his enlightened views. Right from the start of his involvement with the King's affairs, many members of the royal council looked upon Johann Friedrich with suspicion. Nonetheless, he became more and more influential, and quite soon Christian appointed him to the royal council. As the King's mind drifted further away, Struensee's power increased. Soon he was presenting Christian with numerous laws and pieces of legislation that changed the face of Denmark, and the King willingly signed them. While issuing many reforms intended to better the situation of the peasants, among other things making Denmark the first country to abolish serfdom, Struensee managed to weaken the royal council's power. In June 1771, Christian named Johann Friedrich Struensee Secret Cabinet Minister and gave him general power of attorney, de facto making him the absolute ruler of the Danish kingdom. But while he developed an incredible efficiency in issuing new legislation and enjoyed a harmonious love life with the Queen, dark clouds started to gather on the horizon. His conservative opposition in the basically powerless royal council turned to intrigue. They used the still rather new technology of printing to discredit Struensee and Caroline Mathilde, spreading flyers all over Copenhagen and stirring up the people against the shady German physician and the English Queen. Struensee didn't really pay attention to these tactics; he was far too busy radically changing the country. In fact, the rate at which he issued new laws was so high that he alienated even those powers at court that weren't actually opposed to many of his changes; to them, the changes simply came too fast and went too far. In the end, Struensee became so involved in his work that he didn't see his downfall coming.
In a cloak-and-dagger operation, the opposition made the now almost incapacitated King sign an arrest warrant for Struensee, branding him a traitor for consorting with the Queen, a crime punishable by death, among further charges. In April 1772, Johann Friedrich Struensee was executed, while Caroline Mathilde was divorced from Christian and eventually banished from Denmark. After his death, most of the changes Struensee had made to Danish legislation were undone. The dramatic story of the German doctor who ruled Denmark and, for a short while, made it one of the most advanced countries of its time, who fell in love with the Queen and ended up being executed, has been the topic of many books and movies, though not as many as you might think.