RESEARCH PAPER ON RADIATION DOSAGE CALCULATOR
Lakshya Garg(CSE 2nd year), Vellore Institute of Technology, Vellore, India
Abstract - A network of healthy, high-functioning human beings is required by institutions and societies for the development of our race. This requires individuals who are fit and free from medical anomalies. Each year, high doses of radiation are emitted from various natural and man-made sources, which can lead to malignant tumors and other skin diseases. This threat is present in highly developed settlements as well as in remote areas of the globe. The immediate scenario may not be hazardous, but at the current rate of technological development and resource exploitation the effect may rise manifold. The Radiation Dosage Calculator (RDC 1.0) is a software tool that takes input from the user about their circumstances and produces a report of the amount of dosage they receive in those conditions and the damage that may occur if those circumstances continue. It also acts as a learning tool for medical students, and its data can be used in research publications by institutions, newspapers, e-health journals, etc.
Keywords - RDC: Radiation Dosage Calculator. Administrator: has the authority to add/delete users and grant them permission to access the services. Developer: the workers behind the scenes who create and run the software. TBD: to be decided.
The RDC software is a tool developed to generate public awareness and to provide data for study and other purposes. The main audiences the software aims to cover are: 1. General Public - anyone can access the software and use it for their own purposes. 2. Medical Dept. - the data is calculated with certified scientific methods and can therefore be used in medical reports. 3. Government - the report may be used on social platforms. 4. Corporate Houses - the report can be used in commercials or advertising. 5. Researchers - they can use the data for research and cite it in their reports. This document is intended to be read in the numerical order defined in the contents.
Objective - The first and foremost objective of the RDC software is to be available as a tool for acquiring knowledge and to serve the varied purposes of its users. It shall create alertness not only among the educated members of society but also among the underprivileged. The imminent threat of radiation may seem small, but even fragments of exposure to radiation may prove lethal. The software must therefore be a channel to alert people.
Profits - It can be linked with other surveillance software such as Google Maps, which can make it a profitable business tool. Confidential data, including the locations of countries' nuclear arsenals, shall be provided only to registered users through on-demand channels, which can generate profit. In later stages it can act as a standard source for labeling organisations as eco-friendly or hazardous to mankind. All of these can increase the usage, and hence the productivity, of the RDC software.
REFERENCES
[1] A. Ahnesjo, "Collapsed cone convolution of radiant energy for photon dose calculation in heterogeneous media," Medical Physics, vol. 16, no. 4, pp. 577-592, Jul.-Aug. 1989.
[2] National Cancer Institute, "Radiation therapy for cancer: Questions and answers," Online: http://www.cancer.gov/cancertopics/factsheet/Therapy/radiation, Aug. 2004.
[3] Model Technology, Online: http://www.model.com.
[4] T. R. Mackie, J. W. Scrimger, and J. J. Battista, "A convolution method of calculating dose for 15 MV x rays," Medical Physics, vol. 12, pp. 188-196, 1985.
[5] J. V. Siebers, "Monte Carlo for radiation therapy dose calculations," Online: www.radonc.rdo.vcu.edu/AAPM, 2002.
[6] A. L. Boyer, R. Wackwitz, and E. Mok, "A comparison of the speeds of three convolution algorithms," Medical Physics, vol. 15, pp. 224-227, 1988.
[7] H. Liu, T. Mackie, and E. McCullough, "A dual source photon beam model used in convolution/superposition dose calculations for clinical megavoltage x-ray beams," Medical Physics, vol. 24, no. 12, pp. 1960-1974, Dec. 1997.
[8] D. C. Murray, P. W. Hoban, W. H. Round, I. D. Graham, and P. E. Metcalfe, "Superposition on a multicomputer system," Medical Physics, vol. 18, no. 3, pp. 468-473, May 1991.
[9] J. W. Matthews, F. U. Rosenberger, W. R. Bosch, W. B. Harms, and J. A. Purdy, "Real-time 3D dose calculation and display: A tool for plan optimization," Int. Journal of Rad. Onc., vol. 36, no. 1, pp. 159-165, Aug. 1996.
[10] P. Alderson, M. Wright, A. Jain, and R. Boyd, "Parallel pencil-beam redefinition algorithm," Lecture Notes in Computer Science, vol. 2840, pp. 537-544, Jan. 2003.
[11] P. Belanovic and M. Leeser, "A library of parameterized floating-point modules and their use," 12th International Conference on Field-Programmable Logic and Applications, pp. 657-666, 2002.
[12] A. Jaenicke and W. Luk, "Parameterised floating-point arithmetic on FPGAs," IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), vol. 2, pp. 897-900, 2001.
[13] K. Underwood, "FPGAs vs. CPUs: Trends in peak floating-point performance," International Symposium on Field Programmable Gate Arrays, vol. 12, pp. 171-180, 2004.
[14] Celoxica, "DK design suite," Online: http://www.celoxica.com/products/dk/.
[15] H. H. Liu, T. R. Mackie, and E. C. McCullough, "Correcting kernel tilting and hardening in convolution/superposition dose calculations for clinical divergent and polychromatic photon beams," Medical Physics, vol. 24, no. 11, pp. 1729-1741, Nov. 1997.
[16] Altera, "Quartus II design software," Online: http://www.altera.com/products/software/products/quartus2.
[17] H. Bui and S. Tahar, "Design and synthesis of an IEEE-754 exponential function," IEEE Canadian Conference on Electrical and Computer Engineering, pp. 450-455, May 1999.
[18] Altera, "Stratix devices," Online: http://www.altera.com/products/devices/stratix/.
[19] Altera, "Nios II development kit, Stratix professional edition," Online: http://www.altera.com/products/devkits/altera/kitnios 1S40.html.
[20] J. F. Carter, "Dell Inspiron 6000d," Online: http://www.math.ucla.edu/ jimc/insp6000/p-proc.html, Mar. 2005.
[21] Intel, "Pin," Online: http://rogue.colorado.edu/Pin/index.html, June 2005.
TECHNOLOGY AND SOLUTION
3-Design tool - WordPress
A business rule is a rule that defines or constrains some aspect of a business and always resolves to either true or false. Business rules are intended to assert business structure or to control or influence the behavior of the business. They describe the operations, definitions and constraints that apply to an organization. Business rules can apply to people, processes, corporate behavior and computing systems in an organization, and are put in place to help the organization achieve its goals.
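The "always resolves to true or false" property above can be made concrete with a small sketch. The rule names and the 100 mrem threshold below are illustrative assumptions, not rules taken from the RDC specification:

```python
# Business rules sketched as predicates: each one evaluates to True or False,
# so rules can be composed with ordinary boolean logic.
# The rule names and the 100 mrem limit are hypothetical examples.

def annual_dose_within_limit(dose_mrem: float, limit_mrem: float = 100.0) -> bool:
    """Rule: a user's annual dose must not exceed the assumed public limit."""
    return dose_mrem <= limit_mrem

def user_may_access_confidential(is_registered: bool) -> bool:
    """Rule: only registered users may access confidential data."""
    return is_registered

# Composing rules with plain logic:
ok = annual_dose_within_limit(42.03) and user_may_access_confidential(True)
```

Because every rule is a predicate, the system can evaluate, log, and audit each rule independently.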
4-Development tool - RAD
IBM Rational Application Developer for WebSphere Software (RAD) is an integrated development environment (IDE), made by IBM's Rational Software division, for visually designing, constructing, testing, and deploying Web services, portals, and Java (J2EE) applications.
5-Database platform - DB2
DB2 is a database management system that delivers a flexible and cost-effective database platform for building robust on-demand business applications, and it supports web services standards.
B) Design and implementation constraints
The GUI is available only in English. A login and password are used to identify users. Although almost all services are readily available for public use, confidential data can be accessed only by registered users. Communication is limited to HTTP/HTTPS.
The system runs on a single server. The software data must be kept up to date. Redundancy in the population census data can lead to errors in calculating the radiation dose.
1) The population census must be updated. 2) Only registered users may access confidential services. 3) Government policies on weapons disclosure must not be violated. 4) Feedback from users must be examined.
1) Servers must be online at all times. 2) Backup management shall be in place at all times. 3) Financial aid must be available. 4) Data must be taken from verified sources.
A. Background Information
Generally speaking, dose calculation determines the amount of energy deposited within a phantom (the material used as the absorbing medium in experimental dose measurement) from photons produced by a linear accelerator or other radiation-emitting device. To accurately calculate radiation dose, many different factors must be considered. First, the source of the radiation is used to determine beam energy. When this source is known, the composition of tissues in the patient is determined. Once the composition of tissues and the energy (fluence) of the photon beam are known, the distribution of radiant energy released locally in a patient can be easily calculated. This energy is then further distributed by secondary particles (electrons, positrons, and photons) until it is absorbed as dose, or it escapes from the boundaries of the patient and is no longer of interest. This two-step process is the basis for several methods of dose calculation where the distribution of primary energy is convolved with a kernel describing the energy spread by the secondary particles. The distribution of primary energy is typically referred to as the TERMA (Total Energy Released to MediA). There are three main methods of dose calculation in use today: one class of algorithms is known as convolution, another as superposition, and the last as Monte Carlo analysis. Monte Carlo methods are the most accurate and also the most computationally intensive methods available; they are not used in clinical practice due to the enormous computational load they generate. Convolution and superposition methods are less accurate, but can typically be computed in much less time than Monte Carlo methods. An overview of a convolution method can be found in , and a Monte Carlo overview can be found in . Current running times for dose calculation range from 5 to 45 minutes.
(These numbers were provided by the University of Maryland School of Medicine, where two different dose calculation systems are in use, the Pinnacle system from Philips, and Prowess Panther from Prowess, Inc.)
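The two-step process described above (compute the primary energy per voxel, then convolve it with a kernel for the secondary-particle spread) can be sketched in one dimension. The attenuation coefficient and kernel weights below are invented placeholders, not clinical data:

```python
# 1-D sketch of convolution-style dose calculation: TERMA per voxel,
# then convolution with a kernel that spreads energy to downstream voxels.
# MU and the kernel weights are made-up illustrative values.
import math

MU = 0.05  # hypothetical linear attenuation coefficient per voxel

def terma(fluence: float, depth_voxels: int) -> list[float]:
    """Primary energy released in each voxel along the beam axis."""
    return [fluence * MU * math.exp(-MU * i) for i in range(depth_voxels)]

def convolve_dose(t: list[float], kernel: list[float]) -> list[float]:
    """Spread each voxel's TERMA forward according to the kernel weights."""
    dose = [0.0] * len(t)
    for i, energy in enumerate(t):
        for j, w in enumerate(kernel):
            if i + j < len(dose):
                dose[i + j] += energy * w
    return dose

t = terma(fluence=1.0, depth_voxels=10)
d = convolve_dose(t, kernel=[0.6, 0.3, 0.1])  # kernel weights sum to 1
```

Energy that would spread past the last voxel is simply lost, mirroring the "escapes the boundaries of the patient" case in the text.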
C. Related Work
Much of the research available concentrates on providing parallel implementations of different radiation dose calculation algorithms. The results obtained demonstrate the potential in using multiple processors to compute the dose, and illustrate the parallelism available in dose calculation. Current parallel processing machines, however, present additional facility concerns, such as cost, cooling, space, and power requirements, that may be mitigated by using dedicated hardware. In , Murray and others present a multicomputer approach to speed up dose calculation using transputers. They were able to achieve a 7.81x speedup by using 8 transputers in a tree network. Matthews and others present work aimed at using parallel processing to quickly compute and display dose in . Their algorithm however is not accurate enough for a final dose calculation. Finally, Alderson et al. present a parallel pencil-beam redefinition algorithm that is able to achieve a 5.56x speedup by using a 12 processor cluster over a single processor . In order to achieve the desired dose accuracy, we opt to employ floating point processing in our implementation. With remarkable advances in FPGA technology as well as the availability of several FPGA floating point libraries (e.g., , ), floating point processing is becoming more popular. These libraries have appeared due to the fact that floating point calculations are more efficient than fixed point calculations when the dynamic range of the numbers is large. Underwood has compared FPGA and CPU past, current, and future floating point performance in . This is done by selecting representative processors and FPGAs from different years and constructing a trend line to estimate future performance. 
It was found that in the future FPGAs will be more efficient, in terms of peak MFLOPS (Millions of Floating Point Operations Per Second), than CPUs in performing all types of floating point operations and that currently they are capable of performing more MFLOPS than CPUs for most floating point operations.
The methodology takes its idea from nuke maps. The basic idea is the calculation of the amount of radiation in mrem, a unit of radiation dose. We provide the user with a questionnaire; each specific answer selected carries a certain dose value, and these values are added up at the end to give the total amount of radiation exposure for the given person. The average is around 42.03 mrem/year.
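The questionnaire-scoring method above can be sketched as follows. The questions and per-answer mrem values here are invented placeholders, not RDC's certified figures:

```python
# Sketch of the questionnaire method: each selected answer maps to a dose
# value in mrem, and the values are summed for the final report.
# All questions and dose values below are hypothetical examples.

DOSE_TABLE_MREM = {
    "flights_per_year":  {"none": 0.0, "few": 1.0, "frequent": 5.0},
    "home_construction": {"wood": 0.0, "brick_or_concrete": 7.0},
    "xrays_this_year":   {"0": 0.0, "1": 10.0, "2_or_more": 20.0},
}

def total_exposure_mrem(answers: dict[str, str]) -> float:
    """Sum the mrem value associated with each selected answer."""
    return sum(DOSE_TABLE_MREM[q][a] for q, a in answers.items())

report = total_exposure_mrem({
    "flights_per_year": "few",
    "home_construction": "brick_or_concrete",
    "xrays_this_year": "1",
})
# report == 18.0 for this hypothetical set of answers
```

A real table would hold the certified per-source dose values; the summation logic stays the same.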
It seems tempting to directly convert a software implementation of the collapsed cone algorithm to an FPGA implementation. (In fact, such software was provided to us by researchers at the University of Maryland School of Medicine.) However, our initial analyses suggested that a straightforward conversion would have yielded minimal speed improvements. In order to improve both computation efficiency and accuracy, we have made several modifications to the original collapsed cone algorithm. These modifications were mainly made in the R^p_mn and R^s_mn calculations presented in (2) and (3). The following subsections describe the new algorithm, as well as other implementation details.
A. R^p_mn Calculation
The original dose calculation method has been presented as Equations (1) and (2). Unfortunately, the use of these equations makes the addition of dose correction factors difficult. The main dose correction factors of interest are kernel tilting and kernel hardening, the exclusion of which can have a detrimental effect on the accuracy of the final result of the calculation as shown in . A major reason that these correction factors are difficult to implement is the requirement of a constant Δr due to the parallel lattice of lines needed for the original equations. The reason for this requirement of a constant Δr will be discussed later. We have modified the original collapsed cone algorithm such that a constant Δr is no longer required. This enables us to start a calculation from each voxel v and to easily perform kernel tilting by providing different A_m and a_m values as needed by v. In our modified algorithm, the cones are extended from each voxel with a non-zero TERMA value. TERMA indicates the energy released, and the kernel describes how that energy is dissipated from the interaction site. Therefore, if a voxel contains no TERMA, then there is no reason to ray trace from it, as there will be no energy deposited.
Algorithm 1 presents the key procedure for calculating the primary energy distribution. Procedure Primary_Dose is called iteratively, and the number of iterations depends on the voxel size, which varies depending on both the clinical software used and the region of interest within a patient. With this algorithm, there are more variables that must be tracked throughout the calculation. However, it enables a much more flexible implementation than (1) and (2). There are many input values to the modified algorithm. T_i, ρ_i, θ_mn, Δr, A_m, and a_m represent the same parameters as in . Added to the algorithm are p_diff, which represents a difference value from the previous iteration; num_ρ, the total number of iterations performed; sum_ρ, the previous sum of the ρ values; and p_sum, which represents the previous radiological distance. These additional variables make it possible to run the algorithm with a different Δr for each iteration; they keep track of quantities which are lost if only Equations (1) and (2) are employed. Equation (1) calculates the energy released within a cone. Our modified algorithm, in essence, saves the previous amount of energy released in this cone, calculates a new value for energy released, and subtracts the previous value from it, obtaining a more accurate dose than is possible with Equations (1) and (2) when Δr is changing. An explanation of why a constant Δr must be used for (1) and (2) follows. Assuming that T_i, ρ_i, θ_mn, A_m, and a_m remain constant, only Δr may change. Here, assume that the original Δr1 is large, i.e. Δr1 >> 1, and the second Δr2 is small, i.e. Δr2 << 1. Then the (1 - e^(-a_m ρ_i Δr)) factor in Equation (1) is essentially (1 - e^(-Δr1)), or approximately 1. So, the initial ΔR^p_mn used to calculate R^p_mn would be very large, as most of the energy would have been deposited within the cone.
Now, the exponential factor in Equation (2) is meant as an attenuation factor. So, if Δr2 is small, such that the exponential factor is approximately 1, then nearly the same amount of energy will be deposited in the region of Δr2 as was deposited in Δr1, which is not sensible. The same procedure can be repeated to show that the opposite effect occurs if Δr1 is small and Δr2 is large. To see why our modification works, observe that by using two Δr's, Δr1 and Δr2 where Δr1 ≠ Δr2, the equations will yield consistent results. That is, the same amount of energy will be deposited by Equations (1) and (2) as would be deposited by (1) assuming that a Δr3 was used where Δr3 = Δr1 + Δr2. A block diagram for the R^p_mn calculation is given in Figure 3, where the circles represent functional blocks. The figure illustrates the initial inputs along with the different steps required to produce an output. Most of the functional blocks (such as addition/subtraction, multiplication and division) are realized by the corresponding pipelined modules provided by the Quartus design tool from Altera . The entire implementation is fully pipelined, with buffers inserted to balance the execution paths, and is capable of producing a result every clock cycle after an initial period of latency.
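The consistency argument above (depositing the difference between cumulative values makes the result independent of how the path is partitioned into steps) can be checked numerically. This is a toy 1-D model with a made-up constant, not the actual R^p_mn hardware calculation:

```python
# Numerical illustration of the subtract-the-previous-sum trick: if
# F(r) = 1 - exp(-a*r) is the cumulative fraction of energy deposited,
# then depositing F(r_k) - F(r_(k-1)) at each step gives the same total
# for ANY sequence of step sizes dr. A_M is a hypothetical constant.
import math

A_M = 1.0  # made-up attenuation-like constant a_m

def deposit_along_path(steps: list[float]) -> float:
    """Walk the cone axis with variable step sizes, subtracting the saved
    previous cumulative value at each step."""
    r, prev, total = 0.0, 0.0, 0.0
    for dr in steps:
        r += dr
        cum = 1.0 - math.exp(-A_M * r)  # cumulative energy released so far
        total += cum - prev             # energy deposited in this step only
        prev = cum
    return total

# One big step vs. several uneven small steps covering the same distance:
one_step = deposit_along_path([3.0])
uneven   = deposit_along_path([0.1, 1.9, 0.5, 0.5])
# Both equal 1 - e^(-3): the result does not depend on the partition.
```

This mirrors the claim in the text that splitting a path into Δr1 + Δr2 deposits the same energy as a single step Δr3 = Δr1 + Δr2.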
A. Exponential Kernel Calculation
Instead of performing a table lookup for the kernel calculation, which is typically done in many existing dose calculation systems, we use an exponential kernel calculation first proposed in . There, Ahnesjo showed that a kernel calculated using an exponential function closely resembles one obtained by table lookup. On modern CPUs it is usually more efficient to look up a value in a table than to perform additional computation, but with FPGAs that is not the case. Our implementation of the exponential kernel calculation is based on the exponential function evaluation discussed in , whose authors presented the design and implementation of an IEEE-754 compliant exponential function on an FPGA. We modified that design to suit our application: strict compliance with the IEEE-754 standard was dropped, given our foreknowledge of the inputs, and the design was pipelined. Once a pipelined exponential function was available on the FPGA, it was a simple task to implement the exponential kernel calculation. With the hardware exponential kernel computation, additional costly external memory lookups were avoided.
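The trade-off above (precomputed table versus on-the-fly exponential evaluation) can be sketched as follows. The two-exponential kernel form and its constants are illustrative stand-ins, not the fitted parameters of the actual system:

```python
# Contrast of the two kernel strategies: direct exponential evaluation
# (the FPGA-friendly path) vs. a precomputed lookup table (the CPU path).
# The kernel form and constants A, a, B, b are made-up examples.
import math

A, a, B, b = 1.0, 2.0, 0.1, 0.5  # hypothetical fit parameters

def kernel_exp(r: float) -> float:
    """Evaluate the exponential kernel directly at radius r > 0."""
    return (A * math.exp(-a * r) + B * math.exp(-b * r)) / (r * r)

# CPU-style alternative: precompute once, then look values up.
STEP = 0.01
TABLE = [kernel_exp(STEP * i) for i in range(1, 1001)]

def kernel_lookup(r: float) -> float:
    """Nearest-entry table lookup; accuracy is limited by STEP."""
    i = min(max(round(r / STEP), 1), len(TABLE)) - 1
    return TABLE[i]

# At a point that falls exactly on a table entry the two approaches agree.
err = abs(kernel_exp(1.0) - kernel_lookup(1.0))
```

On an FPGA the direct evaluation avoids the external-memory access that `TABLE` would require, which is the point made in the text.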
B. Pipelined Ray Tracing
The reference software algorithm from the University of Maryland School of Medicine relied on table lookup to perform ray tracing. In that implementation, all ray tracing calculations are performed prior to the start of the dose calculation; when a value is needed, it is simply looked up from a table. This is not efficient on an FPGA because of the slow access to external memory. Instead of precalculating and storing ray tracing values, it is more efficient to simply calculate a value when it is needed. We have designed a fully pipelined ray tracing module. Though this module adds some initial latency to the overall calculation, it does not affect the overall throughput at all. In this way, we were able to eliminate an unnecessary external memory access by replacing a table lookup with calculation.
C. Table Usage
The use of several small tables enables the algorithm to obtain needed values in parallel from on-chip embedded memory blocks instead of loading them sequentially through external memory. One example is the direction x, y, and z memory used in conjunction with the ray tracing module. These memories are designed such that they can be loaded in parallel. With such parallel data access, the ray tracing module is able to compute a new result every clock cycle. If sequential external memory were used, the best that could be accomplished would be a new result every 3 cycles. In order to reduce the impact of external memory access on system performance, we examined several factors to determine which data could be moved from external memory to on-chip memory. First, the size of the matrices was a concern: most modern FPGA chips contain no more than 9 Mb of on-chip memory, and the matrices could only occupy a small portion of this. Second, it needed to be shown that data brought into on-chip memory could be used in such a way as to indeed accelerate the computation of the algorithm.
The direction x, y, and z memories meet both of these criteria. The rest of the data comes from large matrices and is therefore kept in external memory.
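The compute-on-demand ray tracing described above can be sketched with a simplified uniform-step tracer: voxel indices are generated one per iteration as they are needed, with no precomputed table. This is an illustrative software analogue, not the actual pipelined hardware module:

```python
# Simplified on-demand ray tracer: instead of storing every traversal in a
# table, the voxel index crossed at each step is generated as needed.
# Uniform stepping is an assumption made for brevity.

def trace_ray(origin, direction, step, n_steps, voxel_size=1.0):
    """Yield the voxel index (ix, iy, iz) crossed at each step along the ray."""
    x, y, z = origin
    dx, dy, dz = direction
    for _ in range(n_steps):
        x += dx * step
        y += dy * step
        z += dz * step
        yield (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))

# One value is produced per iteration, mirroring the one-result-per-cycle
# behaviour of the pipelined module described in the text.
voxels = list(trace_ray((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), step=0.5, n_steps=4))
# voxels == [(0, 0, 0), (1, 0, 0), (1, 0, 0), (2, 0, 0)]
```

The generator form makes the lazy, streaming nature of the design explicit: nothing is precomputed or stored.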
SUMMARY AND FUTURE WORK
We have implemented an FPGA-based system for radiation dose calculation. The method implements a collapsed cone convolution and has been shown to decrease computation time compared to a commodity CPU. It is efficient both in the number of calculations and in memory bandwidth, achievements that can be of great benefit in improving the quality of radiation therapy. The method shows the ability of new FPGA chips to host a large number of calculation elements and successfully compute a complex software algorithm. In addition to providing a quicker collapsed cone convolution method, we have implemented the algorithm on an FPGA using state-of-the-art design technology. Much work was done in hardware design and in algorithm selection and modification to improve execution efficiency and reduce the required memory bandwidth. In the future, we would like to obtain a new development board that would allow us to implement and test the entire algorithm in hardware. In addition, an interface with an existing commercial dose calculation engine needs to be designed; this would allow for thorough testing and would facilitate the use of this new technology. Also, adding kernel tilting to the current implementation would improve accuracy and put us one step closer to a clinically viable dose calculation chip.