With conventional film radiography, excessive exposure produces a "black" film, so overexposure is immediately visible. Digital systems, by contrast, produce acceptable images over a wide range of doses. Digital fluoroscopy systems make it extremely easy to acquire and to delete images, so there may be an inclination to acquire more images than are required. In digital radiology, a higher patient dose generally means better image quality, and therefore a tendency to use higher patient doses than are actually needed. Different medical imaging tasks require different levels of image quality, so doses that bring no additional clinical benefit should be avoided.
Image quality can also be degraded by inappropriate levels of data compression and by post-processing techniques; these new challenges must be part of the optimization process and covered in the clinical and technical protocols. Local diagnostic reference levels should be reassessed for digital imaging, and patient dose parameters should be displayed at the operator's console. Staff training should be carried out whenever digital techniques are introduced, and training in managing image quality and patient dose in digital radiology is essential. Digital radiology will require new regulations and will present new challenges for practitioners. Since digital images are easier to acquire and to transmit, justification criteria must be strengthened. The installation of digital systems should involve clinical specialists, medical physicists, and radiographers to ensure that imaging capability and radiation dose management are properly matched.
Medical diagnostic radiography is the largest single man-made source of X-ray exposure. Recent estimates indicate that X-ray examinations account for the largest share of the per-capita effective dose from man-made irradiation, with diagnostic nuclear medicine procedures contributing a further portion. It is generally accepted that medical X-ray exposure can be lowered considerably without degrading the quality of radiological images. It is therefore important that patients are not subjected to unnecessary radiological examinations, and that they are protected from excessive exposure when radiological procedures are needed. Several major dose surveys have been undertaken, particularly in developed countries. In 1991-92, Harison et al. undertook a pilot study using indirect methods to examine the potential for reducing radiation dose to patients and to make recommendations on efficient procedures. Although the programme was largely restricted to quality control activities such as tube potential (kVp), mAs, sensitometry, and image quality tests, patient dose monitoring was also identified as a vital part of it. In the initial survey, entrance skin exposures (ESEs) for average-size patients undergoing routine radiographs were measured in public hospitals.
The study found very wide variations in patient dose for the same X-ray examination across different hospitals. Factors contributing to the observed variation in patient exposure included the use of less-than-optimal imaging equipment, poor choice of technique factors, and incorrect film-processing methods. It was therefore suggested that a considerable reduction in radiation dose could be achieved without seriously affecting image quality. The use of fast film-screen combinations was probably one of the major factors in lowering ESE by 30-40%. Nearly all of the radiology centres taking part in the study were using fast film-screen combinations and good-quality processing chemicals, so the spread in patient dose was mainly due to the choice of exposure factors, focus-to-film distance, and the output of the X-ray units.
Since the chest and skull mAs and kVp in the first study were respectively higher and lower than the NRPB reference values, it was suggested that the kVp be raised and the mAs lowered. These changes would reduce the patient dose without significant impact on image quality. It was also calculated that raising the tube potential from 60 to 90 kVp would yield an ESE saving of about 60%. Martin et al. found that raising the tube potential by 8-13 kVp in lumbar and thoracic spine examinations produced dose reductions of 26-30%. The experiments also showed that reducing the mAs lowers film optical density and patient dose by 10-50% without a noticeable loss of image quality; in other words, the contrast and resolution of a lighter (lower-density) radiograph can equal those of a darker one.
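As a rough illustration of the kVp/mAs trade-off described above, the sketch below combines two common rules of thumb: the "15% rule" (each ~15% increase in kVp roughly doubles receptor exposure, so the mAs can be halved to hold film density) and the approximation that entrance skin exposure scales with kVp² × mAs. The starting mAs value is purely illustrative, and real savings depend on patient thickness, filtration, and screen-film response:

```python
import math

def mas_for_equal_density(kvp_new, kvp_old, mas_old):
    """15% rule of thumb: each ~15% kVp increase roughly doubles receptor
    exposure, so mAs can be halved to keep film density constant.
    An approximation only; actual behaviour varies with patient and system."""
    steps = math.log(kvp_new / kvp_old) / math.log(1.15)
    return mas_old * 2 ** (-steps)

def relative_entrance_dose(kvp_new, mas_new, kvp_old, mas_old):
    """Entrance skin exposure scales roughly with kVp^2 * mAs at a fixed
    focus-to-skin distance (another rule of thumb)."""
    return (kvp_new / kvp_old) ** 2 * (mas_new / mas_old)

mas_new = mas_for_equal_density(90, 60, mas_old=40)   # 40 mAs is illustrative
ratio = relative_entrance_dose(90, mas_new, 60, 40)
print(f"ESE reduced to {ratio:.0%} of the original")  # roughly 30%
```

With these approximations the saving comes out near 70%, the same order as the ~60% figure quoted above; the difference simply reflects how crude the two rules of thumb are.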
Use of digital imaging for enhancing image quality:
With massive technological development in digital imaging, Picture Archiving and Communication Systems (PACS) have contributed to enhancing image quality in radiology. Maintaining PACS and clinical workstations, however, falls under the overall hospital information technology management programme. While this situation mostly arises in clinical areas outside radiology, where multi-use personal computers also serve as image display devices, it is becoming increasingly common within radiology itself, where PACS workstations are managed as high-end computers rather than as medical display devices. The many differences among the data sets created by different image acquisition systems must be accommodated if the spread of digital imaging across specialties is to progress without severe negative side effects. Two examples illustrate where such problems arise: (i) an emergency physician using the display of a laptop computer to diagnose a possible pneumothorax, and (ii) a pulmonologist using a standard 1.2 megapixel (MP) monitor to review chest images for lung nodules. In both cases, the risk is that the physician will be unable to see fine image details that are present in the image data set because the display device cannot render them in a humanly perceptible way.
The two most important characteristics of a display device are spatial resolution and contrast resolution. Spatial resolution is determined by the number of pixels the display has, whereas contrast resolution is determined by the brightness range of the monitor together with the application of DICOM calibration. Display users and managers of medical display technology must understand these concepts and apply that knowledge to the proper procurement, installation, and servicing of medical displays. Different physicians work with different data types and therefore need different display devices. Some physicians issue diagnostic reports or make treatment decisions based on the data they view, and must therefore have monitors optimized for that task, while others use images together with the accompanying reports as data points in a differential diagnostic process and do not need the highest quality of image display. It is also important to consider safety concerns associated with all equipment, particularly instruments in operating rooms, treatment rooms, procedure rooms, trauma rooms, and ICU patient rooms. A recent development is teleradiology, which allows physicians to review images from home during emergencies; this too requires appropriate displays, for example a high-end laptop fitted with a DICOM-calibrated high-resolution display.
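The spatial-resolution concern above can be made concrete with a small sketch: if an image matrix has more pixels than the display, the viewer software must sub-sample (or force zoom-and-pan), and fine detail such as small lung nodules may be lost at the default view. The matrix sizes below are illustrative examples, not vendor specifications:

```python
def display_fit(image_wh, display_wh):
    """Return the zoom factor needed to fit the whole image on the display;
    a factor below 1 means pixels are discarded at the fitted view."""
    iw, ih = image_wh
    dw, dh = display_wh
    return min(dw / iw, dh / ih)

chest = (2500, 2048)        # illustrative CR chest radiograph matrix
review_1mp = (1280, 1024)   # ~1.2 MP review monitor
diag_3mp = (1536, 2048)     # ~3 MP diagnostic monitor in portrait

for name, disp in [("1.2 MP review", review_1mp), ("3 MP diagnostic", diag_3mp)]:
    z = display_fit(chest, disp)
    print(f"{name}: zoom {z:.2f}", "(sub-sampled)" if z < 1 else "(full detail)")
```

Note that even the 3 MP diagnostic monitor in this example cannot show the full chest matrix at native resolution; radiologists compensate with zoom and pan, which casual review on a 1.2 MP monitor typically does not involve.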
Display devices play a significant role in many areas of healthcare, and the lack of awareness of differences in image quality among display devices is a concern for the medical profession. Displays fall into two broad categories, 'diagnostic' and 'review', also referred to as 'primary' and 'secondary'. Diagnostic displays are used for making a primary diagnosis and for displaying an image on which a report will be based. Review displays are used to display images for purposes other than making a primary diagnosis, so their requirements are less stringent than those for primary displays. To maintain display quality, a device must remain at its specified settings and must not degrade over time; the relevant factors are focus, luminance, contrast, and colour rendition. Display devices should have restricted user access to controls, to prevent brightness and contrast from being adjusted by unauthorized personnel. Displays, whether monochrome or colour, may require colour matching when used in groups. Review-quality display devices do not require the same level of image quality assurance as diagnostic displays; nevertheless, they need cleaning and periodic image quality checks, which can be performed by in-house trained staff. Attention must also be given to display monitors used, for instance, at the homes of clinicians for…
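The diagnostic/review split above naturally maps onto two sets of acceptance limits in a QA programme. The sketch below shows the shape of such a check; the threshold numbers are hypothetical placeholders, and a real programme would substitute the limits from its own guidance (for example AAPM TG18-based criteria):

```python
# Illustrative QA acceptance check for display monitors.
# The numeric limits below are HYPOTHETICAL placeholders, not quoted
# from any standard; substitute your own QA programme's values.
LIMITS = {
    "diagnostic": {"max_luminance_cd_m2": 350, "luminance_ratio": 250},
    "review":     {"max_luminance_cd_m2": 150, "luminance_ratio": 100},
}

def qa_check(category, max_lum, min_lum):
    """Pass/fail a measured display against its category's limits."""
    limits = LIMITS[category]
    ratio = max_lum / min_lum
    ok = (max_lum >= limits["max_luminance_cd_m2"]
          and ratio >= limits["luminance_ratio"])
    return ok, ratio

ok, ratio = qa_check("diagnostic", max_lum=400.0, min_lum=1.2)
print(ok, round(ratio, 1))  # True 333.3
```

Separating the limits by category keeps the stricter diagnostic criteria from being applied needlessly to review monitors, matching the point above that secondary displays need a lighter QA regime.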