
6.2.4. Practical quality assurance guidelines

Some practical guidelines are given below for various situations where the user is confronted with a new or revised clinical procedure using new or updated clinical software. The general test given in Section 6.2.3 is applicable for testing software and for assessing its acceptability.

6.2.4.1. Learn and understand the documentation

When clinical software is to be used for a particular clinical procedure, the user must first understand the objectives of using the software and the exact procedure requirements in order to use that software correctly. This includes learning about the requirements for the radiopharmaceutical, the patient preparation and study set-up, the data acquisition method, the requirements for data analysis and display, the algorithms and assumptions of the software, and the use of any normal database results. The documentation supplied with the software should provide this information.

This also applies when an updated version of existing software is to be used. The documentation should be studied carefully to learn what changes have been made to the existing software.

6.2.4.2. Initial tests of software with clinical sample data

Before using new or updated software routinely, it should be tested using sample clinical studies supplied by the manufacturer. These sample data must always be available and must include at least one normal and one abnormal example. These studies must be supplied with the appropriate values and images. It is helpful to have a knowledgeable representative of the manufacturer present during these tests, especially when using the software for the first time. Any discrepancy between one’s own results and the documentation should be noted and reported.

If a database of validated clinical studies with reference results exists, the clinical software should be tested using this database and the results examined; if differences are noted, reasons should be sought (see Section 6.2.4.5).

6.2.4.3. New data acquisition protocol

New clinical software may introduce a new acquisition protocol that replaces an existing one. It is important to learn how the new software works and how its requirements differ from the old protocol. For static, gated SPECT and whole body studies (all of which can be repeated without a second injection of the radiopharmaceutical), a study should be acquired and the data processed using both new and old protocols and the differences determined. This will not be possible with dynamic studies.

6.2.4.4. Use of a new or different computer system or software

All users are confronted at some time with the task of replacing an old computer system with a new one, or with implementing new applications software. For all routine studies, it is essential that differences in procedures and software results be evaluated between the new and old software in order to preserve continuity in results. Different approaches to acquiring test data could be considered, depending on the situation:

(1) Acquire data simultaneously on both old and new computers and evaluate and compare the results.

(2) Use a set of studies from the old system and process them with the new system, then compare the results. Potential problems are whether the new system can ‘read’ the old studies, and differences in acquisition protocols.

(3) Apply the same database of clinical studies on both systems to check the processed results.

Results to be expected from upgrading the computer system with:

• No change in the clinical software. No changes should be expected. Identical results should be obtained for the same input data when no user interaction is required, and a maximum of 5% variation can be expected when user interaction is required.

• New software from the same manufacturer. A maximum variation of 5% can be expected. The user must be particularly alert to any systematic offset in the results.

• New software from a new manufacturer. A database of studies must be tested. For software with the same function (e.g. ejection fraction determination), a comparison between the old and new situations is required. The user must be very careful to re-establish a normal range of values (see Section 6.2.4.5).
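As an illustration of the kind of comparison described in the points above, the short sketch below takes paired results from the old and new software for the same set of studies, reports the relative variation per study against the 5% figure mentioned above, and uses the mean paired difference as a simple indicator of a systematic offset. The study names, values and tolerance handling are illustrative assumptions, not a prescribed implementation.

```python
# Sketch: compare paired results (e.g. ejection fractions) obtained with the
# old and new software for the same set of studies. The values are
# illustrative only; the 5% tolerance follows the guideline text above.
import statistics

old_results = {"study_01": 62.0, "study_02": 48.5, "study_03": 71.2}  # old software
new_results = {"study_01": 63.1, "study_02": 47.9, "study_03": 70.5}  # new software

TOLERANCE = 0.05  # maximum acceptable relative variation (5%)

differences = []
for study, old_value in old_results.items():
    new_value = new_results[study]
    rel_var = abs(new_value - old_value) / old_value
    differences.append(new_value - old_value)
    flag = "OK" if rel_var <= TOLERANCE else "EXCEEDS 5% - investigate"
    print(f"{study}: old={old_value:.1f} new={new_value:.1f} "
          f"variation={rel_var:.1%} [{flag}]")

# A mean paired difference well away from zero suggests a systematic offset.
mean_offset = statistics.mean(differences)
print(f"Mean paired difference (possible systematic offset): {mean_offset:+.2f}")
```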

6.2.4.5. Normal ranges

Normal values and ranges should be established, preferably with a suitable database of clinically validated studies, if available. This is one of the most difficult aspects of the quality assurance procedure. If such a database is not available or suitable, then it is necessary to build one that has a ‘normal’ population. Any study used to determine a normal value must be carefully archived for future use as reference data and for re-evaluation with new or modified software. The clinical criteria originally used to categorize the study as normal must also be archived for reference purposes.

Once a normal range of values has been established, or a new normal range has been introduced, transfer this knowledge to the clinical users of the system. Confirm this range periodically by reprocessing the same set of normal studies (e.g. for training and continuing education). This serves as a quality control check to ensure that a change in processing method has not crept in over time.
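By way of example, the following sketch derives a normal range from values obtained in clinically validated normal studies and re-checks a reprocessed set against it. The values and the use of mean ± 2 standard deviations are assumptions for illustration; the appropriate statistical definition of the normal range must be chosen for each parameter.

```python
# Sketch: derive a normal range from values measured in archived 'normal'
# studies and confirm it by reprocessing the same studies. The values and the
# mean +/- 2 SD convention are illustrative assumptions.
import statistics

normal_values = [58.0, 61.5, 63.2, 59.8, 65.0, 62.1, 60.4, 64.3]  # e.g. ejection fractions

mean = statistics.mean(normal_values)
sd = statistics.stdev(normal_values)
lower, upper = mean - 2 * sd, mean + 2 * sd
print(f"Normal range (mean ± 2 SD): {lower:.1f} to {upper:.1f}")

# Periodic quality control check: reprocess the same archived normal studies
# and confirm the recomputed values still fall within the established range.
reprocessed = [58.3, 61.2, 63.5, 59.6, 64.8, 62.0, 60.7, 64.1]
outside = [v for v in reprocessed if not (lower <= v <= upper)]
print("Reprocessed values outside the range:", outside or "none")
```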

6.2.4.6. Training and responsibilities

A person should be designated to receive special training in the use of the clinical software. This person, the ‘specialist’, should then be responsible for instructing others in the use of the software. This may include initially testing the software. This person should assist in drawing up a local guideline for using the new software and can help with monitoring the use of the software and the software results.

New personnel must be properly instructed and trained in the use of the software. A record of the persons trained to use particular software should be maintained and only those persons trained should be allowed to use that software in routine clinical practice. It is possible to conceive of a password system whereby access to the clinical software for routine purposes can only be gained by trained persons.
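The password or access control system suggested above could take many forms; a minimal sketch is given below, in which the current user name is checked against a maintained record of trained users before a software package is started. The package names, user names and registry layout are hypothetical.

```python
# Sketch: allow routine use of a clinical software package only by persons
# recorded as trained for it. The registry layout and names are hypothetical.
import getpass

TRAINED_USERS = {
    "cardiac_lvef": {"a.smith", "b.jones"},     # software package -> trained users
    "renal_analysis": {"b.jones", "c.brown"},
}

def may_run(package: str, user: str) -> bool:
    """Return True if the user is recorded as trained for this package."""
    return user in TRAINED_USERS.get(package, set())

user = getpass.getuser()
if may_run("cardiac_lvef", user):
    print(f"{user} is recorded as trained: starting cardiac_lvef ...")
else:
    print(f"{user} is not recorded as trained for cardiac_lvef: access refused.")
```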

6.2.4.7. Record logbook

A logbook should be maintained which records all software testing activities, problems and solutions, changes in procedures and actions taken. New software or software updates, including the software name, version and the date implemented, should be recorded. The logbook may serve as a basis for audits.
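One simple way of keeping such a logbook in machine readable form is sketched below, with one record per event appended to a CSV file. The column names, file name and example entry are assumptions chosen for illustration; a paper logbook or a database would serve equally well.

```python
# Sketch: append one record per event (software update, test, problem, action)
# to a simple CSV logbook. Column names and the file name are illustrative.
import csv
import os
from datetime import date

LOGBOOK = "software_logbook.csv"
FIELDS = ["date", "software", "version", "event", "action_taken", "recorded_by"]

def log_event(software, version, event, action_taken, recorded_by):
    """Append a single logbook entry, writing a header if the file is new."""
    write_header = not os.path.exists(LOGBOOK)
    with open(LOGBOOK, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "software": software,
            "version": version,
            "event": event,
            "action_taken": action_taken,
            "recorded_by": recorded_by,
        })

log_event("cardiac_lvef", "3.2", "software update installed",
          "tested with manufacturer sample studies; no discrepancies", "a.smith")
```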

6.2.4.8. Audits

Audits should be undertaken on a regular basis. Such audits should consider the technical problems associated with software as well as clinical software results. It is essential that the results of audits be communicated to all those involved.

Clinical audits

The purpose of a clinical audit is to certify that results from software are as expected and are consistent. Examples of clinical audits are:

(1) Retrospective review of results. A retrospective review of software results with diagnosis: Did the results correspond with the diagnosis? If not, then action is required. Abnormal sensitivities and specificities are of interest since they may indicate improper use of the system or faulty components. This is also particularly useful when a systematic change has been made in the procedure method (e.g. acquisition or processing parameters, colour tables).

(2) Inter- and intra-observer comparison. Reprocessing a set of reference studies within the department to determine inter- and intra-observer variations in the results and the reproducibility with respect to previous results from the same reference studies.

The data set used at the installation of the software, or another reference data set with known results, could serve as the set of reference studies. A useful method of comparing results from two analyses of the same data is the Bland–Altman method [22], whereby the mean value of the two results is plotted against the difference between them. Such a plot gives an excellent insight into the variation between two measurements, and into any offset between them, over a wide range of values (e.g. for low, medium and high ejection fractions). A linear regression comparison is not sensitive and is inappropriate for these comparisons.
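A minimal sketch of a Bland–Altman comparison of two analyses of the same reference studies is given below, assuming that numpy and matplotlib are available; the paired values are illustrative only.

```python
# Sketch: Bland-Altman comparison of two analyses of the same studies,
# e.g. two observers or old vs new software. Paired values are illustrative.
import numpy as np
import matplotlib.pyplot as plt

analysis_1 = np.array([35.0, 48.2, 55.1, 62.4, 70.3, 75.8])  # e.g. ejection fractions
analysis_2 = np.array([37.1, 47.5, 56.0, 61.0, 72.0, 74.9])

mean_vals = (analysis_1 + analysis_2) / 2.0   # x axis: mean of the two results
diff_vals = analysis_1 - analysis_2           # y axis: difference between them
bias = diff_vals.mean()                       # systematic offset between the analyses
loa = 1.96 * diff_vals.std(ddof=1)            # 95% limits of agreement

plt.scatter(mean_vals, diff_vals)
plt.axhline(bias, linestyle="-", label=f"bias = {bias:+.2f}")
plt.axhline(bias + loa, linestyle="--", label="bias ± 1.96 SD")
plt.axhline(bias - loa, linestyle="--")
plt.xlabel("Mean of the two results")
plt.ylabel("Difference between the two results")
plt.title("Bland-Altman comparison")
plt.legend()
plt.show()
```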

Technical audits

The purpose of technical audits is to review problems, find solutions and achieve improvements. They are useful for following up long term, outstanding problems, so that they are not forgotten. The logbook should contain all necessary information and it is, therefore, an invaluable source of information. Feedback and follow-up of audits is essential. The results of audits must be communicated to all those involved.

Examples of technical audits are:

(1) Problems and solutions. A regular review and documentation of all problems and their solutions, including the frequencies of problems, helps to monitor software and operator performance.

(2) Software failures/errors. When a software failure or error occurs, it is necessary to establish under what conditions it occurred and whether it could recur. A review of the failure of the software to generate results in the way expected must be made regularly and communicated to all involved and to the software supplier in order to obtain an improvement and solution. An example of software failure is when automatic edge detection fails. A review requires understanding the circumstances under which the software fails and assessing whether this is due to improper use or to a software error. Examples of improper use are not using the correct pixel size (zoom factor), incorrect positioning of ROIs, or using the software for a purpose other than that intended (e.g. a left ventricular ejection fraction program used to process the right ventricular ejection fraction). It should be noted that problems encountered may not necessarily relate to an error with the software itself, but to another part of the procedure, for example:

• The technical performance of the study (e.g. timing/time sequence of the study, labelling of the radiopharmaceutical, injection technique, collimator selected, patient positioning);

• The data acquisition (e.g. matrix size, pixel size, images/time per frame for a dynamic study, count statistics; see the sketch after this list);

• The data analysis and display (e.g. ROI selection, background ROI positioning, colour scale).

(3) Software modifications and updates. A mechanism to review the effects of software modifications and updates with all persons involved should be established, not only by discussion but also by actual demonstration and training in the use of the new software.
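As an illustration of the data acquisition check referred to above, the sketch below compares the acquisition parameters recorded with a study against the values required by the departmental protocol, so that improper use (e.g. a wrong matrix size or zoom factor) is caught before processing. The parameter names, protocol values and study header are hypothetical.

```python
# Sketch: check the acquisition parameters of a study against the departmental
# protocol before processing, to catch improper use early. The protocol values,
# parameter names and study header are hypothetical.
EXPECTED = {"matrix_size": 64, "zoom": 1.0, "frame_time_s": 1.0, "frames": 120}

def check_acquisition(header: dict) -> list:
    """Return a list of parameters that deviate from the standard protocol."""
    problems = []
    for name, expected in EXPECTED.items():
        actual = header.get(name)
        if actual != expected:
            problems.append(f"{name}: expected {expected}, found {actual}")
    return problems

study_header = {"matrix_size": 128, "zoom": 1.0, "frame_time_s": 1.0, "frames": 120}
for line in check_acquisition(study_header) or ["acquisition parameters as per protocol"]:
    print(line)
```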

6.2.4.9. Each clinical procedure

(1) Follow the standard procedure, especially for patient positioning, data acquisition, data analysis and display.

(2) Note any divergent methods with the patient study documentation. Non-standard studies must not be included in establishing one’s own library of reference studies.

(3) Record the ROIs selected with the study results (e.g. make a hard copy).

(4) If quantitative data do not agree with the visual images, then this must be investigated.

(5) For quantification that relies on a standard amount of radioactivity measured in a radionuclide activity calibrator, make sure that the calibrator quality control is performed and is acceptable (a sketch of such a check is given after this list).
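For item (5), a simple constancy check of the radionuclide activity calibrator against a decay corrected, long lived reference source is sketched below. The reference source data, dates and the 5% action level are illustrative assumptions; local protocols define the actual sources and tolerances to be used.

```python
# Sketch: constancy check of a radionuclide activity calibrator using a
# long lived reference source (here Cs-137, half-life ~30.1 a). Reference
# activity, dates and the 5% action level are illustrative assumptions.
import math
from datetime import date

HALF_LIFE_DAYS = 30.1 * 365.25           # Cs-137
ref_activity_mbq = 37.0                  # certified activity of the source
ref_date = date(2020, 1, 1)              # certification date
measured_mbq = 33.5                      # today's calibrator reading
ACTION_LEVEL = 0.05                      # flag deviations above 5%

elapsed_days = (date.today() - ref_date).days
expected_mbq = ref_activity_mbq * math.exp(-math.log(2) * elapsed_days / HALF_LIFE_DAYS)
deviation = (measured_mbq - expected_mbq) / expected_mbq

print(f"Expected (decay corrected): {expected_mbq:.2f} MBq")
print(f"Measured: {measured_mbq:.2f} MBq  deviation: {deviation:+.1%}")
if abs(deviation) > ACTION_LEVEL:
    print("Deviation exceeds action level: calibrator QC not acceptable.")
else:
    print("Calibrator constancy within the action level.")
```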

6.2.4.10. Procedure manual

It is important to develop and to maintain a departmental clinical procedure manual that describes the full details of the clinical procedure and the use of the clinical software. This manual should be updated at least annually and whenever modifications have been made. This manual should be an integral part of the quality handbooks of the department and should always be available for reference. All users must be familiar with its contents.

6.2.4.11. Change in clinical procedures

It may be necessary to make a systematic change in a procedure with respect to acquisition parameters (e.g. matrix size, pixel size) or processing parameters (e.g. change the limits between which parameters are calculated, change the filter type and/or filter parameters, introduce a new parameter or new fitting algorithm, use another colour table and/or colour characteristics (logarithmic versus linear scale)).

If this is the case, then:

(1) Assess the effects on quantitative results and on the images, using reference studies.

(2) Edit the departmental clinical procedure manual.

(3) Transfer this knowledge to other users, including nuclear medicine physicians.

(4) Introduce the modification into routine practice.

(5) Note the modification in the logbook together with the date implemented.

(6) Perform a clinical audit to investigate the effect of the changes made (see Clinical audits in Section 6.2.4.8).

It should be noted that a modification to a procedure may have profound implications for the interpretation of studies and should never be made on an ad hoc basis.

6.2.4.12. Continuing education

A review of methods and of any changes should be conducted on a periodic basis. The review of clinical software data analysis should involve all those who participate, including the nuclear medicine physicians. A record should be kept of which software application was reviewed, who took part and when the continuing education took place. A regular plan for continuing education in the use of clinical software should be established.

6.2.4.13. Communications with other users

It may be useful to form or join a local users’ group of the computer system being used. This is often an excellent forum for discussion of common problems (and solutions), with feedback to other users of the same system, as well as to the manufacturer, software supplier or local vendor.

Within the group, sample studies could be circulated for processing with a specific clinical software package and a comparison of the results made. Knowledge of the variation and reproducibility of results is essential for all users of the same software, but might also be useful on a wider scale for presentation at a regional nuclear medicine meeting.