
Neuropsychology 2.0: Sailing the SHiP-NP To New Ports

Summary


Neuropsychological tests are regularly updated, but clinical service, scoring and data management tools have been slow to develop and leverage web-based technology. This article describes efforts to advance these tools. Examples are provided using the Structured History Protocol for Neuropsychology (SHiP-NP; see here) and a data management tool designed to facilitate clinical practice (e.g., data visualization and report writing) and engage patients through an integrated environment that allows for data entry, incentive points and gift cards, secure communication and file exchange, and personalized information and educational materials.


Stuck In Dry-Dock


The utility of neuropsychological testing is well established in numerous clinical scenarios. Tests are regularly updated, but clinical service, scoring, and data management tools have been slow to develop and leverage web-based technology. In short, clinical service methods tend to be practitioner-specific in scope, scoring is often labor-intensive and involves a manual lookup component, and data management tools are typically customized desktop applications (e.g., Excel spreadsheets, MS Access databases) that do little more than score and store data. Correspondingly, few clinical data standards have been adopted, large-scale data collection efforts are rare and quickly lose momentum without funding, and the data repositories that do exist are generally limited to a single clinic, have little or no interoperability, and do not engage patients in the data collection process. These factors have kept the neuropsychological ship stuck in dry dock, but advances in information technology and data standards offer promising opportunities to move the field forward.


The SHiP-NP Sets Sail


The Structured History Protocol for Neuropsychology (SHiP-NP) is a standardized data definition of commonly used clinical assessment tests and patient history instruments. The aim of the SHiP-NP is to increase data harmonization within the field and facilitate data aggregation to the National Institute of Mental Health Data Archive. SHiP-NP development involves both public and private sector investigators, who also formed the National Neuropsychology Network (NNN; see here) to organize and coordinate the interests of other practitioners. The NNN is building a pilot SHiP-NP database application and making it freely available to practitioners with the goal of widespread adoption.


The expert-led initiative ensures that the clinical utility and theoretical underpinnings of the SHiP-NP are solid. Thus, the main challenge to widespread adoption is applied in nature: how to design a system that addresses the practical mechanics of getting patients to complete the instruments and getting practitioners to enter and use results as part of their day-to-day operations and decision making. Designing such a vessel to navigate the field forward involves identifying the needs and requirements of the targeted users.


Now Boarding


Knowing a vessel's passengers and intended use is key to its design. Historically, neuropsychological applications have been practitioner-centric and focused on generating the main work product: a clinical report for a single patient. However, there are three main stakeholders, each with different interests and needs:

  1. Practitioners

    1. Test scoring

    2. Pooling results (e.g., in a table, separated into cognitive domains)

      1. Single visit and repeat testing results

    3. Report writing

  2. Patients

    1. Document symptom history

    2. Obtain disease information and education

    3. Track outcomes, monitor goals

  3. Organizations and Associations

    1. Aggregate data

      1. Test development and validation, norms, etc.

    2. Implement and track clinical standards

    3. Research advancement of the field

When application development addresses the needs of all three stakeholders, the relationship between the practitioner and patient becomes much more collaborative and interactive, a continuous quality improvement feedback loop is put in place, and the clinical setting becomes framed within larger organizational goals for advancing the field.


A Portal Portending Ports


Vessel design also incorporates the ports to be visited. A cautionary note is necessary when anticipating the direction and needs of neuropsychology; however, a strong case can be made that advancement of the field will likely travel to some of the following ports:

  1. Increased standardization and data harmonization

  2. New test development

    1. Methods

      1. Testing using Virtual Reality (VR) and Virtual Augmented Reality (VAR)

      2. Test administration outside the clinic via web-based computer and other device administration (e.g., improved ecological validity)

      3. Improved test accessibility (e.g., for blind and low-vision patients)

    2. New domains (e.g., Social Functioning tests administered via VAR)

  3. Test scoring and interpretation

    1. Data visualization

    2. Connecting results to literature references

    3. Monitoring patient outcomes and recommendations

  4. Patient engagement and outcome tracking

    1. Web-based portal

      1. Communication

      2. Test administration

      3. Instrument data entry and completion with incentives

      4. Connecting test scores to appropriate educational materials and training

    2. Monitoring outcomes

  5. Large scale development

    1. Multiple clinics, research projects, surveys, etc. within a single, integrated environment

Christening The SHiP-NP


Taking both the stakeholders and future needs of neuropsychology into account, an example SHiP-NP vessel was built, christened, and set forth on its maiden voyage. Below are the components and related feature set, separated by stakeholder interest.


Patients

The main challenges to patient engagement and completion of assessment instruments are addressed in a separate blog article (see article titled “Patient Disease Surveys: Turning Giants Into Windmills” here). With modifications related to the SHiP-NP, example solutions from the web-based patient portal include:


Guided Data Entry

The portal interface should simplify data entry and walk patients through it. Examples include (see images):

  • An easily identifiable, single-click button to start data entry.

  • Visual cues that indicate progress.

  • A self-paced, stop-anytime environment with automatic navigation to the next item upon return (a minimal sketch of this resume logic follows the list).
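
As a rough illustration of the resume logic mentioned in the last bullet, the sketch below computes the fraction complete (for the progress cue) and the first unanswered item to return to. The item and state types are made up for the example; they are not part of the SHiP-NP or the portal's actual code.

```typescript
// Minimal sketch: resume an instrument at the first unanswered item
// and report progress for a visual cue. Types and names are illustrative.
interface Item {
  id: string;
  prompt: string;
  response?: string; // undefined until the patient answers
}

interface ResumeState {
  nextItemIndex: number; // where to place the patient on return
  progress: number;      // fraction complete, e.g., for a progress bar
}

function resumeInstrument(items: Item[]): ResumeState {
  const answered = items.filter((i) => i.response !== undefined).length;
  const nextItemIndex = items.findIndex((i) => i.response === undefined);
  return {
    nextItemIndex: nextItemIndex === -1 ? items.length : nextItemIndex,
    progress: items.length === 0 ? 1 : answered / items.length,
  };
}

// Example: patient stopped after the second item.
const demo: Item[] = [
  { id: "q1", prompt: "Trouble sleeping?", response: "No" },
  { id: "q2", prompt: "Feeling anxious?", response: "Sometimes" },
  { id: "q3", prompt: "Trouble concentrating?" },
];
console.log(resumeInstrument(demo)); // { nextItemIndex: 2, progress: ~0.67 }
```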


Visual Appeal

Form design is a key issue for engaging subjects, minimizing the data entry burden, maximizing completion rates and ensuring accurate data entry. A couple of examples from the SHiP-NP will be discussed here. More form design tips can be found in the blog article titled “Eyes vs. Fingers: Form Design Lessons For Accurate Data Entry” here.


Standard instrument design uses a text-heavy, item-by-item format, which works sufficiently well, and sometimes there is little or no alternative. However, instrument items often have a physical or tangible target (e.g., happy, sleep) or a complicated concept that is straightforward to demonstrate (e.g., syncope), creating an opportunity to incorporate comprehension aids (e.g., icons, pictures, videos, audio). This often makes data entry more pleasant, faster, easier, and more accurate, and it may help lower the required reading level. Moreover, pairing items with visuals can help declutter reports (e.g., item prompt text can be eliminated; see the 'Practitioner' section below).


As an example, note in the comparison below the utility of using even simple icons and buttons.

The use of color and space (including empty space) is a powerful design tool. Vertical space is generally easier for users to navigate. In the comparison below, the combination of icons, space, and color was used to ease the mental effort required to identify the correct response. This arrangement also eliminated excess verbiage (i.e., “Maternal”, “Paternal”). Note that the family relationship items only appear if “Yes” is endorsed to having a family history of Alzheimer’s disease (a small sketch of this conditional-display logic follows the form comparison below).


Form Without icons, space, and color.

Form With icons, space, and color.
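
The conditional display described above amounts to simple skip logic: a follow-up item is rendered only when its screening item is endorsed “Yes”. The sketch below shows one way this could look; the field names and visibility rule are illustrative, not taken from the SHiP-NP data definition.

```typescript
// Minimal sketch of form skip logic: family-relationship items are only
// rendered when the screening item is answered "Yes". Field names are
// illustrative placeholders.
type Responses = Record<string, string | undefined>;

interface FormItem {
  field: string;
  label: string;
  // Returns true when the item should be shown for the current responses.
  visibleWhen?: (r: Responses) => boolean;
}

const familyHistoryItems: FormItem[] = [
  { field: "famHxAlzheimers", label: "Family history of Alzheimer's disease?" },
  {
    field: "famHxAlzheimersRelation",
    label: "Which family members?",
    visibleWhen: (r) => r["famHxAlzheimers"] === "Yes",
  },
];

function visibleItems(items: FormItem[], responses: Responses): FormItem[] {
  return items.filter((i) => i.visibleWhen === undefined || i.visibleWhen(responses));
}

// "No" hides the follow-up item; "Yes" reveals it.
console.log(visibleItems(familyHistoryItems, { famHxAlzheimers: "No" }).length);  // 1
console.log(visibleItems(familyHistoryItems, { famHxAlzheimers: "Yes" }).length); // 2
```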

Incentives

The use of incentives for instrument completion is a powerful mechanism to engage and retain patients and increase data capture rates. Various incentives can be used (e.g., gamification, points, tickets or discounts to events, parking validation, etc.). As one example (see image below), the SHiP-NP registry allows patients to earn points for completing instruments which can be exchanged for gift cards (e.g., store merchants, restaurants, etc.).
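
Mechanically, this is a small points ledger: completions add points and redemptions subtract them. The sketch below illustrates the idea; the point values, redemption cost, and merchant are invented for the example and do not reflect the registry's actual settings.

```typescript
// Illustrative incentive ledger: instrument completions earn points that
// can be exchanged for gift cards. Values are placeholders.
interface LedgerEntry {
  description: string;
  points: number; // positive = earned, negative = redeemed
}

function balance(ledger: LedgerEntry[]): number {
  return ledger.reduce((sum, e) => sum + e.points, 0);
}

function redeemGiftCard(ledger: LedgerEntry[], cost: number, merchant: string): boolean {
  if (balance(ledger) < cost) return false; // not enough points yet
  ledger.push({ description: `Gift card: ${merchant}`, points: -cost });
  return true;
}

const ledger: LedgerEntry[] = [
  { description: "Completed SHiP-NP history", points: 50 },
  { description: "Completed Neuro-QOL", points: 25 },
];
console.log(redeemGiftCard(ledger, 60, "Coffee shop")); // true; 15 points remain
```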

Secure Communication

Patients completing instruments often have questions about items. There should be a secure, fast way to ask questions and receive answers. An integrated messaging system addresses this need well (see image). There are also file transfer and device camera capture (e.g., phone) features to enhance clinical service (e.g., documenting clinical features, sending the patient a finished report/summary).


Reports

Using the patient’s data to dynamically create report content that is meaningful, personally relevant, and actionable provides a powerful “Me-First” reason for patients to complete instruments. In addition to more traditional data-driven charts, graphs, etc., any web-based content can be used (e.g., videos, links, podcasts, pictures, etc.) and integrated with logic that determines when it appears. Consider the example below, incorporated into the SHiP-NP registry, whereby concentration problems either reported by the patient or demonstrated on test scores trigger attention-related report content with specific recommendations and educational materials (see image).

Report content can be based on data entered by the patient or staff, or imported from other systems, and the trigger expression can be as sophisticated as needed (i.e., full access to the JavaScript programming language). More information about organizing and creating engaging report content can be found in a separate blog article titled “Disease Information: Making It Relevant To Patients” (see here).
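
A minimal sketch of such trigger logic is shown below: each block of report content carries an expression evaluated against the patient's data, and the block is rendered only when the expression is true. The variable names and the percentile cutoff are illustrative assumptions, not the registry's actual trigger expressions.

```typescript
// Sketch of conditional report content: a block appears when its trigger
// expression evaluates to true against the patient's data.
interface PatientData {
  reportsConcentrationProblems: boolean;
  attentionTestPercentile: number;
}

interface ReportBlock {
  title: string;
  html: string; // any web-based content (video embeds, links, text, etc.)
  trigger: (d: PatientData) => boolean;
}

const attentionBlock: ReportBlock = {
  title: "Improving Attention and Concentration",
  html: "<p>Recommendations and educational materials about attention…</p>",
  trigger: (d) =>
    d.reportsConcentrationProblems || d.attentionTestPercentile < 16, // hypothetical cutoff
};

function renderBlocks(blocks: ReportBlock[], data: PatientData): string {
  return blocks
    .filter((b) => b.trigger(data))
    .map((b) => `<h3>${b.title}</h3>${b.html}`)
    .join("\n");
}

// A low attention score alone is enough to trigger the content.
console.log(
  renderBlocks([attentionBlock], {
    reportsConcentrationProblems: false,
    attentionTestPercentile: 9,
  })
);
```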


It should be noted that reports can also be a mechanism for patients to verify responses. Giving patients access to a “SHiP-NP Summary” report PRIOR to the clinic appointment for response verification is a good example (see report component examples in the “Practitioner” section below).


Practitioner

One of the primary reasons for the limited adoption of data standardization initiatives within clinical settings is the cumbersome, often delayed, process of receiving and consuming the results. The goal is to have the instruments completed, scored, and put into a reviewable format at the time of the clinic visit, and then integrated with cognitive testing results. The practical mechanics and logistical efforts by staff and patients toward this aim are discussed in the article titled “Patient Disease Surveys: Turning Giants Into Windmills” (see here).


Consider the data definition alone as part of the challenge of achieving this goal. The SHiP-NP contains over 2000 unique variables, the bulk of which are not relevant to any given clinical case. For example, there are multiple questions about various languages (e.g., Spanish, German), but patients do not see these questions unless they speak a second language. Moreover, cognitive test scores add another 1000+ variables. For large-scale adoption to occur, practitioners must be provided tools to overcome the daunting task of getting their hands around the data quickly for a given case. Using the same technology as with patients, connecting the data to dynamic report content directly addresses this issue.


The main principles guiding the development of the clinical report were:

  • Hide irrelevant information

    • e.g., only show diseases with a positive family history

  • Visualize Results

    • Highlight problems with visual cues

      • e.g., Red checkmark associated with problem area

  • Color-code scales

    • e.g., Color coded red (poor) to green (excellent) T-score range

  • Use spatial location for clarification

    • e.g., right/left separation of mother versus father’s side family disease history

  • Use icons to minimize descriptive text

  • Display individual responses when clinically meaningful

    • e.g., Generate a list of questions marked “Severe or Extreme” on the WHO-DAS 2.0 (see the sketch after this list).
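
The last principle is essentially a filter over item-level responses. The sketch below illustrates it with a hypothetical helper; the item text and ratings are placeholders rather than actual WHO-DAS 2.0 content or scoring rules.

```typescript
// Sketch: list only the items a patient marked "Severe" or "Extreme or
// cannot do". Item text and ratings are illustrative placeholders.
interface RatedItem {
  text: string;
  rating: "None" | "Mild" | "Moderate" | "Severe" | "Extreme or cannot do";
}

function severeItems(items: RatedItem[]): string[] {
  return items
    .filter((i) => i.rating === "Severe" || i.rating === "Extreme or cannot do")
    .map((i) => i.text);
}

const responses: RatedItem[] = [
  { text: "Standing for long periods", rating: "Mild" },
  { text: "Concentrating on a task for ten minutes", rating: "Severe" },
  { text: "Dealing with people you do not know", rating: "None" },
];
console.log(severeItems(responses)); // ["Concentrating on a task for ten minutes"]
```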

Using these principles, both cross-sectional and longitudinal (i.e., repeat testing) reports were designed. The report sections were separated into content domains (e.g., symptoms, education history, etc.). For more information on report design techniques, see the article titled “Patient Registries: Clinical Report Design Techniques” here. The principles were used throughout the reports; several example components are provided below.


Self-Report Instruments

Results from the Neuro-QOL touch upon nearly all of the principles above. Consider first a patient who does NOT endorse any problem items as happening in the “Sometimes”, “Often”, or “Always” range. In this case, results appear as shown below.

Note in this example:

  • The section length is short

    • Individual items are not shown

  • Subscales are associated with an icon

    • Minimizes text length

    • Easily learned for quick data consumption

  • A statement appears noting that no items were endorsed within range

  • A green icon with a minus symbol appears, which is used throughout the report to indicate negative findings

  • Color-coded scoring

    • Both T-scores and raw scores are color coded (see image)

    • Raw scores are typically NOT color coded for cognitive tests, since conversion scores generally cover the entire range (1–99%). This is not the case for the Neuro-QOL scales (e.g., the maximum T-score for the “Upper Mobility” scale is 53.8), so raw scores were also color coded to help with interpretation. For example, notice the raw vs. T-score color discrepancy when the maximum severity is endorsed for the scales below (see image).

In contrast, when there are items endorsed within range (see image below):

  • A 'Details' summary table appears

    • Each subscale is listed and separated into response categories, creating 3 cells per subscale

    • The text of items endorsed within range is placed into their respective cells

    • The Details table appears when at least one item is endorsed within range. It is easy to appreciate the utility of knowing that the T-score shows generally good functioning within a domain (e.g., Emotional and Behavioral Dyscontrol) while important issues remain to address (e.g., “I had trouble controlling my temper”). A minimal sketch of assembling this table follows the list.
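
The sketch below shows one way the Details table could be assembled, assuming the three in-range response categories are “Sometimes”, “Often”, and “Always” (per the range described above). Subscale names, item text, and the out-of-range categories are illustrative.

```typescript
// Sketch: group items endorsed within range into one of three cells per
// subscale. Categories and item text are illustrative placeholders.
type InRange = "Sometimes" | "Often" | "Always";

interface Endorsement {
  subscale: string;
  itemText: string;
  response: "Never" | "Rarely" | InRange;
}

type DetailsRow = Record<InRange, string[]>;

function buildDetailsTable(endorsements: Endorsement[]): Map<string, DetailsRow> {
  const table = new Map<string, DetailsRow>();
  for (const e of endorsements) {
    const resp = e.response;
    if (resp === "Never" || resp === "Rarely") continue; // out of range, not shown
    const row = table.get(e.subscale) ?? { Sometimes: [], Often: [], Always: [] };
    row[resp].push(e.itemText);
    table.set(e.subscale, row);
  }
  return table;
}

const demoTable = buildDetailsTable([
  { subscale: "Emotional and Behavioral Dyscontrol", itemText: "I had trouble controlling my temper", response: "Sometimes" },
  { subscale: "Anxiety", itemText: "I felt nervous", response: "Rarely" },
]);
// Only the Dyscontrol row appears, with the item in its "Sometimes" cell.
console.log(demoTable);
```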

As another example, the display of results for the Family Diagnostic History instrument uses many of the same principles as the Neuro-QOL but adds the use of spatial location. Starting again with the case where a negative family diagnostic history is reported, the display includes only the pedigree tree with all family member nodes in green, a negative findings statement, and a green icon with a minus symbol (see image). Notice that right/left positioning is used to separate the mother’s and father’s sides of the family.

When a positive diagnostic history is reported, the pedigree tree color codes family members based on the number of diagnoses reported and the details table appears with icons, color coding, and a description of the various diagnoses for each family member. The icons used match those used on the questionnaire to assist with interpretation, visual appeal, and text minimization (see below).

Cognitive Tests

Computerized scoring for individual tests is increasingly common; however, the process of transforming a full battery of raw scores into a common metric (e.g., percentile scores) and organizing the results into a clinical report remains a labor-intensive task, particularly for repeat testing. Thus, an important aim was automating this process.


As applicable, cognitive test forms were designed to capture:

  • Staff entered raw scores

  • Automated look-up / calculated conversion scores (e.g., Scaled / T / Percentile scores)

  • The demographics used to determine the look-up / calculated values

  • The norms used.

An example form is shown below using the Trail-Making Test. Notice that both the Heaton and MOANS norms were used, which involve different demographic look-up values (i.e., MOANS uses only age). As raw values are entered, the associated Scaled / T / percentile scores are automatically generated. The demographic values used are also automatically populated based on patient characteristics.
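
The sketch below illustrates the general shape of this automated conversion: staff enter a raw time, and each configured norm set returns a conversion score using only the demographics it requires. The conversion below is a crude placeholder, NOT the published Heaton or MOANS norms; a real implementation would look values up in those tables.

```typescript
// Placeholder norm look-up sketch; not actual Heaton or MOANS values.
interface Demographics {
  age: number;
  yearsEducation: number;
}

interface NormSet {
  name: string;
  // Returns a scaled score for a raw completion time (seconds), using only
  // the demographics this norm set requires (e.g., MOANS uses only age).
  scaledScore: (rawSeconds: number, demo: Demographics) => number;
}

// Placeholder conversion: faster times yield higher scaled scores (1-19).
function placeholderScaled(rawSeconds: number, expectedSeconds: number): number {
  const score = Math.round(10 + (expectedSeconds - rawSeconds) / 5);
  return Math.max(1, Math.min(19, score));
}

const normSets: NormSet[] = [
  {
    name: "Age-only norms (MOANS-style placeholder)",
    scaledScore: (raw, d) => placeholderScaled(raw, 25 + d.age / 4),
  },
  {
    name: "Age + education norms (Heaton-style placeholder)",
    scaledScore: (raw, d) => placeholderScaled(raw, 25 + d.age / 4 + (12 - d.yearsEducation)),
  },
];

// Staff enter the raw time; conversion scores populate automatically.
function scoreTrailMakingA(rawSeconds: number, demo: Demographics): Record<string, number> {
  const scores: Record<string, number> = {};
  for (const n of normSets) scores[n.name] = n.scaledScore(rawSeconds, demo);
  return scores;
}

console.log(scoreTrailMakingA(42, { age: 68, yearsEducation: 16 }));
```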

Test scores were then automatically brought into a boilerplate clinical report and inserted into a “Dynamic Table” (see image below). More information about Dynamic Table functionality can be found in the article titled “Patient Registries: Clinical Report Design Techniques” here. In short, clinic visits were separated into columns, tests into rows, and cells contained percentile scores with a color-coded background; the change from the initial visit determined whether an indicator icon appeared.



Background color coding was based on the percentile cut-points in the table below. For those with a programming interest, the CSS classes used the following hex codes, respectively (#ff0000, #e21c00, #c63800, #aa5500, #8d7100, #718d00, #55aa00, #38c600, #1ce200, #00ff00).
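
For illustration, the function below maps a percentile to one of those ten hex codes. The actual cut-points appear in the table image rather than in the text, so equal decile buckets are assumed here.

```typescript
// Sketch of the cell background logic using the hex codes listed above.
// Equal decile cut-points are assumed; the article's actual cut-point
// table is shown as an image.
const CELL_COLORS = [
  "#ff0000", "#e21c00", "#c63800", "#aa5500", "#8d7100",
  "#718d00", "#55aa00", "#38c600", "#1ce200", "#00ff00",
];

function cellColor(percentile: number): string {
  // Clamp to 1-99, then map each (assumed) decile to one color, red to green.
  const p = Math.min(99, Math.max(1, percentile));
  const bucket = Math.min(CELL_COLORS.length - 1, Math.floor(p / 10));
  return CELL_COLORS[bucket];
}

console.log(cellColor(3));  // "#ff0000" - impaired range shows red
console.log(cellColor(55)); // "#718d00"
console.log(cellColor(95)); // "#00ff00" - high scores show green
```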

The presence of an indicator icon was based on percent change from baseline. Ideally, whether an indicator icon appears would be based on reliable change metrics for each test, but this has not been implemented yet. Instead, the same change score percentages were used for all tests (see table below), although this could be set up to be test-specific.


Automated Clinic Notes

In addition to populating a report boilerplate with test values, the next level of automation involves using database variable values to build increasingly sophisticated phrases, sentences, and paragraphs. The challenge is to ensure the notes are accurate, meaningful, and usable with little or no modification. In this regard, the most fruitful report components are those that are easier to operationalize (e.g., Patient Appearance versus Clinical Impressions) and/or involve commonly used phrases or descriptions within a given clinical setting.


For example, the form below allows practitioners to build sentences about speech and language characteristics in a point-and-click manner. A similar approach could be used to select from a standard set of recommendations.

Building longer sentences and paragraphs typically involves combining static and dynamic text. Notice in the example below that variable values can be inserted where appropriate and functions can be called (i.e., “rating” in the example) that incorporate any number of variables and programming logic to generate the text that is returned.
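
The sketch below illustrates the general pattern: static template text with placeholders for variable values, plus a “rating”-style function that converts a score into a descriptive phrase. The template syntax, variable names, and cut-offs are stand-ins for the system's own features, not its actual implementation.

```typescript
// Sketch of combining static and dynamic text in a clinic note.
interface CaseData {
  firstName: string;
  memoryPercentile: number;
}

// Converts a percentile into a descriptive phrase (hypothetical cut-offs).
function rating(percentile: number): string {
  if (percentile < 9) return "impaired";
  if (percentile < 25) return "below average";
  if (percentile < 75) return "average";
  return "above average";
}

function renderTemplate(template: string, data: CaseData): string {
  return template
    .replace("{firstName}", data.firstName)
    .replace("{memoryRating}", rating(data.memoryPercentile))
    .replace("{memoryPercentile}", String(data.memoryPercentile));
}

const template =
  "{firstName}'s delayed recall fell in the {memoryRating} range ({memoryPercentile}th percentile).";
console.log(renderTemplate(template, { firstName: "Pat", memoryPercentile: 18 }));
// "Pat's delayed recall fell in the below average range (18th percentile)."
```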


Organizations / Associations


Data-related initiatives at the organizational or association-level often involve thousands of practitioners and hundreds of sites. Although multiple initiatives may be ongoing, the practical mechanics behind planning, funding, implementation, etc. are typically oriented around one initiative at a time. As a result, the scope of engineering is narrowed to a single solution, increasing the risk of redundant development and poor interoperability across initiatives.


A narrowed engineering scope is particularly detrimental to data standardization efforts, which aim to be used across clinical settings and research projects. Said differently, despite knowing that the strength of the SHiP-NP will be derived from its use in multiple clinical settings and research designs (e.g., clinical trials, surveys, patient registries, observational designs, etc.), the rollout is designed around a single registry with the hallmark characteristic of narrowed engineering: the database structure is directly tied to the data definition (i.e., the table field names come from the data definition). Thus, for example, form changes or additions, adding new research projects, etc. require considerable technical expertise and knowledge of the backend table structure.


Alternatively, the rollout could be designed around sending an armada of SHiP-NPs sailing, with some of the main features being:


Initiatives At Scale

Implementation of data standards is not a one-off effort. Instead, it is an iterative process of examination and adjustment involving multiple initiatives. Thus, system configuration should readily scale up and integrate the functionality required to support the management of thousands of clinical (e.g., registry) and research initiatives.


Single Point Patient Access

As operations are scaled up, it becomes increasingly likely that patients are involved in more than one initiative. Having a single account with access to multiple initiatives provides a familiar location for participation, minimizes the learning curve, and facilitates recruitment notification and screening for other initiatives.

Build-Use-Copy-Repeat

To minimize redundant effort and best leverage previous work, “re-use” is a key design consideration. Users should be able to create, use, copy, and share system components, including the entire setup of a research project or registry (e.g., forms, data collection schedule, reports, etc.).


Data Aggregation

Combining data is a powerful technique to reduce the time required to answer empirical questions, increase the power to find an effect, and maximize the external validity of findings. Thus, the system should support both semi-automated (e.g., spreadsheet import) and automated (e.g., application programming interface [API]) methods of data exchange.
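
For the automated route, the sketch below shows roughly what an API-based exchange could look like: harmonized records posted as JSON to an aggregation endpoint. The endpoint, payload shape, and field names are hypothetical; they are not the NIMH Data Archive or StudyTrax API.

```typescript
// Illustrative sketch of automated data exchange via a hypothetical API.
interface HarmonizedRecord {
  participantId: string;   // de-identified ID
  instrument: string;      // e.g., "SHiP-NP"
  variables: Record<string, string | number | null>;
}

async function submitRecords(records: HarmonizedRecord[]): Promise<void> {
  // Hypothetical aggregation endpoint; replace with the real service URL.
  const response = await fetch("https://example.org/api/aggregate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(records),
  });
  if (!response.ok) {
    throw new Error(`Aggregation upload failed: ${response.status}`);
  }
}

// Usage: push a batch of harmonized records from the local registry.
submitRecords([
  { participantId: "P-0001", instrument: "SHiP-NP", variables: { handedness: "Right" } },
]).catch((err) => console.error(err));
```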


Data Analysis

All data acquisition and/or aggregation initiatives involve data analysis and examination of results. At a minimum, data management systems should seamlessly export data into statistical packages. However, simple raw data export pushes analytical work down the line onto statisticians when it is better leveraged within the data management system (i.e., the functionality can be reused across initiatives). Thus, beyond raw data export, users should be able to create new recoded (e.g., dummy coding), transformed, and calculated (continuous or categorical) variables within datasets after the fact. In addition, data storage systems should incorporate common data cleaning and preparation techniques (e.g., descriptive statistics), as well as integrate functionality that anticipates analytical needs (e.g., combining patient medication usage into a dose equivalency scale).
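
As an illustration of in-system variable creation, the sketch below dummy-codes a categorical variable and adds a calculated total score to a small dataset. The variable names and recoding rules are invented for the example.

```typescript
// Sketch of creating recoded and calculated variables before export.
interface Row {
  [variable: string]: string | number | null;
}

// Dummy-code a categorical variable into 0/1 indicator columns.
function dummyCode(rows: Row[], variable: string): Row[] {
  const levels = Array.from(new Set(rows.map((r) => String(r[variable]))));
  return rows.map((r) => {
    const out: Row = { ...r };
    for (const level of levels) out[`${variable}_${level}`] = r[variable] === level ? 1 : 0;
    return out;
  });
}

// Add a calculated variable, e.g., a simple total-severity score.
function addCalculated(rows: Row[], name: string, fn: (r: Row) => number): Row[] {
  return rows.map((r) => ({ ...r, [name]: fn(r) }));
}

let dataset: Row[] = [
  { id: 1, diagnosis: "MCI", item1: 2, item2: 3 },
  { id: 2, diagnosis: "Control", item1: 0, item2: 1 },
];
dataset = dummyCode(dataset, "diagnosis");
dataset = addCalculated(dataset, "severityTotal", (r) => Number(r.item1) + Number(r.item2));
console.log(dataset);
```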

Facilitate The Publication Process

Timely dissemination of findings is the main goal of nearly all data capture initiatives; however, data management systems designed for a single initiative typically have two major limitations:

  1. Transforming data into information requires considerable technical expertise.

  2. Generating and organizing findings is outside the development scope and is thus spread across multiple applications and locations (e.g., word processor, spreadsheet, statistical package, etc.).

Taken together, there is a large gap, and often a painful process, in going from raw data to the end product (e.g., a manuscript), and there is no way to capture the work involved so that it can be re-used or leveraged by others.


Instead, users should be able to organize manuscripts and other targeted output into a biosketch-like layout and to copy all the related components as a starting point for subsequent output.


There should also be functionality to facilitate generating the end-result output such as:

  • Giving access to collaborating authors

  • Creating analytical and presentation components:

    • Datasets

      • Raw data and on-the-fly recoded, transformed, and calculated variables

      • Pull in data from one or more initiatives (e.g., registry and study data)

    • Tables and charts

  • Storing related files (e.g., images, statistical output, etc.)

  • Writing the manuscript / report

    • A word processor with dynamically linked tables and charts that update on demand


In summary, targeting the needs of all three main stakeholders and integrating exciting new data standards initiatives such as the SHiP-NP within a large-scale, web-based data management system has considerable potential for advancing the field of neuropsychology. Those interested in more information or in participating in the registry can send an email to info@studytrax.com.



