April 2018 - Issue 5
Welcome to Battelle’s Medical Devices newsletter. We offer this newsletter as a service to our clients to keep you informed of the latest news from our researchers and the industry.
Battelle’s Medical Devices team can help you accelerate your medical product development timeline – from ideation to evaluation to commercialization. Our newsletter will help keep you up-to-date on cutting-edge medical devices work, including device security, drug delivery, usability testing and neurotechnology.
Experts warn that it is embarrassingly easy to hack medical devices.
The reason is simple. Because connected devices are somewhat new to the industry, cybersecurity is a new concept as well – one that typically falls outside the wheelhouse of the device manufacturer. To be sure, much education is still required to help get device manufacturers up to speed. But what is really needed is a culture shift across the industry.
The rise in connected medical devices has substantial benefits for patients and end users. However, there are also substantial, and rising, risks from hackers intent on compromising these devices.
The vast majority of cybercrimes are crimes of opportunity. And, unfortunately, there is plenty of opportunity in the medical device community.
Device manufacturers want to create devices that are safe and easy to use.
These companies excel at understanding how their devices work and what potential risks and opportunities for harm exist in the use of that product. Their design teams work from these models to design a product that is safe for the end user.
What does that mean? It means they will design around intended-use cases and provide some basic protection against misuse. The problem with this approach, however, is that it assumes the user will operate the device as intended.
What happens if the device is used in unintended ways?
As technology advances, so must our definition of a “safe” device.
The challenge facing today’s medical device manufacturers is that the definition of patient harm is expanding rapidly. Hackers are introducing new ways to harm users of medical devices by stealing patient data, holding data for ransom, holding a device and its user hostage, or using a connected device as a pivot point into a larger network.
It’s no longer just about ensuring that our devices won’t harm someone — but also that our devices can’t be hacked and used to harm users or hospitals.
Many companies simply aren’t great about predicting or expecting these scenarios. The reality is that, to protect against these threats, we must design differently.
So, how does a company go about changing its culture to design devices that protect against all types of patient harm?
The answer is in something called defensive programming.
In traditional programming, you design a device for a specific use to work within a specific environment. Engineers assume that the environment in which their device will operate is safe—that nothing in the environment will attack or infiltrate the device. That type of programming opens the door to all sorts of vulnerabilities.
Engineers today must design defensively, under the assumption that their device will be placed in hostile environments.
This type of programming empowers designers to think critically throughout every stage of the product development process. Each decision about connectivity, integrations, data transfer, usability and software updates must be evaluated against a threat profile. Engineers should ask how each design decision opens their device to risk or defends against potential threats.
Additionally, engineers must assume that the network their device will connect to will be compromised, and that there is someone out there who is actively trying to compromise their device.
How are you building redundancies and protocols into your device to protect against these threats?
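As a concrete illustration, a defensively programmed device treats every incoming message as potentially hostile. The sketch below is a hypothetical example, not taken from any real device protocol; the command names, message format and size limit are invented. It validates size and structure, uses an allow-list of commands, and fails closed on anything unexpected:

```python
import json

# Hypothetical limits and commands for illustration only.
MAX_PAYLOAD_BYTES = 4096
ALLOWED_COMMANDS = {"read_status", "start_therapy", "stop_therapy"}

def handle_message(raw: bytes) -> str:
    """Defensively parse a network message: validate size, structure and
    content before acting, and fail closed on anything unexpected."""
    if len(raw) > MAX_PAYLOAD_BYTES:          # reject oversized payloads
        return "rejected: payload too large"
    try:
        msg = json.loads(raw.decode("utf-8"))
    except (UnicodeDecodeError, ValueError):  # malformed input is treated as hostile
        return "rejected: malformed message"
    command = msg.get("command")
    if command not in ALLOWED_COMMANDS:       # allow-list, never a block-list
        return "rejected: unknown command"
    return f"accepted: {command}"
```

The key design choice is that the default path is rejection: only input that passes every check is acted upon, so an attacker probing the device gains nothing from unexpected payloads.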
The challenge for many device manufacturers is that they are design experts, not cybersecurity experts.
Most companies don’t have the resources, bandwidth or time to add this talent to their design team. Complicating the issue is the reality that cyber threats to the medical community are growing in frequency and severity. This can feel overwhelming for manufacturers.
It makes sense to partner with cybersecurity experts — those who are actively involved in monitoring and identifying threats and understanding how those threats are exploited.
By Jeffrey Geppert and Stephanie Kute
The healthcare industry is facing pressure to deliver better health outcomes at lower prices. At the same time, the industry is shifting risk and accountability to providers through value-based payment models. To succeed in this market, drug and device manufacturers will need to be able to calculate and explain their value proposition to providers and payers in terms of the Clinical Quality Measures (CQMs) and value-based payment models. This is an emerging challenge for which solutions do not yet exist.
Two approaches are possible: applying an established quality measure, or developing a new value framework.
Using a recognized measure, such as those in the CMS Measure Inventory Tool (CMIT), has the advantage of already being vetted across the healthcare community. However, applying these measures to a specific pharmaceutical or device treatment, as opposed to measuring quality across a healthcare facility or provider, is somewhat of a new twist.
This is how it would work for a new vascular catheter technology. A hospital that currently tracks and reports a Hospital Inpatient Quality Reporting Program measure related to Vascular Catheter-Associated Infections could stratify its own risk-adjusted data based on whether patients included in that measure calculation received the new technology or not. Then, this could be repeated across multiple hospitals. If the measure improved for patients who received the new technology, the treatment’s value could now be communicated in terms of the established outcome measure. Capturing a new technology’s value in terms of an established outcome measure could be particularly powerful since it leverages real-world evidence.
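The stratification described above can be sketched in a few lines. This is an illustrative toy example, not a real CMS measure calculation; the field names and the observed-over-expected risk-adjustment form are assumptions made for the sketch:

```python
def stratified_rates(patients):
    """Compare a risk-adjusted infection measure for patients who did and
    did not receive the new catheter technology (hypothetical data model)."""
    groups = {"new_tech": [], "standard": []}
    for p in patients:
        key = "new_tech" if p["received_new_tech"] else "standard"
        # Observed-over-expected ratio is one common risk-adjustment form.
        groups[key].append(p["observed_infection"] / p["expected_infection_risk"])
    # Mean risk-adjusted ratio per stratum (lower is better).
    return {k: sum(v) / len(v) for k, v in groups.items() if v}

# Invented example cohort from a single hospital.
cohort = [
    {"received_new_tech": True,  "observed_infection": 0, "expected_infection_risk": 0.05},
    {"received_new_tech": True,  "observed_infection": 1, "expected_infection_risk": 0.05},
    {"received_new_tech": False, "observed_infection": 1, "expected_infection_risk": 0.05},
    {"received_new_tech": False, "observed_infection": 1, "expected_infection_risk": 0.04},
]
rates = stratified_rates(cohort)
```

Repeating this comparison across many hospitals, as the article suggests, would turn the single-site ratios into real-world evidence for the established measure.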
However, in cases where there is not a relevant measure already established or a single measure doesn’t capture the full value, what’s needed is a framework to assist device and pharmaceutical companies in determining a treatment’s value to payers, providers and consumers in a pay-for-value world. An inclusive framework would consist of multiple, diverse elements that support differentiation between treatments such as:
Cost may be multi-dimensional, including out-of-pocket costs (cost per device/dose) as well as deferred and even avoided healthcare costs (e.g., OR time, hospital stay length or readmission frequency) and non-healthcare costs (e.g., disability and time away from work).
Outcomes are, in many cases, already being quantified via CQMs. CMS and other payers are interested in improving the connection between drugs and devices and quality measures. There is opportunity for national consensus entities to designate measures specifically related to device and drug treatment quality.
However, it may be the case that interoperability and usability of the device, the security of the device and its data, and the experience that the user has with the device (or the company) can add as much value as the traditional domains of cost and outcomes. Therefore, we must find ways to quantify these elements and weight them appropriately, relying on real world evidence.
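One simple way to quantify and weight such elements is a weighted score across value dimensions. The weights, dimensions and scores below are purely hypothetical placeholders; in practice they would be derived from stakeholder input and real-world evidence:

```python
# Hypothetical weights across value dimensions; must sum to 1.
WEIGHTS = {"cost": 0.30, "outcomes": 0.40, "usability": 0.15, "security": 0.15}

def value_score(scores, weights=WEIGHTS):
    """Combine normalized element scores (each 0-1) into one weighted value."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(weights[k] * scores[k] for k in weights)

# Invented scores for an example treatment.
treatment_a = {"cost": 0.6, "outcomes": 0.8, "usability": 0.7, "security": 0.9}
score = value_score(treatment_a)
```

A framework like this makes the trade-offs explicit: raising the weight on security or usability directly changes which treatment scores best, which is exactly the stakeholder negotiation the article describes.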
That’s where human-centered design and cybersecurity come into play. What is critical is that these elements are considered at the very beginning of the investment and design cycle.
Optimizing a framework for multiple stakeholders can quickly become complex, but it is doable. And it’s critically important. To deliver better outcomes at lower prices, we must work together. Drug and device manufacturers can shift the way they calculate and communicate their value proposition to providers and payers by considering established CQMs and developing new value-based payment models.
By Scott Danhof
When designing a medical device, nothing beats direct observation of and feedback from the people who will be using it. Ideally, this research is performed as a close partnership between the human factors researchers and the engineers who will be working on the device.
Directly participating in the user research sessions and observing users first hand provides a deeper understanding of user needs and user interface requirements than simply reading the final user research report. Building effective partnerships during the user research process will also ensure that engineers will get answers to the questions that are most critical to them for making design decisions.
To get the most out of user research, engineers need to have an understanding of how the process works and how they can participate productively. Here are five things engineers should know about user research before starting on their next medical device development project.
The earliest user research is often conducted in the homes of patients in the target population. Gathering information in a home visit allows researchers to make observations about the home environment and the ways in which patients behave in their environment that go above and beyond what users actually say.
A home interview may look like an informal conversation. Human factors researchers work hard to put subjects at ease and establish rapport. But, no matter how unstructured the discussion may look to an untrained eye, there is actually a great deal of structure and skill involved in conducting a successful home interview.
All user research interviews are conducted according to a strict protocol with specific research questions established well in advance. This protocol is designed to ensure that the research questions that the study seeks to answer are addressed during the interview and that participants are treated according to guidelines established by the Institutional Review Board (IRB) overseeing the study.
During the interview, the human factors researcher will have a set of predetermined questions to ask. However, he or she must also be able to follow the threads of conversation as they evolve and know how to elicit additional information when clarification is needed and how to gently steer the conversation back to the subject of interest if it should start to veer off track.
When human factors researchers and engineers partner during user research sessions, the engineer must understand the protocol that is being followed and the research questions that will be explored during the interview. Engineers should also prepare their own research questions before the interviews begin. During the interview itself, they should follow the researcher’s lead and allow the researcher to guide the conversation with participants.
There will be multiple opportunities over the course of the interview for engineers to ask additional questions. It is best to limit these to one or two good questions at each opportunity to ensure that all topics of interest are covered during the interview.
When asking these questions, it is important to avoid the introduction of bias in the wording of the question. Don’t ask leading questions such as, “Did you notice the flashing green light that told you that the therapy was completed?” Instead, ask open-ended questions such as, “How did you know when therapy was completed?”
Some of the most valuable and reliable information comes not from listening to what participants say, but from observing what they actually do. The rule of thumb here is to “watch first, ask next.” This is true at all stages of research, from early user studies using illustrations or 3D models to summative human factors validation studies.
Participants may tell researchers one thing but do something quite different when they actually complete the task. This is not because they are intentionally misrepresenting facts or omitting information from researchers, but simply because humans are unreliable narrators—to themselves as well as to others. They may forget to describe a step that comes automatically to them when they conduct the task or omit details about their environment or how they interact with a device because they think they are not important.
For example, in one study, researchers were asking participants to describe when and how they took their medication in a typical day. It was observed that most participants took their medication in the bathroom, even when the participants named a different location. This was an important distinction because the device had to be charged, creating a risk for patients who might accidentally drop the device into a sink or toilet while it was plugged in. This observation led to design revisions to better protect patients from this risk.
In another study, participants were asked to try several prototypes of a drug delivery device during a formative research study. After demonstrating how they would take a dose using each prototype, they were asked which model they preferred. Researchers observed that the model they selected did not always match the model that they appeared to use most easily and accurately. Reconciling these differences through further questions provided valuable insights that were used to guide the next stage of development.
A single observation may offer a clue into a potential usability issue or user need. But to have confidence in the results of a user study, researchers must look for trends across multiple participant sessions. While one observation may be an anomaly, seeing the same pattern emerge across four or five sessions strongly suggests that the observation has validity that device designers should take into account.
The number of participant sessions needed to establish a trend with confidence will depend on how strong the pattern is. Eight sessions are usually sufficient to establish a pattern. Sometimes, a strong pattern can be determined with fewer sessions. When the data is inconsistent, you may need 10 or 12 sessions to determine a pattern.
In one formative research study, the first participant was observed holding a prototype drug delivery device the wrong way. When asked how she knew how to hold it, she said it was similar in appearance to another device she used. After seeing five participants in a row make the same use error, it was clear that changes would need to be made to the form factor of the device in order to break this established mental model and ensure safe and effective use.
Not every pattern is equally important. Sometimes, patterns emerge that have no impact on safe and successful use of the device; for example, patterns that emerge in how patients plan to store a device may not be meaningful if storage methods will not affect drug efficacy or device safety. But when trends are observed that have a potential impact on patient safety or the efficacy of the device, designers should examine the trends carefully and make sure they have enough data collected to have confidence in the results.
User research is not a “one and done” proposition. Ideally, medical device manufacturers will have several chances to collect user feedback and observations over the course of device development.
The number of rounds of user research can vary widely from project to project. In general, the more complex and novel a device is, the more rounds of user research manufacturers should plan to incorporate into their timelines and budgets.
In an Agile development model, the design team may go back to users at frequent intervals as they refine the device design. More commonly, the team plans user research at a few key points in the development cycle. These may include early formative research and evaluation of design concepts, early prototypes and final prototypes. A new set of participants should be recruited for each stage of user research to avoid the emergence of biases that can result when the same participants give input during multiple user research studies.
If time and research budgets are tight, manufacturers should put more emphasis on early research. Identifying potential problems and validating the fundamentals early in the design process can help the team reduce the likelihood that they will need to make extensive, and expensive, changes later in the design process.
However, whenever significant changes are made in the design, additional research should be completed to gain confidence in the new design. If too many design iterations are allowed to go untested, there is a greater likelihood of unwelcome surprises when it comes time to conduct final human factors validation testing prior to the regulatory body submission. An iterative approach to user research will help ensure that the final design fully meets the needs of users.
Following these guidelines will help engineers establish effective partnerships with human factors researchers. When engineers and human factors researchers work together, the result is better user research—and, ultimately, better medical devices.
About the Author
Scott Danhof is a Mechanical Engineering Research Leader in Battelle’s Consumer, Industrial and Medical Technologies group. He provides leadership for project teams for a wide range of products for government and commercial clients. For the last 20 years, much of this work has focused on medical product development, including participant/user safety, regulatory compliance, design controls and usability.
Patrick Ganzer is advancing medicine at the interface between the nervous system and technology. His work is helping patients recovering from spinal cord injuries, stroke and other nervous system disorders regain lost function and independence. Read More
Annie Diorio-Blum spends a good part of her time trying to get inside people’s heads. From surgical teams in the operating room to patients managing chronic conditions at home, she brings the voice of the end user to the medical device design process. Read More