When your Biotech or Life Sciences company signs up for a cloud application, you are relieved of the burden and expense of hosting the application yourself. While that is an obvious benefit, the very fact that the vendor hosts the application and your data presents certain risks.
Managing Cloud Risk
One aspect of managing that risk is ensuring certain terms are in the contract; please see my blog post on that topic: https://www.gizmofish.com/2019/08/look-before-you-sign-eight-contract-terms-biotech-and-life-science-companies-should-look-for-when-signing-up-for-a-cloud-application/. Another important aspect of managing that risk is to have your IT department or a biotech and life science industry-experienced managed service company evaluate the vendor before you sign the contract.
Here are some questions that will help you evaluate the vendor.
- How robust is their infrastructure?
- Infrastructure should have sufficient redundancy built into the base hardware (ex. RAID, SANs, high-availability firewalls, redundant internet connections) and software (ex. Database clustering, Virtual Machine clustering, Virtual Machine live migration) to allow for rapid recovery from a failed component or machine.
- The data center should be Tier 2 or higher and SSAE 16 certified.
- If the vendor is using a public cloud such as AWS or Microsoft Azure, the vendor should make use of Availability Zones and/or Regions. (I will soon publish a blog post explaining Availability Zones and Regions and how they relate to cloud applications.)
- The vendor should have separate development, test, and production environments.
- How solid are their backup and disaster recovery systems and processes?
- Adequate backups of systems, files, and data must be performed so that any restoration of the system will not result in an unacceptable amount of data loss. What amount of data loss is “acceptable” depends on how critical the data is to you and how quickly it changes. If the system is backed up once per day, for example, they could lose up to one day’s worth of data. RPO stands for Recovery Point Objective: the maximum amount of data, measured in time, that you could afford to lose. In this example the RPO would be 24 hours.
- Vendor should have a defined RTO (Recovery Time Objective). RTO means the time it takes for them to get the solution back online in the event of a failure. Ask them how long it takes to recover from a failure of a system component (which would be more likely) and of the entire data center (worst-case scenario). How much recovery time you are willing to accept depends on the time sensitivity of the solution to you. How much time can you afford to be without access to the solution and your data?
- Vendor should have a written business continuity plan and the vendor should test the full backup and disaster recovery plan at least once per year. This means testing a complete failure of the entire solution and confirming that they can restore the solution within their target RTO.
- Vendor should have local and offsite backup to a secure facility geographically separate from the primary facility. AWS and Azure offer storage services (Ex. AWS S3) that are spread across several data centers and thus provide this offsite protection.
- How often does the vendor back up the data offsite? This will determine how much data you could potentially lose if a data center goes down.
- The vendor should have a defined backup verification process. The validity of backed-up data should be checked regularly.
- The vendor should have the capability to restore to a different data center if the primary data center is offline. If the vendor uses AWS or Azure, ask about their ability to restore to a different Availability Zone or Region.
- You should know how long they keep backup data. How far back in time could you go to retrieve data?
- Lastly, have they ever had to do a complete restore of their production system? If so, did they meet their RPO and RTO objectives? If they did not, why not, and what did they do to ensure they will meet those objectives in the future?
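To make the RPO and RTO arithmetic above concrete, here is a minimal sketch. The numbers are purely illustrative, not from any real vendor:

```python
from datetime import timedelta

def rpo_from_backup_interval(backup_interval_hours: float) -> timedelta:
    """Worst-case data loss equals the time since the last backup,
    so the RPO can be no better than the backup interval."""
    return timedelta(hours=backup_interval_hours)

def meets_objectives(actual_recovery_hours: float, actual_data_loss_hours: float,
                     rto_hours: float, rpo_hours: float) -> bool:
    """Check a test restore against the vendor's stated objectives:
    recovery time within RTO and data loss within RPO."""
    return (actual_recovery_hours <= rto_hours
            and actual_data_loss_hours <= rpo_hours)

# Daily backups -> worst case is 24 hours of lost data.
print(rpo_from_backup_interval(24))  # 1 day, 0:00:00

# A test restore that took 6 hours and lost 20 hours of data,
# against a stated RTO of 8 hours and RPO of 24 hours:
print(meets_objectives(6, 20, rto_hours=8, rpo_hours=24))  # True
```

If the vendor's annual disaster recovery test produces numbers that fail this check, that is exactly the "why not, and what did they change" conversation you want to have before signing.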
- Do they have adequate security?
- End user access should be done via a secure method such as VPN, RDP, SSH, or HTTPS.
- The vendor should destroy data (ex. when replacing old hard drives) and sanitize media in a secure way, for example by following the Department of Defense 5220.22-M or NIST 800-88 specifications.
- Ask how your data is safeguarded from other clients’ data in a multi-tenant architecture.
- Vendor should screen their employees and contractors preferably with criminal background checks.
- Ask the vendor if they have had any security breaches in the last 3 years. If so ask them to explain what happened and what steps were taken to prevent this type of breach from happening again.
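One piece of the secure-access question above is easy to sanity-check yourself: the vendor's web endpoint should serve a valid, unexpired TLS certificate over HTTPS. Here is a minimal sketch using only Python's standard library; the hostname shown is a placeholder, not a real vendor:

```python
import socket
import ssl

def tls_certificate_expiry(hostname: str, port: int = 443) -> str:
    """Connect to a host over TLS, verify its certificate against the
    system trust store (the default context checks both the certificate
    chain and the hostname), and return the certificate's expiry date."""
    context = ssl.create_default_context()  # CERT_REQUIRED + hostname check
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()["notAfter"]

# Usage (replace with the vendor's actual login hostname):
# print(tls_certificate_expiry("portal.example-vendor.com"))
```

If this call raises an `ssl.SSLError`, the endpoint failed certificate or hostname verification, which is worth raising with the vendor before go-live.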
- Do they have the right processes in place?
- Vendor should have a defined upgrade process that provides for pre-production testing and the ability to roll back changes if necessary.
- Vendor should have documented change management procedures for applications and infrastructure. The change management process must have clear separation of duties: the person executing the change cannot also be the person approving it.
- Vendor should have processes for securely granting user access including administrator rights.
- Vendor should have defined access control processes in place to limit access to the physical infrastructure and to your data. In other words, not just anybody at the company can get access to your data or the underlying infrastructure.
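The separation-of-duties rule in the change management point above boils down to one simple check, sketched here with hypothetical names:

```python
from typing import Optional

def change_is_compliant(executor: str, approver: Optional[str]) -> bool:
    """Separation of duties: a change is compliant only when it has an
    approver, and the approver is not the person who executed it."""
    return approver is not None and approver != executor

print(change_is_compliant("alice", "bob"))    # True
print(change_is_compliant("alice", "alice"))  # False (self-approval)
print(change_is_compliant("alice", None))     # False (no approval)
```

Any change record in the vendor's ticketing system that fails this check is an audit finding waiting to happen, which is why you want to see the procedure in writing.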
This is by no means a comprehensive list of questions you could ask, and the extent to which you investigate the vendor of course depends greatly on how critical the system is to your operations. If the system is a GxP system, then a full validation may be required. A Biotech and Life Sciences industry-experienced managed IT services firm like GizmoFish can help you minimize your risk by properly evaluating cloud vendors.