I recently volunteered as an AV tech at a science communication conference in Portland, OR. There, I handled the computers of a large number of presenters, all scientists and communicators who were passionate about their topic and occasionally laissez-faire about their system security.
As exacting as they were with the science, many didn’t see the point of their institutions’ security policies – or had actively circumvented them.
An informal survey of their reasoning showed that a sizable portion of this group didn’t think security mattered and found it too annoying to put up with. That security is the enemy of convenience is well known, but because it is important (and it is – their universities do care), there are ways to make it as habitual as brushing your teeth – unless your research is on gum decay.
In general, a security program is concerned with controlling access: only the people who are supposed to reach the computer, the email, the network, the data store, and so on can actually read or edit what is there.
What could happen if someone who is not supposed to have write access gets it anyway? The possibilities run from simple mistakes to malicious sabotage, any of which could affect important research or an individual’s scientific reputation.
Let’s take a Google Drive store of data that a group of labmates has collected over several years. They may be using Python or R to clean their records, append new data, and run analyses – great for handling and updating large quantities of information. Then a student finds the data and tries to “help” with an update that buckets the values and deletes the original field … and never mentions it. Now the source is wrong, and the researchers may be producing bad science without knowing it.
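A minimal sketch of the safer habit, in plain Python (the field names `mass_g` and `mass_bucket` are illustrative, not from any real dataset): derive a new column instead of overwriting the original, so the raw measurement survives for anyone who analyzes the data later.

```python
# Hypothetical cleaning step: bucket a measurement WITHOUT deleting the
# raw field. All names here are made up for illustration.
def bucket_mass(mass_g):
    """Assign a coarse size bucket without touching the raw value."""
    if mass_g < 10:
        return "small"
    if mass_g < 100:
        return "medium"
    return "large"

records = [
    {"sample_id": "A1", "mass_g": 4.2},
    {"sample_id": "A2", "mass_g": 57.0},
]

# Add a derived column rather than replacing mass_g in place -- the
# original field stays intact, so later analyses can still reach the
# source data even if the bucketing turns out to be a mistake.
for row in records:
    row["mass_bucket"] = bucket_mass(row["mass_g"])
```

The same pattern applies in R or in a pandas pipeline: transformations should add columns or write to a new file, never destroy the only copy of the source field.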
What if it was not a student but a company that fears this group’s research will harm its profits? What if there is a rival who wants to know what avenues others are researching, with a hacker friend who can read unencrypted email? Might a research team get scooped, or have their findings made moot by a shady advertising campaign?
Watson and Crick went to some lengths to acquire the images that Rosalind Franklin developed. Alexander Graham Bell happened to be working at the Western Union office from which Antonio Meucci’s telephone patent paperwork seemingly disappeared. And there are entire movies about Thomas Edison. Sometimes there’s a known rivalry, sometimes a trusted mentor or colleague, but keeping records of the data – where it came from and who came up with it – can prevent these misunderstandings and misattributions.
Preventing breaches of your processes is what an institution’s security policy is all about – a job made harder because the institution *knows* its researchers want to collaborate. There’s a policy written up and best practices outlined, but it all depends on researchers being willing to follow a few inconvenient steps to keep their systems up to date and their access secure.
Habits and tools a scientist can put in place to protect the provenance of their work range from keeping systems patched and credentials secure to recording where each dataset came from and who produced it.
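One lightweight tool worth singling out is a file checksum: record a fingerprint of the shared data file, and any silent edit becomes detectable before it contaminates an analysis. A minimal sketch using Python’s standard library (the idea of checksumming is mine as an illustration; the article doesn’t prescribe a specific tool):

```python
# Hypothetical habit: fingerprint a shared data file so silent edits
# can be detected. Store the digest somewhere the file's editors can't
# quietly change, and re-check it before each analysis run.
import hashlib

def sha256_of(path):
    """Return the SHA-256 hex digest of a file, read in chunks so
    large data files don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

If the recorded digest and the freshly computed one differ, someone changed the file since it was fingerprinted – time to ask who, and why, before trusting the numbers.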
Security may be considered an inconvenience, but it’s also a solution to the risks discussed above. Scientists need to be very exacting about their research – and protecting the data that come from it.