When taking on a new role, you’ll inevitably have new tasks and responsibilities. Some of those tasks will be repeated, and at some point you might wonder whether automating a task would be more efficient than continuing to do it manually.
For me, the threshold for automation is typically reached after I’ve done a task just once. And knowing I may have to repeat a task influences how I undertake it the first time. That is, while doing the task, I’m asking myself how I might change what I’m doing right now if I knew I would have to do it 10 more times, or 1,000 more times.
I’ve worked in biotechnology companies where we would typically run an experiment and, after analyzing the results, decide to re-run it, perhaps with a minor tweak to the protocol, or with a different set of subjects or experimental conditions. In some cases, the insights derived from the experiments pushed us to make the experiment operational, that is, a regular and repeated part of our workflow. This meant running the experiment on a schedule, say once a week or once a month, to generate new data and glean new insights.
Whether the actual experiment could or should be automated would depend on the nature of the experiment.
However, analyzing data from the experiment was almost always a candidate for automation, and if I knew I had to refresh the data and do the analysis again, I would always write R scripts to automate the data analysis and report generation. I wanted to automate the parts of the workflow that took time, required attention to detail and precision, were well-defined, and were often mundane, boring, and error prone.
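To make this concrete, here is a minimal sketch of the kind of script I mean, shown in Python for illustration. The function names, report format, and example measurements are all invented for this sketch; a real pipeline would read from your instrument output or database rather than an in-memory list.

```python
# Minimal sketch of an automated analysis-and-report step, assuming the
# experiment produces tabular measurements of the form (group, value).
# All names and data here are illustrative, not from any specific pipeline.
from statistics import mean, stdev

def summarize(rows):
    """Group measurements by experimental condition; compute n, mean, and sd."""
    groups = {}
    for row in rows:
        groups.setdefault(row["group"], []).append(row["value"])
    return {
        g: {"n": len(v), "mean": mean(v), "sd": stdev(v) if len(v) > 1 else 0.0}
        for g, v in sorted(groups.items())
    }

def render_report(summary, title="Weekly assay summary"):
    """Render the summary as a plain-text report, ready to save or email."""
    lines = [title, "=" * len(title)]
    for group, s in summary.items():
        lines.append(f"{group}: n={s['n']}, mean={s['mean']:.2f}, sd={s['sd']:.2f}")
    return "\n".join(lines)

# Example run with made-up measurements:
data = [
    {"group": "control", "value": 1.0},
    {"group": "control", "value": 1.2},
    {"group": "treated", "value": 2.1},
    {"group": "treated", "value": 2.3},
]
report = render_report(summarize(data))
print(report)
```

The point of even a toy script like this is that the next time the data are refreshed, regenerating the report is a single command rather than an afternoon of copy-and-paste.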
Once in place, the automation gives you the time to think about the message in the data, and the time to translate the insights from the data into value for the organization.
Once you’ve decided that a task — like data analysis — should be automated, you face the question of how to go about automating it. The answer may depend on whether you and your colleagues have the skills to do the automation yourselves, or the capital to hire others to do it, or to purchase a drop-in solution (e.g., off-the-shelf software) that enables automation.
As a grad student, the balance of time and money favored learning the tools (in this case statistics, R, Python, and SQL) and doing the automation myself.
However, in a business environment, learning these tools may not be the best option. Learning them takes a lot of time and effort, and that equates to an opportunity cost; your time and energy may be better spent applying your expertise and thinking about your problem domain. Remember that these tools ultimately result in a system that automates the task in question. This system can involve software (with source code), databases (with supporting architecture), web services (with access and security), and the time to address new issues (changes in requirements) as they arise in your process. With these considerations in mind, it may make sense to hire dedicated staff (if the process and system are large enough to warrant this choice), or to purchase or subscribe to a software system that meets your automation needs.
Having said that, here’s a shameless plug for a new service we’re offering:
Yukon Data Solutions is our data analytics and report generation service. We work with biotechnology and life science teams to automate their data analytics pipelines. We focus on the rapid development and deployment of lightweight, customized automation solutions. Our goal is to take care of the painful parts of data analysis and report generation, so you can focus on the more important task of thinking about what it all means. Drop us an email if you want to find out whether Yukon Data Solutions might be a good fit for your needs. One last thing: our service comes with a 100% satisfaction guarantee.