Today I installed Zotero on my newly refreshed system. It had a bit of work to do to download and sync the references in my library, but it didn’t seem to sweat too badly while doing it. I’m still sticking with the semi-automated way of adding references by DOI: copying the DOI (as text) from the article page and using the ‘Add Item by Identifier’ button on the toolbar. This works great, even better in some ways than using a bookmarklet or extension, which are still not well supported in the new version of Safari.

A while back I read about how to use a CO2 sensor to estimate the air exchange rate in a room. So I bought the sensor recommended there, the Aranet4, and set it up in one of the teaching labs for last Thursday’s and today’s afternoon sessions. I am really surprised and impressed by how little the CO2 level changed in the lab during a session. At no point did it go above 600 ppm, and most of the time it stayed in the 400s, indicating a very high rate of air exchange in the room.

Graph of CO2 (ppm) as a function of time. Yellow highlights were periods in which the room was occupied by 8 students.
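For anyone curious about the math behind that kind of estimate: the usual approach is to watch how quickly CO2 decays back toward the outdoor baseline after the room empties, which gives you air changes per hour. Here is a minimal sketch of that calculation; the numbers are invented for illustration, not readings from my sensor.

```javascript
// Estimate air changes per hour (ACH) from the CO2 decay after a room empties.
// Standard tracer-gas decay formula:
//   ACH = ln((C_start - C_outdoor) / (C_end - C_outdoor)) / elapsedHours
function airChangesPerHour(cStart, cEnd, cOutdoor, elapsedHours) {
  return Math.log((cStart - cOutdoor) / (cEnd - cOutdoor)) / elapsedHours;
}

// Invented example: 600 ppm falling to 450 ppm over half an hour,
// with an outdoor baseline of 420 ppm.
console.log(airChangesPerHour(600, 450, 420, 0.5)); // ≈ 3.6 air changes per hour
```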

I need to reassess my file storage setup. I’ve accumulated several different storage systems, cloud file providers, etc., over the past several years, and I keep feeling the urge to streamline or rethink my approach. One of the key things I need in order to feel comfortable is full local backups of critical data, which means I shouldn’t be storing my research files in iCloud Drive, as I currently am. At the same time, I can probably trim down the number of working directories and archive some of the older stuff in a cloud service, on a local hard disk archive, or both, which would save some space on my working laptop’s drive and still give me control of the data. I need to rationalize this before I lose something! For example, I pay for a Backblaze account, but I’m pretty sure things that live only in iCloud Drive are not being backed up to it.

ARM CPUs and Research Software on Macs

There’s some great discussion in the comments section of the article The Case for ARM-Based Macs – TidBITS. Several of the commenters put their finger on an issue I haven’t seen much, if any, discussion of:

In the past fifteen years, a lot of developers have moved to Mac because it provides an X86+unix environment, which is a huge boon when developing software which will eventually deploy to a cloud environment, where Linux on X86 is king. The differences between BSD and Linux notwithstanding, this has made the Mac the machine of choice for a huge community of web and open source developers. We can even use tools like VMware, Virtualbox, Docker, and Kubernetes to mimic our target deployment environments.

This is definitely the case across the sciences, including several areas that overlap with my own work. Being able to install and run various bioinformatics tools and image analysis packages locally has allowed me to get a better handle on how these tools work. I presume that an architecture change to ARM-based CPUs will still permit most of these tools to work, but there will almost certainly be a transition cost to recompile and optimize for the new platform. The article Re-engine, Not Re-imagine by Brendan Shanks puts an optimistic spin on the move, essentially arguing that it can and will be invisible to users. Maybe that’s when I should look more closely at moving some of these tasks off my local machines and into something like CyVerse or some other cluster-for-hire. As a learner, though, I’m hesitant to do this because it introduces another layer of abstraction that I’ve found to have problems of its own.

Update: This week’s ATP touches on this concern about whether a processor change would mean a major disruption to Unix-based programs and tools for science, although they focused on other (non-science) Unix command-line tools. Their discussion reminded me that the PowerPC-to-Intel transition also happened in the Mac OS X era, meaning that many such programs had to be recompiled for the new Intel CPUs, and eventually they were. They also mentioned that many Unix command-line programs already run on ARM chips, such as on the Raspberry Pi. So open-source tools in active development will likely make the move fairly quickly.

Faculty governance and technology

I’ve been trying for many (like, 7?) years to help our faculty governance system recognize the importance of the Campus Technology Council, to little avail. In the last couple of years I’d pretty much quit asking for it to be considered an ‘official’ committee, even though it’s just as active and engaged in new projects as ever. This piece by Jonathan Rees and Jonathan Poritz in Academe spells it out clearly:

Faculty must educate themselves about the possibilities and dangers of IT in order to maintain their prerogatives. Information technology might seem like merely an instrumental aspect of institutional operations that might be left entirely in the hands of administrations, like landscaping or decisions about which model of copiers to put in department offices. But when IT is a fundamental part of the creation and dissemination of new truths, and when it can be used to monitor and to control all aspects of research and teaching, it necessarily becomes one of those areas where the faculty should exercise its primary responsibility.

Since I know for a fact that faculty at my university have actively “engaged” (thrown down) with the administration over decisions about both copiers AND landscaping, it seems like I ought to be able to rouse some interest in academic technology. Anyway, I’ve ordered a copy of Poritz and Rees’s book; maybe I’ll pass it along to the appropriate committee chair when I’ve finished it.

Virtual progress

Although I’ve been on sabbatical this semester, it appears that our experiment using Chromebooks in our introductory biology courses has been going well. From what I’ve heard, only a few students have been burned by the extra layer of abstraction of running Windows in a web browser, occasionally closing the Chrome tab instead of just the program running in the virtual Windows environment. All told, I’d say that’s pretty impressive for an idea I dreamed up last winter, made possible by the excellent support from our IT department.

I can only imagine these are the early days of a growing trend, both within and beyond academic settings. I noticed a few days ago that Adobe has been working with Google to make their flagship application, Photoshop, available in a “virtual” environment. It sounds like an unholy combination of virtualization, VNC, and JavaScript, but it might work well enough to be worth it. Interesting, too, that Google is investing engineering resources to make this happen, as this clearly increases the value of Chromebooks if it can provide an adequate user experience.

While this is an example of making a particular program run virtually, Amazon continues to push forward with their more general solution, called AppStream. They’ve just announced the ability to run almost any Windows application on their virtualization platform, removing the need to manage a server on site. It costs $0.85 per hour, billing only for the time used. I’m not sure it would make sense for every app or student or teacher, but for certain programs that need to be run only occasionally, it seems like a great idea.

Google Scholar’s creator

From a nice article by Steven Levy on Anurag Acharya, the man behind Google Scholar:

I can do problems that seem very interesting to me — but the biggest impact I can possibly make is helping people who are solving the world’s problems to be more efficient. If I can make the world’s researchers ten percent more efficient, consider the cumulative impact of that.

What a great motive to guide your work.

Scripting Google Spreadsheet to do email merge

I recently posted about using TextExpander to semi-automate the process of sending grade updates to students. That post got me poking around for other ways to do a more thorough mail merge, and I found a tutorial for scripting Google Spreadsheet to send emails. With some minor modifications, I now have a spreadsheet set up as a grade book that can email each student with their current point total and class average at the push of a button. Below is a description of how I adapted the original spreadsheet to make it do what I wanted.

The original file is designed to collect user information with a form, save it to a spreadsheet, and email the user. Working from a copy of the tutorial spreadsheet, the first thing I did was delete the form, since I don’t need it in my application. Then I rearranged the columns and added some for my assignments and for totals. I left the original columns for first name, last name, and email address intact to minimize the need to edit the script. The script uses the label in the first row of each column to identify which variable that column holds, so it’s important to keep those labels intact.
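For anyone who wants to try the same thing, here is a rough sketch of what my adapted script boils down to. It is not the tutorial’s code verbatim, and the sheet and column names (‘Grades’, ‘First Name’, ‘Email Address’, ‘Total Points’, ‘Class Avg’) are just illustrative stand-ins for whatever labels you put in that first row; it only uses standard Apps Script calls.

```javascript
// Minimal sketch: email each student their current totals from a gradebook sheet.
// Assumes row 1 holds the column labels and each subsequent row is one student.
function sendGradeUpdates() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Grades'); // illustrative name
  var rows = sheet.getDataRange().getValues();
  var headers = rows[0]; // first-row labels identify which variable each column holds

  for (var i = 1; i < rows.length; i++) {
    // Build an object keyed by the header labels, e.g. student['Total Points'].
    var student = {};
    for (var j = 0; j < headers.length; j++) {
      student[headers[j]] = rows[i][j];
    }

    var body = 'Hi ' + student['First Name'] + ',\n\n' +
        'You currently have ' + student['Total Points'] + ' points in lab; ' +
        'the class average is ' + student['Class Avg'] + '.';

    MailApp.sendEmail(student['Email Address'], 'Lab grade update', body);
  }
}
```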

Once I had the spreadsheet arranged the way I wanted it, I customized the text of the email template to suit my purposes, adding two new variables based on two new columns, Total Points and Class Avg:

Screenshot: the template text used to send the email.

Then I ran the script with myself as the test recipient, and I was disappointed to find that the value for Class Avg did not get filled in. I returned to the script and looked for the place where the data range is set, finding it on line 4. The original tutorial spreadsheet has 4 columns, so the range is set to read 4 columns; I have 5 columns I want the script to read from, so on that line (the one that calls dataSheet.getMaxRows) I changed the column count from 4 to 5, ran the script again, and it worked as expected.
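If I’m remembering the tutorial’s code correctly, the line in question looks roughly like the following, and the only edit is the final argument, which is the number of columns to read:

```javascript
// Before: read 4 columns of data, starting at row 2
var dataRange = dataSheet.getRange(2, 1, dataSheet.getMaxRows() - 1, 4);
// After: read 5 columns so the new column gets picked up
var dataRange = dataSheet.getRange(2, 1, dataSheet.getMaxRows() - 1, 5);
```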

The last step I took was to customize the subject line of the automated email. In the tutorial spreadsheet, this subject line is hard-coded in the script, which seemed a little too permanent or hidden or something. I changed the script to read the subject line from a cell in the ‘Email Template’ sheet instead.
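The change was small. Here is a sketch of the idea, where the ‘Email Template’ sheet and cell B1 are just placeholders for wherever you decide to keep the subject text:

```javascript
// Read the subject line from a cell instead of hard-coding it in the script.
var templateSheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Email Template');
var emailSubject = templateSheet.getRange('B1').getValue(); // hypothetical cell location
// ...then pass emailSubject to MailApp.sendEmail() in place of the hard-coded string.
```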

Any time I want to update my students on their grades, I just run the script by clicking on the Tools menu, selecting Script Manager, and clicking ‘Run’. This solves one more of the problems I’ve had weaning myself from the tyranny of the LMS.
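If digging through Tools > Script Manager ever gets old, newer versions of Apps Script also let you add a custom menu to the spreadsheet itself, so the script really is one click away. A sketch, with the menu label and function name as illustrative placeholders:

```javascript
// Add a custom 'Grades' menu when the spreadsheet opens.
function onOpen() {
  SpreadsheetApp.getUi()
      .createMenu('Grades')
      .addItem('Send grade updates', 'sendGradeUpdates') // runs the merge function above
      .addToUi();
}
```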

Using TextExpander for email merge

It’s the end of another semester, and a lot of my lab students have been asking me what their lab grade is going to be. I don’t keep an online gradebook for my labs, so I needed a quick way to send them an update with their current grade. I keep their grades in a spreadsheet, and in the past I went so far as to create a mail merge report and send each student a PDF of their results, which was a fairly time-consuming process. Instead I turned to a Swiss Army knife called TextExpander.

TextExpander is a program that runs in the background and waits for you to type a specific sequence of keystrokes. When it detects that sequence, it fires and inserts the text you have associated with that shortcut at the point of your cursor. Not only can it insert the prescribed text, it can also do some thinking and use variables. For example, here is my snippet for the text of an email to a lab student:

Lab Grade Update

You have a %clipboard in lab, which includes your lab exam grade and all assignments handed in to date.

The real magic in this snippet is the %clipboard part, which TextExpander fills in with whatever is on the system clipboard when the snippet is expanded. Before expanding the snippet in the body of an email, I just select the grade in my spreadsheet and copy it. When I type the couple of magic keystrokes, the text above is inserted, complete with that student’s score.

OK, so this isn’t so much a real ‘mail merge’ as a ‘data merge’, but it’s still a time saver and requires effectively zero setup. It also works with whatever is on the clipboard, meaning it is not tied to a specific data store, unlike a traditional mail merge and its data mapping requirements.

Chromebooks and the ‘technology floor’

A few weeks back I wrote about using Chromebooks in some of our biology labs, and now that the Acer C720 has started shipping, I ordered two of them to start testing. I’ve only had them for a day, so this is not a performance review in any way, but I will say that it seems like a very functional computer. I’m used to the 11″ MacBook Air as my daily computer, and the screen size and keyboard are on par with that, although the color gamut seems more restricted; so far the battery life seems much better than the Air’s.

Part of what I want to work through as I’m testing is what, exactly, is the service model I’m aiming for — what is the purpose for these? The current computers are used to run evolution and ecology simulation software and a statistics package. I didn’t even bother requesting them this semester for our new bioinformatics exercise, opting instead to encourage students to bring their own, which worked fine. So why not just continue to do that instead of investing in lab-owned notebooks? If we are going to have to virtualize some of the software anyway, why not just give students access to it on their own machines?

This would be consistent with the trendy practice known as ‘bring your own device’ (BYOD), but I’m not convinced it’s the right way to go for us. One of the biggest weaknesses of this policy for education is that it lacks any kind of predictability. I’m not referring to predictability in terms of make, model, and minimum specs; I mean whether the student brought their computer that day. There is a great benefit to being able to count on certain equipment being available and functional when planning a lab. For example, I know that we have a number of nice spectrophotometers, so I can design a lab exercise that requires them. Knowing that each student or pair of students is going to have access to a computer, and knowing what that computer is capable of, changes the design of the lab, to put it simply.

Here are a few activities that come to mind:

  • The lab manual could be moved online. As it stands, we have the manual printed for the students and (try to) collect the cost from them, which turns me into a cashier. This could be as simple as a PDF or as complex as a real ebook with interactive content.
  • We could produce short instructional videos for routine lab techniques and link to them from the online lab manual. These would be for things like pipetting, using the spectrophotometer, setting up a TLC experiment, or even setting up a slide on the microscope, which seems like a neverending mystery to many students.
  • Get into more detail on the practical side of data management and statistical testing. As it stands, we send students away and ask them to perform simple statistical tests on the data they have collected, but what they take away from this varies widely across the class. Some really get it, but others can’t get a handle on it. It would be nice to do more show-and-tell before sending them away to work alone.
  • Do some real training in literature searching. We have a light requirement for incorporating primary literature into the 2 formal lab reports, but we don’t spend time in lab talking about how to do this. I’d like to change this.

I could go on with a dozen other examples, but none of these is surprising, nor do any require anything other than a computer with Internet access. But you have to know it’ll be there. Right now, the range of access to a computing device begins at ‘none’, and having a set of lab computers would drastically improve that to ‘something’. I guess that is what I find so attractive about this whole idea: it offers a ‘technology floor’ where there is none now.

The idea of a technology floor works on a number of levels here. It supports whatever objectives we decide to teach toward in any particular lab; that’s its primary job. But the floor doesn’t have to remain exposed; students could choose to bring an equivalent computer of their own and use it instead. I’m thinking of the difference between vinyl flooring and travertine tile — they look and feel quite different, but ultimately serve the same function.