Monday, November 21, 2016

More short items, incl. postdoc opportunities

Some additional brief items:

9 comments:

  1. Academic Analytics (Caveat: I've not done business with them):
    In the "What We Do" tab on their website, the main data on which they apparently base their assessments are # of publications, # of citations, # of (federal) dollars, and # of honorific awards.

    Considering the number of "#'s" in the above list, it seems to me this does not evaluate quality; it evaluates quantity. While obviously someone should publish (i.e., the # should not be zero), quality in science can really only be assessed by inviting a bunch of peers and letting them review a department.

    And note that overall quality in a department involves teaching, outreach (getting people and kids interested in, and familiar with, the scientific way of thinking), and research. The numbers they appear to base their assessment on only deal with research (awards are maybe a bit broader, though there are not that many non-research awards for professors out there).

    I think this outfit has good business sense: they see that managers like numbers, and that it takes a lot of effort to collect those numbers and compare them to other places, so they step in and do the counting for you (for a fee).
    But what they are selling is not an assessment of quality. It's a bean counting service that enables, indeed, bean counting. That should be avoided.

  2. pcs, I should have provided a bit more context. For those who don't know, AA is an expensive subscription service that is available to universities. The idea is to provide information to help with comparisons and benchmarking, but like anything that focuses on numerical metrics, it is fraught with inherent biases and challenges. You are absolutely right that these folks focus on counting beans, and in fact only on counting nationally available beans - if someone gets a state grant, or a foundation grant that isn't from someone huge, or an industrial grant, that doesn't get picked up. That's one major issue - even if you buy into bean counting as a means of evaluation, are they properly counting beans in the most useful way?

    They do some more sophisticated things with the numbers than just compiling them. For example, you can either go with their weighting scheme or define your own, and then see how your department or program stacks up against other departments or programs in various ways. That is, they can create normalized-by-program metrics. There are still many issues with this, the biggest being that if you do it incorrectly it will actively mislead you. Ranking physics departments by citation count per faculty member heavily favors departments with large experimental high energy physics groups, for example (a toy illustration follows at the end of this comment).

    Bean counting as an exclusive judge of quality is lazy and inaccurate. Ignoring beans altogether is also not reasonable. I'm trying to get a handle on whether people out there find this bean counting tool to be a useful one for what it purports to do.
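    To make the weighting and normalization point concrete, here is a minimal sketch (in Python, with entirely invented numbers; this is not AA's actual scheme, data, or software) of how the same hypothetical departments can rank differently under raw totals versus per-faculty normalization, and how incommensurate raw scales let one metric dominate a naive weighted sum:

    ```python
    # Toy sketch, NOT Academic Analytics' actual methodology or data.
    # All numbers are invented, to illustrate two points:
    # (1) per-faculty normalization can reorder a ranking, and
    # (2) naively weighting incommensurate raw counts lets the
    #     largest-scale metric (here, citations) dominate the score.

    # name: (faculty count, publications, citations, federal dollars in $M)
    departments = {
        "A (large exp. HEP group)": (60, 900, 45000, 30.0),
        "B (mid-size)": (35, 420, 12000, 18.0),
        "C (small)": (20, 260, 8000, 12.0),
    }

    # A user-defined weighting scheme (hypothetical weights).
    W_PUBS, W_CITES, W_DOLLARS = 0.3, 0.4, 0.3

    def score(pubs, cites, dollars, n_faculty, per_capita):
        """Weighted sum of raw counts, optionally normalized per faculty member."""
        s = W_PUBS * pubs + W_CITES * cites + W_DOLLARS * dollars
        return s / n_faculty if per_capita else s

    for label, per_capita in [("Raw totals", False), ("Per faculty member", True)]:
        ranked = sorted(departments,
                        key=lambda d: score(*departments[d][1:],
                                            departments[d][0], per_capita),
                        reverse=True)
        print(f"{label}: {ranked}")
    ```

    Even per head, the citation term swamps the others in this toy example, which is the kind of distortion a large collaboration can introduce; whether a real tool's normalizations handle that well is exactly what a subscriber would want to check.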

  3. Okay, I agree ignoring beans is not reasonable (hence my "not zero" remark), but facilitating bean counters (which is what I still think AA is doing) would not be my idea of helping to evaluate quality in a more appropriate way.

    Anyway, as I said, I don't have experience with them, so I won't be of much help.

  4. My point is that if a faculty member wants to spend money to see where they stand while knowing (better than administrators do) what the caveats of the numbers game are, that's fine.
    If the goal is to aid bean counting, I would suggest letting the counters count the beans and see if it's worth the effort (instead of having faculty count the beans for them).

  5. Good points. I agree completely - I would not advocate general faculty spending time on this, and it's already unhealthy how much time some faculty spend looking at citation counts and h-indices. I've started a term as department chair, however, so now I have to pay attention to this bean counting stuff, because various administrators care about these things. One question chairs here were asked recently: AA is very expensive - is it worth it?

  6. Anonymous (10:27 PM)

    Hi Doug,

    Can you please comment on Microsoft's recent announcement that it is moving ahead with building an experimental quantum computer from topological "anyons"?

    http://blogs.microsoft.com/next/2016/11/20/microsoft-doubles-quantum-computing-bet/#sm.0000jwm40g173zeexr9f5tej3iuuq

    Do you think there is something real here? Why would two university professors stay in their labs but become "owned" by a corporation?

    Looking forward to reading your opinion!

  7. Anon, good call - I was planning on writing something about this, but it might take a bit. Short answers to your latter two questions: For the field as a whole, yes, I think there is something to this quantum computing business, though I'm not sure I'd bet on the particular implementation backed by Microsoft. Second, if someone were offering to bankroll research you were already planning to do, at a level you could never match from other sources, that would be pretty tempting. One thing I do wonder about is whether there are strings attached (e.g., non-compete clauses) that affect grad students and postdocs in those groups.

  8. Isn't this exactly the same as what's been going on with Google and John Martinis? They still publish.
    I was going to say that Microsoft has been funding Delft for a while now - and indeed, after reading the blog post, this is about Delft (and Charlie Marcus).
    It seems Microsoft has copied what Google did with Martinis.

  9. pcs, yes, in a nutshell.
