
Technology and control of feedback: My encounters with ROCI

Roger Atkinson

I surmise that some readers will arrive at this line because they are curious to know what this strange thing ROCI might be. If you have been Facebooked and YouTubed and Twittered and iPadded, now brace yourself to be 'ROCIed' (at least in the journal publications part of your CV)!

ROCI is the Australian Research Council's "Research Outlet Consultation Interface". It is the ARC's computer program that presents an Internet interface [1] for collecting reader feedback during its Phase 1: Public Consultation, 14 February - 4 April 2011 [2]. That covers the 'what' and the 'when', the simple preliminaries to this column's longer sections: firstly the 'how' (which is more like a 'how artfully'), and secondly the most difficult part, the 'why' (which to some extent is a 'why bother').

Starting with how ROCI works, I hope that many members of HERDSA have encountered it for themselves, as urged by Roger Landbeck in a February issue of HERDSA's The Weekly Email News [3]. Unfortunately, it is likely that ROCI will be shut down promptly at the end of 4 April, and by the time this issue of HERDSA News is published it will be too late to undertake for yourself an initial or further exploration of how it works. Thus the observations that led me to use the word control in the title, and the phrase how artfully, cannot be repeated. To complicate matters, my observations were made during two periods, 15-16 February and 17-19 March. At some unknown date between those periods, ROCI received a major upgrade that corrected a number of deficiencies. It is a useful digression to comment on these corrections, because an unknown number of early respondents may be unaware of them, and may need reassurance that their inputs were not 'lost' when ROCI 'Mark I' was upgraded to ROCI 'Mark II'.

The most notable deficiency was that 'Mark I' did not allow respondents to revisit their own comments. With 'Mark I', hitting the 'Save' button really meant 'Save, exit, no further access to your comments'. With 'Mark II', respondents could log in again, view and edit their own comments. It's possible that 'Mark I' was disconcerting to many other respondents, as it was for me. Perhaps we have been spoilt by too much experience with better designed interfaces? In my second period of ROCI observations and entries, I noted that the entries from my first period could not be retrieved and I felt it prudent to re-enter my text, just in case. Being cautious, I had my own copy on disk, as I use copy and paste into text boxes on interfaces like ROCI, rarely doing direct typing. Re-entering was not a problem, but was it necessary?

A second deficiency in 'Mark I' was the lack of proper advice on length limits for text box entries. My carefully composed, evidence-based text for ROCI's "Step 3 Ranking" received a tart and incorrect error message, "The field Ranking Evidence must be a string with a maximum length of 1024." It was incorrect because the word "field" should have been "Tiers" (field ranking evidence was a matter for "Step 4 Fields of Research"). Getting under the limit needed some painful editing, and an awareness that a carriage return has to be accompanied by a line feed, i.e. each line break counts as two characters. Painful editing, because the journal I was commenting upon, AJET, has plenty of evidence for promotion relative to several of its peers [4, 5]. I had to leave out the key URLs to get below the magic number 1024, and that for me was very frustrating indeed. Why 1024? Because that is binary 10000000000 or hexadecimal 400, i.e. 2^10; some unknown programmer(s) having a bit of fun, I guess. However, the unknown programmer(s) came good in 'Mark II', which features a 'counter' for each text box, telling you the number of "Characters left in your quota". Neat; it uses Microsoft's ASP.NET AJAX [6].
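
For readers curious about the mechanics, here is a minimal sketch (in TypeScript, and certainly not the ARC's actual code) of the kind of quota counter that 'Mark II' appears to provide, assuming, as I observed, that a line break is stored as carriage return plus line feed and therefore costs two characters against the 1024-character limit:

    // Minimal sketch (not the ARC's actual code) of a text box quota counter.
    // Assumption from my observations: each line break is stored as CR + LF,
    // so it costs two characters against the 1024-character quota.
    const QUOTA = 1024; // 2^10, or hexadecimal 400

    function charactersLeft(text: string, quota: number = QUOTA): number {
        // Normalise bare "\n" line breaks to "\r\n" so each one counts as two characters.
        const normalised = text.replace(/\r?\n/g, "\r\n");
        return quota - normalised.length;
    }

    // Example: 1020 visible characters plus three line breaks overruns the
    // quota by 2, because the breaks cost 6 characters, not 3.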

Why quibble about interface design errors that have been corrected? Because if there is poor attention to detail, inadequate initial testing and premature release in one area of ERA, should we examine critically all other areas of ERA for similar faults? To be prudent, 'Yes'. For example, on interface design, everyone is familiar with the standard advice to examination candidates, 'Read the whole of the exam paper before starting to answer questions'. ROCI did not heed that principle; it did not allow you to do that readily. Taking another aspect, everyone is interested in some kind of indicator of the views held by others, a principle understood very well by media organisations in their solicitation of reader and viewer opinions, for example the ABC's The Drum and its Current Poll Results [7]. ROCI does not attempt to adopt that principle, as each individual's comments and any overviews that the total feedback may reveal will be available only to "contracted peak bodies and academic groups" [2]. Two more points about this quibble: the ARC can remain indifferent, as it could claim to be understaffed in relation to the tasks it has been given (or gave to itself), and it could regard ROCI as a sacrificial pawn that diverts critics from bigger targets (such as the ARC's definition of research excellence).

Returning to the main theme in how, ROCI requires a respondent to "Search ranked outlets" to find a journal or conference. Selecting a journal leads to two functions, "Comment on this Outlet", and "Add Peak Body". Here is the first major element of technological control, a channelling of feedback into an individual journal's pigeon hole in ROCI's database. You have to work through to "Step 5 Comments and Conflicts" to find space ('no more than 200 words' or 1024 characters), and of course that space is pinned down to just one journal even though your comments may be pertinent for all journals. Comment upon the concept of Tiers is not encouraged!

Other instances of control are subtle put-downs. "Step 1, Outlet eligibility" seeks information on "why you feel you are qualified to comment on this outlet", and asks about "your academic expertise relevant to the field (e.g. professor of the area)". How about 'part time postgraduate student in the area'? "Step 5 Comments and Conflicts" asks respondents to "Declare Conflict of Interest" (the prescribed choices are "None", "Editorial Role", "Previous Editorial Role", "Publisher" and "Other"). I for one prefer to view "Editorial Role", for example, as a virtue, not a conflict. "Step 3, Ranking" seeks evidence, which "... may include your knowledge of the field and how the outlet compares with other outlets, journal metrics, acceptance rates, editorial committee, editorial process and any other supporting evidence to justify your rank submission." Quite a number of readers may want to ask, 'Did the ARC really consider proper evidence in those categories when it produced Tiers 2010?' Different rules for ROCI respondents?

Moving on to the why aspect of ROCI, it is useful to adopt the perspective of a hierarchy of objectives, for example as used in Bloom's Taxonomy [8]. At the bottom of the hierarchy seems to be the ARC's concern about being overwhelmed by a large volume of feedback. The main clue to defining this objective appears to be clause 1.1.7 in ATM 66 [9].

Well, "100,000 pieces of feedback", "over 4,000 requests", "over 700 expert reviewers"! Needs a technological control mechanism! Thus at the simplest level, ROCI's objective is to cope with volume. For the next level of objectives, consider the phrase "over 700 expert reviewers and peak bodies to assist with the finalisation". Seemingly, this is an unwieldy number and the objective must be to reduce it, whilst continuing to minimise the scope for criticism if a "reviewer" or "peak body" who should have been consulted, was not consulted. The main clue here is on ROCI's home page [1]:
... individuals are encouraged to recommend the peak body or disciplinary group that they believe is the most appropriate to review the public consultation feedback for an outlet during Phase 2. The peak bodies nominated by individuals during Phase 1 of the public consultation will be checked and verified by the ARC before being uploaded on to the Public Consultation website. ... Please note that a peak body's representation on this list does not ensure participation during Phase 2 as this is a formal tender process. [1]
This is artfully done. Encourage a blow-out in the number of 'peak bodies' nominated. The blow-out can be monitored using the ERA's 'list of suggested peak bodies' [10]. The 'PeakBodyList' appears to have started from the ARC file 'ERA_expert_reviewers.pdf', containing a list of 62 'Learned Academies and Discipline Peak Bodies involved in developing and reviewing the ERA 2010 Ranked Journal List', and has grown steadily during the current Phase 1: Public Consultation. Regrettably I do not have a full record, but the numbers nominated grew quickly: 97 on 15 February, 304 on 13 March, and 343 on 20 March (the data entry clerk could not keep up; a simple spell check revealed 14 errors in 343 lines). Sounds overwhelming, but the key phrase is 'Phase 2 ... is a formal tender process'. If a 'nominated peak body' does not tender, either by itself or as a partner in a consortium, then it can safely be excluded from further participation as far as the ARC is concerned. It will be a matter for successful tenderers or 'service provider(s)' to deal with (or not deal with) 'nominated peak bodies' that did not tender, or did tender but were not awarded a contract. Numbers problem solved! Most of the 'nominated peak bodies' will be deterred from tendering, I guess, by the complexity and verbosity of ATM 66 [9]. That gets the numbers down, and then a modest number of contracts (probably far fewer than ERA 2010's 62 'academies') may be awarded. Make them work hard, especially in the cases of 'multiple service providers responsible for a title':

1.2.10 The service provider(s) will be required to review the public consultation feedback, consult with relevant individuals and groups, work with other service provider(s) if cross-over of the ranked outlet titles occur, seek international peer review and finalise recommendations of the ranked outlets lists.
...
1.2.10 The service provider(s) must also provide final rank and FoR code recommendations to the ARC. If there are multiple service providers responsible for a title, differences in recommendations must be resolved amongst the service providers before the final recommendation is provided to the ARC. [9]

It is not clear from ATM 66 whether service providers are obliged to maintain confidentiality with respect to their reports and recommendations to the ARC. ATM 66 says cryptically, 'Our Confidential Information includes the information listed below: ... [*insert details.]' [9]. That illustrates another level of objectives (we need a numbering scheme here; let's say 'Level 3'!), concerned with giving 'flexibility' or 'wriggle room'. If the ARC receives confidential recommendations from its 'service provider(s)', it will have a good degree of 'flexibility'.

The main clue to the next level of objectives ('Level 4'!) is also on a home page, that for ERA 2012, titled Review of the ERA 2010 Ranked Outlet Lists, first sentence [2]:

The ranked journal and ranked conference lists form an integral part of the ERA evaluation process. [2]
That's it. The principal conclusion of the Review is given in the first sentence. The objective is to establish beyond question that journal and conference ranking is here to stay. Why bother? End of story. However, there may be a suggestion of a limited 'flexibility' emerging from the Review. Consider the clues in the elaborateness of the process mapped out for the Review, the stimulus it has given to the emergence of new consortia tendering to become a peak body (for example, the AARE-led consortium that includes HERDSA, ascilite, AVETRA, CADAD, PESA, MERGA, SORTI and ACDE, tendering for the whole of FoR13 Education), and the growing amount of public criticism of journal ranking [for example, references cited in 4 and 11; 12; 13; 14]. The ARC can afford to be quite flexible about the review outcomes in relation to rank order. Some journals will be promoted, some demoted; perhaps the rankings of Australian-based, open access, newer generation journals may creep up a bit relative to European or US-based, paid subscription only, traditional or older generation journals. However, rank order is not as important as the cutoffs between Tiers:
The ARC may intend to ease the pressure a little by softening its stance on the rigidly normative nature of Tiers (5% A*, 15% A, 30% B, 50% C), as a recent article in The Australian [14] may suggest. In commenting upon some data tables concerning high-performing disciplines, ARC CEO Professor Margaret Sheil is reported to have said, "It shows that the proportion of A* and A journals did not correlate directly with the performance of different disciplines" [14]. We note that references to 5% A*... seem to have disappeared from the ERA website [15], or have become deeply buried. Could we be on the verge of something less severely normative, e.g. 10% A*, 20% A, 35% B, 35% C? [4]
Whilst it seems likely that the ARC will reserve an exclusive, closed-doors prerogative over any decision to become "less severely normative", it would do no harm if our "service providers" urged the ARC to go in that direction. After all, in recent years we have become "less severely normative" in relation to % professors, % associate professors, % senior lecturers, etc., and that was not ruinous to universities.
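
To make concrete what "less severely normative" would mean in journal counts, here is an illustrative calculation (mine, not the ARC's; the field size of 400 ranked journals is a hypothetical example), comparing the published 5/15/30/50 split [15] with the speculative 10/20/35/35 alternative raised in [4]:

    // Illustrative only: translating the normative Tier percentages into journal
    // counts for a hypothetical field of 400 ranked journals.
    type TierSplit = { aStar: number; a: number; b: number; c: number };

    function tierCounts(totalJournals: number, split: TierSplit): TierSplit {
        return {
            aStar: Math.round(totalJournals * split.aStar),
            a: Math.round(totalJournals * split.a),
            b: Math.round(totalJournals * split.b),
            c: Math.round(totalJournals * split.c),
        };
    }

    // Current normative split (5% A*, 15% A, 30% B, 50% C) [15]:
    tierCounts(400, { aStar: 0.05, a: 0.15, b: 0.30, c: 0.50 });
    // => 20 A*, 60 A, 120 B, 200 C

    // Speculative softer split (10% A*, 20% A, 35% B, 35% C) [4]:
    tierCounts(400, { aStar: 0.10, a: 0.20, b: 0.35, c: 0.35 });
    // => 40 A*, 80 A, 140 B, 140 C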

Now we are up to 'Level 4'. Borrowing again somewhat loosely from Bloom's Taxonomy, there are more levels in the hierarchy, analogous to 'analysis', 'synthesis' and 'evaluation'. But the ARC's effort seems to end at about 'Level 4', and for 'Level 5' and higher objectives researchers into these matters have to go elsewhere, for example to the analyses by Pontille and Torny (2010) [12] and Cooper and Poletti (2011) [13] (readers may decide for themselves whether these papers, both in Tier B journals, are representative of "only a few papers of very high quality", as the Tiers definition allows for B journals, or are something less).

Rather than engaging in the current debates about the ARC's definition of research excellence, I will tender an example which for me personally is illustrative, and it's about time I revisited the problem. Over forty years ago I was a young researcher in soil chemistry, enjoying some quite satisfying successes in getting my work into A* journals including Nature, J. Phys. Chem. and Proc. Roy. Soc. A. From those years, I remember some conversations during a visit back home to the family farm, around seeding time, when my father Bill and farmhand Colin were working the seeding dayshifts and my older brother Gordon worked the nightshifts. With a short growing season, you had to get the seeding done quickly. This was in WA's north eastern wheatbelt, before the time that climate change really started to hit the district, and back in the days when tractors did not have air conditioned cabs and superphosphate came in 180 pound bags. Not surprisingly, journal prestige and basic research papers in soil chemistry were of little interest to them, containing nothing that would lighten their loads. So, who is to have the main say on what constitutes research excellence? They certainly felt they were not a part of the constituency addressed by research excellence in agricultural science. To represent one aspect of their views as best I can remember, the phrase 'ego trip for professors' comes to mind, though their words at the time would have been different. Sadly, I cannot check; all three are now dead.

Ego trip for professors? Isn't that a bit over-stated? Surely ERA has higher level purposes than that? It's debatable. As a first item of evidence I tender the ARC's own phrase from its official, formal definition of Tier A* journals (and I look forward to the ARC's tendering of evidence to the contrary):

... journals ... where researchers boast about getting accepted [15]

References

  1. ARC (2011). Welcome to the ERA 2012 Ranked Outlets Public Consultation. https://roci.arc.gov.au/
  2. ARC (2011). Review of the ERA 2010 Ranked Outlet Lists. http://www.arc.gov.au/era/era_2012/review_of_era10_ranked_outlet_lists.htm
  3. HERDSA (2011). The Weekly Email News, Wed 23 Feb 2011. ERA Ranking of Journals - Public Submissions.
  4. AJET Editorial 27(1). Review of the ERA 2010 Ranked Outlet Lists. http://www.ascilite.org.au/ajet/ajet27/editorial27-1.html. Includes key URLs. Portions of the text appeared also in the ascilite Executive Committee's advice to Members (email from Caroline Steel to Members list, subject 'Learning technologies and ERA2012', 21 Feb 2011 08:51:46 +1030).
  5. AJET Editorial 26(5). Idle Moment 40: Impact Factor revisited. http://www.ascilite.org.au/ajet/ajet26/editorial26-5.html
  6. Microsoft Corporation. ASP.NET AJAX. http://www.microsoft.com/downloads/en/details.aspx?FamilyID=ca9d90fa-e8c9-42e3-aa19-08e2c027f5d6&displaylang=en
  7. ABC. The Drum. Current Poll Results. http://www.abc.net.au/polls/thedrum/vote/total.htm
  8. Bloom, B. S. (Ed.) (1956). Taxonomy of educational objectives: Handbook 1: Cognitive domain. London: Longman.
  9. ARC (2011). ATM 66 - Provision of review and recommendations for the ERA 2012 ranked outlets. Obtain via http://www.arc.gov.au/about_arc/tenders.htm ('ATM' is 'Approach to market', i.e. a call for tenders)
  10. ARC (2011). https://roci.arc.gov.au/Home/PeakBodyList
  11. Atkinson, R. J. (2010). ARC announces a Tier Review Process. HERDSA News, 32(3). http://www.roger-atkinson.id.au/pubs/herdsa-news/32-3.html
  12. Pontille, D. & Torny, D. (2010). The controversial policies of journal ratings: Evaluating social sciences and humanities. Research Evaluation, 19(5), 347-360. http://halshs.archives-ouvertes.fr/halshs-00568746/fr/ (also at http://www.ingentaconnect.com/content/beech/rev/2010/00000019/00000005/art00004)
  13. Cooper, S. & Poletti, A. (2011). The new ERA of journal ranking: The consequences of Australia's fraught encounter with 'quality'. Australian Universities' Review, 53(1), 57-65. http://www.aur.org.au/archive/53-01/aur_53-01.pdf
  14. Rowbotham, J. (2011). Journal rankings don't reflect performance. The Australian, 9 March. http://www.theaustralian.com.au/higher-education/journal-rankings-dont-reflect-performance/story-e6frgcjx-1226017977717
  15. ARC (2009). Tiers for the Australian Ranking of Journals. http://www.arc.gov.au/era/tiers_ranking.htm. Obtaining the percentages is now a little more difficult: Google "Frequently Asked Questions Bibliometrics" to locate http://www.dest.gov.au/NR/rdonlyres/8F12ADCB-C221-421E-A128-2C344CD58BDF/18968/FAQBibliometrics_17October2007.pdf

Author: Roger Atkinson retired from Murdoch University's Teaching and Learning Centre in June 2001. His current activities include publishing AJET and honorary work on the TL Forum and ascilite Conference series, and other academic conference support and publishing activities. He composed the phrases 'blood, sweat and four tiers', 'tier review process', and 'clique bodies'. Website (including this article in html format): http://www.roger-atkinson.id.au/

Please cite as: Atkinson, R. J. (2011). Technology and control of feedback: My encounters with ROCI. HERDSA News, 33(1). http://www.roger-atkinson.id.au/pubs/herdsa-news/33-1.html

