Hello VIS community,
since IEEE VIS 2020 we have been using a new set of keywords in several submission processes. The purpose of this blog post is to explain how the keywords matter to program committee (PC) members and to offer a little help in choosing keywords. A first blog post already covered how keywords matter for paper authors.
Read this text if you have been selected as a PC member for IEEE VIS. You should be interested in getting a good set of papers suggested to you: papers you are interested in and maybe even excited about reviewing.
Your first contact with the new keywords will be the expertise rating interface.
Your keyword selection interface will look like this:
The page will explain in more detail what the categories “expert”, “competent”, “limited”, and “none” mean. You will recognize the new keywords used at VIS by the additional explanations added as a second line under each keyword.
Rating your expertise in this interface will be important later for the bidding interface, so do it as truthfully as possible. Paper chairs might also peek at your expertise ratings when they need to make changes to paper assignments.
Note that setting your expertise to “expert” does not mean that you will by default or automatically be assigned papers related to the keyword. This is particularly important to remember for the large domain keywords shown above. Of course nobody can be an expert in all visualization applications to “Social Science, Education, Humanities, Journalism, …”, for example. However, if you are interested in seeing recommendations in the bidding interface for education papers and you consider yourself an expert in visualization for education, then select “expert” for this keyword, as those papers will show up high in your list. You do not need to bid on the journalism, intelligence analysis, etc. papers that will also show up there - remember that the number of papers that have chosen this particular keyword is likely manageable.
The second time you’ll be confronted with keywords will be during bidding. We already described bidding in Part 1 of the keywords series. If you’ve never participated in it, take a look at this post first.
In the new area model you will have a lot more papers in your list, so you will not be able to look at all abstracts. Keywords will help you filter the list to papers of interest through a) the matching score calculated between your expertise ratings and the keywords selected by paper authors and b) a search/filter interface on the bidding page.
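As a toy sketch of how these two mechanisms help, consider the example below. The paper IDs, scores, and keywords are made up for illustration; the real bidding interface provides the sorting and searching for you.

```python
# Toy bidding list; ids, scores, and keywords are made up for illustration.
papers = [
    {"id": 101, "score": 0.83, "keywords": ["Software Prototype"]},
    {"id": 102, "score": -1.0, "keywords": []},
    {"id": 103, "score": 0.40, "keywords": ["Education"]},
]

# a) sort by the matching score, best matches first
by_score = sorted(papers, key=lambda p: p["score"], reverse=True)

# b) filter to papers carrying a keyword you rated yourself "expert" on
my_expert_keywords = {"Education"}
of_interest = [p for p in papers if my_expert_keywords & set(p["keywords"])]

print([p["id"] for p in by_score])     # [101, 103, 102]
print([p["id"] for p in of_interest])  # [103]
```

Note how the paper with no keywords (score -1) always ends up at the bottom of the sorted list, which is why it is worth scanning that region for hidden gems.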
You will not be part of the paper assignment process but it is worth explaining how both expertise rating and bidding influence which papers you will be assigned.
Paper-to-PC member assignment is a multi-step process at IEEE VIS:
- The initial matching of PC members to papers will be done primarily based on bids.
- Then paper chairs will check this initial assignment. If it seems sensible, they will keep it and otherwise adjust it. They will try their best to make sure that you receive papers that you wanted to or were willing to read, that match your expertise, while balancing conflicts of interest between assigned PC members and paper authors.
Step 1 of the process, however, is automatic and based on your bids - and NOT on automatic keyword matching. This is a change in the process from some previous years. Therefore, for you the purpose of keywords is to help you navigate and filter the large set of papers you will have to bid on.
What strategies should I use for rating my expertise?
Be truthful in your selection of expertise ratings. You want to bid on papers that you know something about and selecting the right keywords as “expert” will help here. You have been selected to be on the PC because the paper chairs believe you are an expert in some area of VIS. This means that you should rate yourself expert on several keywords; less than 3 is probably too few and more than 8-10 is probably too many. Note that you can still rate yourself “expert” if you are an expert in one of the subareas of a larger keyword: “Social Science, Education, Humanities, Journalism, …”, for example.
What strategies should I use for bidding?
In the new area model you will have up to 500 papers in your list, and you will not be able to look at all abstracts. Therefore, start by setting all papers to “reluctant” and then organize your list: sort by the match score, use the search box to find papers with keywords that interest you, use the filter mechanism to only look at papers with keywords you rated yourself “expert” on, and look at papers submitted to areas that best match your expertise. Also look for papers where authors made mistakes in selecting keywords - in particular those with a matching score of -1. Some hidden gems might be residing there.
- Set all papers you would be interested in reviewing to “want”. The more the better. You are much more likely to receive a paper from this list than from any other.
- Set all papers you could review because you have the necessary expertise to “willing”. Many more would be better.
- Leave everything else as “reluctant”.
- In total, you should have bid on around 40 papers across the “want” and “willing” categories.
Keep in mind that the fewer papers you select, the more likely you are to receive one from your “reluctant” pile - and this pile will be huge and random. So do take bidding seriously and mark as many papers as “want” or “willing” as you can. If you do get assigned a “reluctant” paper, the paper chairs will look at the assignment to make sure that you could do the job, but you will make their life significantly easier if you selected many papers that they could consider giving you instead, should a fit indeed be bad.
Hello VIS community,
keyword selection has been a familiar fixture in the submission and review processes of IEEE VIS for decades. The primary use of keywords in the current PCS system is to create a “match score” between a paper and a potential reviewer. Such match scores are displayed in several stages during the review processes. For example, during the phase for program committee (PC) members to bid on papers, PC members can sort papers in the pool according to the match scores computed based on their individual expertise. When a PC member is looking for reviewers for a specific paper, match scores are automatically computed for the potential reviewers. The algorithm for allocating papers to PC members can be configured with different weighting of bidding information and match scores. Currently, the recommendation is to rely on the bidding information only. The VIS papers co-chairs often compute, visualize, and analyze the distribution of papers in relation to keywords. In the coming year, such information will be extremely useful to the Area Curation Committee (ACC) that reviews the VIS area model and the keyword set regularly.
One major task undertaken by the reVISe committee was to define a new set of keywords as part of the unification of the three conferences. In IEEE VIS 2020, this new set of keywords was deployed in the submission and review processes for several tracks including the full paper tracks of VAST, InfoVis, and SciVis, ahead of their unification in VIS 2021. The purpose of this blog post (Part 1) is to explain how the keywords matter to paper authors and to offer a little help in choosing keywords. This will be followed by a second blog post (Part 2) where we explain how the keywords matter to PC members.
Part 1: Keywords for Paper Authors
If you are the author of a paper, short or long, sent to IEEE VIS you will have to choose keywords for your submission inside the paper submission system (PCS) where the page looks like this:
You might wonder whether it is possible to tick “wrong” boxes and how doing so would affect your paper. In this blog post we explain in a little more detail what the effect of ticking keyword boxes is in practice. In short: the main purpose of keywords is to make sure that your paper gets the best possible reviewers. But how? Let’s first take a step back and talk a little about a typical reviewing process.
Background: what are PC members and what is bidding?
Generally, at VIS each paper gets assigned two reviewers from a pre-selected list of people who have shown expertise in Visualization/Visual Analytics, have demonstrated reviewing skills in the past, and have agreed to review and supervise the reviewing of a certain number of papers in a given year. This list of people is called the program committee, or PC for short. It is important that the two PC members assigned to your paper are excited to review it and have expertise on its topic, because each of them will have to find one additional reviewer outside of the program committee, called an external reviewer.
Ok, so what does this have to do with my keywords?
As you read above, it is important that your paper gets PC members that are a good fit. One mechanism that ensures this fit is called bidding. Think of bidding as a self-declaration by PC members about which papers they would like to review. PC members see the title, abstract, and keywords for each paper submitted to the conference and need to indicate whether they “want” to, are “willing” to, or are “reluctant” to review each paper on the list.
When PC members bid on papers they go through an interface that looks like this:
In the bid column on the left they give their bid for a paper. The second column, called “score”, shows a matching score calculated from the keywords you selected for your paper and the expertise each PC member declared for those keywords (details about the expertise selection will be the subject of another blog post). The score here is 0.83, which is pretty high. Next is the other information PC members will see about your paper: the title, the keywords you selected, and the abstract you entered.
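To make the idea of a matching score concrete, here is a small sketch: assume each expertise level maps to a weight and the score is the average weight over the paper’s selected keywords. This is purely an illustration - the weights, the averaging scheme, and the keyword names below are assumptions, not the actual PCS formula.

```python
# Illustrative only: weights and averaging are assumptions, NOT the PCS formula.
EXPERTISE_WEIGHT = {"expert": 1.0, "competent": 0.66, "limited": 0.33, "none": 0.0}

def match_score(paper_keywords, pc_expertise):
    """Average the PC member's expertise weight over the paper's keywords.

    A paper with no keywords gets -1, mirroring the bottom-of-the-list
    score mentioned later in this post.
    """
    if not paper_keywords:
        return -1
    weights = [EXPERTISE_WEIGHT[pc_expertise.get(k, "none")] for k in paper_keywords]
    return sum(weights) / len(weights)

# A PC member who is expert on one of a paper's two (hypothetical) keywords
# and competent on the other ends up with a high score:
expertise = {"Software Prototype": "expert", "Visualization Toolkits": "competent"}
score = match_score(["Software Prototype", "Visualization Toolkits"], expertise)
print(round(score, 2))  # 0.83
```

Under such an averaging scheme you can also see why ticking many keywords dilutes the score: every additional keyword the PC member has no expertise in pulls the average down.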
PC members have a large number of papers to go through in the interface above, and they often sort by the score column to find the papers that best match their expertise, or use the search field to search for keywords. In the future there will also be features that allow PC members to filter by keywords they rated themselves highly on.
So the purpose of selecting keywords as a paper author is to get your paper rated highly for the right people, so that they will see it in their list and declare it as a paper they want to or are willing to review.
Keyword Selection Strategies for Authors
So what can you do to get rated highly for the right PC members? It’s rather simple: select those keywords that describe an expertise you wish your reviewers to have. In contrast to many other keyword selection exercises, you do NOT select keywords to describe the content of your paper. Focus on the reviewing expertise you would like to have on your paper.
For example, let’s take one of the most highly cited papers of recent years:
D3: Data-Driven Documents
Michael Bostock, Vadim Ogievetsky, Jeffrey Heer
IEEE Trans. Visualization & Comp. Graphics (Proc. InfoVis), 2011
This paper could benefit from reviewers with expertise in building visualization toolkits and libraries and in building concrete implementations. So for this paper we would primarily select the keywords “Software Architecture, Toolkit/Library, Language” and “Software Prototype”. Now any PC member who rated themselves expert on these keywords would see the paper pop up higher in their list.
We wouldn’t select other keywords that describe the content of the paper but not the reviewing expertise that would be important for it: for example, “Computational Benchmark Studies”. The paper includes such a study, but it is not the main contribution, and we’d rather have reviewers with expertise in the software design.
What are the “Other” keywords and the textfields for in the keyword selection interface?
With the move of the whole conference to the new unified model, there is now a process in place for updating keywords. A new committee, the Area Curation Committee (ACC), will analyze every year how keywords are used. It will look at whether certain keywords are too broad (too many papers select them), whether some are underused, and which keywords may be missing.
When you select “Other Data”, “Other Application Area”, “Other Contribution”, or “Other Topics and Techniques”, you are encouraged to provide missing keywords in the text fields just below the main list of keywords. The ACC will look at all suggestions for missing keywords and, if they are sufficiently frequent, will propose an update to the keyword list in the future. In addition, the ACC will look at feedback provided in the text field “feedback on the list of keywords” in order to suggest improvements for the future.
What is the worst thing I can do?
There are three things you can do wrong:
- Select no keyword at all. If you do this your paper will always have a match score of -1 and will be at the bottom of the list for every PC member.
- Select only one or several of “datatype agnostic”, “domain agnostic”, “other contribution”, or “other topic”. PC members cannot rate themselves on these keywords, and hence the result will be the same as if you had selected no keyword at all.
- Select many (>5) or even all keywords. Doing this will ensure that you have a matching score but the score will be rather low for every PC member making your paper reside somewhere at the bottom of the bidding list.
What happens if nobody bids on my paper?
This can happen, but it is rather rare. If it does happen, the paper chairs will ensure that your paper is assigned appropriate PC members.
What else can I do so people bid on my paper?
Besides selecting appropriate keywords, there are a few things you can do to make sure that your paper gets bid on: 1) have a descriptive title, and 2) write a good abstract that is not too long and describes in its first sentences what the paper is about. Imagine a PC member reading 100+ abstracts during bidding: they will foremost want to know a) what the paper is about and b) why they should care about it. Make sure to say this concisely and clearly.
What is the relationship between keywords and areas?
There is no formal mapping between keywords and areas. Your selection of keywords is not restricted by the area you chose for your paper, and the keywords you choose don’t affect the areas you can choose from.
The reVISe Committee was struck to provide a unified and inclusive venue for the visualization community. This is a result of the growth of the community over the years, in terms of numbers but also in the diversity of intellectual contribution types.
Previously, contribution types were neatly categorized into separate conferences or symposia at IEEE VIS (or VisWeek before that). This required authors to decide which community to send specific papers to. With the growth of the visualization community, however, these boundaries became less clear, and the addition of new and emerging areas threatened to fragment the community further. In response, reVISe proposed a unified governance model for visualization and visual analytics research.
The idea behind the governance model is based on a study and in-depth discussion of various governance models used in other contexts ranging from societal governance to company management and academic administration of research and conferences. Some fundamental ambitions and principles derived were:
- Transparency of decisions and information flow to outside observers
- Delegation of mandate that follows responsibility (if you’re responsible, you also have decision power)
- Definition of a hierarchy of bodies with a ratification process to ensure acceptance and validity of decisions
- Division of long-term policy from the operation of yearly instances of the conference
- A large-scale scope and synergy between research areas to avoid the formation of silos while still operating with areas of manageable size
- Stable but dynamic committees, relying on experienced community members for strategic decision roles while still allowing rejuvenation of the governance structure by promoting roles for junior researchers
After in-depth and long discussions on various governance models, an overall structure began to emerge. The starting point was the definition of a senior steering committee which carries the long-term responsibility and decides on strategies and policies. The operation of the annual conference is the responsibility of the executive committee. The relations between the bodies in the governance model rely on a chain of proposal, decision, and ratification. This makes it possible to have a model that builds on delegation of mandate and initiative while still ensuring consensus within the whole structure. It also creates transparency in decisions and appointments.
Some of the critical design choices are related to the role of the program committee and the curation of the areas. After a long discussion we decided to have a unified program committee for all of VIS. The main benefit of this setup is that PC members will be available to review across multiple areas, which also prevents the formation of reviewing silos. The main challenge is of course the management of the PC and the assignment of manuscripts. A consequence of the unified PC was also the need to appoint overall program chairs who coordinate the work of the area program chairs, ensure that the same policies apply to all areas, and assist the area chairs in any overall matters arising from working with the area model.
Another important new committee in the governance structure is the area curation committee. The whole new model depends on wise, clear, and dynamic definitions of areas. In view of this we decided to propose a separate committee to monitor and propose changes to the area model. Here we have to rely on members of our community with deep insight into the field and the foresight to take an agile approach to new topics of interest.
It is our hope that the new governance structure will serve our community well for many years to come and provide the foundation needed to let VIS thrive and generate impact far beyond the boundaries of the community.
See you at VIS 2020!
The reVISe committee,
Christoph Garth (chair), Min Chen, Alex Endert, Petra Isenberg, Alexander Lex, Shixia Liu, Anders Ynnerman.
We would like to take this opportunity to thank two previous members of the reVISe committee: Tamara Munzner (chair) and Torsten Möller, the 2017-2019 reconstruction committee (Hanspeter Pfister (chair), Hans Hagen, Daniel Keim, Tamara Munzner, Stephen North), the VEC, the VAST, InfoVis, SciVis SCs, VIS2019 and VIS2020 OCs, and everyone who participated in town-halls or gave feedback in some other way!
An earlier version of this post mistakenly contained an incorrect depiction of the governance structure.