Monitoring and assessing the risks in financial systems is critical to economic and social well-being. The financial crisis of 2007-2009 saw the breakdown of financial intermediation markets that led to many bank failures and a global credit crunch, drawing attention to the need for enhanced financial systemic risk analysis. Meeting systemic risk analysis challenges requires integration of a range of very large and complex data sources (e.g., financial contracts; counterparty reference data; financial position and transaction data; and market data) with a variety of systemic risk analysis techniques. Given this requirement, it makes sense to explore how new approaches to the analysis of large and complex data sources might be used to enhance financial systemic risk analytics capabilities. One such approach is Visual Analytics (VA). This interdisciplinary panel will explore the application of VA to financial systemic risk analysis, focusing on questions such as: What are the analytic tasks required for financial systemic risk analysis? What data are needed to support these tasks? How can interactive visualization support capabilities such as “situational awareness” and the understanding of interconnectedness in complex financial networks? How can visual analytics be combined with statistical analyses, semantics-based approaches, data mining, and high performance computing to enhance systemic risk analysis? And what visual designs and interactions will best support visual analysis for financial systemic risk, in particular under crisis conditions?
The panel brings together perspectives from the systemic risk analysis “user communities”, i.e., financial regulators and market participants, with those of VA scientists and academic researchers to explore the capabilities and research challenges related to the design, implementation, and evaluation of a holistic financial systemic risk analysis framework with which to derive faster and better insights into the health of financial systems.
Reproducible research refers to the ability of third parties to independently re-create and test the results described in research papers. As the field of Visualization grows and matures, it is necessary to promote research standards that lead to reliable work that people can trust and cite with confidence. For instance, it is desirable to ensure that readers are able to access the data, parameters, and software needed to replicate the results described in a paper.
Reproducible computational research is not a problem for Visualization alone. The whole area of Computational Science is affected by the need to make results more trustworthy and accessible. The IEEE journal Computing in Science & Engineering published a special issue on reproducibility in 2009, and Science published one in 2011 [Peng 2011]. Vandewalle et al. conducted a study on reproducibility in Signal Processing in 2009 [Vandewalle et al. 2009]. In 2008, the ACM SIGMOD conference (the top conference in Databases) started the "Experimental Reproducibility Effort", an attempt to introduce a systematic mechanism to promote the publication of research with high reproducibility standards.
By comparison, the field of Visualization (including InfoVis, SciVis, and VAST) has shown little interest in, or commitment to, promoting higher reproducibility standards. Given the field's focus on data, experimental evaluation, and computational methods, it is especially important to devise mechanisms that encourage easy access to, testing of, and re-use of proposed solutions.
The main goal of the panel is to raise awareness of this issue and to start a constructive discussion about potential mechanisms for increasing the adoption of reproducibility. As discussed elsewhere [Freire et al. 2012], several degrees of reproducibility can be expected, and alternative mechanisms and standards can be used. The panel will be an opportunity to openly compare and contrast them.
Quality of visualization (QoV) has always been the main catalyst motivating research and development in visualization. However, whenever we start to discuss QoV, the term becomes elusive. We often have different emphases: whilst many of us have developed algorithms and techniques for improving QoV, some have conducted empirical studies to evaluate QoV, and a few have proposed metrics for measuring QoV. Some of us believe that QoV is determined by the effectiveness of information delivery, whilst others have focused on the best user experience. This panel brings together four established visualization scientists, each representing a different school of thought on QoV. Building on their wealth of expertise and experience, they will present their definitions of QoV and their understanding of the principal means for measuring, evaluating, and improving it. This panel will facilitate a timely discussion on QoV at VisWeek 2012, one that may signal the scholarly transformation of visualization into a mature discipline.
Most of us would agree that the pursuit of excellence in data visualization research is not a nine-to-five job. In fact, among a range of possible responsibilities including teaching, writing research papers, writing grant proposals, and developing software, there is no limit to the amount of work that can be done in the name of progress. As the famous Frits H. Post, formerly of Delft University of Technology, would say, “The only limit is death.”
A corollary is that it can be very difficult to balance personal and professional life as a visualization researcher. Left to the devices of the workplace, life in data visualization may become unbalanced, yet a successful balance is key to a healthy lifestyle.
This panel discusses central topics related to achieving a healthy balance between personal and professional life as a visualization scientist. From PhD candidates, to Postdocs, to Assistant Professors, and beyond, we have all experienced the pressure to keep up with our profession.