The use, interoperability, and analytical exploitation of graph data are essential for modern digital economies. Thousands of computational methods (algorithms) and findable, accessible, interoperable, and reusable (FAIR) graph datasets already exist. However, current computational capabilities lag behind the complex workflows involved in graph processing, the extreme scale of existing graph datasets, and the need to consider sustainability metrics in graph-processing operations. Graph-processing platforms increasingly need to provide multilingual information processing and reasoning over massive graph representations of extreme data, in the form of general graphs, knowledge graphs, and property graphs. Because graph workloads and graph datasets are strongly irregular and exhibit one or more of the big-data “Vs” (e.g., volume, velocity, variability, vicissitude), the community must reconsider traditional approaches to performance analysis and modeling, system architectures and techniques, serverless and “as a service” operation, and real-world and simulation-driven experimentation, and provide new tools and instruments to address the emerging challenges of graph processing.
Graphs, or linked data, are crucial to innovation, competition, and prosperity, and represent a strategic investment in technical processing and ecosystem enablers. Graphs are universal abstractions that capture, combine, model, analyse, and process knowledge about the real and digital worlds, turning item representation and interconnectedness into actionable insights. For societally relevant problems, graphs constitute extreme data that require further technological innovation to meet the needs of the European data economy. Digital graphs help pursue the United Nations Sustainable Development Goals (UN SDGs) by enabling better value chains, products, and services, supporting more profitable or green investments in the financial sector, and deriving trustworthy insights for creating sustainable communities. All domains of science, engineering, industry, the economy, and society at large can leverage graph data for unique analysis and insight, but only if graph processing becomes easy to use, fast, scalable, and sustainable.
To facilitate the exchange of ideas and expertise in the broad field of high-performance, large-scale graph processing, we organize GraphSys, the first Workshop on Serverless, Extreme-Scale, and Sustainable Graph Processing Systems. GraphSys is a cross-disciplinary meeting venue focusing on state-of-the-art and emerging (future) graph-processing systems. We invite experts and trainees in the field, across academia, industry, governance, and society, to share experience and expertise toward a shared body of knowledge, to formulate together a vision for the field, and to engage with its topics to foster new approaches, techniques, and solutions.
GraphSys 2023 will be a full-day workshop with a single track (no parallel sessions). We welcome the wide community of graph-processing systems and plan to feature invited talks and presentations of papers from within and beyond the Graph-Massivizer project. We also plan a panel to introduce and discuss the goals of the Graph-Massivizer project and how it can contribute to strengthening graph-processing research and its community.
For more details on submissions and topics of interest, please check the call for papers.
The GraphSys workshop is technically sponsored by the Graph-Massivizer project (grant number 101093202, https://graph-massivizer.eu), funded by the European Union’s Horizon Europe research and innovation programme for the period 2023–2026, which studies and aims to develop a high-performance, scalable, gender-neutral, secure, and sustainable platform for massive graph processing, and by the Horizon 2020 DataCloud project (grant number 101016835, https://datacloudproject.eu/).
In time, we aim to align GraphSys with the Standard Performance Evaluation Corporation (SPEC) Research Group (RG) and, in particular, its RG Cloud Group, which takes a broad approach, relevant to both academia and industry, to cloud benchmarking, quantitative evaluation, and experimental analysis.