TBench (BenchCouncil Transactions on Benchmarks, Standards and Evaluations) Calls for Papers
Abstract
BenchCouncil Transactions on Benchmarks, Standards and Evaluations (TBench) is an open-access journal dedicated to advancing the field of benchmarks, datasets, standards, evaluations, and optimizations. TBench is a peer-reviewed, subsidized open-access journal: the International Open Benchmark Council (BenchCouncil) pays the open-access fee, so authors do not pay any open-access publication fee. However, at least one author must register for the BenchCouncil International Symposium on Benchmarking, Measuring and Optimizing (Bench) (https://www.benchcouncil.org/bench/ ) and present the work. The journal offers fast-track publication with an average turnaround time of one month.
We invite submissions covering a wide range of topics from various disciplines, with a particular emphasis on interdisciplinary research. Whether it pertains to computers, AI, medicine, education, finance, business, psychology, or other social disciplines, all relevant contributions are welcome.
At TBench, we prioritize the reproducibility of research. We strongly encourage authors to ensure that their articles are prepared for open-source or artifact evaluation before submission. The journal website is https://www.benchcouncil.org/tbench .
The Third BenchCouncil International Symposium on Intelligent Computers, Algorithms, and Applications (IC 2023) Call for Papers
Abstract
Sponsored and organized by the International Open Benchmark Council (BenchCouncil), the IC conference aims to provide a pioneering technology map by searching for and advancing the state-of-the-art and state-of-the-practice in processors, systems, algorithms, and applications for machine learning, deep learning, spiking neural networks, and other AI techniques across multidisciplinary and interdisciplinary areas. IC 2023 invites manuscripts describing original work in these areas. All accepted papers will be presented at the IC 2023 conference and published by Springer CCIS (indexed by EI). The IC conference has been held successfully from 2019 to 2022, attracting numerous paper submissions and participants. IC 2023 will be held on December 4-6, 2023 in Sanya. The conference website is https://www.benchcouncil.org/ic2023/ .
Important Dates:
Paper Submission: July 31, 2023, at 11:59 PM AoE
Notification: September 30, 2023, at 11:59 PM AoE
Final Papers Due: October 31, 2023, at 11:59 PM AoE
Conference Date: December 4-6, 2023
Submission Site: https://ic2023.hotcrp.com/
2023 BenchCouncil Distinguished Doctoral Dissertation Award Call for Nomination
Abstract
The BenchCouncil Distinguished Doctoral Dissertation Award recognizes and encourages superior research and writing by doctoral candidates in the broad benchmarking community. This year, the award consists of two tracks: a Computer Architecture track and an Other Areas track. Each track carries a $1,000 honorarium and has its own nomination submission form and award subcommittee. For each track, all candidates are encouraged to submit articles to BenchCouncil Transactions on Benchmarks, Standards and Evaluations (TBench). Among the submissions to each track, four candidates will be selected as finalists. They will be invited to give a 30-minute presentation at the BenchCouncil Bench 2023 conference and contribute research articles to TBench. Finally, for each track, one of the four finalists will receive the award. More information is available at https://www.benchcouncil.org/awards/index.html#DistinguishedDoctoralDissertation
Unlocking the opportunities through ChatGPT Tool towards ameliorating the education system
Mohd Javaid, Abid Haleem, Ravi Pratap Singh, Shahbaz Khan, Ibrahim Haleem Khan
Abstract
Artificial Intelligence (AI)-based ChatGPT, developed by OpenAI, is now widely accepted in several fields, including education. Students can learn about ideas and theories while generating content with this technology. ChatGPT is built on state-of-the-art techniques such as Deep Learning (DL), Natural Language Processing (NLP), and Machine Learning (ML), and is an extrapolation of a class of ML-NLP models known as Large Language Models (LLMs). It may be used to automate test and assignment grading, giving instructors more time to concentrate on instruction. This technology can be utilised to customise learning for children, enabling them to focus more intently on the subject matter and on critical thinking. ChatGPT is an excellent tool for language lessons since it can translate text from one language to another. It can provide lists of vocabulary terms and meanings, helping students develop their language proficiency. Personalised learning opportunities are one of ChatGPT's significant applications in the classroom. This might include creating educational resources and content tailored to a student's unique interests, skills, and learning goals. This paper discusses the need for ChatGPT and its significant features in the education system. Further, it identifies and discusses the significant applications of ChatGPT in education. Using ChatGPT, educators may design lessons and instructional materials specific to each student's requirements and skills based on current trends. Students may work at their own pace and concentrate on the areas where they need the most support, resulting in a more effective and efficient learning environment. Both instructors and students may profit significantly from using ChatGPT in the classroom, and instructors may save time on numerous duties by using this technology. In the future, ChatGPT will become a powerful tool for enhancing students' and teachers' experience.
Benchmarking HTAP databases for performance isolation and real-time analytics
Guoxin Kang, Simin Chen, Hongxiao Li
Abstract
Hybrid Transactional/Analytical Processing (HTAP) databases are designed to execute real-time analytics and provide performance isolation between online transactions and analytical queries. Real-time analytics emphasizes analyzing the fresh data generated by online transactions, and performance isolation refers to limiting the performance interference between concurrently executing online transactions and analytical queries. However, HTAP databases severely lack micro-benchmarks that accurately measure data freshness. Despite the abundance of HTAP databases and benchmarks, more thorough research is needed on the performance isolation and real-time analytics capabilities of HTAP databases. This paper focuses on the critical designs of mainstream HTAP databases and on the state-of-the-art and state-of-the-practice HTAP benchmarks. First, we systematically introduce the advanced technologies adopted by HTAP databases for real-time analytics and performance isolation. Then, we summarize the pros and cons of the state-of-the-art and state-of-the-practice HTAP benchmarks. Next, we design and implement a micro-benchmark for HTAP databases that can precisely control the rate of fresh data generation and the granularity of fresh data access. Finally, we devise experiments to evaluate the performance isolation and real-time analytics capabilities of a state-of-the-art HTAP database. To support transparency and community collaboration, we will make our specifications, source code, and results publicly available at https://www.benchcouncil.org/mOLxPBench .
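As a rough illustration of the freshness measurement such a micro-benchmark targets, the sketch below paces fresh-data generation at a configurable rate in one thread and, from a concurrent analytical query, measures how far the visible data lags behind the newest committed row. This is a minimal, hypothetical sketch using sqlite3 as a stand-in database and made-up knob names; it is not the mOLxPBench implementation.

```python
# Hypothetical sketch (not mOLxPBench): control the fresh-data generation rate
# and measure data freshness as the lag between the newest committed row and
# the newest row visible to an analytical query. sqlite3 stands in for an HTAP
# database here.
import sqlite3
import threading
import time

RATE_PER_SEC = 100   # assumed knob: fresh-data generation rate
RUN_SECONDS = 5

def transactional_writer(path, stop):
    con = sqlite3.connect(path, timeout=5)
    con.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, ts REAL)")
    con.commit()
    while not stop.is_set():
        con.execute("INSERT INTO orders (ts) VALUES (?)", (time.time(),))
        con.commit()
        time.sleep(1.0 / RATE_PER_SEC)   # pace inserts to roughly the target rate
    con.close()

def analytical_reader(path, stop, lags):
    con = sqlite3.connect(path, timeout=5)
    while not stop.is_set():
        row = con.execute("SELECT MAX(ts) FROM orders").fetchone()
        if row and row[0] is not None:
            lags.append(time.time() - row[0])   # observed freshness lag
        time.sleep(0.1)
    con.close()

if __name__ == "__main__":
    stop, lags = threading.Event(), []
    db = "htap_freshness_demo.db"
    w = threading.Thread(target=transactional_writer, args=(db, stop))
    r = threading.Thread(target=analytical_reader, args=(db, stop, lags))
    w.start(); r.start()
    time.sleep(RUN_SECONDS)
    stop.set(); w.join(); r.join()
    if lags:
        print(f"mean freshness lag: {sum(lags) / len(lags):.3f}s over {len(lags)} reads")
```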
CoviDetector: A transfer learning-based semi supervised approach to detect Covid-19 using CXR images
Abstract
COVID-19 was one of the deadliest and most infectious illnesses of this century. Research has been conducted to decrease pandemic deaths and slow the spread of the disease. COVID-19 detection studies have utilised Chest X-ray (CXR) images with deep learning techniques, given their sensitivity in identifying pneumonic alterations. However, CXR images are often not publicly available due to users' privacy concerns, which makes it challenging to train a highly accurate deep learning model from scratch. Therefore, we propose CoviDetector, a new semi-supervised approach based on transfer learning and clustering, which delivers improved performance and requires less training data. CXR images are given as input to this model, and individuals are categorised into three classes: (1) COVID-19 positive; (2) viral pneumonia; and (3) normal. The performance of CoviDetector has been evaluated on four different datasets, achieving over 99% accuracy on each. Additionally, we generate heatmaps using Grad-CAM and overlay them on the CXR images to highlight the areas that were deciding factors in detecting COVID-19. Finally, we developed an Android app to offer a user-friendly interface. We release the code, datasets, and result scripts of CoviDetector for reproducibility purposes; they are available at https://github.com/dasanik2001/CoviDetector
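The transfer-learning core of such an approach can be sketched as follows: a pretrained backbone is frozen and only a small three-class head is trained on the limited CXR data. This is a minimal sketch assuming a torchvision ResNet-18 backbone and illustrative hyperparameters, not the released CoviDetector code; the clustering-based semi-supervision and Grad-CAM overlays are only noted in comments.

```python
# Hypothetical sketch (not the CoviDetector release): transfer learning for a
# three-class CXR classifier (COVID-19 positive / viral pneumonia / normal).
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # COVID-19 positive, viral pneumonia, normal

# Start from an ImageNet-pretrained backbone and freeze its feature extractor,
# so only the small classification head is trained on the limited CXR data.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One supervised step. A semi-supervised variant would also assign
    pseudo-labels to unlabeled images (e.g. via clustering) and include them;
    Grad-CAM heatmaps would be computed by hooking the last conv layer."""
    backbone.train()
    optimizer.zero_grad()
    loss = criterion(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Smoke test with random tensors standing in for a CXR mini-batch.
    x = torch.randn(4, 3, 224, 224)
    y = torch.randint(0, NUM_CLASSES, (4,))
    print("loss:", train_step(x, y))
```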
DPUBench: An application-driven scalable benchmark suite for comprehensive DPU evaluation
Zheng Wang, Chenxi Wang, Lei Wang
Abstract
With the development of data centers, network bandwidth has rapidly increased, reaching hundreds of Gbps. However, the improvement in CPU network I/O processing performance has not kept pace with this growth in recent years, so the CPU is increasingly burdened by network applications in data centers. To address this issue, the Data Processing Unit (DPU) has emerged as a hardware accelerator designed to offload network applications from the CPU. As a new hardware device, the DPU architecture design is still in the exploration stage. Previous DPU benchmarks are neither neutral nor comprehensive, making them unsuitable as general benchmarks: to showcase the advantages of their specific architectural features, DPU vendors tend to provide particular architecture-dependent evaluation programs, which fail to provide comprehensive coverage and cannot adequately represent the full range of network applications. To address this gap, we propose an application-driven scalable benchmark suite called DPUBench. DPUBench classifies DPU applications into three typical scenarios (network, storage, and security) and includes a scalable benchmark framework that contains an essential Operator Set for these scenarios and End-to-end Evaluation Programs drawn from real data center scenarios. DPUBench can easily incorporate new operators and end-to-end evaluation programs as DPUs evolve. We present the results of evaluating the NVIDIA BlueField-2 using DPUBench and provide optimization recommendations. DPUBench is publicly available at https://www.benchcouncil.org/DPUBench .
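To illustrate how an application-driven suite can stay scalable as DPUs evolve, the sketch below shows an operator registry keyed by scenario (network, storage, security), with toy operators and a simple throughput measurement. The interface and operator names are hypothetical illustrations, not the published DPUBench code.

```python
# Hypothetical sketch of an extensible operator registry for a DPU benchmark
# suite; names and interfaces are illustrative, not the DPUBench API.
import time
from typing import Callable, Dict, Tuple

_OPERATORS: Dict[Tuple[str, str], Callable[[bytes], bytes]] = {}

def register(scenario: str, name: str):
    """Register an operator under one of the three scenarios
    (network, storage, security) so new operators can be added over time."""
    def wrap(fn: Callable[[bytes], bytes]):
        _OPERATORS[(scenario, name)] = fn
        return fn
    return wrap

@register("security", "xor_cipher")
def xor_cipher(payload: bytes) -> bytes:
    # Toy stand-in for an offloadable crypto operator.
    return bytes(b ^ 0x5A for b in payload)

@register("network", "checksum")
def checksum(payload: bytes) -> bytes:
    # Toy stand-in for a packet-checksum operator.
    return (sum(payload) & 0xFFFF).to_bytes(2, "big")

def run_operator(scenario: str, name: str, payload: bytes, iterations: int = 1000) -> float:
    """Run one registered operator repeatedly and report throughput in MB/s."""
    fn = _OPERATORS[(scenario, name)]
    start = time.perf_counter()
    for _ in range(iterations):
        fn(payload)
    elapsed = time.perf_counter() - start
    return iterations * len(payload) / elapsed / 1e6

if __name__ == "__main__":
    data = bytes(range(256)) * 4
    for key in _OPERATORS:
        print(key, f"{run_operator(*key, data):.1f} MB/s")
```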
StreamAD: A cloud platform metrics-oriented benchmark for unsupervised online anomaly detection
Abstract
Cloud platforms, serving as fundamental infrastructure, play a significant role in developing modern applications. In recent years, there has been growing interest among researchers in utilizing machine learning algorithms to rapidly detect and diagnose faults within complex cloud platforms, aiming to improve quality of service and optimize system performance. Online anomaly detection on cloud platform metrics is needed to provide timely fault alerts. To assist Site Reliability Engineers (SREs) in selecting suitable anomaly detection algorithms for specific use cases, we introduce a benchmark called StreamAD. This benchmark offers three contributions: (1) it encompasses eleven unsupervised algorithms with open-source code; (2) it abstracts various common operators for online anomaly detection, which improves the efficiency of algorithm development; and (3) it provides extensive comparisons of various algorithms using different evaluation methods. With StreamAD, researchers can efficiently conduct comprehensive evaluations of new algorithms, which can further facilitate research in this area. The code of StreamAD is published at https://github.com/Fengrui-Liu/StreamAD .
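As an illustration of the one-point-at-a-time operator style that such a benchmark evaluates, the sketch below scores each incoming metric value against a sliding window of recent history and flags outliers. The detector interface and threshold are a simplified, hypothetical illustration, not StreamAD's actual API.

```python
# Hypothetical sketch of online (streaming) anomaly detection on a univariate
# cloud-platform metric; this is a simplified illustration, not StreamAD code.
from collections import deque
import math
import random

class SlidingZScoreDetector:
    """Scores each incoming point against a sliding window of recent history,
    mimicking the one-point-at-a-time style of online detectors."""
    def __init__(self, window: int = 100):
        self.window = deque(maxlen=window)

    def fit_score(self, x: float) -> float:
        if len(self.window) < 2:
            self.window.append(x)
            return 0.0
        mean = sum(self.window) / len(self.window)
        var = sum((v - mean) ** 2 for v in self.window) / len(self.window)
        score = abs(x - mean) / (math.sqrt(var) + 1e-9)
        self.window.append(x)
        return score

if __name__ == "__main__":
    random.seed(0)
    detector = SlidingZScoreDetector(window=50)
    # Synthetic metric stream: mostly normal noise with one injected spike.
    stream = [random.gauss(0.0, 1.0) for _ in range(200)]
    stream[150] += 8.0
    for i, value in enumerate(stream):
        if detector.fit_score(value) > 4.0:
            print(f"anomaly at t={i}, value={value:.2f}")
```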