Expert consensus and reliability validation of the portfolio assessment guideline for Chinese practical writing: An empirical study based on Fleiss’ Kappa
Portfolio assessment has been increasingly recognized as an effective approach to fostering comprehensive writing ability, yet its application in Chinese practical writing remains limited, and the lack of standardized evaluation criteria has hindered its reliability and broader implementation. This study aimed to systematically develop and validate a Portfolio Assessment Guideline for Chinese practical writing, focusing on inter-rater reliability and coverage across four core dimensions: content, logical structure, language, and format.
Methods
Five higher education experts with extensive experience in practical writing instruction and research independently rated the guideline and its scoring rubrics for two key genres (summary and official notice), and inter-rater agreement was assessed using Fleiss’ Kappa coefficients.
Findings
The Kappa values for six core modules ranged from 0.79 to 1.00, with an overall Kappa of 0.87 across 14 sub-dimensions, indicating “almost perfect” agreement. Genre-specific analysis showed high overall consistency for summary (κ=0.87) and official notice (κ=0.89), with the summary’s “logical structure” dimension achieving “substantial agreement” (κ=0.68). Based on expert feedback, descriptive indicators were refined without altering the core framework.
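For readers unfamiliar with the statistic, the following minimal Python sketch shows how a Fleiss’ Kappa of this kind can be computed from a rater-by-item matrix using the aggregate_raters and fleiss_kappa helpers from statsmodels; the ratings shown are hypothetical placeholders, not the five experts’ actual data.

# Minimal sketch of a Fleiss' Kappa computation with statsmodels.
# The ratings below are hypothetical placeholders, not the study's data.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# rows = items rated (e.g., sub-dimensions of the guideline),
# columns = 5 expert raters, values = categorical rating (e.g., 0/1/2).
ratings = np.array([
    [2, 2, 2, 2, 2],
    [2, 2, 2, 1, 2],
    [1, 1, 1, 1, 2],
    [2, 2, 2, 2, 2],
])

# Convert rater-by-item ratings into an item-by-category count table,
# then compute Fleiss' Kappa on that table.
table, _ = aggregate_raters(ratings)
kappa = fleiss_kappa(table, method='fleiss')
print(f"Fleiss' Kappa = {kappa:.2f}")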
Value
The findings provide robust evidence for the psychometric quality of the guideline, supporting its potential application in higher education and professional training for enhancing Chinese practical writing abilities.
Evaluating barriers to establishing digital trust in Industry 4.0 for supply chain resilience in the Indian manufacturing industry
Recent developments in Industry 4.0 technologies have led the manufacturing industry to adopt them across its supply chains. Persistent distrust of digital systems has made organizations eager to build resilient systems that can cope with uncertain circumstances, yet challenges in handling stakeholder data with transparency, visibility, and accountability remain. This transition demands the establishment of digital trust for secure information sharing and for mitigating risks related to cybersecurity, data privacy, and potential misuse. Through a systematic literature review, this study identifies 17 barriers to establishing digital trust and applies exploratory factor analysis to group them into key dimensions. A case-based analysis in the context of the emerging Indian manufacturing economy then employs the Pythagorean Fuzzy Analytic Hierarchy Process and the Decision-Making Trial and Evaluation Laboratory to prioritize these barriers and explore their interrelationships. The findings reveal that ‘Top management commitment’ and ‘Cybersecurity’ are the most influential barriers to address in order to promote collaboration and responsiveness in a digitally enabled supply chain environment. The study guides practitioners and researchers working on digital transformation of supply chains by highlighting digital trust as a foundational capability for achieving resilience in Supply Chain 4.0. As digital trust remains underexplored in supply chain digitalization research, this study is a first step toward examining its role in resilience within Supply Chain 4.0.
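As an illustration of the factor-grouping step, the sketch below runs a plain exploratory factor analysis with scikit-learn on a random placeholder matrix of barrier ratings; the study’s own EFA settings (rotation, factor retention criteria) and data are not reproduced here.

# Minimal sketch of grouping barrier ratings into dimensions with
# exploratory factor analysis (scikit-learn). The rating matrix is a
# random placeholder, not the study's survey data.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 17))        # respondents x 17 barrier ratings

fa = FactorAnalysis(n_components=4, rotation="varimax")
fa.fit(X)

loadings = fa.components_.T           # 17 barriers x 4 factors
for i, row in enumerate(loadings, start=1):
    print(f"Barrier {i:2d} loads most on factor {int(np.argmax(np.abs(row))) + 1}")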
An evaluation framework for measuring prompt-wise metrics for large language models on resource-constrained edge devices
Existing challenges in deploying large language models (LLMs) on resource-constrained devices stem from limited CPU throughput, memory capacity, and power budgets. Motivated by the lack of edge-specific evaluation tools, we introduce LLMEvaluator, a framework that profiles quantized LLMs (Qwen2.5, Llama3.2, Smollm2, and Granite3) on a Raspberry Pi 4B using a suite of core and derived metrics. Our contributions include (i) a unified taxonomy that integrates latency, throughput, power variation, memory stability, and thermal behavior; (ii) prompt-wise analyses across ten NLP tasks; and (iii) correlation studies guiding optimizations. Key results show that Qwen2.5 leads in energy efficiency and throughput with a 68.44 MB memory standard deviation; Granite3 excels in memory stability, minimal load overhead, and per-token latency; Smollm2 exhibits the highest total duration, longest prompt overhead, and lowest power efficiency; and Llama3.2 balances latency, throughput (8.12 tokens/s), and energy per token with moderate power variability (1.05 W std dev). Correlation analysis reveals that reducing model load time yields the largest improvement in end-to-end latency (r > 0.9) and that throughput gains translate directly into energy savings (r ≈ -0.81). LLMEvaluator thus supports the selection and tuning of LLMs for low-power environments.
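The correlation findings can be reproduced in spirit with a few lines of Python; the sketch below applies scipy’s pearsonr to illustrative placeholder values rather than measurements produced by LLMEvaluator.

# Minimal sketch of the prompt-wise correlation analysis described above.
# Values are illustrative per-prompt metrics for one model across ten tasks,
# not measurements from the framework.
import numpy as np
from scipy.stats import pearsonr

load_time  = np.array([3.1, 2.8, 3.5, 2.2, 4.0, 3.3, 2.9, 3.8, 2.5, 3.0])   # s
latency    = np.array([9.5, 8.9, 10.4, 7.1, 11.8, 9.9, 9.0, 11.0, 7.8, 9.2]) # s
throughput = np.array([7.9, 8.4, 7.2, 9.1, 6.5, 7.7, 8.2, 6.9, 8.8, 8.0])    # tok/s
energy_tok = np.array([0.61, 0.57, 0.68, 0.52, 0.74, 0.63, 0.58, 0.70, 0.55, 0.60])  # J/tok

r_load, _ = pearsonr(load_time, latency)      # load time vs end-to-end latency
r_energy, _ = pearsonr(throughput, energy_tok)  # throughput vs energy per token
print(f"load time vs latency:       r = {r_load:.2f}")
print(f"throughput vs energy/token: r = {r_energy:.2f}")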
“We don’t plagiarise, we parrot”: Cognitive load and ethical perceptions in higher education written assessment
Generative artificial intelligence has reshaped written assessment in higher education and sharpened concerns about “parroting,” the undisclosed use of AI-generated text with minimal cognitive engagement. This study examines the cognitive and ethical mechanisms underlying parroting among undergraduates at one Malaysian research university. Drawing on Cognitive Load Theory and Dual-System Theory, parroting is conceptualised across three dimensions: intrinsic load, extraneous load, and ethical rationalisation. Survey responses from 211 students were analysed using Rasch measurement to evaluate item reliability, construct separation, differential item functioning (DIF) across academic fields, and item hierarchies. Results indicate that items function equivalently for engineering, non-engineering, and science students, supporting the instrument’s fairness and stability. Overall, findings show that parroting is most strongly driven by extraneous pressures such as vague instructions and heavy workload, followed by intrinsic challenges related to writing confidence and conceptual understanding. Ethical rationalisation is endorsed least frequently but becomes more salient when institutional guidance on AI use is unclear. The study offers implications for pedagogy and policy, underscoring the need for explicit AI-use guidelines, improved task design, and learning environments that promote ethically responsible engagement with generative technologies.
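For readers new to Rasch measurement, the sketch below shows the dichotomous Rasch model in Python: the probability of endorsing an item depends only on the gap between person ability and item difficulty. The survey itself uses polytomous responses, so this is a simplification, and the parameter values are illustrative rather than estimates from the study.

# Minimal sketch of the dichotomous Rasch model underlying the measurement
# approach described above. Parameter values are illustrative only.
import math

def rasch_probability(theta: float, b: float) -> float:
    """P(X = 1 | person ability theta, item difficulty b)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A student slightly above average facing an easy vs. a hard item.
print(rasch_probability(theta=0.5, b=-1.0))   # ~0.82: easy item likely endorsed
print(rasch_probability(theta=0.5, b=1.5))    # ~0.27: hard item rarely endorsed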
Benchmark-based prioritization of sustainable consumption and production practices for achieving SDG 12 in India: A multi-criteria decision-making approach
This study prioritizes sustainable consumption and production (SCP) practices to advance SDG 12 in India by employing a hybrid Grey Delphi–Grey DEMATEL framework. Twelve SCP practices identified through a comprehensive literature review were assessed by ten sustainability experts, with Grey Delphi confirming their relevance and Grey DEMATEL mapping the causal structure and influence dynamics within the system. The results show that circular economy practices, multi-stakeholder partnerships, and life cycle assessment function as core driving practices that exert substantial influence on the broader SCP landscape, while sustainable supply chain management, consumption education, urban planning, and green procurement appear as dependent practices shaped by these drivers. By integrating expert judgment and uncertainty-aware analytical techniques, the study provides a structured and replicable decision-support approach that assists policymakers, industry stakeholders, and practitioners in prioritizing impactful SCP interventions tailored to India’s socio-economic context, thereby supporting more effective progress toward sustainable development.
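To illustrate how DEMATEL separates driving from dependent practices, the sketch below implements the crisp (non-grey) total-relation computation on a small placeholder matrix; the study’s Grey DEMATEL additionally handles interval-valued expert judgments, which this simplification omits.

# Minimal sketch of crisp DEMATEL: normalize the direct-influence matrix,
# compute the total-relation matrix, and classify practices as causes or
# effects. The matrix below is a small placeholder, not the experts' ratings.
import numpy as np

D = np.array([              # pairwise direct influence among 4 practices
    [0, 3, 2, 1],
    [1, 0, 3, 2],
    [2, 1, 0, 3],
    [1, 2, 1, 0],
], dtype=float)

# Normalize, then compute the total-relation matrix T = N (I - N)^-1.
N = D / max(D.sum(axis=1).max(), D.sum(axis=0).max())
T = N @ np.linalg.inv(np.eye(len(D)) - N)

R, C = T.sum(axis=1), T.sum(axis=0)           # row and column sums of T
for i in range(len(D)):
    role = "cause (driver)" if R[i] - C[i] > 0 else "effect (dependent)"
    print(f"Practice {i + 1}: prominence={R[i] + C[i]:.2f}, relation={R[i] - C[i]:.2f} -> {role}")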
Data-driven financial fraud detection using hybrid artificial and quantum intelligence
Credit card fraud, the unauthorized use of a cardholder’s financial data, results in significant losses to individuals and companies. The increasing frequency and complexity of such fraud in the digital era highlight the critical need for reliable and accurate detection systems. Under the specific challenge of extreme class imbalance, this work investigates the credit card fraud detection performance of several Machine Learning (ML), Deep Learning (DL), and Quantum Machine Learning algorithms, including the Variational Quantum Classifier (VQC). The study uses a widely used dataset of 284,807 anonymized credit card transactions, of which only 492 (0.17%) are fraudulent. To address the class imbalance, we generated synthetic samples of the minority class using the Synthetic Minority Over-sampling Technique (SMOTE), thereby raising model sensitivity. We further improved model performance through hyperparameter tuning with Grid Search, Random Search, and Keras Tuner. The results show that combining deep learning-based feature extraction with ensemble learning, together with effective data balancing and hyperparameter tuning, yields a highly accurate and dependable credit card fraud detection system. The hybrid model combining an AutoEncoder for feature extraction, Bagging (Random Forest), and Boosting (XGBoost) performed best, reaching 100% accuracy and outperforming the other configurations considered. This approach offers a practical basis for building robust, real-time fraud detection systems for financial applications.
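A minimal sketch of the balancing and ensemble stages is given below using imbalanced-learn, scikit-learn, and xgboost on synthetic data; the paper’s AutoEncoder feature extraction and hyperparameter tuning stages are omitted, and the soft-voting combination of Random Forest and XGBoost is a simplified stand-in for the hybrid model, not the authors’ exact pipeline.

# Minimal sketch of SMOTE balancing plus an RF/XGBoost ensemble on synthetic,
# highly imbalanced data. Not the paper's full hybrid model.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=20_000, n_features=30, weights=[0.998],
                           flip_y=0, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# Oversample the minority (fraud) class on the training split only, so the
# test set keeps its original imbalance.
X_bal, y_bal = SMOTE(random_state=42).fit_resample(X_tr, y_tr)

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=42)),
                ("xgb", XGBClassifier(n_estimators=200, eval_metric="logloss"))],
    voting="soft",
)
ensemble.fit(X_bal, y_bal)
print(classification_report(y_te, ensemble.predict(X_te), digits=4))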
Performance comparison of permissioned and permissionless blockchains under varying transaction workloads
Blockchain technology has fueled exponential growth across various industries, including finance, supply chain management, and healthcare, enabling greater transparency in transaction management and supporting decentralized implementations. This paper presents a comprehensive performance analysis of permissioned and permissionless blockchain platforms, specifically Hyperledger Fabric and Ethereum. The study evaluates these platforms under varying transaction workloads (100 to 1,000 transactions) on a consistent network configuration. Our objective is to measure key performance metrics, including send rate, throughput, latency, resource utilization, and transaction success rate, using established benchmarking tools and methodologies. The findings offer valuable insights into the comparative strengths, limitations, and optimal use cases of these blockchain platforms across different performance parameters. The results indicate that Hyperledger Fabric achieves, on average, 3.5–4.5 times higher throughput and 10–12 times lower latency than Ethereum, while consuming 2.5–3 times less memory across the tested workloads. In contrast, Ethereum demonstrates a higher send rate and lower CPU demand in some operations. Overall, the study suggests that Hyperledger Fabric is better suited for enterprise applications that demand high scalability and performance.
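As a reference for how the reported workload metrics are defined, the sketch below derives send rate, throughput, average latency, and success rate from per-transaction submit and commit timestamps; the records are illustrative placeholders, and the study itself relies on established benchmarking tools rather than this script.

# Minimal sketch of deriving workload metrics from per-transaction timestamps.
# The records are placeholders, not benchmark output.
from dataclasses import dataclass

@dataclass
class TxRecord:
    submit_s: float   # time the transaction was submitted
    commit_s: float   # time the transaction was committed (or rejected)
    success: bool

records = [TxRecord(0.00, 0.45, True), TxRecord(0.10, 0.62, True),
           TxRecord(0.20, 0.80, True), TxRecord(0.30, 1.05, False),
           TxRecord(0.40, 1.10, True)]

send_window = max(r.submit_s for r in records) - min(r.submit_s for r in records)
duration = max(r.commit_s for r in records) - min(r.submit_s for r in records)
committed = [r for r in records if r.success]

send_rate = len(records) / send_window                    # submitted tx per second
throughput = len(committed) / duration                    # committed tx per second
avg_latency = sum(r.commit_s - r.submit_s for r in committed) / len(committed)
success_rate = len(committed) / len(records)

print(f"send rate    : {send_rate:.2f} tps")
print(f"throughput   : {throughput:.2f} tps")
print(f"avg latency  : {avg_latency:.2f} s")
print(f"success rate : {success_rate:.0%}")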
US-China geopolitical tensions and Indian stock market dynamics: Evidence from NARDL and wavelet coherence
Geopolitical tensions between China and the United States have increasingly shaped global financial markets, yet their impacts on emerging economies such as India remain underexplored. This study examines how changes in US-China tension affect the Indian stock market using nonlinear Autoregressive Distributed Lag (NARDL) modeling and wavelet coherence analysis. Using monthly observations and the newly developed US-China Tension Index (UCT), the study finds asymmetric short-run effects: heightened tensions tend to dampen sentiment and reduce returns, whereas reduced tensions offer limited relief. Interest rates are a key determinant in both the short and long run, underscoring their central role in shaping capital flows. Wavelet analysis captures a change in the nature of the relationship, from persistent co-movement in the early period to more prompt, temporary responses in subsequent years. These results underscore the growing significance of geopolitical tensions for market behavior, especially in economies that are increasingly open to international capital flows.
The paper’s novelty lies in applying a recently developed geopolitical risk metric (the UCT) to an underexamined economy (India) through a hybrid econometric and time-frequency approach that captures previously undocumented asymmetric and dynamic market responses.
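The asymmetry in NARDL comes from decomposing changes in the tension index into cumulative positive and negative partial sums that enter the regression as separate regressors; the sketch below illustrates that decomposition on a random placeholder series rather than the actual UCT index.

# Minimal sketch of the NARDL partial-sum decomposition of a tension index.
# The series is a random placeholder, not the UCT.
import numpy as np

rng = np.random.default_rng(1)
uct = np.cumsum(rng.normal(size=60))        # placeholder monthly tension index

d = np.diff(uct, prepend=uct[0])            # month-on-month changes
uct_pos = np.cumsum(np.maximum(d, 0.0))     # cumulative increases in tension
uct_neg = np.cumsum(np.minimum(d, 0.0))     # cumulative decreases in tension

# uct_pos and uct_neg would then enter an ARDL regression of stock returns
# with separate short- and long-run coefficients, so rises and falls in
# tension can have different effects.
print(uct_pos[-5:], uct_neg[-5:])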
An adaptive opposition slime mould feature selection algorithm for complex optimization problems
The slime mould algorithm (SMA) has recently emerged as a prominent metaheuristic for function optimization, owing to a solid exploration-exploitation balance that lets it converge efficiently towards high-quality solutions. Despite its broad applicability, however, the algorithm remains constrained in the diversity of its exploration and the scope of its exploitation mechanisms. To bridge these gaps, this work introduces a new variant, the adaptive opposition SMA (AOSMA). AOSMA incorporates an adaptive opposition-based learning (OBL) mechanism that decides online, at each iteration, when to introduce opposition-based solutions, enhancing exploration and helping the algorithm avoid premature convergence; by occasionally generating these alternative candidate solutions, the adaptive policy allows the search to escape local optima more effectively. To strengthen exploitation, AOSMA additionally replaces the randomly selected search agent with the current best-performing agent during position updating, focusing the search on promising regions of the search space and accelerating convergence towards the global optimum. AOSMA was validated qualitatively and quantitatively on thirteen well-established benchmark test functions spanning unimodal, multimodal, and composite landscapes. Comparative tests against a collection of state-of-the-art metaheuristic algorithms confirm that AOSMA consistently achieves superior or highly comparable performance across a variety of problem instances. The experimental results demonstrate the robustness, adaptability, and improved search ability of the algorithm, highlighting its potential as an efficient optimization method for complex real-world problems. By fusing adaptive exploration with strengthened exploitation, AOSMA makes a meaningful contribution to research in swarm intelligence and metaheuristic optimization.
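A minimal sketch of the opposition-based learning step is shown below: each candidate is mirrored within the search bounds and the better of the pair is retained. The objective function, bounds, and greedy acceptance rule are illustrative simplifications of the adaptive scheme described in the paper.

# Minimal sketch of an opposition-based learning step within bounded search.
# Objective and acceptance rule are simplified placeholders.
import numpy as np

def sphere(x):                        # placeholder objective (minimization)
    return float(np.sum(x ** 2))

rng = np.random.default_rng(7)
lb, ub, dim, pop = -10.0, 10.0, 5, 8
X = rng.uniform(lb, ub, size=(pop, dim))   # current population

X_opp = lb + ub - X                   # opposite population within [lb, ub]
for i in range(pop):
    if sphere(X_opp[i]) < sphere(X[i]):    # keep whichever of the pair is better
        X[i] = X_opp[i]
print(min(sphere(x) for x in X))      # best objective value after the step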