The Indian Accounting Standards (Ind AS) play a pivotal role in reducing financial impropriety. By significantly enhancing the accountability, accuracy, and transparency of financial reporting, these standards serve an essential function in deterring financial malfeasance, including deceptive accounting practices, misleading reporting, and the distortion of earnings, all of which undermine investor confidence, disrupt market integrity, and adversely affect the economy. Aligned with the International Financial Reporting Standards (IFRS), Ind AS provide a comprehensive and robust framework that substantially improves the quality of financial reporting. The article outlines the significant benefits of Ind AS for financial reporting, such as increased transparency and accuracy, and presents case studies illustrating how the application of the standards has effectively addressed and mitigated financial discrepancies. It also examines the challenges organisations face in adopting Ind AS, including the complexities of transitioning from previous accounting standards and the need for extensive system reforms and personnel training. By elucidating these challenges, the article offers a thorough analysis of the effectiveness of Ind AS in addressing financial malpractice and emphasises their role in fostering a more transparent and responsible financial reporting environment.
A framework for evaluating cultural bias and historical misconceptions in LLM outputs
Large Language Models (LLMs), while powerful, often perpetuate cultural biases and historical inaccuracies from their training data, marginalizing underrepresented perspectives. To address these issues, we introduce a structured framework to systematically evaluate and quantify these deficiencies. Our methodology combines culturally sensitive prompting with two novel metrics: the Cultural Bias Score (CBS) and the Historical Misconception Score (HMS). Our analysis reveals varying cultural biases across LLMs, with certain Western-centric models, such as Gemini, exhibiting higher bias. In contrast, other models, including ChatGPT and Poe, demonstrate more balanced cultural narratives. We also find that historical misconceptions are most prevalent for less-documented events, underscoring the critical need for training data diversification. Our framework suggests the potential effectiveness of bias-mitigation techniques, including dataset augmentation and human-in-the-loop (HITL) verification. Empirical validation of these strategies remains an important direction for future work. This work provides a replicable and scalable methodology for developers and researchers to help ensure the responsible and equitable deployment of LLMs in critical domains such as education and content moderation.
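The abstract names the Cultural Bias Score (CBS) and Historical Misconception Score (HMS) but does not reproduce their definitions here. The Python sketch below is purely illustrative, assuming a simple rater-flag formulation (the flagged fraction of responses over a prompt set); the paper's actual metrics, prompt sets, and rating protocol may differ.

# Hypothetical sketch only: the real CBS/HMS definitions are in the paper and are
# not reproduced here. Each culturally sensitive prompt is answered by the model,
# raters flag whether the answer privileges a dominant-culture framing or repeats
# a historical misconception, and the scores are the flagged fractions.

from dataclasses import dataclass
from typing import List


@dataclass
class RatedResponse:
    prompt_id: str
    biased_framing: bool   # rater judgement: culturally one-sided framing
    misconception: bool    # rater judgement: contains a historical inaccuracy


def cultural_bias_score(responses: List[RatedResponse]) -> float:
    """Fraction of responses flagged as culturally one-sided (illustrative only)."""
    if not responses:
        return 0.0
    return sum(r.biased_framing for r in responses) / len(responses)


def historical_misconception_score(responses: List[RatedResponse]) -> float:
    """Fraction of responses with a flagged historical inaccuracy (illustrative only)."""
    if not responses:
        return 0.0
    return sum(r.misconception for r in responses) / len(responses)


if __name__ == "__main__":
    sample = [
        RatedResponse("colonial_history_q1", biased_framing=True, misconception=False),
        RatedResponse("indigenous_science_q2", biased_framing=False, misconception=True),
        RatedResponse("trade_routes_q3", biased_framing=False, misconception=False),
    ]
    print(f"CBS (illustrative): {cultural_bias_score(sample):.2f}")
    print(f"HMS (illustrative): {historical_misconception_score(sample):.2f}")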
Medical image fusion based on deep neural network via morphologically processed residuals
Medical image fusion enhances the intrinsic statistical properties of original images by integrating complementary information from multiple imaging modalities, producing a fused representation that supports more accurate diagnosis and effective treatment planning than individual images alone. The principal challenge lies in combining the most informative features without discarding critical clinical details. Although various methods have been explored, it remains difficult to consistently preserve structural and functional features across modalities. To address this, we propose a deep neural network–based framework that incorporates morphologically processed residuals for effective fusion. The network is trained to map source images directly into weight maps, thereby overcoming the limitations of traditional activity-level measurement and weight-assignment algorithms and enabling adaptive, reliable weighting of different modalities. The framework further employs image pyramids in a multi-scale design to align with human visual perception, and introduces a local similarity–based adaptive rule for the decomposed coefficients to maintain consistency and preserve fine detail. An edge-preserving strategy combining linear low-pass filtering with nonlinear morphological operations is used to emphasize regions of high amplitude and preserve optimally sized structural boundaries. Residuals derived from the linear filter guide the morphological processing, ensuring that significant regions are retained while artifacts are reduced. Experimental results demonstrate that the proposed method effectively integrates complementary information from multimodal medical images while mitigating noise, blocking effects, and distortions, leading to fused images with improved clarity and clinical value. This work provides an advanced and reliable fusion approach that contributes substantially to the field of medical image analysis, offering clinicians enhanced visualization tools for decision-making in diagnosis and treatment planning.
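As a rough illustration of how per-level weighted blending in an image-pyramid fusion scheme fits together, the Python sketch below uses NumPy/SciPy. It does not implement the paper's method: the trained network that maps source images to weight maps, the local similarity–based adaptive rule, and the morphological-residual edge preservation are all replaced by a simple local-energy weight, purely to show the multi-scale structure.

# Minimal sketch of weighted Laplacian-pyramid fusion (illustrative stand-in;
# the learned weight maps and morphological residual guidance of the paper are
# NOT implemented here).

import numpy as np
from scipy.ndimage import gaussian_filter, zoom


def gaussian_pyramid(img, levels):
    pyr = [img.astype(np.float64)]
    for _ in range(levels - 1):
        blurred = gaussian_filter(pyr[-1], sigma=1.0)
        pyr.append(blurred[::2, ::2])  # blur then subsample
    return pyr


def laplacian_pyramid(img, levels):
    gp = gaussian_pyramid(img, levels)
    lp = []
    for i in range(levels - 1):
        up = zoom(gp[i + 1], 2, order=1)[: gp[i].shape[0], : gp[i].shape[1]]
        lp.append(gp[i] - up)          # band-pass detail at this scale
    lp.append(gp[-1])                  # coarsest approximation
    return lp


def local_energy_weight(band_a, band_b, sigma=2.0):
    # Stand-in for the network-predicted weight map: normalized local energy.
    ea = gaussian_filter(band_a ** 2, sigma)
    eb = gaussian_filter(band_b ** 2, sigma)
    return ea / (ea + eb + 1e-12)


def fuse(img_a, img_b, levels=4):
    lp_a = laplacian_pyramid(img_a, levels)
    lp_b = laplacian_pyramid(img_b, levels)
    fused_pyr = [local_energy_weight(a, b) * a + (1.0 - local_energy_weight(a, b)) * b
                 for a, b in zip(lp_a, lp_b)]
    # Collapse the pyramid: upsample and add from coarsest to finest.
    fused = fused_pyr[-1]
    for band in reversed(fused_pyr[:-1]):
        fused = zoom(fused, 2, order=1)[: band.shape[0], : band.shape[1]] + band
    return fused


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ct, mri = rng.random((128, 128)), rng.random((128, 128))
    print(fuse(ct, mri).shape)  # (128, 128)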
The prevailing data-driven paradigm in AI has largely neglected the generative nature of data. All data, whether observational or experimental, are produced under specific conditions, yet current approaches treat them as context-free artifacts. This neglect results in uneven data quality, limited interpretability, and fragility when models face novel scenarios. Evaluatology reframes evaluation as the process of inferring the influence of an evaluated object on the affected factors and attributing the evaluation outcome to specific factors. Among these factors, a minimal set of indispensable elements determines how changes in conditions propagate to outcomes. This essential set constitutes the evaluation conditions. Together, the evaluated object and its evaluation conditions form a self-contained evaluation system, a structured unit that anchors evaluation to its essential context. We propose an evaluatology-based paradigm that spans the entire AI lifecycle, from data generation to training and evaluation. Within each self-contained evaluation system, data are generated and distilled into their invariant informational structures. These distilled forms are abstracted into reusable causal-chain schemas, which can be instantiated as training examples. By explicitly situating every learning instance within such condition-aware systems, evaluation is transformed from a passive, post-hoc procedure into an active driver of model development. This evaluatology-based paradigm enables the construction of causal training data that are interpretable, traceable, and reusable, while reducing reliance on large-scale, unstructured datasets. This paves the way toward scalable, transparent, and epistemically grounded AI.
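The abstract describes causal-chain schemas and self-contained evaluation systems conceptually and prescribes no concrete data format. As an illustrative assumption only, the Python sketch below encodes an evaluated object together with its minimal evaluation conditions and a condition-to-outcome chain, then flattens the schema into a condition-aware training record.

# Illustrative assumption only: one plausible encoding of a self-contained
# evaluation system, pairing the evaluated object with its indispensable
# evaluation conditions and the causal chain from conditions to outcomes.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class CausalStep:
    condition: str   # a condition in the minimal indispensable set
    effect: str      # the outcome change it propagates to


@dataclass
class EvaluationSystem:
    evaluated_object: str
    evaluation_conditions: Dict[str, str]        # name -> value of each condition
    causal_chain: List[CausalStep] = field(default_factory=list)

    def instantiate_training_example(self) -> Dict[str, str]:
        """Flatten the schema into a condition-aware training record."""
        record = {"object": self.evaluated_object, **self.evaluation_conditions}
        record["chain"] = " -> ".join(f"{s.condition} => {s.effect}" for s in self.causal_chain)
        return record


if __name__ == "__main__":
    system = EvaluationSystem(
        evaluated_object="route-planning model",
        evaluation_conditions={"map_version": "v3", "traffic_regime": "peak"},
        causal_chain=[CausalStep("traffic_regime=peak", "longer predicted travel time")],
    )
    print(system.instantiate_training_example())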
Corrigenda
Corrigendum to “A Framework for Evaluating Cultural Bias and Historical Misconceptions in LLM Outputs” [BenchCouncil Transactions on Benchmarks, Standards and Evaluations 5 (2025) 100235]