settings,A,,0,minutes,0,5,55,150,1,question_pagination,asc,0,2000,500
question,"1.Which challenges must auditors consider when applying traditional sampling to AI systems? (choose two)","multi_choice",multi_choice,1.00,1,,,,"

Explanation: AI outputs may vary for the same input over time, and models may adapt continuously (drift), making traditional audit sampling less reliable. Reference: ISACA AAIA Study Guide, AI Audit Sampling Challenges
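
As a hedged illustration (not part of the study guide), the sketch below shows one way this challenge surfaces: replaying the same audit sample against two snapshots of a model can yield different outputs, so conclusions drawn from a sample at one point in time may not hold later. The model snapshots, decision boundaries, and sample values are hypothetical.

    # Illustrative sketch: replay one audit sample against two snapshots of a model
    # and measure how often their outputs disagree. High disagreement suggests that
    # conclusions from a fixed sample do not carry across model versions (drift).
    def disagreement_rate(model_v1, model_v2, audit_sample):
        diffs = sum(1 for x in audit_sample if model_v1(x) != model_v2(x))
        return diffs / len(audit_sample)

    # Hypothetical usage with stand-in models; a real audit would load snapshots
    # captured at different dates.
    model_jan = lambda x: x > 0.5   # decision boundary at first audit
    model_jun = lambda x: x > 0.6   # boundary after continued training
    sample = [0.45, 0.55, 0.62, 0.58, 0.70]
    print('disagreement:', disagreement_rate(model_jan, model_jun, sample))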

"
answer,"A. AI outputs may vary for the same input over time",text,1,0,,1
answer,"B. There is limited use of open-source tools",text,0,0,,2
answer,"C. Models may adapt continuously (drift)",text,1,0,,3
answer,"D. Audit samples can be recompiled into binaries",text,0,0,,4
question,"2.What is the role of predefined AI-specific playbooks in incident response planning?","single_choice",single_choice,1.00,2,,,,"

Explanation: Predefined AI-specific playbooks provide structured steps for handling known AI failure types, ensuring a consistent and effective incident response. Reference: ISACA AAIA Study Guide, Incident Response Planning
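
As a brief, hedged sketch of what 'structured steps for known failure types' could look like in practice, the playbook below is modeled as a simple mapping from failure type to ordered response steps; the failure types and steps are illustrative examples, not content from the study guide.

    # Illustrative sketch: a playbook as a mapping from known AI failure types
    # to ordered response steps. Entries are examples only.
    PLAYBOOKS = {
        'model_drift': [
            'freeze automated decisions',
            'compare live metrics against the validation baseline',
            'retrain or roll back the model',
        ],
        'suspected_data_poisoning': [
            'isolate the affected training pipeline',
            'verify the integrity of recent training data',
            'notify the incident response lead',
        ],
    }

    def respond(failure_type):
        # Unknown failure types fall back to a generic escalation path.
        return PLAYBOOKS.get(failure_type, ['escalate to the AI incident response team'])

    print(respond('model_drift'))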

"
answer,"A. To eliminate the need for data validation",text,0,0,,1
answer,"B. To document software licensing requirements",text,0,0,,2
answer,"C. To provide structured steps for handling known AI failure types",text,1,0,,3
answer,"D. To simplify model compression",text,0,0,,4
question,"3.Why must data scarcity be addressed during the AI development process?","single_choice",single_choice,1.00,3,,,,"

Explanation: Data scarcity can lead to overfitting and poor generalization in models, reducing their effectiveness and reliability. Reference: ISACA AAIA Study Guide, Data Quality and Model Development
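
A minimal sketch of this effect, assuming scikit-learn is available: a flexible model fit on very few samples can score perfectly on its training data yet noticeably worse on held-out data. The dataset sizes and model choice are illustrative only.

    # Illustrative sketch (assumes scikit-learn): with very few training samples,
    # a flexible model can memorize the training set (high train score) while
    # generalizing poorly to unseen data (lower test score).
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=60, n_features=20, random_state=0)
    # Simulate scarcity: train on only 10 of the 60 samples.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, train_size=10, random_state=0)

    model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
    print('train accuracy:', model.score(X_train, y_train))  # typically 1.0
    print('test accuracy:', model.score(X_test, y_test))     # typically much lower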

"
answer,"A. It may result in longer training cycles and high storage cost",text,0,0,,1
answer,"B. It can lead to overfitting and poor generalization in models",text,1,0,,2
answer,"C. It lowers CPU usage during preprocessing",text,0,0,,3
answer,"D. It ensures redundancy across backup datasets",text,0,0,,4
question,"4.Why is data quality assessment critical during AI audits?","single_choice",single_choice,1.00,4,,,,"

Explanation: Data quality assessment ensures the completeness, accuracy, and validity of the training and testing datasets, which is crucial for reliable AI outcomes. Reference: ISACA AAIA Study Guide, Data Quality in AI Audits
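
As a hedged sketch of what such an assessment can involve, assuming pandas is available, the checks below estimate completeness (share of non-missing values) and a simple validity measure (values within a plausible range). The column names, sample values, and allowed range are hypothetical.

    # Illustrative sketch (assumes pandas): basic completeness and validity checks
    # over a training dataset. Column names and the allowed range are hypothetical.
    import pandas as pd

    df = pd.DataFrame({
        'age': [34, None, 29, 151],
        'label': [1, 0, 1, None],
    })

    completeness = 1 - df.isna().mean()           # share of non-missing values per column
    validity = df['age'].between(0, 120).mean()   # share of ages within a plausible range
    print('completeness by column:')
    print(completeness)
    print('age validity:', validity)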

"
answer,"A. It eliminates the need for audit interviews",text,0,0,,1
answer,"B. It ensures the completeness, accuracy, and validity of the training and testing datasets",text,1,0,,2
answer,"C. It improves encryption efficiency",text,0,0,,3
answer,"D. It accelerates the user interface design",text,0,0,,4
question,"5.Which of the following metrics would be most relevant for measuring the success of an organization’s AI governance program?","single_choice",single_choice,1.00,5,,,,"

Explanation: Model auditability and explainability rate are key metrics for measuring the effectiveness of AI governance and oversight. Reference: ISACA AAIA Study Guide, AI Governance Metrics

"
answer,"A. Average model training time",text,0,0,,1
answer,"B. Data ingestion rate",text,0,0,,2
answer,"C. Model auditability and explainability rate",text,1,0,,3
answer,"D. Number of servers used for training",text,0,0,,4
question,"6.What distinguishes supervised learning from unsupervised learning in machine learning models?","single_choice",single_choice,1.00,6,,,,"

Explanation: Supervised learning maps input to output based on labeled data, while unsupervised learning does not use labeled outputs. Reference: ISACA AAIA Study Guide, Machine Learning Fundamentals
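
A short sketch of the distinction, assuming scikit-learn is available: the supervised model is fit on inputs together with known labels, while the unsupervised model is fit on the inputs alone and discovers structure such as clusters. The toy data is illustrative only.

    # Illustrative sketch (assumes scikit-learn): supervised learning fits a
    # mapping from inputs X to known labels y; unsupervised learning works on
    # X alone and discovers structure (here, clusters).
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    X = [[0.1], [0.2], [0.9], [1.0]]
    y = [0, 0, 1, 1]   # labels exist only in the supervised case

    supervised = LogisticRegression().fit(X, y)                            # uses X and y
    unsupervised = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)  # uses X only

    print('supervised prediction:', supervised.predict([[0.85]]))
    print('unsupervised cluster labels:', unsupervised.labels_)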

"
answer,"A. Unsupervised learning relies on labeled data sets to train models",text,0,0,,1
answer,"B. Supervised learning maps input to output based on labeled data",text,1,0,,2
answer,"C. Supervised learning algorithms are used exclusively for clustering tasks",text,0,0,,3
answer,"D. Unsupervised learning requires predefined output variables",text,0,0,,4
question,"7.Which mechanism best supports AI supervision after model deployment?","single_choice",single_choice,1.00,7,,,,"

Explanation: Real-time monitoring dashboards and alert systems provide ongoing oversight and quick detection of issues after deployment. Reference: ISACA AAIA Study Guide, AI Monitoring and Supervision
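
A minimal sketch of the alerting side of such a mechanism: a recurring check compares a live metric against the validation baseline and raises an alert when degradation exceeds a threshold. The baseline, threshold, and alert channel below are placeholders, not prescribed values.

    # Illustrative sketch: a periodic post-deployment check that compares a live
    # accuracy estimate against the validation baseline and alerts on degradation.
    BASELINE_ACCURACY = 0.92
    ALERT_THRESHOLD = 0.05   # alert if live accuracy drops by more than 5 points

    def send_alert(message):
        # Stand-in for a real pager or chat integration.
        print('ALERT:', message)

    def check_model_health(live_accuracy):
        drop = BASELINE_ACCURACY - live_accuracy
        if drop > ALERT_THRESHOLD:
            send_alert(f'accuracy dropped by {drop:.2f} since validation')
        return drop

    check_model_health(live_accuracy=0.84)   # triggers an alert in this example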

"
answer,"A. Peer review of model source code",text,0,0,,1
answer,"B. Static code analysis before training",text,0,0,,2
answer,"C. Server capacity testing",text,0,0,,3
answer,"D. Real-time monitoring dashboards and alert systems",text,1,0,,4
question,"8.The ________ is a governance mechanism that ensures decisions regarding AI implementation, risk tolerance, and policy creation are consistently overseen by senior leadership.","single_choice",single_choice,1.00,8,,,,"

Explanation: An AI Steering Committee oversees AI strategy, risk, and policy decisions at the senior leadership level. Reference: ISACA AAIA Study Guide, AI Governance Structures

"
answer,"A. AI Steering Committee",text,1,0,,1
answer,"B. IT Support Team",text,0,0,,2
answer,"C. Data Engineering Squad",text,0,0,,3
answer,"D. Change Advisory Board",text,0,0,,4
question,"9.Which quality assurance practices strengthen AI audit reports? (choose two)","multi_choice",multi_choice,1.00,9,,,,"

Explanation: Verification of evidence and peer review of findings help ensure the accuracy and credibility of AI audit reports. Reference: ISACA AAIA Study Guide, AI Audit Quality Assurance

"
answer,"A. Verification of evidence used for conclusions",text,1,0,,1
answer,"B. Peer review of findings",text,1,0,,2
answer,"C. Relying solely on third-party frameworks",text,0,0,,3
answer,"D. Excluding internal controls from reporting",text,0,0,,4
question,"10.Which practices promote secure and responsible AI model deployment? (choose two)","multi_choice",multi_choice,1.00,10,,,,"

Explanation: Validating model behavior before deployment and implementing rollback procedures for faulty models are best practices for responsible AI deployment. Reference: ISACA AAIA Study Guide, Secure AI Deployment
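
A hedged sketch combining both practices: a candidate model must pass behavioral checks before it is promoted, and the previously approved version is retained so a faulty release can be restored. The registry, checks, and thresholds are simplified stand-ins, not a prescribed implementation.

    # Illustrative sketch: a release gate that (1) validates candidate model
    # behavior before it goes live and (2) keeps the prior version so a faulty
    # release can be rolled back.
    class ModelRegistry:
        def __init__(self):
            self.live = None
            self.previous = None

        def promote(self, candidate, validation_checks):
            if not all(check(candidate) for check in validation_checks):
                raise ValueError('candidate failed pre-deployment validation')
            self.previous, self.live = self.live, candidate

        def rollback(self):
            # Restore the last known-good model after a faulty release.
            self.live = self.previous

    min_accuracy = lambda m: m['accuracy'] >= 0.90
    no_bias_flag = lambda m: not m['bias_flagged']

    registry = ModelRegistry()
    registry.promote({'accuracy': 0.93, 'bias_flagged': False},
                     [min_accuracy, no_bias_flag])
    print('live model:', registry.live)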

"
answer,"A. Ignoring edge cases to simplify design",text,0,0,,1
answer,"B. Skipping documentation to increase agility",text,0,0,,2
answer,"C. Validating model behavior before live deployment",text,1,0,,3
answer,"D. Implementing rollback procedures for faulty models",text,1,0,,4