
AI Practitioner Study Guide (62 items)

by Pacloud 2024. 9. 2.

I have summarized the learning topics (exam scope) from the exam guide provided directly by AWS. The guide states that the list is not exhaustive, but the content below is more than enough to pass the exam without difficulty. The exam scope consists of 5 domains (areas); each domain contains several tasks, and each task has its own objectives, for a total of 62 objectives. Recall that the official AI Practitioner exam has 65 questions. If you work through each objective as if you were preparing an interview answer, you can pass the exam with ease.

 

I've attached the guide file below so you can check it yourself.

AWS-Certified-AI-Practitioner_Exam-Guide.pdf
0.16MB

Exam scope and weightings

#  Domain                                                  Weight
1  Fundamentals of AI and ML                               20%
2  Fundamentals of Generative AI                           24%
3  Applications of Foundation Models                       28%
4  Guidelines for Responsible AI                           14%
5  Security, Compliance, and Governance for AI Solutions   14%

Domain 1. Fundamentals of AI and ML (16 items)

1.1. Explain basic AI concepts and terminologies (5 items)

  • Define basic AI terms
    • Examples: AI, ML, deep learning, neural networks, computer vision, natural language processing (NLP), model, algorithm, training and inferencing, bias, fairness, fit, large language model (LLM)
  • Describe similarities and differences between AI, ML, and deep learning
  • Describe various types of inferencing
    • Examples: batch, real-time
  • Describe different types of data in AI models
    • Examples: labeled and unlabeled, tabular, time-series, image, text, structured and unstructured
  • Describe supervised learning, unsupervised learning, and reinforcement learning (see the code sketch after this list)
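
To make the supervised/unsupervised distinction concrete, here is a minimal Python sketch using scikit-learn (a library chosen purely for illustration; it is not named in the exam guide). The classifier learns from labeled examples, while the clustering step finds structure with no labels at all.

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)          # labeled, tabular data

# Supervised learning: fit on labeled examples, then score on held-out data.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("classification accuracy:", clf.score(X_test, y_test))

# Unsupervised learning: find structure in X without ever seeing y.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster assignments:", clusters[:10])

Reinforcement learning does not fit either pattern: an agent learns from rewards while interacting with an environment, so there is no fixed labeled dataset.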

1.2. Identify practical use cases for AI (5 items)

  • Recognize applications where AI/ML can provide value
    • Examples: assist human decision making, solution scalability, automation
  • Determine when AI/ML solutions are not appropriate
    • Examples: cost-benefit analyses, situations when a specific outcome is needed instead of a prediction
  • Select appropriate ML techniques for specific use cases
    • Examples: regression, classification, clustering
  • Identify examples of real-world AI applications
    • Examples: computer vision, NLP, speech recognition, recommendation systems, fraud detection, forecasting
  • Explain capabilities of AWS managed AI/ML services (see the code sketch after this list)
    • Examples: SageMaker, Amazon Transcribe, Amazon Translate, Amazon Comprehend, Amazon Lex, Amazon Polly
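
As a quick illustration of what "managed" means in practice, the sketch below calls two of the listed services through boto3; both return results from pre-trained models with no training or infrastructure work on your side. It assumes AWS credentials and a default region are already configured.

import boto3

comprehend = boto3.client("comprehend")   # pre-trained NLP: sentiment, entities, key phrases
translate = boto3.client("translate")     # managed machine translation

text = "The new checkout flow is fast and easy to use."

sentiment = comprehend.detect_sentiment(Text=text, LanguageCode="en")
print("sentiment:", sentiment["Sentiment"])

result = translate.translate_text(
    Text=text, SourceLanguageCode="en", TargetLanguageCode="ko"
)
print("translated:", result["TranslatedText"])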

1.3. Describe the ML development lifecycle (6 items)

  • Describe components of an ML pipeline
    • Examples: data collection, exploratory data analysis (EDA), data pre-processing, feature engineering, model training, hyperparameter tuning, evaluation, deployment, monitoring
  • Understand sources of ML models
    • Examples: open source pre-trained models, training custom models
  • Describe methods to use a model in production
    • Examples: managed API service, self-hosted API
  • Identify relevant AWS services and features for each stage of an ML pipeline
    • Examples: SageMaker, Amazon SageMaker Data Wrangler, Amazon SageMaker Feature Store, Amazon SageMaker Model Monitor
  • Understand fundamental concepts of ML operations (MLOps)
    • Examples: experimentation, repeatable processes, scalable systems, managing technical debt, achieving production readiness, model monitoring, model re-training
  • Understand model performance metrics and business metrics to evaluate ML models (see the code sketch after this list)
    • Examples: accuracy, Area Under the ROC Curve (AUC), F1 score, cost per user, development costs, customer feedback, return on investment (ROI)
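
The model performance metrics named above are easy to try out. The sketch below computes accuracy, F1 score, and AUC with scikit-learn on toy predictions (the numbers are invented for illustration). Business metrics such as ROI or cost per user are measured outside the model, from business data.

from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

y_true  = [0, 0, 1, 1, 1, 0, 1, 0]                   # ground-truth labels
y_pred  = [0, 1, 1, 1, 0, 0, 1, 0]                   # hard class predictions
y_score = [0.2, 0.6, 0.9, 0.8, 0.4, 0.1, 0.7, 0.3]   # predicted probabilities

print("accuracy:", accuracy_score(y_true, y_pred))   # fraction of correct predictions
print("F1 score:", f1_score(y_true, y_pred))         # harmonic mean of precision and recall
print("AUC:", roc_auc_score(y_true, y_score))        # ranking quality across thresholds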

Domain 2. Fundamentals of Generative AI (11 items)

2.1. Explain the basic concepts of generative AI (3 items)

  • Understand foundational generative AI concepts (see the embedding sketch after this list)
    • Examples: tokens, chunking, embeddings, vectors, prompt engineering, transformer-based LLMs, foundation models, multi-modal models, diffusion models
  • Identify potential use cases for generative AI models
    • Examples: image, video, and audio generation; summarization; chatbots; translation; code generation; customer service agents; search; recommendation engines
  • Describe the foundation model lifecycle
    • Examples: data selection, model selection, pre-training, fine-tuning, evaluation, deployment, feedback
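
To connect tokens, embeddings, and vectors, the sketch below turns two sentences into embedding vectors with Amazon Bedrock and compares them with cosine similarity. It assumes Bedrock model access is enabled in your account and region; the Titan embeddings model ID is used as one example.

import json
import boto3
import numpy as np

bedrock = boto3.client("bedrock-runtime")

def embed(text: str) -> np.ndarray:
    # Titan text embeddings: input text in, a fixed-length vector out.
    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v1",    # example embeddings model ID
        body=json.dumps({"inputText": text}),
    )
    return np.array(json.loads(response["body"].read())["embedding"])

a = embed("How do I reset my password?")
b = embed("I forgot my login credentials.")

cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print("cosine similarity:", cosine)   # closer to 1.0 means more semantically similar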

2.2. Understand the capabilities and limitations of generative AI for solving business problems (4 items)

  • Describe the advantages of generative AI
    • Examples: adaptability, responsiveness, simplicity
  • Identify disadvantages of generative AI solutions
    • Examples: hallucinations, interpretability, inaccuracy, nondeterminism
  • Understand various factors to select appropriate generative AI models
    • Examples: model types, performance requirements, capabilities, constraints, compliance
  • Determine business value and metrics for generative AI applications
    • Examples: cross-domain performance, efficiency, conversion rate, average revenue per user, accuracy, customer lifetime value

2.3. Describe AWS infrastructure and technologies for building generative AI applications (4 items)

  • Identify AWS services and features to develop generative AI applications
    • Examples: Amazon SageMaker JumpStart; Amazon Bedrock; PartyRock, an Amazon Bedrock Playground; Amazon Q
  • Describe the advantages of using AWS generative AI services to build applications
    • Examples: accessibility, lower barrier to entry, efficiency, cost-effectiveness, speed to market, ability to meet business objectives
  • Understand the benefits of AWS infrastructure for generative AI applications
    • Examples: security, compliance, responsibility, safety
  • Understand cost tradeoffs of AWS generative AI services
    • Examples: responsiveness, availability, redundancy, performance, regional coverage, token-based pricing, provisioned throughput, custom models

Domain 3. Applications of Foundation Models (16 items)

3.1. Describe design considerations for applications that use foundation models (6 items)

  • Identify selection criteria to choose pre-trained models
    • Examples: cost, modality, latency, multi-lingual, model size, model complexity, customization, input/output length
  • Understand the effect of inference parameters on model responses (see the code sketch after this list)
    • Examples: temperature, input/output length
  • Define Retrieval Augmented Generation (RAG) and describe its business applications
    • Examples: Amazon Bedrock, knowledge base
  • Identify AWS services that help store embeddings within vector databases
    • Examples: Amazon OpenSearch Service, Amazon Aurora, Amazon Neptune, Amazon DocumentDB (with MongoDB compatibility), Amazon RDS for PostgreSQL
  • Explain the cost tradeoffs of various approaches to foundation model customization
    • Examples: pre-training, fine-tuning, in-context learning, RAG
  • Understand the role of agents in multi-step tasks
    • Examples: Agents for Amazon Bedrock
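
The sketch below shows where inference parameters sit in an actual request, using the Amazon Bedrock Converse API. The model ID is only an example; any Bedrock text model enabled in your account can be substituted.

import boto3

bedrock = boto3.client("bedrock-runtime")

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",   # example model ID; substitute your own
    messages=[{"role": "user", "content": [{"text": "Name three business uses of RAG."}]}],
    inferenceConfig={
        "temperature": 0.2,   # lower = more deterministic, higher = more varied
        "maxTokens": 200,     # caps output length (and therefore cost and latency)
        "topP": 0.9,          # nucleus sampling cutoff
    },
)
print(response["output"]["message"]["content"][0]["text"])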

3.2. Choose effective prompt engineering techniques (4 items)

  • Describe the concepts and constructs of prompt engineering
    • Examples: context, instruction, negative prompts, model latent space
  • Understand techniques for prompt engineering (see the prompt sketch after this list)
    • Examples: chain-of-thought, zero-shot, single-shot, few-shot, prompt templates
  • Understand the benefits and best practices for prompt engineering
    • Examples: response quality improvement, experimentation, guardrails, discovery, specificity and concision, using multiple comments
  • Define potential risks and limitations of prompt engineering
    • Examples: exposure, poisoning, hijacking, jailbreaking
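
A minimal, library-free sketch of zero-shot versus few-shot prompting: the few-shot template simply packs labeled examples into the prompt ahead of the real input. The wording of the templates is my own illustration, not from the exam guide.

# Zero-shot: the instruction alone, no examples.
ZERO_SHOT = "Classify the sentiment of this review as Positive or Negative:\n{review}"

# Few-shot: the same instruction plus a handful of labeled examples.
FEW_SHOT = """Classify the sentiment of each review as Positive or Negative.

Review: "Arrived quickly and works great."
Sentiment: Positive

Review: "Stopped working after two days."
Sentiment: Negative

Review: "{review}"
Sentiment:"""

review = "The battery life is disappointing."
print(ZERO_SHOT.format(review=review))
print(FEW_SHOT.format(review=review))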

3.3. Describe the training and fine-tuning process for foundation models (3 items)

  • Describe the key elements of training a foundation model
    • Examples: pre-training, fine-tuning, continuous pre-training
  • Define methods for fine-tuning a foundation model
    • Examples: instruction tuning, adapting models for specific domains, transfer learning, continuous pre-training
  • Describe how to prepare data to fine-tune a foundation model
    • Examples: data curation, governance, size, labeling, representativeness, reinforcement learning from human feedback (RLHF)

3.4. Describe methods to evaluate foundation model performance (3 items)

  • Understand approaches to evaluate foundation model performance
    • Examples: human evaluation, benchmark datasets
  • Identify relevant metrics to assess foundation model performance (see the code sketch after this list)
    • Examples: Recall-Oriented Understudy for Gisting Evaluation (ROUGE), Bilingual Evaluation Understudy (BLEU), BERTScore
  • Determine whether a foundation model effectively meets business objectives
    • Examples: productivity, user engagement, task engineering
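
As a rough intuition for what ROUGE measures, the sketch below computes a ROUGE-1-style unigram recall in plain Python. Real evaluations would use an established implementation and benchmark datasets; this is only to show the idea of overlap with a reference text.

from collections import Counter

def rouge1_recall(reference: str, candidate: str) -> float:
    """Share of reference unigrams that also appear in the candidate."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    overlap = sum(min(ref_counts[w], cand_counts[w]) for w in ref_counts)
    return overlap / max(sum(ref_counts.values()), 1)

reference = "the cat sat on the mat"
candidate = "the cat lay on the mat"
print("ROUGE-1 recall:", rouge1_recall(reference, candidate))   # 5/6 ≈ 0.83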

Domain 4. Guidelines for Responsible AI (11 items)

4.1. Explain the development of AI systems that are responsible (7 items)

  • Identify features of responsible AI
    • Examples: bias, fairness, inclusivity, robustness, safety, veracity
  • Understand how to use tools to identify features of responsible AI
    • Examples: Guardrails for Amazon Bedrock
  • Understand responsible practices to select a model
    • Examples: environmental considerations, sustainability
  • Identify legal risks of working with generative AI
    • Examples: intellectual property infringement claims, biased model outputs, loss of customer trust, end user risk, hallucinations
  • Identify characteristics of datasets
    • Examples: inclusivity, diversity, curated data sources, balanced datasets
  • Understand effects of bias and variance
    • Examples: effects on demographic groups, inaccuracy, overfitting, underfitting
  • Describe tools to detect and monitor bias, trustworthiness, and truthfulness
    • Examples: analyzing label quality, human audits, subgroup analysis, Amazon SageMaker Clarify, SageMaker Model Monitor, Amazon Augmented AI (Amazon A2I)

4.2. Recognize the importance of transparent and explainable models (4 items)

  • Understand the differences between models that are transparent and explainable and models that are not transparent and explainable
  • Understand the tools to identify transparent and explainable models
    • Examples: Amazon SageMaker Model Cards, open source models, data, licensing
  • Identify tradeoffs between model safety and transparency
    • Examples: measure interpretability and performance
  • Understand principles of human-centered design for explainable AI

Domain 5. Security, Compliance, and Governance for AI Solutions (8 items)

5.1. Explain methods to secure AI systems (4 items)

  • Identify AWS services and features to secure AI systems (see the policy sketch after this list)
    • Examples: IAM roles, policies, and permissions; encryption; Amazon Macie; AWS PrivateLink; AWS shared responsibility model
  • Understand the concept of source citation and documenting data origins
    • Examples: data lineage, data cataloging, SageMaker Model Cards
  • Describe best practices for secure data engineering
    • Examples: assessing data quality, implementing privacy-enhancing technologies, data access control, data integrity
  • Understand security and privacy considerations for AI systems
    • Examples: application security, threat detection, vulnerability management, infrastructure protection, prompt injection, encryption at rest and in transit
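
As one concrete example of least-privilege IAM for an AI workload, the sketch below builds a policy document (as a Python dict) that allows only a single Bedrock action on a single model. The region, model ID, and ARN shown are placeholders, not values from the exam guide.

import json

# Hypothetical least-privilege policy: the role may invoke one specific model and nothing else.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowInvokeSingleModel",
            "Effect": "Allow",
            "Action": ["bedrock:InvokeModel"],
            "Resource": [
                # placeholder ARN; adjust region and model ID for your account
                "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0"
            ],
        }
    ],
}
print(json.dumps(policy, indent=2))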

5.2. Recognize governance and compliance regulations for AI systems (4 items)

  • Identify regulatory compliance standards for AI systems
    • Examples: International Organization for Standardization (ISO), System and Organization Controls (SOC), algorithm accountability laws
  • Identify AWS services and features to assist with governance and regulation compliance
    • Examples: AWS Config, Amazon Inspector, AWS Audit Manager, AWS Artifact, AWS CloudTrail, AWS Trusted Advisor
  • Describe data governance strategies
    • Examples: data lifecycles, logging, residency, monitoring, observation, retention
  • Describe processes to follow governance protocols
    • Examples: policies, review cadence, review strategies, governance frameworks such as the Generative AI Security Scoping Matrix, transparency standards, team training requirements